Project idea – sensor integration, value judgement, and cognitive architecture

The subtitle of this project idea is “Orange vomit and dead Smurf” … I will explain later how we ended up with a subtitle like this.

The last project idea about value judgement suggested testing whether we can evolve Markov Brains (MB) to distinguish if a retina or a single-bit channel contains more zeros than ones (or vice versa). I think it would be interesting to combine both problems. We discussed the following experiment in class: if shown a screen with red and green dots, you can tell which color has more dots. Similarly, you can hold a vibrating device in each of your hands and tell which one vibrates with a higher frequency or intensity. When holding only one device and seeing a screen with dots of one color, you would still be able to pass judgement on the question "Do you see more dots than the device vibrates?". This might require you to define the maximum vibration and the maximum number of dots, but nevertheless you could give an answer. That answer requires relating two very different types of perception and judging each sensor's intensity relative to the other, yet you can do it.

Similarly, I suggest evolving an MB to make judgements on two separate sensory inputs, and then performing the cross comparison after the brain has evolved. Will an MB be able to make such a comparison? The sensor modalities for the MB will differ a little from the last experiment, but that is a minor problem.

However, it would be interesting to find out whether selecting a brain to perform each of the two judgements alone is sufficient to enable the cross-sensor judgement, or whether one has to select for this ability at least to some degree.
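To make the proposed task concrete, here is a minimal sketch of how trials for such an experiment could be generated. All specifics are assumptions for illustration: the channel width (16 bits), the naming of the two modalities as "retina" and "channel", and the convention that ties resolve to 0 are my choices, not part of the project description.

```python
import random

def make_trial(n_bits=16, rng=random):
    """Generate one trial for the proposed experiment: two binary input
    channels (a hypothetical 'retina' and a 'vibration' channel) plus
    target labels for the within-sensor and cross-sensor judgements.

    n_bits, the label conventions, and tie-breaking toward 0 are all
    illustrative assumptions.
    """
    retina = [rng.randint(0, 1) for _ in range(n_bits)]
    channel = [rng.randint(0, 1) for _ in range(n_bits)]
    return {
        "retina": retina,
        "channel": channel,
        # within-sensor judgement: does this input contain more ones than zeros?
        "retina_label": int(sum(retina) > n_bits / 2),
        "channel_label": int(sum(channel) > n_bits / 2),
        # cross-sensor judgement: which input carries the stronger signal?
        # This label is only used for evaluation if, as the post asks,
        # the brain was never selected for it.
        "cross_label": int(sum(retina) > sum(channel)),
    }
```

During evolution one would reward agents only on `retina_label` and `channel_label`, then test the evolved brains on `cross_label` to see whether the cross-sensor judgement comes for free.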

2 Comments

  1. Peter March 12, 2014 10:14 pm

    So there’s been some evidence recently (going along with the push from ’embodied cognition’ people) that evidence accumulates in a brain area corresponding to how you’re going to make a response – e.g. if you have to move your eyes to respond, you should see activation in the frontal eye field relative to which direction you’re going to choose – as well as in the sensory areas being stimulated (Gold & Shadlen, 2007 below). But as it turns out there are areas that are active during decision-making regardless of how you’re going to respond (e.g. whether you have to respond using your fingers, eyes, or toes – Liu & Pleskac, 2011). There is some work out there showing that there are areas common to different types of stimuli when you have to make a decision between them (the most famous is deciding “is this a face or a house?” while an image is slowly revealed from noise – see Heekeren et al., 2008), but I’m not sure if anyone has examined decisions that contrast between different ‘senses.’

    This isn’t to say that a separate module to compare the two stimuli is necessary or would evolve under all conditions. But it at least seems like, in humans, there is something else going on besides independent sensory evidence accumulation.

    Gold, J. I., & Shadlen, M. N. (2007). The neural basis of decision making. Annual Review of Neuroscience, 30, 535–574.

    Liu, T., & Pleskac, T. J. (2011). Neural correlates of evidence accumulation in a perceptual decision task. Journal of Neurophysiology, 106(5), 2383–2398. doi:10.1152/jn.00413.2011

    Heekeren, H. R., Marrett, S., & Ungerleider, L. G. (2008). The neural systems that mediate human perceptual decision making. Nature Reviews Neuroscience, 9(6), 467-479.

  2. Jory Schossau March 13, 2014 1:43 pm

    Maybe knowing more about the dots experiment will help, but I suspect there should be a tradeoff between guessing right and waiting to get enough information.
