Thursday, September 10, 2009

Does your brain balance prediction and observation?

Sorry for the recent infrequency of my posts. Things are hectic in graduate school, as is to be expected. I still plan to update this blog with thoughts as often as I can find the time.

So I just watched a talk by Moshe Bar of Harvard Medical School. The thrust of his research program is that the mind (and the neural substrates thereof) does not passively respond to environmental stimuli, but constantly attempts to predict what is about to happen -- what the eyes are about to see next, for example, or, when seeing a blurry outline of an object, what that object is, inferred from contextual and shape cues. Bar presented very neat MEG evidence that the time course of activation in the (prefrontal) brain area supposed to do the prediction is about right for his hypothesis. In other words, it activates before the area associated more directly with conscious recognition of an object does. Curiously, this means that the prefrontal cortex is directly involved in fairly low-level vision. This is interesting data in and of itself, and Bar's hypothesis seems plausible. But to my mind, it says too little about the cognitive mechanisms involved. Here is a proposal, from a computational perspective, about what could be going on.

This all struck me as very similar to the AI mechanism known as a Kalman filter. Without getting into the math, the basic idea behind a Kalman filter is that it adjusts the balance between prediction and observation in the model of the world that the organism dynamically builds. For example, a Kalman-filter-equipped robot navigating a ship could estimate its future position from observations of the nearby shoreline combined with the speed reported by its engines. Alternatively, it could rely on "dead reckoning" -- knowing that it left harbor at a particular location and headed in a particular direction at a particular speed. Which source the robot should rely on depends on how noisy each one is. If, for example, the robot is in a deep fog where the shoreline is hard to make out, and the engine's speed-reporting device is malfunctioning, relying on dead reckoning may be a good idea. If, on the other hand, there is a strong but unpredictable current in the water (say, the ship passed through a whirlpool and came out facing a slightly different direction), then the robot probably wants to rely much more on the shoreline and engine-speed readings.

The Kalman filter's role in all this is to compute the (provably) optimal weighting between the two sources of information (prediction and observation), as a function of the noise in each, such that the resulting model is maximally accurate.
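To make the balancing act concrete, here is a minimal sketch of a one-dimensional Kalman filter update. The function name and variables (`kalman_step`, `q`, `r`, etc.) are my own illustrative choices, not anything from Bar's work; the key quantity is the gain `k`, which slides between 1 (trust the observation) and 0 (trust the prediction) depending on the relative noise of each.

```python
def kalman_step(x_est, p_est, u, z, q, r):
    """One predict/update cycle of a 1-D Kalman filter.

    x_est, p_est : current state estimate and its variance
    u            : predicted change (e.g., speed times the time step)
    z            : new observation (e.g., position judged from the shoreline)
    q, r         : process noise and observation noise variances
    """
    # Predict: dead reckoning pushes the estimate forward; uncertainty grows.
    x_pred = x_est + u
    p_pred = p_est + q

    # Kalman gain: how much to trust the observation versus the prediction.
    k = p_pred / (p_pred + r)

    # Update: blend prediction and observation, weighted by the gain.
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new, k

# Clear view of the shoreline (small r): the gain is near 1,
# so the estimate is pulled strongly toward the observation.
_, _, k_clear = kalman_step(0.0, 1.0, 1.0, 1.2, q=0.1, r=0.01)

# Deep fog (large r): the gain is near 0, so the robot
# effectively falls back on dead reckoning.
_, _, k_foggy = kalman_step(0.0, 1.0, 1.0, 1.2, q=0.1, r=100.0)
```

In the first call the robot ends up very close to the observed position; in the second it keeps essentially its dead-reckoned estimate. The same two lines of arithmetic produce both behaviors, which is what makes the filter a single "balancing" mechanism rather than two separate strategies.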

The point of all of this is: could the neural architecture Bar provides evidence for actually be a neural instantiation of a Kalman filter? One way to test this might be to see whether activation of the prefrontal "predictive" area Bar identifies is lessened when the environmental input is clearer, and strengthened when it is noisier. Of course, even if the neural system is some sort of Kalman-filter-like device, it would not have to behave in this way. Perhaps the prefrontal area Bar identified is just the prediction element of the Kalman filter design, with a further "selective" element that performs the actual computation of how the organism should balance prediction against observation.
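The proposed test has a clean analogue in the math. In a 1-D filter the gain is p_pred / (p_pred + r), so reliance on the prediction, 1 minus the gain, grows monotonically as the observation noise r grows. The sketch below (my own illustrative code, with "reliance on prediction" as a purely hypothetical proxy for activation of the predictive area) just makes that monotonic relationship explicit:

```python
def kalman_gain(p_pred, r):
    """Kalman gain for a 1-D filter: the weight given to the observation,
    where p_pred is the predicted-state variance and r the observation noise."""
    return p_pred / (p_pred + r)

# Hypothetical proxy: reliance on prediction = 1 - gain.
# As observation noise r grows (blurrier input), reliance on prediction rises.
for r in (0.01, 1.0, 100.0):
    reliance_on_prediction = 1 - kalman_gain(1.0, r)
    print(f"r = {r:>6}: reliance on prediction = {reliance_on_prediction:.3f}")
```

If the predictive area really tracked this quantity, its activation plotted against stimulus noise should rise smoothly rather than switch on and off, which is at least a qualitative signature an imaging experiment could look for.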

Making specific predictions is complicated in this case, but it might also be worthwhile. The idea of a computational mechanism originally proposed in AI being instantiated in neural architecture unites the two fields in a pretty exciting way, shows exactly the kind of thing AI has to contribute to the study of the mind, and might even suggest that we instantiate a computationally, mathematically, theoretically optimal (!!!) mechanism.
