In about half a second, the human brain (specifically the superior colliculus) analyzes its current environment and decides whether something is worth taking notice of. Exactly how the brain does this is still somewhat of a mystery, but we do know that the more sensory input it receives, the more likely it is to pay attention. (For example, in a crowd, if you wave at someone he may or may not notice, but if you wave and shout, chances are better that he'll pay attention.)
Researchers are now hard at work building computer programs that can function in the same way. A camera now in the works uses a computer simulation of this specific brain process, and it comes close to mimicking it.
Funded by the Office of Naval Research, researchers at the University of Illinois have built a movable video camera that is aimed at targets detected by a stationary video camera, which watches for motion, and a pair of microphones, which listen for sound.
These in turn are linked to a standard desktop computer that has been programmed with a simulated neural network. This neural network mimics how the human brain's superior colliculus does its mental mapping, using sound and sight together to put it all in perspective.
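To make the idea concrete, here is a minimal sketch of how such sight-and-sound fusion might work. The article gives no implementation details, so the network shape, the fusion weights, and the 36-direction pan grid below are all assumptions for illustration, not details of the Illinois system.

```python
import numpy as np

# Illustrative only: the fusion weights, bias, and 36-direction pan grid
# are assumptions, not details from the Illinois system.
N_DIRS = 36                               # pan angles, 10 degrees apart
W_VISUAL, W_AUDIO, BIAS = 4.0, 4.0, -3.0  # assumed fusion parameters

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(visual, audio):
    """Combine per-direction motion and sound maps into one attention map."""
    return sigmoid(W_VISUAL * visual + W_AUDIO * audio + BIAS)

def aim(visual, audio):
    """Return the pan angle (degrees) the movable camera should turn to."""
    return int(np.argmax(fuse(visual, audio))) * (360 // N_DIRS)

# Toy scene: motion at 120 degrees, sound at 120 and 130 degrees.
visual = np.zeros(N_DIRS); visual[12] = 0.9
audio = np.zeros(N_DIRS); audio[12:14] = 0.7

print(aim(visual, audio))  # 120: the direction where sight and sound agree
```

A direction where both senses agree scores higher than one where either fires alone, which is the waving-and-shouting effect from the opening example.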
"Gathering sensory input, processing it, and deciding what to do on the basis of that processing is an important part of real brain function," says researcher Tom Anastasio. "So is learning to make better decisions. Although it is extremely simple, the Self-Aiming Camera operates in a brain-like way." Theoretically, the system continually learns, writing and re-writing its own software as it gathers more and more data.
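In a typical simulated neural network, that kind of learning means adjusting connection weights from feedback rather than literally rewriting program code. The sketch below, again with invented numbers (the learning rate, initial weights, and synthetic feedback rule are all assumptions), shows a single logistic unit tuning its fusion weights so that directions where sight and sound coincide come to dominate.

```python
import numpy as np

# Assumed setup: one logistic unit, a made-up learning rate, and synthetic
# feedback that rewards attending only when both cues fire together.
rng = np.random.default_rng(0)
w = np.array([0.1, 0.1, 0.0])   # [visual weight, audio weight, bias]
LR = 0.5

for _ in range(2000):
    v, a = rng.integers(0, 2, size=2)   # did each sense fire?
    x = np.array([v, a, 1.0])
    p = 1.0 / (1.0 + np.exp(-w @ x))    # unit's attention prediction
    label = float(v and a)              # feedback: both cues = worth it
    w += LR * (label - p) * x           # gradient step toward the feedback

print(np.round(w, 2))  # visual and audio weights grow; bias turns negative
```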
"This learning provides the camera with several useful abilities, including discrimination between a man and an automobile, for example, depending on whether it's been programmed to look at men or at automobiles," explains ONR Program Manager Dr. Joel Davis.
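The article does not say how that discrimination is implemented. One plausible reading, sketched below with invented "size" and "speed" feature values, is that the same learning rule produces different detectors depending on which class the training labels favor.

```python
import numpy as np

# Hypothetical features: (size, speed), roughly normalized; these values
# and the training scheme are inventions for illustration.
PERSON = np.array([0.2, 0.3])
CAR = np.array([0.9, 0.8])

def train(target, other, steps=500, lr=1.0):
    """Fit a logistic unit that scores `target` high and `other` low."""
    w = np.zeros(3)
    for _ in range(steps):
        for x, label in ((target, 1.0), (other, 0.0)):
            x1 = np.append(x, 1.0)
            p = 1.0 / (1.0 + np.exp(-w @ x1))
            w += lr * (label - p) * x1
    return w

def score(w, x):
    x1 = np.append(x, 1.0)
    return 1.0 / (1.0 + np.exp(-w @ x1))

w_people = train(PERSON, CAR)  # "programmed to look at" people
w_cars = train(CAR, PERSON)    # ...or at automobiles
print(round(score(w_people, PERSON), 2), round(score(w_people, CAR), 2))
print(round(score(w_cars, CAR), 2), round(score(w_cars, PERSON), 2))
```

Identical code, different labels: each detector ends up favoring its own target class, which is all "programming it to look at" one thing or the other would require.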