The researchers found that when sensory cues from the hands and eyes conflict, the brain effectively splits the difference to produce a single mental image. The researchers describe this middle ground as a "weighted average" because, in any given individual, one sense may have more influence than the other. When the discrepancy is too large, however, the brain reverts to information from a single cue, such as the eyes, to make a judgment about what is true.
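A weighted average of this kind is usually modeled with the standard reliability-weighted cue-combination rule, in which each sense's weight is proportional to how reliable (low-noise) that sense is. The following sketch is illustrative only; the function name and the reliability numbers are assumptions, not values from the study.

```python
# Illustrative sketch of reliability-weighted cue combination
# (the standard model of sensory fusion; numbers are hypothetical).

def combine_cues(visual_estimate, haptic_estimate,
                 visual_variance, haptic_variance):
    """Fuse two noisy size estimates into one percept.

    Each cue's weight is proportional to its reliability
    (the inverse of its noise variance), so the more
    trustworthy sense dominates the combined estimate.
    """
    w_visual = (1 / visual_variance) / (1 / visual_variance + 1 / haptic_variance)
    w_haptic = 1 - w_visual
    return w_visual * visual_estimate + w_haptic * haptic_estimate

# Example: vision says the bar is 50 mm thick, touch says 56 mm.
# If vision is twice as reliable, the percept lands nearer 50 mm.
print(combine_cues(visual_estimate=50.0, haptic_estimate=56.0,
                   visual_variance=1.0, haptic_variance=2.0))  # ~52.0
```

In this sketch, the fused percept sits two-thirds of the way toward the visual estimate, which is what "one sense may have more influence than the other" amounts to when vision happens to be the less noisy cue.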
The findings, reported Friday, Nov. 22, in the journal Science, could spur advances in virtual reality programs and remote surgery applications, which rely upon accurately mimicking visual and haptic (touch) cues.
In a series of experiments, the researchers divided 12 subjects into two groups. One group received two different types of visual cues, while the other received visual and haptic cues. The visual-haptic group judged the thickness of three horizontal bars: two always appeared equally thick to both eye and hand, while the third was made to seem alternately thicker or thinner to the eye than to the hand. The group with two visual inputs judged surface orientation: two surfaces appeared equally slanted according to both visual cues, while a third appeared more slanted according to one cue and less slanted according to the other.
To manipulate the sensory cues, the researchers used force-feedback technology to simulate touch and shutter glasses to simulate 3-D visual stimuli. Participants in the visual-haptic group inserted their thumb and forefinger into the force-feedback device to "feel" an object projected onto a computer monitor; through this setup, they could simultaneously see and feel the virtual object.