Oviatt and her team are now modeling the results of these and future experiments so they can reliably predict how speech will vary under different circumstances. Such quantitative modeling, they say, will lead to a new science of audio interface design, which will be important for developing mobile systems that can process users' speech reliably in natural settings.
"Ideally, we want to develop high-functioning applications and interface designs for specialized populations," said Oviatt. "This is a tall order for computers right now."
Most people now have just one way to communicate with their computer: through a keyboard. Oviatt and her husband and collaborator, Phil Cohen, Ph.D., are pioneers in the research and design of multimodal interfaces -- that is, computer systems that let people communicate with them using more than one modality. The pair have made breakthroughs that enable people to interact with computers via speech, pen, touch and gesture.
"Currently humans adapt to the limitations of computers," said Oviatt. "But future computers need to be more adaptive to us. They should be smaller and embedded so we don't have to carry them. They should be able to combine pen and speech input to express themselves naturally and efficiently, and with information tailored to an individual's communication patterns, usage patterns, and physical and cognitive needs."