"Keyboard-and-mouse interfaces currently pose the main bottleneck limiting the functionality and usability of modern computation," says Dr. Sharon Oviatt, firmly. Oviatt’s new book, The Design of Future Educational Interfaces (Routledge, 2013), examines the nature and effects of that bottleneck, which extend well beyond simply slow or inefficient interaction with the computer.
Oviatt’s work shows that the ubiquitous keyboard can limit not just what a student (and by extension, any user) can do, but how well they think. "The central theme," she writes, "is that computer interfaces that encourage expressing information in different representations, modalities, and linguistic codes can stimulate ideas, clarity of thought, and improved performance."
The book represents something of a summing-up of Dr. Oviatt’s career and work up to this point. She has received a National Science Foundation Special Creativity Award for pioneering work on mobile multimodal interfaces, and has contributed to leading journals and books in the field, including The Human-Computer Interaction Handbook (CRC Press, 2007). She’s also the director of Incaa Designs, a nonprofit that studies and designs educational interfaces.
"I have a very multidisciplinary background," she says. "It includes psychology, learning sciences, computer science, and also linguistics and communication. At a certain point it became clear to me that an increasing amount of human communication is now mediated by computers. I’ve also been interested in next-generation interfaces that more closely leverage human communication patterns. In the last 10 to 15 years I have been doing a lot of work on educational interfaces, so the intersection of these interests is represented in the book."
Modes of communication
Oviatt points out that human communication is multimodal; it’s not limited to spoken words, or even to vocalizations. Explaining to another person where something is or how to get there, for example, frequently involves gestures. That’s an example of spatial communication, which in turn stimulates spatial thinking. Keyboards, however, don’t support spatial content very well.
Some of Oviatt’s research compared students’ abilities to generate solutions to written science problems using one of four tools: pencil and paper; digital pen and paper; a pen tablet; or a graphical tablet with a keyboard and mouse available. The results showed that students who employed diagramming (a form of spatial communication) generated both more hypotheses and more correct solutions than those who used just the keyboard. “Spatial communication helped them think about verbal statements,” says Oviatt.
That result wasn’t surprising; what was surprising was that the students using digital pens did better than those using traditional pencil and paper. “When I first started doing this work, I didn’t necessarily assume that the input mode per se would have a major influence on kids being able to solve math or science problems. I didn’t expect that we would find a 10 to 40 percent difference in even being able to solve problems correctly.”
Oviatt ascribes the difference to the expectations students have of computers. “We believe that computers and input devices have affordances,” which she explains as people’s expectations about what they can do with an object, based on their experience and beliefs. “If you look at a doorknob, its appearance leads you to expect to act on it in a certain way. Likewise, the affordance most people experience is that computers are communication devices: email, texting, and so on. So when people use such a device, it encourages them to communicate more than they would with a non-digital tool. In all the studies we’ve run, students communicate more copiously when performing the same tasks on a computer than with a non-digital tool.”
Her extensive use of studies is characteristic of Oviatt’s work, according to Dr. John Sweller, Professor Emeritus in the School of Education at the University of New South Wales. “Unusually for this field, she is an expert on running randomized controlled trials testing the effectiveness of new techniques, rather than merely accepting the dictates of whatever the latest fad might be,” he says. “Her book puts forward a coherent argument based on the results of her experiments.”
Choosing the right interface
At the same time, Oviatt’s findings suggest computers aren’t just communication tools; they’re thinking tools, and restrictive interfaces lead to restrictive thinking. "New research reveals that some computer interfaces can substantially improve students’ performance beyond the level supported either by existing keyboard-and-mouse interfaces or by non-digital pen and paper tools," she writes. In response, she calls for "more expressively rich and flexible interface tools" that support multiple modalities. She cites a study in which elementary school students could "converse" with a digital fish while studying marine biology; they asked 100 to 300 questions an hour during the lesson, indicating a high level of engagement.
There’s still a problem, though: students will reach for standard interfaces even when those interfaces don’t support the task. “The evidence indicates that students most often prefer to use interfaces that look like a traditional computer, even though the correctness of their problem solutions drops a whole grade point when they do,” she writes. She points out that “teaching computers” generally covers only procedural skills, and she calls for educators to also discuss how to evaluate the proper tool for a given task.
The increasing use of mobile devices is helping drive the development of more flexible interfaces. “These days, people are increasingly seeing alternative means of communication,” she says. “Speech, touch, pens, sensors, and virtual keyboards are all being used and combined.” So far, though, such multimodal interfaces are mostly limited to manipulating the device, not the content.
"I’d like to think of the future as moving in the direction of combining touch-based gestural interfaces with things like pen-based input. The ideal way to do that is to have two-handed or bimanual input. People naturally hold a piece of paper with their non-dominant hand and rotate it or adjust it while, with their dominant hand, they may be sketching, writing, and entering content. Researchers have been working on two-handed interfaces, and also screens that can differentiate between gestural manipulation and pen input to be entered into the application."
Oviatt reiterates that her goal is not to simply make computers easier to learn and use. "Some of these alternatives also provide much better thinking tools," she says, "and nobody in the computer science field has really analyzed or thought about that. If my book does nothing else, I want it to at least bring to the computer science and engineering community the message that what we need in the future are better thinking tools. I’m not saying keyboards and mice need to disappear completely, but we need to be more judicious in using them only for the tasks for which they’re well-suited."
Logan Kugler is a freelance technology writer based in Silicon Valley. He has written for more than 60 major publications.