Modern office work often demands juggling many tasks while responding quickly to requests from both machines and coworkers. Staying productive in the face of this onslaught of distractions is a growing challenge, and tools to help workers do so are a perennial topic in human-computer interaction research. In this chaotic environment, knowledge workers have to act more like the conductor of an orchestra than someone playing a single instrument, says Aaron Quigley of the University of St. Andrews, U.K.
Quigley and his colleagues have found that they can actually reduce distraction by providing a user with more information about new events, as long as they present the new information in the right way. In a paper presented this spring at the International Conference on Intelligent User Interfaces (IUI), they described a system that highlights only what has changed on a display while the user was not attending to it. "Lots of people have been looking at different ways of managing attention," such as notifications of urgent messages, says Quigley’s colleague in the University of St. Andrews’ School of Computer Science, Per Ola Kristensson. "What we’re doing here is focusing on how to manage people’s inattention."
For their initial exploration, the researchers used a workstation with several large screens, a setup that has become rather common. To determine which screen the user was looking at, graduate student Jakub Dostal mounted commercial cameras on each display and developed software that, in a controlled setting, identified the attended display with 98% accuracy.
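The paper does not reproduce the detection code, but the basic idea can be sketched. Here is a minimal, hypothetical illustration (not the researchers’ implementation) assuming one webcam per monitor and OpenCV’s stock frontal-face detector: the display whose camera sees a frontal face is treated as the attended one.

```python
# Hypothetical sketch of per-display attention detection, not the
# researchers' actual code: one webcam per monitor, and the display
# whose camera sees a frontal face is treated as the attended one.
import cv2

# OpenCV ships a pretrained frontal-face Haar cascade.
CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def attended_display(camera_indices):
    """Return the index of the display the user appears to face, or None."""
    for display, cam_index in enumerate(camera_indices):
        cap = cv2.VideoCapture(cam_index)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # A frontal face in this camera suggests the user faces this display.
        faces = CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            return display
    return None

if __name__ == "__main__":
    print("Attended display:", attended_display([0, 1]))
```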
One well-known technique for reducing distraction, based on gaze direction, is to completely dim the displays at which the user is not looking. However, this does not take advantage of peripheral vision. In the new system, which the researchers call Diff Displays, they explored a less-extreme approach, using four visualization methods to highlight what has changed most recently.
In the simplest method, when the user returns their gaze to a previously unattended screen, the view simply dissolves from what they last saw there to an updated view over a couple of seconds. In the other methods, the system continually updates the unattended screens, highlighting pixels or windows that have changed recently. "We want them to know what they don’t know, but without them actively having to do something to gain that information," says Quigley.
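As an illustration of the pixel-level variant, here is a minimal sketch in the spirit of Diff Displays, though not the researchers’ implementation: it dims the unattended screen’s image and restores full brightness only where pixels differ from a snapshot taken when the user last looked away. The file names are placeholders.

```python
# Hypothetical sketch of pixel-level change highlighting, in the spirit
# of Diff Displays but not the researchers' implementation: dim the
# unattended screen's image, then restore full brightness only where
# pixels differ from the snapshot taken when the user last looked away.
import numpy as np
from PIL import Image

def highlight_changes(last_seen, current, dim=0.4, threshold=16):
    """Return `current` dimmed everywhere except where it differs from
    `last_seen` by more than `threshold` in any color channel."""
    old = np.asarray(last_seen, dtype=np.int16)
    new = np.asarray(current, dtype=np.int16)
    # A pixel counts as changed if any channel moved by more than `threshold`.
    changed = (np.abs(new - old) > threshold).any(axis=-1)
    out = (new * dim).astype(np.uint8)      # dim the whole screen...
    out[changed] = new[changed].astype(np.uint8)  # ...except changed pixels
    return Image.fromarray(out)

if __name__ == "__main__":
    # Placeholder file names for the two screen snapshots.
    before = Image.open("screen_when_user_looked_away.png").convert("RGB")
    after = Image.open("screen_now.png").convert("RGB")
    highlight_changes(before, after).save("diff_highlight.png")
```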
A major motivation for this scheme is to reassure the user that he or she will be informed if something important happens, says Quigley. With ordinary systems, by contrast, "people are constantly shifting their gaze and shifting their attention to look at the other display, just to see if something actually has changed. So it can be very distracting."
Testing whether this subtle highlighting really works is challenging, however. Although the gaze-tracking software makes it easy to determine where the user’s attention lies at any moment, the effect on distraction "has to be tested in a real work environment," says Kristensson, as a user is juggling multiple real-world tasks.
For this reason, rather than present many subjects with idealized tasks, the researchers followed a single knowledge worker through a week of work. "It was a difficult decision" to use only one subject, says Kristensson, but "as an initial study, I think this is the best way to do it in this case."
The user spent the first day viewing an ordinary always-on display, while the other days successively employed the four Diff Displays visualization methods. Although there was some variation, all four methods significantly reduced how often the user switched between displays, from almost 300 times per hour to 200 or fewer. "I think it is indicative that this kind of constant window-switching behavior might be a bigger problem than [most people] think it is," says Kristensson.
Unfortunately, says Mary Czerwinski of Microsoft Research, the use of a single subject makes any conclusions about this particular scheme untrustworthy, because people vary widely in the way they work. Still, she notes that interface designers have long recognized the importance of controlling users’ attention to changes.
Roel Vertegaal of Queen’s University in Kingston, Ontario, also is dubious, although "as a general topic area I think it’s really interesting," he says. Back in 2003, Vertegaal edited a special issue of CACM on "attentive user interfaces," and while the specific technique seems novel, he is not convinced it is a major advance over other recent work.
Kristensson cautions that the initial, non-intrusive camera-based technique may prove inadequate for the ultimate goal of tracking attention shifts between monitors and smartphones, for example. "If you want to expand this to heterogeneous environments," he says, "you would probably want to use a different set of technologies."
In addition, Kristensson says, a practical application should allow designers or users to determine which applications or events lead to which kind of interruption. "Everything should be personalized," agrees Czerwinski. The technology to learn user preferences and customize notifications has been around for at least a decade, she says, and she believes it is finally close to widespread adoption: "There’s no reason why this technology shouldn’t be in our products today." Gaze-based techniques, by contrast, may have to wait a bit longer.
Don Monroe is a science and technology writer based in Murray Hill, NJ.