Countless people interface with assistive technologies today either because they use them, develop them, or both. Some technologies have existed for years, but many more are rapidly emerging, motivated by fast-paced developments in science and engineering and by the allure of enormous potential markets.
Newly emerging technologies include mobile video phones for people who use sign language in combination with texting; enhanced optical character recognition and speech-synthesis tools that read books aloud; machine-learning algorithms and positioning sensors that enable a person in a wheelchair to better navigate an environment; improved speech recognition hardware for more accurately inputting verbal commands to a computer, wheelchair, or handheld device; and tools for designing more accessible Web sites.
More than 40 million Americans identify themselves as having a physical disability, of which 12 million use a computer and 17 million work full time, according to the U.S. Census Bureau. Globally, the United Nations estimates more than 700 million people have a physical disability. That figure is expected to grow due to improved health care and other factors that are increasing overall life expectancies. Factoring in the family members of these hundreds of millions, the market for assistive technologies encompasses several billion persons, and universities, companies, and governments are ramping up to meet the demand.
Profound changes are taking place in the assistive technology industry due to advances in computing power, signal processing, data compression, materials science, miniaturization, cognitive research, and artificial intelligence algorithms, along with a host of legal mandates and a growing awareness that full access to technology makes the world a happier, smarter, and more productive place. Along with these technological advances, a 21st-century lexicon has emerged: people today talk about accessibility technology rather than assistive technology.
Accessibility technology guru Richard E. Ladner, a professor in computer science and engineering at the University of Washington and winner of the 2008 A. Nico Habermann Award, notes that people don’t want assistance; they want fair and equal access to computers, the Internet, consumer devices, and other aspects of 21st-century life no matter their preferences or needs. Ladner is also quick to point out that if anyone expects to work in the field of accessibility technology, they must understand the accompanying terminology and the mindset.
There are no homogeneous populations of accessibility technology users who can be lumped together by a common disability, Ladner says. There are only individuals who will evaluate the various accessibility tools made available and pick for themselves. "There are lots of examples of accessibility technology that were creative or inventive, but were never accepted," says Ladner. "People just want to live their lives, to succeed, and be happy. They will be the ones to decide if any particular technology is part of that equation, so one of the biggest challenges is to find solutions that work and will also be adopted by a community."
Hence, a growing focus today is on universal design, making the human-machine interface fully configurable and responsive to everybody’s needs with technology so customizable that it’s accessible to all. That’s the goal of today’s dynamic, constantly evolving landscape of accessible technology research initiatives and commercial products.
"It’s a Wild West out there," Ladner says. "In terms of the engineering alone, accessible technology research is a wide-open field, with an infinite number of solutions."
Accessible Text
In Japan, a great deal of effort has gone into text captioning to make video broadcasting more accessible to people who are hearing impaired. At Kyoto University, various projects emphasize speech recognition and language processing for spoken text. At NHK Laboratory, part of Japan Broadcasting Corp., work focuses on real-time captioning in which a TV announcer’s words are repeated by a speaker to produce a higher, more stable rate of speech recognition for translation into text.
Speech-to-text conversion is far from perfect, however, as it is affected by factors such as audio devices, speaking style, and ambient noise. At the IBM Tokyo Research Laboratory, Takashi Saito, manager of the Accessibility Center, leads a group focused on correcting speech recognition errors. "This is tedious work," says Saito. "First you listen to the audio to find errors in the text, then you delete the wrong characters, and then you input the correct characters. Our goal is to minimize the total time required for this process by simplifying the correction operations and minimizing the necessary keystrokes."
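The correction workflow Saito describes can be framed as a minimal edit problem: how few deletions and insertions turn the recognized transcript into the corrected one. The sketch below is a simple Python illustration of counting those keystrokes, not IBM's system; the example strings are hypothetical.

    # A minimal sketch (not IBM's correction tool) of estimating the cost of
    # fixing a speech recognition transcript: how many characters must be
    # deleted and retyped to turn the recognized text into the corrected text.
    import difflib

    def correction_cost(recognized: str, corrected: str) -> dict:
        """Count the delete/insert keystrokes implied by a minimal edit script."""
        matcher = difflib.SequenceMatcher(a=recognized, b=corrected)
        deletions = insertions = 0
        for op, a1, a2, b1, b2 in matcher.get_opcodes():
            if op in ("delete", "replace"):
                deletions += a2 - a1        # characters to remove
            if op in ("insert", "replace"):
                insertions += b2 - b1       # characters to type
        return {"deletions": deletions, "insertions": insertions,
                "total_keystrokes": deletions + insertions}

    if __name__ == "__main__":
        # Hypothetical example: the recognizer mishears one word.
        print(correction_cost("the whether today is sunny",
                              "the weather today is sunny"))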
Improving the quality of speech recognition is also playing a role in a proof-of-concept wheelchair at MIT. Finale Doshi, a graduate student in computer science, has designed a voice-activated wheelchair command system that uses machine learning to create and navigate a map of its environment. The person in the wheelchair issues verbal commands to the guidance system to move from point to point on the map. High-quality, easily trainable speech recognition devices that operate reliably are the key to making the wheelchair practical.
"People who use wheelchairs often have a lot of shaking, even people who don’t have several degenerative conditions," says Doshi. "It takes far less mental concentration to maneuver a wheelchair if you can issue commands verbally rather than manually. This is a very active area of research at MIT."
In Seattle, Ladner and his students in the Department of Computer Science and Engineering at the University of Washington have their own active areas of research. Their MobileASL project combines enhanced video compression with a cell phone configured as a video phone to provide more effective communication between people who sign and the remote translators who provide American Sign Language (ASL) and text relay services. The video lens sits on the same side of the device as the screen, which is split into two panels: one displays the remote translator, the other the cell phone user.
Ladner insists the raison d’être for all accessibility technology is to optimize people’s lives. "Accessible technology is about accepting, for instance, that people use sign language and making the phone adapt to their needs. It’s not about a prosthesis or replacing something that’s taken millions of years to evolve. Not everybody wants a cochlear implant, which requires major surgery and can cause problems with balance."
In conjunction with the Rochester Institute of Technology, Ladner’s group is also working to establish a DHH (deaf or hard of hearing) Cyber-Community between universities to increase enrollment of students who are deaf or hard of hearing in science, engineering, and mathematics from undergraduate levels through doctoral programs.
Also related to student access, Ladner’s group is developing a tool that translates textbooks, so a person who is blind can fully understand the content. "Between Braille and optical character recognition, words in textbooks are fairly accessible, but the figures are still difficult. We’re replacing figures with textures through an automated process using our Tactile Graphics Assistant," he says.
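As a loose illustration of one step such an automated pipeline might take, not the Tactile Graphics Assistant itself and with made-up figure data, a figure's grayscale regions could be quantized into a handful of levels, each assigned a distinct embossed texture.

    # A minimal sketch: quantize a grayscale figure into a few levels and
    # assign each level a distinct tactile texture for embossing.
    TEXTURES = ["blank", "fine dots", "coarse dots", "diagonal lines", "cross-hatch"]

    def assign_textures(gray_image, levels=len(TEXTURES)):
        """Map each pixel (0-255 grayscale, 0 = lightest) to a texture index."""
        step = 256 / levels
        return [[int(px // step) for px in row] for row in gray_image]

    # Hypothetical 3x4 figure region (0 = white, 255 = black).
    figure = [
        [  0,  40, 200, 255],
        [ 10,  90, 180, 240],
        [  0,  60, 130, 220],
    ]
    for row in assign_textures(figure):
        print([TEXTURES[i] for i in row])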
Ladner’s students are also contributing to the growing worldwide effort to improve Internet accessibility. "Unfortunately, a lot of Web pages are not all that accessible for people who are blind or dyslexic," he says. "Web designers use commercial development tools to make things look good, but don’t create a logical structure behind the page that’s navigable with a screen reader. Frequently, there’s also no alternative text inserted for figures."
In response, Ladner’s group has developed the WebInSight tool to infer the contents of a Web page and automatically insert alternative text. In addition, students Jeffrey P. Bigham and Craig M. Prince at the University of Washington are spearheading WebAnywhere, a low-cost, Web-based browser and self-voicing screen reader. (Commercial screen readers typically cost about $1,000.) WebAnywhere can also be used by developers to evaluate the accessibility of their Web designs.
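As a simple illustration of the underlying check, not the WebInSight system itself and with an invented sample page, a script can scan a page's markup for images whose alternative text is missing or empty.

    # A minimal sketch: flag <img> elements lacking alternative text so they can
    # be fixed by the author (or filled in automatically by a tool).
    from html.parser import HTMLParser

    class AltTextChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.missing = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                attrs = dict(attrs)
                if not (attrs.get("alt") or "").strip():
                    self.missing.append(attrs.get("src", "<unknown source>"))

    page = """<html><body>
      <img src="logo.png" alt="University of Washington logo">
      <img src="chart.png">         <!-- no alt text: silent to a screen reader -->
      <img src="photo.jpg" alt="">  <!-- empty alt text: fine only if decorative -->
    </body></html>"""

    checker = AltTextChecker()
    checker.feed(page)
    print("Images lacking alternative text:", checker.missing)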
World Wide Access
When it comes to the World Wide Web, a host of accessibility technologies are in play or under consideration around the globe. The ACM Special Interest Group on Accessibility, SIGACCESS, has been showcasing novel ideas about computers and accessibility at its annual ASSETS conference for more than 10 years.
University of Manchester researcher Simon Harper is chair of this year’s conference, which will be held in Halifax, Canada. "What we’re doing is not just for a small subset of people, but for everybody," says Harper. "Global positioning systems, for instance, got started as speech recognition and positioning systems for people who are blind."
Harper is among those at the Human Centred Web Lab at the University of Manchester working to increase Internet accessibility. "Web designers make a lot of mistakes when they’re designing Web sites, so we are studying how users interact with a dynamically updating page and where their attention is drawn to on the page," says Harper. "We believe by understanding how users who are blind interact with a page, we can create novel methods of making obfuscated structures, information, and semantics more explicit in the design. We can help designers better understand which things on a page should be spoken and which should be more silent."
Like many accessible technology researchers, Harper believes accessibility starts with the design. "It would cost nothing and would be very easy to make a Web site from the outset that’s supportive of accessible technology."
Vicki Hanson, chair of ACM SIGACCESS and a researcher at IBM’s T.J. Watson Research Center in New York, agrees. She adds, however, that the decision to design for accessibility is more than just a matter of cleaning up the Internet; it’s a matter of law.
"Section 508 of the Americans with Disabilities Act pertains to all businesses that the U.S. Government works with," Hanson says. "Every Web site for those businesses, and for all governmental agencies, has got to be designed for accessibility. Of course, if the costs are too prohibitive, it won’t happen for small businesses, so people in SIGACCESS are working to make accessibility features in software the standard, not something separate or different."
Cynthia Waddell, executive director of the International Center for Disability Resources on the Internet (ICDRI), says the move toward accessibility is a matter of international law. "When the U.S. government, the largest procurer of technology in the world, adopted Section 508 in 1998, people around the world started to realize they had better start to comply with best practices regarding accessibility. As of today, 126 countries have signed the 2006 U.N. Convention guaranteeing access to information and communication technology (ICT) for people with disabilities. So much has happened over the last 10 years, it’s almost unbelievable!"
ICDRI chair Mike Burks says accessible technology is about economics. "Some people maintain that pursuing accessible technology is too expensive, but people in the U.S. who have disabilities have an approximately 70% unemployment rate," says Burks. "That’s a huge price for any society to pay for ICT not being accessible to all."
Simon Harper, however, says accessible technology is about choice. "Every one of us is bizarrely unique, and in the real world we do things in many different ways," he says. "There is no single solution to accessibility technologies. The solution is to have a whole menu of solutions from which each of us can pick and choose."