Recall the last time you took a trip out of town. Perhaps you were traveling to a conference far from home. Remember the many forms of transportation you endured: cars, buses, airplanes, and trains. Not only were you responsible for moving yourself over a great distance, you had to move your things as well, including books and baggage. Remember the cramped spaces, sharp elbows, body aches, and exhaustion. Feel again your desire to simply be at your destination with your possessions intact . . .
Key Insights
- Ability-based design is a new design approach for interactive systems that focuses on people’s abilities in context, on what people can do, rather than on what they cannot do.
- Ability-based design scrutinizes the “ability assumptions” behind the design of interactive systems, shifting the responsibility of enabling access from users to the system.
- People’s abilities may be affected not just by disabilities but by disabling situations; designing for abilities in context leads to more usable, accessible systems for all people.
Such journeys remind us of our physical embodiment in the physical world, that much of our lived experience is fundamentally physical, and that we must contend with the world on physical terms. As computing professionals, we might be tempted to forget this, as our keystrokes summon data instantly from across the globe. But as humans, we still interact with that data through physical devices and displays using our physical senses and bodies. We and the world interact physically.
Civilization’s story of technological progress is in no small part the story of an increasingly built physical environment, from the pyramids to roads to skyscrapers to sanitation systems. Much of our energy, collectively and individually, goes into moving and shaping material for such purposes, altering the physical landscape and our movement through it. Some of our most thrilling experiences come by way of changing our bodies’ relation to that landscape: bungee jumping, skydiving, scuba diving, and riding a rollercoaster all provide radically new experiences for our bodies in the world.
As designers and builders of interactive systems for human use, we also play a central role in defining people’s relationship to and experience of the physical world.2,13,30 When we design things, we take mere ideas, things without form, and embody them in the world, whether simple sketches or cardboard mockups. They could be pixels on a screen or functioning digital devices. Regardless of the medium, to design and build things is to embody ideas that are then encountered and used by other embodied people.
This design-and-build activity is profound. It was not long ago in human history that giving form to the formless was considered the purview of the divine. In fact, the English verb “to create” comes from the Latin “creare,” which means to bring “form out of nothing.” When we design and build systems, we bring form out of nothing.
Unfortunately, unlike the divine, we cannot anticipate all the ways our designs will affect the people who encounter them. And when a mismatch arises, the world can become a very rigidly embodied place (see Figure 1).
Figure 1. A person in a wheelchair facing a flight of concrete stairs.
Many of the great breakthroughs in interactive computing have come as improved embodiments capable of transforming the way people experience the digital world. Sutherland’s interactive display and light pen in SketchPad,31 Engelbart’s and English’s mouse in NLS,4 and Apple’s iPhone all represent breakthrough embodiments. But a vital engineering insight is that they, as with all interactive technologies, include certain “ability assumptions” that must be met by human users. These assumptions are often unstated but alienating if they cannot be met.
An everyday example makes the point. In the student union building at the University of Washington in Seattle, wall-mounted touchscreens function as information kiosks for visitors (see Figure 2). In the on-screen operating instructions, a particular word stands out—“just,” as in, “just touch the screen.” In fact, touching the screen requires many abilities, including closing one’s hand, extending one’s index finger, elevating one’s arm, seeing the target, landing accurately, holding steady, and lifting without sliding—along with the ability to read and understand the instructions in the first place. There is clearly no “just” about it.
Figure 2. A wall-mounted touchscreen instructing users to “just touch the screen,” though a great many abilities are required to do so.
Where do ability assumptions come from? Designers and developers make assumptions from their own abilities, from the ones they imagine other people have, or the ones of the supposed “average user.”22 Unfortunately, each source of such assumptions is flawed. The first two are prone to bias and unrepresentative; the third, insidious for its statistical façade, does not reflect the diversity of human life.
On that point, Rose25 offered an anecdote from the U.S. Air Force. After World War II, it frequently lost pilots and planes in peacetime crashes—incredibly, 17 on one particular day—so it decided to redesign its cockpits to reduce “pilot error.” Air Force engineers measured 4,063 pilots along 140 dimensions, averaging these values to create cockpits to fit the mathematically average pilot. But a young Air Force scientist, Lt. Gilbert Daniels, questioned this approach. He took just 10 of the most important dimensions, added a tolerance of 30% of their ranges around their means, and compared every individual pilot to them to see how many of the 4,063 pilots aligned. The surprising result? Zero. Even among pilots recruited for their congruity, human diversity dictated that individual differences ruled. Only when the Air Force created pilot-configurable cockpits covering the 5th to 95th percentile of pilot measurements did the crashes decline.
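As a rough, hypothetical illustration of the arithmetic at work, the sketch below draws synthetic measurements for 4,063 "pilots" and counts how many are near-average on all 10 dimensions at once. The data, the two tolerance readings, and the code are our own simplification, not Daniels's measurements or method.

```python
# Synthetic illustration (not Daniels's actual measurements) of why "average"
# fits almost nobody once several dimensions must hold at the same time.
import numpy as np

rng = np.random.default_rng(0)
n_pilots, n_dims = 4063, 10
# Hypothetical, independent, roughly normal body measurements. (Real body
# measurements are correlated, which softens but does not remove the effect.)
measurements = rng.normal(size=(n_pilots, n_dims))

# Reading 1: within a band of +/-15% of each dimension's range around its mean
# (one reading of "a tolerance of 30% of the range").
tol = 0.15 * (measurements.max(axis=0) - measurements.min(axis=0))
ok_range = np.abs(measurements - measurements.mean(axis=0)) <= tol

# Reading 2 (stricter): within the middle 30% of pilots on each dimension.
lo, hi = np.percentile(measurements, [35, 65], axis=0)
ok_middle = (measurements >= lo) & (measurements <= hi)

for label, ok in [("range-based tolerance", ok_range),
                  ("middle 30% of pilots", ok_middle)]:
    count = int(ok.all(axis=1).sum())
    print(f"{label}: 'average' on all {n_dims} dimensions -> {count} of {n_pilots}")
```

The per-dimension fractions look forgiving, but requiring every dimension simultaneously shrinks the qualifying count multiplicatively; with real pilots, Daniels found none.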
Motivated by a need to make interactive computing systems that better match users’ abilities, we formulated “ability-based design,”37,38 aiming to create accessible technologies for people with disabilities and for people in disabling situations (such as in the dark or while walking in the cold or encumbered). Following our work on adaptive user interfaces9,10,11 and technologies for people on the go,15,24,32,33 ability-based design pursues an ambitious vision—that anyone, anywhere, at any time can interact with systems that are ideally suited to their situated abilities, and that the systems do the work to achieve this fit. Here, we expound this vision and describe the steps we have taken toward achieving it.
Ability and Disability
It helps to be explicit about the term “ability.” For our purpose, a useful definition comes from the Oxford dictionary: “Possession of the means or skill to do something”a (emphasis ours). The focus is on acting in the world, not just thinking about it.
Defining the term “disability” is thornier. In 1976, the World Health Organization (WHO) defined disability as, “Any restriction or lack … of ability to perform an activity in the manner or within the range considered normal for a human being”39 (emphasis ours). Thankfully, in 2001, this normative language yielded to the International Classification of Functioning, Disability, and Health,b authored and adopted by WHO, identifying disability as a complex interaction among an individual, activity, society, and the environment, both social and physical. Indeed, research has illuminated just how much social factors play a role in the experience of disability.28,29
When considering disability, ability-based design goes further. If “ability” is about having the means or skill to do something, then “disability” means simply being unable to do something. Disability becomes something one experiences rather than something someone has or is. Following such a view, everyone experiences disability, because everyone lacks the means or skill to do quite a few things, at least in certain circumstances. Designing for abilities applies to all people.
We call this perspective the “positive affirmation of ability,” namely that all people have abilities, some more than others, and designers and developers ought to create systems for people with abilities of all kinds and degrees. Likewise, Newell22 referred to “extra-ordinary abilities,” saying, “common sense and observation show us that every human being has . . . abilities, some of which can be described as ‘ordinary’ and some of which are very obviously extra-ordinary.” The focus is not on disability but on the diversity of human ability.
Ability is thus like weight or height—it is positive-valued only. Nobody has dis-weight or dis-height; neither are there disabilities, only abilities. Any experience of disability is not attributable to a person but to a mismatch between a person’s abilities and the ability assumptions of the environment. Like the proverbial water in a glass half full, abilities are only present and “designed for,” not absent and “filled in.”
This view of “design for” rather than “fill in” is not the historical view. Filling in for lost abilities has been the norm. From early human history through World War II and after, the approach has been to restore whatever was lost (such as an arm or a leg). People were expected to adapt themselves to the environment, whether physical or social, as they found it, with little hope that society would meet them halfway.
Although such attitudes have improved, designers and developers still often take a similar stance with interactive computing systems. When users’ abilities fail to match the ability assumptions underlying today’s interactive computing systems, the burden usually falls on the users to make themselves amenable to those systems, and the systems remain oblivious to the users doing it (see Figure 3).
Figure 3. Users adapting themselves to the ability assumptions of their input devices—keyboards and trackballs—which are oblivious to their contortions.
Ability and Situation
The experience of disability applies to us all. With the proliferation of smartphones, tablets, and wearables, we increasingly interact with systems in situations that challenge our abilities.
Consider how the physical environment of “the computer user” has changed from the 1980s to today. A typical computer user in the 1980s would have been seated at a stable work surface with ample lighting, controlled temperatures, quiet surroundings, and relatively few distractions. Today, with computing pervading so many aspects of life, “computer users” interact off-the-desktop while adapting to dynamic, distracting environments and their movements through them.7 An example is how users interact in “four-second bursts”24 when walking with smartphones, constantly diverting their attention from and returning to their screens. And yet, with the exception of a few research prototypes (such as in Mariakakis et al.19), smartphones are oblivious to users’ behaviors, unchanging from the street to the cafe to the library to the office.
Researchers have identified “situational impairments” caused by changing situations, contexts, and environments, using the language of disability and accessibility.7,22,27,33,38 Sears and Young27 said, “Both the environment in which an individual is working and the current context . . . can contribute to the existence of impairments, disabilities, and handicaps.”
This observation has grown even more relevant in the 15 years since it was made. In Stockholm, Sweden, city officials have erected street signs alerting drivers to watch out for people texting while walking. In Seoul, South Korea, some sidewalks are divided into two lanes, one for those intent on walking while staring at their phones, and the other for those who promise to refrain. In the U.S., the Utah Transit Authority imposed a $50 fine for “distracted walking,” including walking while texting. And the city of Honolulu adopted the Distracted Walking Law, banning even just looking at a screen while in a crosswalk. Alarmingly, the Federal Communications Commission estimates that at any daytime moment in the U.S., 660,000 people are interacting with their smartphones while driving.c
If we are to design for human ability, disabling situations must be addressed. Unfortunately, our interactive computing systems know little about their users’ abilities, attention, situations, contexts, and environments. A great many factors can impair use (see Table 1), yet few of them are detected, accommodated, or used as a basis for discouraging or deferring interaction.
Table 1. Situational factors that can limit our physical and cognitive abilities and affect our interactions with technology.
Toward Ability-Based Design
Addressing such concerns while providing a unified approach to designing for people of all abilities is why we pursued ability-based design,37,38 a design approach in which the human abilities required to use a technology in a given context are scrutinized, and systems are made operable by or adaptable to alternative abilities. Emerging from our work on adaptive user interfaces,9,10,11 ability-based design is characterized by the designer’s focus on what people can do, rather than on what they cannot do, and on systems and environments adapting to users rather than the other way around. Examples include desktop interfaces that customize their designs based on how a user moves a mouse,10 touch surfaces that observe complex motor-impaired touch sequences and resolve intended touch points,21 and mobile touch keyboards that sense and accommodate walking to improve accuracy.12
Strategies
Ability-based design is pragmatic, concerned with abilities insofar as they are useful for design. It is thus strategy-agnostic, embracing multiple methods for achieving successful user-technology fits. Strategies include automatic ability-based adaptation; high configurability by the end user; ability-specific customization by a third party; and having multiple designs for alternative abilities. Regardless of which one is employed, ability-based systems do the work to match users’ abilities, not burdening users with having to satisfy a system’s rigid ability assumptions.
Employing a visual language developed by Edwards,3 we outline a successful user-system fit in Figure 4a, where a user’s abilities match a system’s ability assumptions. In traditional assistive technology, when they do not match, as in Figure 4b, the burden falls on the user to become amenable to the system by procuring an adaptation. The adaptation fits and makes the user “seem normal” to the system. With ability-based design, this burden is reversed (see Figure 4c); it is the user’s abilities that dictate what the system must do to make itself amenable to the user. For example, the system might adapt or be adapted to match the user’s abilities.
Figure 4. User abilities and a system’s ability assumptions: (a) user abilities match a system’s ability assumptions; (b) in assistive technology, the user acquires an adaptation to remedy a mismatch; and (c) in ability-based design, user abilities drive changes in the system.
Ability-based design differs from traditional assistive technology by eschewing user-procured adaptations like the one in Figure 4b in favor of on-board adaptability. When on-board adaptability is not possible or practical, assistive technologies can still meet the objectives of ability-based design if they are well matched to the user’s abilities and not burdensome to procure. In cases where assistive technologies are used, ability-based systems should be aware of their use and do whatever they can to make that use as uninhibited as possible.
Ability-based design also relates to universal design.18 Arising from the field of architecture, universal design readily applies to built structures and spaces and has been extended to physical and digital products as well. Universal design is the process of designing places and things so they are usable by people with the greatest range of abilities possible. Ability-based design creates designs that match the abilities of individual users to the greatest extent possible. Ability-based design is thus one way to realize the ambitions of universal design. Unlike universal design, however, we created ability-based design with interactive computing in mind, so sensing, adapting, and configuring are presumed technology possibilities. While ability-based design might not natively apply to immutable concrete stairs, as in Figure 1, it would thus ask how future stairways (or wheelchairs) might use sensing, adapting, and configuring to prevent accessibility barriers.
Other strategies for designing for diverse abilities exist and are similar to ability-based design insofar as they consider users’ abilities and the role of the environment. For example, inclusive design16,23 seeks to eliminate design choices that cause exclusion by revealing designer biases through participatory methods, field observations, and empathy building. Among the foci of inclusive design is understanding user capabilities, similar to ability-based design.
A key difference between ability-based design and both universal design and inclusive design is one of focus and approach. Universal design and inclusive design focus on creating designs that are for general widespread use, including by people with specific interface needs. Ability-based design promotes creating general interfaces with the flexibility to address a range of users, as well as tailored interfaces specific to subgroups or even to an individual user. Ability-based design potentially has broader reach since it embraces both flexible-general and tailored-specific interfaces in its scope and approach.
With ability-based design, there is also a subtle but important difference in focus by the researcher, designer, or developer. With universal design or inclusive design, the focus is on creating an interface that can accommodate as many people as possible. With ability-based design, the focus is on the abilities of the individual user. All three approaches might at times produce similar designs, but with ability-based design, the focus is on optimizing the experience for individual users according to their abilities and contexts.
Contexts Limiting Technology Use
Ability-based design considers a broad range of contexts that impair technology use. We define a space with two axes: location and duration (see Figure 5). The location of a limitation ranges “from within the self” to “from outside the self.” Limitations arising from within the self are present in almost any context. Examples are a spinal cord injury, a toddler’s undeveloped psychomotor control, and being asleep. Changing a person’s context has little effect on the limitations arising from such internal states.
Figure 5. Contexts that impair one’s ability to use technology are defined by location and duration. What advances in sensing and computing might enable systems to better serve their users across a range of contexts?
In contrast, limitations arising from outside the self are present primarily due to context, and therefore changeable. Astronauts have remarkable physical abilities, but while spacewalking, expressing many of those abilities is quite difficult. Even an Olympic athlete can do little when confined to a prisoner’s straitjacket. The external context severely limits the person’s expressible abilities.
Intermediate points also exist on the location axis, where the mixture of self and environment limit ability. One example of a mixed-location limitation is photosensitive epilepsy, where a flashing light might induce seizures. If not for the flashing light, seizures would not be triggered. In this example, a part of the person and a part of the environment combine to pose a possible limitation.
On the other axis, the duration of a limitation ranges from “ephemeral” to “enduring.” An ephemeral limitation lasts only briefly and changes quickly; one example is the lack of a usable arm because a person is carrying an infant. Next, short-term limitations can arise from many causes, including inebriation, illness, and an ankle sprain. Limitations might also be enduring, even lifelong, as with, say, those caused by age-related declines, spinal cord injuries, incurable diseases, lifetime imprisonment, or irreversible brain damage.
Our argument is not that the lived experience of a person with one arm is the same as that of a person carrying an infant. Situational impairments are neither subjectively nor objectively anything like long-term limitations. Rather, the argument is that technology designs that are useful to people with certain long-term limitations might also be useful to people in certain disabling situations. A technology design for a person with one arm also might be useful for a person carrying an infant. Using an ability-based lens helps one recognize such design opportunities.
Assistive technology focuses mainly on compensating for long-term limitations within a person, as in Figure 5, bottom right. Ability-based design considers a larger space of limitations that impair technology use.
Design Principles
By adopting ability-based design in numerous projects, we have formulated and refined seven design principles to guide our work (see Table 2). The first three are required of any ability-based design project and relate to the designer’s attitude and approach, or “stance.” The next two relate to adaptive or adaptable user interfaces, and the final two to sensing and modeling users and contexts. Taken together, they can help guide designers and developers creating ability-based systems.
Table 2. Seven principles of ability-based design, updated and revised from previous versions.37,38
Example Projects
Our development of ability-based design was and continues to be highly iterative and inductive, arising from research projects that both preceded and followed its initial formulation. Here, we highlight a number of projects to illustrate the possibilities for ability-based design:
SUPPLE. SUPPLE9,10,11 was an automatic user-interface generator that used decision-theoretic optimization to help choose interface widgets and layouts that were optimized for a user’s preferences, visual abilities, and motor abilities. For optimizing motor performance, SUPPLE first presented the user with a series of basic pointing, clicking, dragging, and list-selection tasks.10 It then built regression models capturing the relationship between task parameters and user performance, using these models to guide the optimization process such that the interface being generated was predicted to be the fastest to operate by the user. Each user thus received a custom user interface, optimized for that user’s particular abilities.
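To convey the flavor of this approach, here is a minimal sketch, entirely our own simplification rather than SUPPLE's actual decision-theoretic optimizer: a per-user movement-time model is fit from a few calibration trials, and then candidate widget renderings (hypothetical ones here) are chosen to minimize predicted operation time within a size budget.

```python
# A minimal sketch of ability-based interface optimization in the spirit of
# SUPPLE, not SUPPLE's actual algorithm. All calibration data, candidate
# widgets, and costs below are hypothetical.
import itertools
import numpy as np

# (1) Calibration trials: distances D, target widths W (px), observed times T (s).
D = np.array([120, 300, 80, 500, 250, 400], dtype=float)
W = np.array([20, 20, 60, 10, 40, 15], dtype=float)
T = np.array([0.61, 0.88, 0.42, 1.35, 0.70, 1.10])

ID = np.log2(D / W + 1)          # Fitts' index of difficulty
a, b = np.polyfit(ID, T, 1)      # user-specific model: time = b + a * ID

def predicted_time(distance, width):
    return b + a * np.log2(distance / width + 1)

# (2) Candidate renderings per element: (name, target width px, height px, taps needed).
candidates = {
    "volume":  [("slider", 12, 30, 1), ("buttons", 40, 60, 2)],
    "channel": [("spinner", 15, 30, 3), ("list", 35, 120, 1)],
}
HEIGHT_BUDGET = 160              # px of screen height available

best, best_time = None, float("inf")
for combo in itertools.product(*candidates.values()):
    if sum(c[2] for c in combo) > HEIGHT_BUDGET:
        continue                 # layout does not fit; skip it
    time = sum(c[3] * predicted_time(200, c[1]) for c in combo)
    if time < best_time:
        best, best_time = combo, time

print("Chosen renderings:", [c[0] for c in best], f"(~{best_time:.2f}s predicted)")
```

A user whose model rewards large targets would be handed the buttons-and-list layout; a user whose movements are slow but precise would get the compact slider-and-spinner layout instead.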
In a quantitative study in 2008 involving people with motor impairments,11 SUPPLE’s custom interfaces were 26% faster and 73% more accurate to use than the default interfaces provided by manufacturers of popular desktop software applications. SUPPLE thus helped close more than 60% of the performance gap between people with and people without motor impairments, making access more equitable. Qualitatively, it was apparent how SUPPLE was optimizing interfaces based on different abilities; for example, SUPPLE gave people with muscular dystrophy interfaces with small, densely packed targets able to support slow, short, deliberate movements. In contrast, SUPPLE gave people with cerebral palsy interfaces with large, spread-out targets divided among different tabs, compatible with fast but error-prone movements. SUPPLE had no declarative knowledge of either muscular dystrophy or cerebral palsy, generating its user interfaces solely from observed input performance.
The SUPPLE approach was used in subsequent projects. For example, in SPRWeb,6 SUPPLE’s personalized optimization approach was used to recolor websites, adapting them to the individual color-vision abilities of users with color-vision deficiencies. SPRWeb also aided users in color-limiting or color-altering situations, including glare and low-light conditions.
SUPPLE exhibited the first six principles of ability-based design and was the original system that inspired many of the ideas now found throughout ability-based design.
Slide Rule. Slide Rule14 was a mobile screen reader that made touchscreens accessible to blind users by leveraging multi-touch gestures and audio feedback. It was an example of making systems usable to people with abilities different from what device manufacturers originally intended. Slide Rule addressed a pressing challenge emerging in 2007 from the advent of touchscreen smartphones: How would a blind person interact with a phone having buttons that person could not feel? At the time, smartphones had little or no accessibility support, and many people presumed touchscreens could not be made usable for blind people. Slide Rule developed a set of gestures and the first finger-driven screen-reading techniques to enable blind people to access and control smartphone touchscreens.
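A minimal, hypothetical sketch of the interaction technique follows. The item layout, speech call, and gesture handlers are placeholders of our own, not Slide Rule's or VoiceOver's actual code, but they illustrate finger-driven reading with second-finger tap activation.

```python
# Sketch of finger-driven screen reading: touching speaks the item under the
# finger; a second-finger tap activates the item most recently spoken.
from dataclasses import dataclass

@dataclass
class Item:
    label: str
    x: int
    y: int
    w: int
    h: int

    def contains(self, x, y):
        return self.x <= x < self.x + self.w and self.y <= y < self.y + self.h

def speak(text):                      # stand-in for a text-to-speech engine
    print(f"[speech] {text}")

class FingerDrivenReader:
    def __init__(self, items):
        self.items = items
        self.focused = None           # item most recently spoken

    def on_touch_move(self, x, y):
        """Primary finger explores the screen; speak items as they are crossed."""
        hit = next((it for it in self.items if it.contains(x, y)), None)
        if hit is not None and hit is not self.focused:
            self.focused = hit
            speak(hit.label)

    def on_second_finger_tap(self):
        """A second-finger tap activates the focused item."""
        if self.focused is not None:
            speak(f"Activated {self.focused.label}")

reader = FingerDrivenReader([Item("Call", 0, 0, 100, 60), Item("Messages", 0, 60, 100, 60)])
reader.on_touch_move(50, 30)      # speaks "Call"
reader.on_touch_move(50, 90)      # speaks "Messages"
reader.on_second_finger_tap()     # activates "Messages"
```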
We became aware from a personal communication in 2010 that Slide Rule inspired aspects of Apple’s VoiceOver screen reader for iOS. Indeed, Slide Rule’s finger-driven screen reading, swipe gestures, and second-finger tap can all be found in VoiceOver today.
Slide Rule exhibited the first three principles of ability-based design; it also exhibited the fourth and sixth principles, as its screen reader could adapt to the speed of users’ movements, tailoring its performance to theirs. The underlying principles demonstrated in Slide Rule have survived into today’s touchscreen systems.
Walking user interfaces. Today’s smartphones are portable but not truly mobile because they support interaction only poorly while moving; for example, walking divides attention,24 reduces accuracy,17 slows reading speed,26 and impairs obstacle avoidance.32 We conducted multiple projects to improve interaction while walking, focusing on people’s abilities while on the go.
In our early exploration of walking user interfaces,15 we studied level-of-detail (LoD) adaptations, where the interface shown while a user was standing had high detail and the interface shown while a user was walking had low detail, with larger fonts and bigger targets. When a user moved from standing to walking and vice versa, the interface changed. We compared this adaptive interface to component static interfaces for both walking and standing, finding that walking increased task time for static interfaces by 18%, but with our adaptive interface, walking did not increase task time. We also found that the adaptive interface performed like its component static interfaces; that is, there was no penalty for the LoD adaptation.
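A minimal sketch of the level-of-detail idea follows, with illustrative thresholds of our own rather than the study's actual implementation: walking is inferred from accelerometer variance, and the layout switches between the high-detail and low-detail versions whenever the inferred state changes.

```python
# Sketch of level-of-detail adaptation driven by a walking detector.
# WINDOW and WALK_THRESHOLD are illustrative values, not the study's.
from collections import deque
import statistics

WINDOW = 50            # recent accelerometer-magnitude samples (roughly 1-2 s)
WALK_THRESHOLD = 0.8   # variance above which we assume the user is walking

class LevelOfDetailUI:
    def __init__(self):
        self.samples = deque(maxlen=WINDOW)
        self.mode = "standing"

    def on_accelerometer(self, magnitude):
        self.samples.append(magnitude)
        if len(self.samples) < WINDOW:
            return
        walking = statistics.variance(self.samples) > WALK_THRESHOLD
        new_mode = "walking" if walking else "standing"
        if new_mode != self.mode:
            self.mode = new_mode
            self.apply_layout()

    def apply_layout(self):
        if self.mode == "walking":
            print("Layout: fewer items, larger fonts, enlarged touch targets")
        else:
            print("Layout: full detail, standard fonts and target sizes")

ui = LevelOfDetailUI()
# Steady readings (standing), then a burst of varying readings (walking).
for m in [9.8] * 50 + [8.0, 11.5, 9.0, 12.1] * 15:
    ui.on_accelerometer(m)
```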
In our subsequent project, called WalkType,12 we made mobile touch-based keyboards almost 50% more accurate and 12% faster while walking. Touch-based features like finger location, duration, and travel were combined with accelerometer features like signal amplitude and phase to train decision trees that reclassified wayward key-presses. WalkType effectively remedied a systematic inward rotation of the thumbs caused by whichever foot was moving forward as the user walked.
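The gist of this pipeline can be sketched as follows, with hypothetical features and toy training data rather than WalkType's actual feature set or corpus: per-tap touch features and accelerometer features feed a decision tree that maps each tap to the key the user most likely intended.

```python
# Sketch of key-press reclassification from combined touch and motion features.
# Feature names and training rows are hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier
import numpy as np

# One row per tap: [touch_x, touch_y, tap_duration_ms, finger_travel_px,
#                   accel_amplitude, accel_phase]; label = intended key index.
X_train = np.array([
    [102, 310, 85, 1.2, 0.9, 0.1],
    [ 98, 312, 90, 0.8, 1.1, 0.7],
    [140, 305, 70, 2.0, 1.0, 0.2],
    [143, 308, 75, 1.5, 0.8, 0.8],
])
y_train = np.array([0, 0, 1, 1])         # e.g., 0 = 'a', 1 = 's'

model = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)

new_tap = np.array([[121, 309, 80, 1.4, 1.0, 0.5]])   # ambiguous landing point
print("Reclassified key index:", model.predict(new_tap)[0])
```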
Performing input tasks is only one challenge while walking. Consuming output is another. In SwitchBack,19 an attention-aware system for smartphones, a smartphone’s front-facing camera was used to track eye-gaze position on the screen to aid task resumption. For example, when a user was reading and looked away, SwitchBack remembered the last-read line of text; when the user’s gaze returned to the screen, that same line was highlighted to draw the user’s attention for easy task resumption.
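Conceptually, the resumption logic can be sketched like this (a simplification of our own; estimating gaze from the front-facing camera is assumed to happen elsewhere): remember the line being read when gaze leaves the screen, and highlight it when gaze returns.

```python
# Sketch of gaze-aware task resumption: track the last-read line and highlight
# it when the user's gaze comes back to the screen.
class ResumptionAid:
    def __init__(self, line_height_px=40):
        self.line_height = line_height_px
        self.last_line = None
        self.on_screen = True

    def on_gaze(self, gaze):
        """gaze is None when no eyes are detected, else (x, y) in screen pixels."""
        if gaze is None:
            self.on_screen = False              # user looked away
            return
        line = int(gaze[1] // self.line_height)
        if not self.on_screen:                  # user just looked back
            self.on_screen = True
            if self.last_line is not None:
                self.highlight(self.last_line)
        self.last_line = line

    def highlight(self, line):
        print(f"Highlighting line {line} to support task resumption")

aid = ResumptionAid()
aid.on_gaze((200, 130))   # reading line 3
aid.on_gaze(None)         # user looks away
aid.on_gaze((200, 135))   # gaze returns: line 3 is highlighted
```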
These three walking user interfaces exhibited all seven principles of ability-based design to varying degrees.
Global Public Inclusive Infrastructure
Ability-based design has been applied mostly at the level of individual systems and applications, but for greater impact, a new infrastructure that extends beyond the user’s own device is needed. Although the Global Public Inclusive Infrastructure (GPII),34,35 with its cloud-based auto-personalization of information and communication technologies, was formulated independently of ability-based design, its objectives are the same—enable interfaces to be ideally configured to match each user’s situated abilities.
The GPII is built on three technological pillars.35 The second, “auto-personalization,” is the one of interest here.d Its long-term goal is to ensure that any digital interface a person encounters instantly changes to a form that can be understood and used by that person. The GPII’s auto-personalization capability uses a person’s needs and preferences, which are stored in the cloud or on a token, to automatically configure the interface of each device for that individual.34,36 Its “one size fits one” approach is designed to help each person have the “best fit” interface possible. Since interface flexibility on current devices and software is limited, GPII auto-personalization uses both built-in features and assistive technologies (AT) (on the device and in the cloud) to achieve each best-fit interface. For example, accessibility features located in five layers—operating system features, installed AT, browser features, cloud AT, and Web app features—can be configured to work together to provide best-fit user interfaces, with features at each level being invoked (or not) in order to meet the user’s needs and preferences.
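To make the layered configuration concrete, here is a minimal, hypothetical sketch, using a plain dictionary rather than the GPII's actual preference format or APIs: each stored preference is routed to the first layer on the current device that can honor it.

```python
# Sketch of "one size fits one" auto-personalization across accessibility layers.
# The preference keys and layer capabilities below are illustrative only.
preferences = {                      # a user's stored needs and preferences
    "font-size": "200%",
    "high-contrast": True,
    "screen-reader": True,
    "captions": True,
}

layer_capabilities = {               # what each layer on this device can configure
    "operating-system": {"font-size", "high-contrast"},
    "installed-AT":     {"screen-reader"},
    "browser":          {"font-size"},
    "cloud-AT":         {"captions", "screen-reader"},
    "web-app":          {"captions"},
}

def apply_preferences(prefs, layers):
    """Assign each preference to the first layer able to honor it."""
    plan, unmet = {}, []
    for key, value in prefs.items():
        layer = next((name for name, caps in layers.items() if key in caps), None)
        if layer is None:
            unmet.append(key)
        else:
            plan.setdefault(layer, {})[key] = value
    return plan, unmet

plan, unmet = apply_preferences(preferences, layer_capabilities)
for layer, settings in plan.items():
    print(f"{layer}: {settings}")
print("Unmet preferences:", unmet or "none")
```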
GPII auto-personalization supports interfaces that self-adapt, as well as configuration of interfaces and adaptations, to match a user’s needs. By combining auto-adjusting interfaces, preference-configured interfaces, and user-selected-and-configured AT, the GPII can function as a bridge among these approaches, maximizing the utility of each one for an individual at any point in time. The GPII also supports auto-configuration based on contextual changes.40 The GPII thus meets all seven principles of ability-based design.
Taking Up the Challenge
In pursuing these and other projects, we have seen some patterns emerge. For example, we noticed a perspective shift as we began to actively seek out the abilities people have, inspiring an openness to consider how we could create or change technologies to suit different abilities. We also noticed a seamlessness between designing for people with limited abilities and designing for people in ability-limiting situations. We realized accessibility is indeed a worthy goal for all users. Because we were looking to modify systems, not users, we deemphasized assistive hardware add-ons. Customization arose from a powerful sequence of sensing, modeling, and adapting; it also arose from support for end-user configurability, as with the U.S. Air Force cockpits mentioned earlier. We thus made our interactive systems more aware of their users and contexts.
Where does ability-based design go next? One way to answer is to treat the vision of ability-based design as a grand challenge and ask what it would take to create a world in which anyone, anywhere, at any time could interact with technologies that are ideally suited to his or her situated abilities. Achieving the “anyone anywhere any time” part will require systemwide infrastructure of the kind pursued by the GPII. Ability-aware operating systems infused with SUPPLE-like user-interface generators could help create personalized applications. Improved sensing and modeling of users’ abilities and contexts, as in walking user interfaces, could enable mobile and wearable systems to better support diverse contexts of use. One challenge is to avoid explicit task-based training and calibration in favor of implicit observation and modeling from everyday use, as in Evans and Wobbrock5 and Gajos et al.8
To date, ability-based design has focused primarily on single-user experiences, but the social lives of users could also lend themselves to collaborative support. How should the abilities of a pair, group, team, crowd, or organization be considered? For service arrangements, what would it look like to have an ability-based design for services?
Moreover, abilities exist on many levels, from low-level sensorimotor and cognitive abilities, to mid-level abilities for daily living, to high-level social, occupational, professional, and creative abilities. Such abilities form a hierarchy paralleling Maslow’s hierarchy of needs,20 whereby each need corresponds to an ability to meet it. Ability-based design seems applicable throughout such a hierarchy, but the range has yet to be explored.
Concerning “adaptivity,” providing each individual with a unique user interface raises several pragmatic issues, as in, say, authoring help documentation, providing customer support, and making the design process of personalized experiences consistent with accepted design practice. These challenges are real but, as we discuss elsewhere,9 solvable.
With the vast range of human abilities from which to draw, adaptivity based on sensing and modeling is a powerful way to realize custom designs that, while inevitably imperfect, nonetheless provide good user-system fits at scale. Adaptive interfaces can remember users’ abilities and preferences and draw on them when generating interfaces for both familiar and unfamiliar systems, providing more satisfying and effective access for each individual user. We thus see an important and continuing role for adaptivity and personalization within ability-based design.
We close with a quote from Frank Bowe (1947–2007), professor and disability-rights activist who helped instigate the Americans with Disabilities Act of 1990 (https://www.ada.gov/). Writing in MIT Technology Review in 1987, he emphasized the importance of focusing on what people are able to do, not on what holds people back:1 “When society makes a commitment to making new technologies accessible to everyone, the focus will no longer be on what people cannot do, but rather on what skills and interests they bring to their work. That will be as it always should have been.”
We could not agree more.
Acknowledgments
We wish to thank our co-authors on the projects we covered here, especially Jeffrey Bigham, Leah Findlater, Jon Froehlich, Mayank Goel, Susumu Harada, Alex Mariakakis, Shwetak Patel, and Daniel S. Weld. This work was supported in part by the Mani Charitable Foundation and the National Science Foundation under grants IIS-0952786 and CNS-1539179. Any opinions, findings, conclusions or recommendations expressed in this work are those of the authors and do not necessarily reflect those of any supporter or collaborator.