Research and Advances
Computing Applications Contributed articles

A Blind Person’s Interactions with Technology

Meaning can be as important as usability in the design of technology.
BrailleNote from HumanWare

Current practice in computer interface design often takes the user’s sightedness for granted. But a blind user employs a combination of other senses to accomplish everyday tasks, such as having text read aloud or running fingers along a tactile surface to read Braille. Designers of assistive technologies must therefore pay careful attention to the alternatives to sight that engage a blind user in completing tasks. It may be difficult for a sighted designer to understand how blind people mentally represent their environment or how they apply alternative strategies to accomplish a task. Designers have responded to these challenges by developing alternative modes of interaction, including audible screen readers,11 external memory aids for exploring haptic graphs,20 non-speech sounds for navigating hypermedia,16 two-finger haptic interfaces for touching virtual objects,22 haptic modeling of virtual objects,13 and multimodal (auditory, haptic, visual) feedback for simple computer-based tasks.10 The effectiveness of these alternative modes of interaction is studied primarily through a usability framework, where blind and visually impaired users interact with specific devices in a controlled laboratory environment. These assistive technologies deliberately exploit the alternative modes of interaction available to blind users.

Physical obstacles are not the only considerations affecting interaction between blind users and everyday artifacts. As we found in this study, elements of meaning, such as socialization, efficiency, flexibility, and control, strongly influence the use of both digital and non-digital artifacts by blind users. Taken-for-granted factors, such as an individual’s social ties or busy schedule, might determine whether and how an object is used. Therefore, designers may need to pay close attention to the external factors that influence an individual’s choice and use of technology. Conversely, and equally important, designers must also consider how an individual’s internal values and desires affect their technology preferences.

The study described here is an in-depth exploratory and descriptive case study24 of a blind individual using various technologies in her home. Previous studies in lab settings compared interactions against a set of heuristics or with a control group, allowing researchers to isolate events in order to understand how users interact with specific technologies on a narrow range of tasks. We took this study out of the lab and into the home to get a better sense of the nuances of everyday life influencing how a blind user interacts with technology. It differs from the usability approaches in several ways. First, we wanted to look across a range of technologies for common kinds of task failure and workarounds, rather than at a single technology or task. Second, because emerging technologies involve a choice of what to place in hardware and what to place in software, such as whether to have physical or virtual buttons on a cellphone, we wanted to investigate user interaction with both digital and physical objects to better understand the trade-offs in hardware vs. software design choices. Third, the investigation was situated within the individual’s home rather than in the laboratory to better understand artifact use in a naturalistic setting. And fourth, our interviews concerned not only usability but also aesthetics, affect, meaning, historical associations of use in context, and envisioning of future technologies. Overall, we were concerned with what technologies were most valued and used, when they were used and for what purpose, the difficulties experienced in their use, the workarounds employed, and the meanings and interpretations associated with their use.

Without careful consideration of both the limitations in usability and the meaning of the interactions affecting blind users, sighted technology designers may unwittingly create interfaces with the wrong affordances or ones dissonant with a user’s personal preferences, resulting in task failure. It is already known that the visually impaired must make alternative accommodations to accomplish the same tasks day in and day out. What is less well known is how much influence an individual’s personal values and surroundings have on the choice of where, when, and how technology is used. Observations in a user’s home of interactions with existing technologies may provide insight into the way surroundings and personal preferences are drawn on to help complete daily tasks.

As we suggest in the study, the combination of functionality and socially situated meaning determines for the user the actual usability of a technology to accomplish specific tasks. These technologies hold meaning that affects the ways individuals understand themselves in relation to the communities to which they belong.


Background

Developing the study, we drew on a number of literatures, including work on assistive technology for people with visual impairments, task breakdowns and workarounds, and design ethnography in the home:

Design ethnography. The study design reflects Clifford Geertz’s view that “man is an animal suspended in webs of significance he himself has spun.”8 Significance is constructed not only from behavior and discourse, but in the materials with which people interact. Many are mundane objects—measuring cups, cellphones, sticky notes. And yet, as Csikszentmihalyi and Rochberg-Halton6 wrote, these objects become infused with meaning through use and association. “Humans display the intriguing characteristic of making and using objects. The things with which people interact are not simply tools for survival or for making survival easier and more comfortable. Things embody goals, make skills manifest, and shape the identities of their users. Man is not only homo sapiens or homo ludens, he is also homo faber, the maker and user of objects, his self to a large extent a reflection of things with which he interacts. Thus objects also make and use their makers and users.”6 If, as technology designers, we desire to improve the human condition through our intentional acts of design, then our central concern should be the ways in which technologies are woven into human webs of significance.

In order to elicit a more holistic perspective on the usability of artifacts for a blind individual, we extended the study of human-machine interaction from the workplace into the home, as have other recent researchers.2,4 Drawing on traditional ethnographic methods used in the social sciences,8,18 this research examines the situated, physical interactions between people and artifacts, as well as the meanings people attribute to specific technologies and the personal perspectives they bring to their interactions. As Bell et al.2 note: “The potential situated meanings of domestic technology are fluid and multiple, connecting with a range of discourses, such as work, leisure, class, religion, age, ethnicity, sex, identity, success. Meaning may also be embodied in artifacts through the historical contexts of use.”

Though undertaking this investigation from the comfort of our university lab would have been convenient for us as researchers, doing so would have undervalued the importance of place in evoking the meaning of everyday things. Homes are not just shelter, but places where people dwell, where one finds the “‘lived relationships’ that people maintain with places.”1 Our intention was that by observing and interviewing in our informant’s place of dwelling, deeper associations of significance would be evoked related to the objects found there. As the ethnographer Keith Basso1 points out, “places possess a marked capacity for triggering acts of self-reflection, inspiring thoughts about who one presently is, or memories of who one used to be, or musings on who one might become. That is not all. Place-based thoughts about the self lead commonly to thoughts of other things—other places, other people, other times, whole networks of associations that ramify unaccountably within the expanding spheres of awareness that they themselves engender.”

Breakdowns and workarounds. We are interested in both the successes and failures a nonsighted person experiences in interaction with technological artifacts. We are particularly interested in understanding task failures, what Winograd and Flores21 called “breakdowns,” in that they reveal what is often invisible during successful artifact use. Task failures are unsurprising, given that many of the artifacts used daily by people who are blind have been constructed in a coevolved biological and social world in which sight is the norm. Nor are task failures necessarily terminal; often they merely mark the point at which the user implements an alternative means of continuing the task, whether another method of task completion, outside help, or a decision to discontinue the task entirely. We are interested in the reasons for breakdowns and in the adaptive strategies, or workarounds, that are developed to carry out necessary tasks. “New design can be created and implemented only in the space that emerges in the recurrent structure of breakdown.”21 By focusing on the point at which a blind user detours from the designer’s intended interaction, we begin to understand what motivates each workaround. We thus focus our data collection and analysis on the kinds of workarounds a nonsighted person adopts in carrying out everyday tasks and their implications for design.

Assistive technology. General guidelines exist for providing universal access to computing technology. One of the most influential is the W3C’s Web Content Accessibility Guidelines (WCAG) 1.0,23 which includes “Provide equivalent alternatives to auditory and visual content” and “Ensure user control of time-sensitive content changes.” They sensitize designers to the fact that people interacting with their Web sites might not all have the same physical and cognitive abilities. Still, universal guidelines can easily obscure differences between people with different abilities, providing little guidance for designing for different interactional needs. For example, the guideline “Don’t rely on color alone” from WCAG 1.0 is of little use in designing Web sites for people with total blindness.
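The “equivalent alternatives” guideline above can be made concrete in a small, hypothetical example (not part of the original study): a naive scan for images that lack alt text, which a screen reader such as JAWS would have nothing to announce for. Real accessibility audits are more nuanced (an empty alt is acceptable for decorative images), so this is only a sketch of the idea.

```python
# Sketch: flag <img> tags with no alt text, one WCAG 1.0 concern
# ("provide equivalent alternatives to visual content").
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images lacking alt text

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "?"))

checker = AltTextChecker()
checker.feed('<img src="chart.png"><img src="logo.png" alt="Company logo">')
print(checker.missing)  # -> ['chart.png']
```

A screen reader confronted with the first image can report only that a graphic exists, not what it depicts, which is exactly the vague feedback Sara describes later in the article.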

Research focused on people with visual impairments has yielded a number of guidelines tailored more specifically to this population, such as “Non-speech sounds should be used to provide information and feedback about commands or events rather than verbal message”16 and “[provide] multimodal feedback as a means of improving task performance, especially for individuals with visual impairments.”10 By employing a more ethnographically centered approach, we expand on previous studies, building on and complementing that research. Doing so, we hope to further understand how a blind user’s experience and social context in addition to physical limitations affect the use of technology. Moreover, by more narrowly focusing on a computer user among the 0.03% of people in the U.S. with congenital blindness5 (as opposed to, say, someone from among the 3.33% of the U.S. population with age-related macular degeneration10,19), we hoped to develop insights more specific to this smaller population. This is consistent with Newell and Gregor’s concern17 that “…except for a very limited range of products, ‘design for all’ is a very difficult, if not often impossible task” since “[p]roviding access to people with certain types of disability can make the product significantly more difficult to use by people without disabilities, and often impossible to use by people with a different type of disability.”


Our interviews concerned not only usability but also aesthetics, affect, meaning, historical associations of use in context, and envisioning of future technologies.


We focused on someone with congenital blindness for two reasons: The first was personal, since the first author has a close friend with congenital blindness, and the project was inspired by informal discussion and interaction with this friend. The second was our belief that working with someone who had never had even residual sight would help highlight our taken-for-granted knowledge as researchers. Moggridge15 captures this perspective: “When we want to learn about people, it is important to include some who represent critical extreme values of the relevant characteristics, to avoid the trap of designing only for the average.” We undertook this research as sighted “outsiders,” not as members of a blind community. In this regard, our perspective in relation to the research subject is much like that in a contextual inquiry,3 where the design researcher seeks to understand the situated work practices in a particular setting through observation and discussion during the performance of the practices in situ.


Method

This case study of a congenitally blind college student, Sara (name changed to maintain confidentiality), took place in six sessions of approximately two hours each over a four-week period in February and March 2006. The first author conducted all interviews, which were tape-recorded as contemporaneous notes were taken. These sessions were conducted in Sara’s home, where she demonstrated tasks and shared her feelings about the artifacts she used. Adapting Blythe et al.’s “Technology Biographies”4 in these sessions, particularly “Technology Tours,” “Personal History,” and “Guided Speculation,” we asked Sara to choose software and non-software artifacts to demonstrate and discuss.

She shared her BrailleNote, a chordal keyboard combined with refreshable Braille display and voice output, and demonstrated how she reads and writes using the device. She showed how she uses her cellphone to send and receive phone calls. She demonstrated her use of a Braille labeler to create embossed Braille tape she used to label the buttons on a microwave. She showed how she uses her Language Master (a voice-output electronic dictionary), including how she uses it as a thesaurus, to play games, and to look up words. She discussed her CD collection and demonstrated how she searches for and plays CDs. She demonstrated her screen-reading software (Job Access With Speech, or JAWS) by navigating a class-discussion Web site. She demonstrated how she uses plastic measuring cups for everyday cooking tasks and how she tells time using her tactile wristwatch.

We applied the Technology Tours, asking how items are used (“How would you go to a link in a Web page using JAWS?”) and observing her use of each object while concurrently listening to (and recording) her descriptions of her own actions. Using Personal History questions, we asked her to recall early memories of each object, as well as how she felt when she used it. Finally, using Guided Speculation, we asked her to describe any desires she had for each object or task in the future. In adapting Technology Biographies to our setting, we borrowed a protocol suited to context-specific elicitation of the technologies-in-use that are part of Sara’s everyday world.

Sara demonstrated and discussed a variety of digital and non-digital artifacts she selected herself. We asked her to select them for two reasons: First, it allowed her to choose those she felt comfortable with and that were personally important to share in the context of the study. Second, allowing this breadth of artifacts extended the range of observations and topics discussed, contributing to the depth of the analysis of her interactions overall. In this sense, we can be confident that insights we draw across these digital and non-digital artifacts are representative of Sara’s character and intentions.


We focus our data collection and analysis on the kinds of workarounds a nonsighted person adopts in carrying out everyday tasks and their implications for design.



Analysis

Throughout our note-taking and debriefings of interview sessions, we worked iteratively to capture our insights about limitations and workarounds, validating early conjectures with the subsequent data we collected. We shared our insights with Sara in subsequent interviews, soliciting her feedback and asking for additional clarification. We provide a brief example of this type of analysis for both a digital and a non-digital object in the remainder of this section.

We also summarize a sample of the data and insights from Sara’s interactions in the table here. For most of the actions demonstrated, Sara had a workaround for situations where the default method failed her. For instance, with JAWS, Sara retraced her steps until she was able to accomplish her task. Similarly, she navigated her extensive CD collection through a mix of spatially memorized locations and linear search. In each interaction, she negotiates efficient ways to accomplish her tasks. Other objects, such as the tactile watch, cellphone, and labeler, reflect the importance of social context and independence in her choice of object and task or as a cause of frustration.

Tactile watch. Sara’s tactile watch has Braille-like dots to mark the time on a clock face and a clear glass cover over it to protect the dots and watch hands. She easily flips open the lid to feel the time the hands point to. Interviews revealed her desire to avoid the kind of attention a talking digital watch might attract.

Sara: “I have a couple of talking watches, too. I just feel, I don’t know like, I have a weird thing, I don’t want to say that it’s a bad thing or in any way put those things down, but I personally feel sort of embarrassed when I have to push a button and then people hear it and they’re all like ‘What is that?’ and it just kind of draws attention to me, I feel like, in the wrong way—in a way I don’t want attention drawn to me. So I just kind of try to avoid that as much as possible.”

Unlike her talking watches, Sara found her tactile watch to be quiet, unobtrusive, and efficient at helping her tell the time. She also said that while the tactile watch was convenient and discreet enough for telling time, it lacked a built-in alarm function; instead, she relied on other electronic timekeeping devices for her morning alarm. She also described the delicate nature of the watch’s physical makeup, sharing anecdotes of how easily the glass cover cracked or how frequently the batteries died and the inconvenience it caused. She also talked about preferring the aesthetic appeal and comfort of the tactile watch compared to her talking watches:

Interviewer: “Are there other preferences you have to your [tactile] watch, as opposed to the talking watches?”

Sara: “It’s more comfortable… The other ones kind of look like big clunky sports watches. Sort of chunky. I just feel like it’s more comfortable.”

JAWS screen reader. Sara’s JAWS screen reader works alongside her Windows operating system. She uses it to read aloud the text in the applications on her monitor by controlling an on-screen cursor through a series of hotkeys. Sara uses JAWS as a means to use her software applications: instant messaging, email, browsing the Internet, word processing, and backing up CDs.

Although JAWS increases her access to her computer, many interaction issues remain. For example, because JAWS is a text screen reader, it does not recognize pictures and graphics (ranging from chat emoticons to navigation tools on Web sites) and often gives vague feedback in describing where a graphic is placed in a document or Web page. One of the biggest challenges of using a screen reader is orientation and navigation. If Sara moves to another task or accidentally hits the wrong hotkey, she might find herself in an unfamiliar virtual setting that requires her to suspend the current task, reorient herself, then resume where she left off. Her tenacity in the face of these obstacles is illustrated in the following transcript segment in which she is trying to navigate through frames on a course home page to get to a discussion board.

Sara: “…I’m going to go back into the links list [JAWS speaks through the links in order: “communication, assignments, rules, contacts…”] no, silly, I wanted to go to discussion board. [tries a few more links, and the computer says them out loud] okay, it’s not in the right place where I thought it was. Let me try that again, I’m sorry. [starts through the links list again] Okay, discussion board, I’m tabbing through this time and not going through the links [the computer talks] I’m on there, c’mon go back to the discussion board [silence, then the computer speaks] come on… Go to the discussion board. Now. [the computer speaks again] okay, let’s try this again. I’m going to right-click it. I press the right mouse button, which is this one… Okay, press Enter and see if I go anywhere. Why is it misbehaving?”

At this point, having tried all that she could think to do, Sara is frustrated and anxious to move on. It is only on starting over by reentering the URL of the Web page and carefully stepping through each action that she finally is able to find the discussion board.

Sara: “Okay, now it’s taking me back to the home page. So let’s try this again. Okay, I’m going back into the links list. Pressing Enter. Come on… [quiet for some time; the computer appears to be opening the link and suddenly speaks again] Oh, here we go. [the links list appears on the screen] Okay, now let’s go to the discussion board. ‘D’ for discussion board. [the computer goes through the details for the page, a heading, the number of links, and more].”

Sara employs two specific, strategic workarounds here. First, she tries all the options available to her. When none lead to the expected outcome, she aborts the original operation and begins again. Both tactics are brute-force, when-all-else-fails solutions that are time-consuming and sometimes frustrating but that are most likely to yield desired results. As multiple programs are always running on the computer, just making a diagnostic check of where things are “located” can be time-consuming and difficult. This sometimes poses limitations on the usability of JAWS; Sara’s workaround here is to repeatedly try different operations until her intentions are fulfilled. From this experimentation and practice, she is able to learn pragmatically what works and what doesn’t.

Though the quotes indicate the considerable usability problems Sara encounters in using JAWS, they also point to issues of socially situated meaning. As a student, she relies on the computer and Internet for social connections and course-related communication. The high cost of performing relatively simple operations, which are error-prone and resist efficient workarounds, limits Sara’s ability to access the information required to be a full participant in courses and to engage in social interaction online.


Insights

Having examined limitations and workarounds in isolation, we identified recurrent themes common across objects and tasks. Sara’s actions and associations with objects and tasks were guided by both the usability of the object and the meaning she accorded the task. A strong personal preference or a significant item or task often motivated her to overcome physical obstacles at almost any cost. In the table, the “Workaround” column lists the alternatives Sara employed to get around limitations. We also recorded the personal assessments Sara offered of each workaround, showing her distaste for very inconvenient ones, such as getting a friend to help. These unsatisfactory workarounds were often avoided, which generally meant the task was avoided as well. When Sara was required to type in the letters shown in a picture to gain access to a Web site, her frustration with the available workarounds (calling tech support or getting sighted help) led her to drop any Web site that required such actions. Items or actions that held little personal significance were easier to pass up if the physical interaction was too time-consuming or made her uncomfortable.

Our focus on usability and socially situated meaning generated, from Sara’s workarounds, several insights into what motivates her use of an artifact. The following sections provide a general classification of the issues Sara faced when interacting with artifacts and technology. Specific personal preferences included her motivation to engage seamlessly with her environment, a world often contextualized for sighted people. Facets of design included those areas of interaction that caused her frustration, such as lack of control, or that created or eliminated barriers to content, such as tactile feedback.

Socialization within a predominantly sighted community. Several of Sara’s decisions reflect her desire to be included in her community of sighted friends and family and to include others in her life as smoothly as possible. Some choices she makes include using a tactile watch and prominently displaying a bulletin board of print-labeled photographs on her wall. Asked why she had these labeled photographs, she said they serve as a conversation piece when sighted friends visit. She also said “I’m the only blind person on campus and I don’t know, I just try to integrate myself into the world and in that sense, you know, as much as possible.” It is important to consider design ideas supporting cohesive socialization with the people within Sara’s social sphere. Showing off her BrailleNote, she said she prefers reading Braille, as opposed to listening to talking software, because it is quiet.

She also said that carrying around her awkwardly shaped labeler makes her feel self-conscious and expressed frustration when she is not acknowledged in casual social situations due to her blindness. A concrete design modification she suggested is to allow a Braille labeler to make print labels. A dual print labeler would allow her to create labels so she could better share mixed CDs she makes for friends.

Independence. Sara is independent and tackles issues from multiple sides until she reaches a solution. Object design should support her ability to be independent and not require sighted help. Sara’s independence was highlighted when she talked about taking a cab when needed, rather than relying on friends and relatives for transportation. She relishes the freedom her cellphone gives her, providing easy access to others only if in need.

Control. For Sara to be able to maintain her independence, she must be able to control significant factors that ultimately help her accomplish her tasks. Design should grant the user full control of as many functions as possible and allow switching between interaction modes in different contexts. Evidence of Sara’s desire to be in control came from her tendency to stick with tasks and objects she finds comfortable, avoiding things she can’t do, such as going to Web sites with special accessibility pages. In working with JAWS software, she showed tenacity in trying all possible options before asking for sighted help.

Efficiency. Compensating for the lack of sight is often time-consuming; for example, if Sara does not remember where she has placed a CD, she must carry out a linear search: pulling a single disc case from its position on her CD shelf, reading the Braille label on the case, replacing the CD, and moving on to the next. For enhanced usability, efficiency is an important factor to consider in an object’s design. Sara’s accurate memory and learned procedures help her use certain items quickly. This efficiency allows her to focus on the enjoyment of certain items and tasks, such as using her CD player, rather than on the mechanics of carrying out the tasks. Conversely, inefficiency increased her frustration and time to completion, as when she had to reorient herself while using JAWS.
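The efficiency gap Sara describes can be made concrete in a small, hypothetical sketch (not drawn from the study): her fallback linear search reads up to one Braille label per case on the shelf, while a memorized spatial location is a single recall, analogous to a dictionary lookup.

```python
# Sketch: linear search through shelf positions vs. a memorized-location lookup.
def linear_search(shelf, title):
    """Pull each case in order and read its Braille label."""
    for position, label in enumerate(shelf):
        if label == title:
            return position
    return None  # exhausted the shelf without finding the CD

def memorized_lookup(locations, title):
    """Recall a memorized spatial position directly."""
    return locations.get(title)

shelf = ["Abbey Road", "Kind of Blue", "Blue Train", "Graceland"]
locations = {label: i for i, label in enumerate(shelf)}

print(linear_search(shelf, "Graceland"))     # up to len(shelf) label reads -> 3
print(memorized_lookup(locations, "Graceland"))  # one recall -> 3
```

Both strategies find the CD, but the linear search costs time proportional to the collection size, which is why Sara reserves it for cases her memory does not cover.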

Portability. Sara’s strong ties to her cellphone can be attributed, in part, to its small, easy-to-carry size. In contrast, she expressed annoyance toward objects that were not as easily portable, such as her large and awkwardly shaped Braille labeler. Object portability increases efficiency, supports independence, and eases integration within Sara’s social world.

Distinguishability of similars. Usability increases when Sara is able to distinguish among similar item features. Where a sighted user distinguishes similar features, such as the buttons on a CD player or the CD cases in a collection, by reading written labels or by seeing each item’s position in a larger spatial context, Sara has much more difficulty. She labeled items in Braille that were otherwise difficult to distinguish and used her hands, fingers, feet, nose, and ears to see what her eyes could not. Each opportunity for sensing and distinguishing can be exploited by technology designers. Design that aids identification and distinguishability of otherwise similar features, such as CD cases with preprinted Braille identifiers and cellphone keys with textured surfaces, enhances ease of use, flexibility, and efficiency. Sara showed how, with deft fingers, she is able to distinguish different-size measuring cups held together by a common ring. She complained that some cellphones lacked keys she could identify by touch.

Brute-force backup. One fallback problem-solving technique is to exhaustively try all possibilities. When Sara became disoriented while demonstrating her Language Master and JAWS software, she systematically tried every available option. Given her self-described disorganization, her linear search method for CDs is her most effective tactic, though also very time-consuming.

Flexibility and interoperability. Sara took notes and read books on her BrailleNote, but her model lacks external storage, except for a floppy drive, and does not include access to the Internet. By considering how people will use particular devices when carrying out larger task landscapes,14 such as recognizing that Sara might want not only to take class notes but also to share those notes with friends over the Internet, designers increase opportunities for use. Despite the many uses of her BrailleNote, Sara wanted a laptop computer due to its flexibility for Internet access, communication, and storage.


Conclusion

We draw two main conclusions from the study. The first is methodological. We used an ethnographically inspired investigation of a single nonsighted individual’s interactions with a variety of artifacts in her own home. Our concern with not only usability but also socially situated meaning contrasts with experimental designs focused on measurement and control and with lab-based usability studies. Although we can state our conclusions as principles associated with this individual’s preferences and beliefs, such statements are animated by the ways in which we observed them. We are unsurprised that, for instance, Sara’s sense of self is intimately tied to her relationships in her social network. How could it not be? We are, however, surprised by the specific ways in which this was embedded in her choices about artifacts and interactions. Such surprise—at her wall of textually labeled photographs as conversation pieces for her sighted friends, her preference for a tactile watch because of how it looks and feels, and her use, as a child, of her talking dictionary as an ice-breaker with friends—undoubtedly reflects our situatedness (as researchers and as people) within our own social worlds, the taken-for-granted assumptions we carry with us as sighted people.

To what extent are we able to generalize these results to other contexts and other people? Though single-person case studies are rare in human-computer interaction, they have a long history in the social and behavioral sciences; influential single-participant studies include Freud's in psychoanalysis,7 Harper's in sociology,9 and Luria's in brain/mind studies.12 One goal of case-study research is to develop theoretical propositions that can be used to guide subsequent research studies and design efforts. However, it is important to understand the difference between these analytic generalizations and the statistical generalizations common in experimental study designs. Emphasizing this distinction, Yin24 writes, "'How can you generalize from a single case?'… The short answer is that case studies, like experiments, are generalizable to theoretical propositions and not to populations or universes. In this sense, the case study, like the experiment, does not represent a 'sample,' and in doing a case study, your goal will be to expand and generalize theories (analytic generalizations) and not to enumerate frequencies (statistical generalization)." He further points out, "Scientific facts are rarely based on single experiments; they are usually based on a multiple set of experiments that have replicated the same phenomenon under different conditions."

We do not claim that our results generalize to all people in any particular group, not even provisionally. Neither do we claim that Sara is typical of any group, such as people who are blind or people with physical disabilities. However, there is nothing so particular about Sara that precludes these results from applying to others who might be similarly situated within their physical and social worlds.

Our focus on a single individual across a range of tasks and artifacts allowed us to seek coherence in the themes and patterns that spanned many aspects of her life, something we might have missed had we instead looked at a small range of tasks and artifacts across several individuals. Also, the range of tasks, including those involving both computational and non-computational artifacts, increases our confidence that we captured the key issues characterizing Sara’s interaction with technology. Enabling more access to technology may in fact require that we look at increasingly specific populations so as to tailor technologies more closely to people’s needs. However, our case study, with its rich data across a variety of interactions, provides a set of hypotheses that can be used comparatively in other studies.

Our second conclusion reiterates that it is the combination of functionality and socially situated meaning that determines who will use technology and how it will be used. Simply replacing one interaction mode, such as the display of text on a screen with a functionally equivalent mode as in speaking the text aloud, is not necessarily equivalent from the point of view of user experience. This is because functional equivalence might not account for the meaning of the mode of interaction for particular users in specific contexts. Efforts to provide multimodal support for people with perceptual and/or motor disabilities10,22 are encouraging, not simply because of the increased physico-cognitive support they provide. Rather, multimodal support offers the possibility of using different modalities on different tasks and in different contexts, but only if the designer allows this degree of user control of interaction mode.

Paradoxically, increasing and universalizing access to technology might require attending to the specific and situated meanings of technology use by particular populations in particular settings. Because technologies invisibly embed taken-for-granted assumptions concerning trade-offs in functionality, usability, and situated meaning, developing an understanding of these trade-offs for particular people and populations can improve technology access for increasing numbers of people.


Acknowledgments

We thank Sally Fincher for initial project brainstorming, key bibliographic references, and feedback on drafts; Genevieve Bell for helping overcome challenges in subject recruitment and for suggestions on carrying out ethnographic fieldwork; Lisa Tice for utilizing her professional network to help with recruitment; Mark Blythe for providing information on his protocols; Skip Walter and Eli Blevis for feedback and encouragement as we were completing our fieldwork; Donald Chinn and Carol Hert for support throughout; and Youn-kyung Lim for her many suggestions for improving the reporting of these results. And thanks most especially to Sara for her benevolence, sincerity, and willingness to share her life and to Laurie Rubin for her inspiration, motivation, and support.

This is a revised version of an earlier paper presented at the Ninth International ACM SIGACCESS Conference on Computers and Accessibility (Tempe, AZ, Oct. 15–17, 2007); http://www.sigaccess.org/assets07/ and appears at http://portal.acm.org/citation.cfm?doid=1296843.1296873.


Figures

UF1 Figure. BrailleNote from HumanWare.

UF2 Figure. Braille watch with retractable glass cover and tactile numbers and hands.

UF3 Figure. Braille labels help identify objects and content.


Tables

UT1 Table. Limitations and workarounds.


    1. Basso, K. Wisdom Sits in Places. University of New Mexico Press, Albuquerque, NM, 1996.

    2. Bell, G., Blythe, M., and Sengers, P. Making by making strange: Defamiliarization and the design of domestic technologies. ACM Transactions on Computer-Human Interaction 12, 2 (June 2005), 149–173.

    3. Beyer, H. and Holtzblatt, K. Contextual Design. Morgan Kaufmann Publishers, San Francisco, CA, 1998.

    4. Blythe, M., Monk, A., and Park, J. Technology biographies: Field study techniques for home use product development. In CHI02 Extended Abstracts on Human Factors in Computing Systems (Minneapolis, MN, Apr. 20–25). ACM, New York, 2002, 658–659.

    5. Bouvrie, J. and Sinha, P. Visual object concept discovery: Observations in congenitally blind children and a computational approach. Neurocomputing 70, 13–15 (Aug. 2007), 2218–2233.

    6. Csikszentmihalyi, M. and Rochberg-Halton, E. The Meaning of Things: Domestic Symbols and the Self. Cambridge University Press, Cambridge, U.K., 1981.

    7. Freud, S. Dora: An Analysis of a Case of Hysteria. Collier Books, New York, 1963.

    8. Geertz, C. The Interpretation of Cultures. Basic Books, New York, 1973.

    9. Harper, D. Working Knowledge: Skill and Community in a Small Shop. University of Chicago Press, Chicago, 1987.

    10. Jacko, J., Moloney, K., Kongnakorn, T., Barnard, L., Edwards, P., Leonard, V., and Sainfort, F. Multimodal feedback as a solution to ocular disease-based user performance decrements in the absence of functional visual loss. International Journal of Human-Computer Interaction 18, 2 (2005), 183–218.

    11. Kurniawan, S., Sutcliffe, A., Blenkhorn, P., and Shin, J. Investigating the usability of a screen reader and mental models of blind users in the Windows environment. International Journal of Rehabilitation Research 26, 2 (June 2003), 145–147.

    12. Luria, A. The Man with a Shattered World: The History of a Brain Wound. Harvard University Press, Cambridge, MA, 1987.

    13. Magnusson, C., Rassmus-Grohn, K., Sjostrom, C., and Danielsson, H. Navigation and recognition in complex haptic virtual environments: Reports from an extensive study with blind users. In Proceedings of Eurohaptics 2002 (Edinburgh, U.K., 2002).

    14. Mirel, B. Interaction Design for Complex Problem Solving. Morgan Kaufmann Publishers, San Francisco, CA, 2004.

    15. Moggridge, B. Designing Interactions. MIT Press, Cambridge, MA, 2006.

    16. Morley, S., Petrie, H., O'Neill, A., and McNally, P. Auditory navigation in hyperspace: Design and evaluation of a non-visual hypermedia system for blind users. Behaviour & Information Technology 18, 1 (Jan. 1999), 18–26.

    17. Newell, A.F. and Gregor, P. User-sensitive inclusive design—In search of a new paradigm. In Proceedings on the 2000 Conference on Universal Usability (Arlington, VA, Nov. 16–17). ACM Press, New York, 2000, 39–44.

    18. Spradley, J. Participant Observation. Thomson Learning, 1980.

    19. U.S. Census Bureau. Population Estimates 2000 to 2006; http://www.census.gov/popest/states/NSTannest.html.

    20. Wall, S. and Brewster, S. Providing external memory aids in haptic visualisations for blind computer users. In Proceedings of the Fifth International Conference on Disability, Virtual Reality, and Associated Technologies (Oxford, U.K., Sept. 2004).

    21. Winograd, T. and Flores, F. Understanding Computers and Cognition: A New Foundation for Design. Ablex Publishing Corporation, Norwood, NJ, 1986.

    22. Wood, J., Magennis, M., Arias, E., Gutierrez, T., Graupp, H., and Bergamasco, M. The design and evaluation of a computer game for the blind in the GRAB haptic audio virtual environment. In Proceedings of Eurohaptics 2003 (Dublin, Ireland, July 2003).

    23. World Wide Web Consortium. Web Content Accessibility Guidelines 1.0, 1999; http://www.w3.org/TR/WCAG10/.

    24. Yin, R. Case Study Research, Design and Methods. Applied Social Research Methods Series, Vol. 5, Third Edition. Sage Publications, Thousand Oaks, CA, 2003.

    DOI: http://doi.acm.org/10.1145/1536616.1536636
