The increasing use of artificial intelligence (AI)-based technologies in everyday settings creates new opportunities to understand how disabled people might use these technologies.2 Recent reports by Whittaker et al.,11 Trewin et al.,9 and Guo et al.3 highlight concerns about AI’s potential negative impact on inclusion, representation, and equity for those in marginalized communities, including disabled people. In this Opinion column, we summarize and build on these important and timely works. We define disability in terms of the discriminatory, and often systemic, failure of available infrastructure to meet the needs of all people. For example, AI-based systems may have ableist biases, associate disability with toxic content or harmful stereotypes, and make false promises about accessibility or fail to accessibly support verification and validation.2 These problems replicate and amplify the biases disabled people experience in everyday life. We must recognize and address them.
Recognizing and Addressing Disability Bias in AI-Based Systems
AI model development must be extended to consider risks to disabled people, including:
Unrepresentative data. When groups are historically marginalized and underrepresented, this is “imprinted in the data that shapes AI systems.”11 Addressing this is not a simple task of increasing the number of categories represented, because identifiable impairments are not static or homogeneous, nor do they usually occur singly. The same impairment may result from multiple causes and vary across individuals. To reduce bias, we must collect data about people with multiple impairments, in multiple contexts, over multiple timescales.
Missing and unlabeled data. AI models trained on existing large text corpora risk reproducing the biases inherent in those corpora.2,3 For example, the relative scarcity of accessible mobile apps8 makes it more likely that AI-generated code for mobile apps will also be inaccessible.
Measurement error. Measurement error can exacerbate bias.9 For example, a sensor’s failure to recognize wheelchair activity as exercise may bias algorithms trained on the resulting data. Such errors exist for every major class of sensing.3 The sketch following this list illustrates how a single measurement error of this kind propagates into training data.
Inaccessible interactions. Even if an AI-based system is carefully designed to minimize bias, the interface to that algorithm, its configuration, the explanation of how it works, or the potential to verify its outputs may be inaccessible (for example, Glazko et al.2).
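To make the measurement-error risk concrete, consider a minimal, purely illustrative Python simulation. The detection rates, population shares, and exercise rates below are assumptions chosen for illustration, not measured values; the point is only that a sensor that under-detects one group’s activity silently removes that group’s positive examples from any training data built on its output.

```python
import random

random.seed(0)

def sensed_exercise(is_wheelchair_user: bool, did_exercise: bool) -> bool:
    """Return the sensor's (possibly wrong) exercise label.

    Detection rates are illustrative assumptions: the sensor catches
    most walking-based exercise but misses most wheelchair-based exercise.
    """
    if not did_exercise:
        return False
    detection_rate = 0.30 if is_wheelchair_user else 0.95
    return random.random() < detection_rate

def simulate(n: int = 10_000) -> None:
    exercised = {"wheelchair": 0, "ambulatory": 0}
    missed = {"wheelchair": 0, "ambulatory": 0}
    for _ in range(n):
        is_wc = random.random() < 0.10   # assumed 10% wheelchair users
        did = random.random() < 0.50     # both groups exercise equally often
        group = "wheelchair" if is_wc else "ambulatory"
        if did:
            exercised[group] += 1
            if not sensed_exercise(is_wc, did):
                missed[group] += 1       # real exercise the sensor never saw
    for group, total in exercised.items():
        print(f"{group}: {missed[group] / total:.0%} of actual exercise "
              "is missing from the sensor-labeled data")

simulate()
```

Any model trained on these sensor labels will learn that wheelchair users rarely exercise, an artifact of the sensor, not of the people it measures.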
Disability-Specific Harms of AI-Based Technologies
Even the most well-designed systems may cause harm when deployed. It is critical that technologists learn about these harms and how to address them before deploying AI-based systems.
Defining what it means to be “human.” As human judgment is increasingly replaced by AI, “norms” baked into algorithms that learn most from the most common cases11 become more strictly enforced. One user had to falsify data because “some apps [don’t allow] my height/weight combo for my age.”4 Such systems render disabled people “invisible”11 and amplify existing biases internal to and across othering societal categories.3 AI-based systems are also being used to track the use and allocation of assistive technologies, from CPAP machines for people with sleep apnea, to prosthetic legs,11 deciding who is “compliant enough” to deserve them.
Defining what “counts” as disabled. Further, algorithms often define disability in historical medical terms.11 However, if you are treated as disabled by those around you, you are legally disabled; the Americans with Disabilities Act does not require a diagnosis (42 U.S.C. § 12101(a)(1)). Yet AI-based technologies cannot detect how people are treated. AI-based technologies must never be treated as sufficient, nor made mandatory, for disability identification or service eligibility.
Exacerbating or causing disability. AI-based systems may physically harm humans. Examples include activity tracking systems that push workers and increase the likelihood of work-related disability11 and AI-based systems that limit access to critical care resources, resulting in an increased risk of hospitalization or institutionalization.5
Privacy and security. Disability status is increasingly easy to detect from readily available data such as mouse movements.12 Any system that can detect disability can also track its progression over time, possibly before a person knows they have a diagnosis. This information could be used, without consent or validation, to deny access to housing, jobs, or education, potentially without the knowledge of the impacted individuals.11 Additionally, AI biases may require people with specific impairments to accept reduced digital security, such as the person who must ask a stranger to ‘forge’ a signature at the grocery store “ … because I can’t reach [the tablet].”4 This is not only inaccessible, it is illegal: kiosks and other technologies such as point-of-sale terminals used in public accommodations are covered under Title III of the Americans with Disabilities Act.
Reinforcing ableist policies, standards, and norms. AI systems rely on their training data, which may contain biases or reflect ableist attitudes. For example, Glazko et al.2 describe both subtle and overt ableism that surfaced when using AI to generate an image and to summarize text. These harms also affect disabled people who are not directly using AI, such as biased AI rankings of resumes that mention disability.1
Recommendations
First and foremost, do no harm: algorithms that put a subset of the population at risk should not be deployed. This requires regulatory intervention, algorithmic research (for example, developing better algorithms for handling outliers),9 and applications research (for example, studying the risks that applications might create for disabled people). We must consider “the context in which such technology is produced and situated, the politics of classification, and the ways in which fluid identities are (mis)reflected and calcified through such technology.”11
The most important step in avoiding this potential harm is to change who builds, regulates, and deploys AI-based systems. We must ensure disabled people contribute their perspective and expertise to the design of AI-based systems. Equity requires that disabled people can enter the technology workforce so they can build and innovate. This requires active participation in leadership positions, access to computer and data science courses, and accessible development environments. The slogan “Nothing about us without us” is not just memorable—it is how a just society works.
Organizations building AI systems must also improve equity in data collection, review, management, storage, and monitoring. As highlighted in President Biden’s AI Bill of Rights,10 equity must be embedded in every stage of the data pipeline: from motivating and paying participants to provide accessible data and metadata that do not oversimplify disability; to ensuring disabled people’s data is not unfairly rejected over minor mistakes or stringent time limits;7 to ensuring disabled stakeholders participate in, and understand, their representation in training data through transparency about and documentation of what is collected and how it is used.9 Community representation can broaden participation in data collection and guide the design of data collection systems, the prioritization of what data to collect, and the decision of what data not to use.
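One lightweight way to operationalize such transparency is to attach structured documentation to each dataset, in the spirit of datasheets for datasets. The following Python sketch is hypothetical; every field name and value is an illustrative assumption, not an established standard.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetSheet:
    """Hypothetical, minimal datasheet recorded alongside a training dataset."""
    name: str
    collection_contexts: list[str]    # where and how the data was gathered
    disability_representation: str    # self-described by participants, not inferred
    participant_compensation: str     # how contributors were paid
    rejection_criteria: str           # what data was excluded, and why
    known_gaps: list[str] = field(default_factory=list)

# Example sheet for an imaginary activity-sensing dataset.
sheet = DatasetSheet(
    name="activity-sensing-v1",
    collection_contexts=["home", "workplace", "transit"],
    disability_representation="self-reported; multiple impairments included",
    participant_compensation="paid for time at a local living wage",
    rejection_criteria="no data rejected for slow completion or input errors",
    known_gaps=["rural participants underrepresented"],
)
print(sheet)
```

Keeping such a record machine-readable makes it auditable: reviewers and disabled stakeholders can check whether a pipeline’s stated practices match its documented ones.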
Legislators and government agencies must enact regulations for algorithmic accessibility. Algorithms should be subject to a basic set of expectations for how they will be assessed for accessibility, just as websites are. This will help address basic access constraints, reduce the types of errors that enforce “normality” rather than honoring heterogeneity, and eliminate errors that gatekeep who counts as “human.” Consumer consent and oversight of best practices are both essential to fair use. AI-based systems should be interpretable and overridable, and should support accessible verification of AI-based results during use.2
All parties must work together to promote best practices for accessible deployments, including accessible options for interacting with AI. Just as ramps or elevators that are hidden or distant are not acceptable accessibility solutions in physical spaces, accessible AI-based systems must not create undue burdens in digital spaces nor segregate disabled users.
To gauge progress and identify areas in need of work, the community must develop assessment methods that uncover bias. Many algorithms maximize aggregate metrics, which can fail to recognize, let alone address, bias;3 disaggregated evaluation, sketched below, is one corrective. Further, we must consider intersections of disability bias with other concerns, such as racial bias.6 Scientific research will be essential to defining appropriate assessment procedures.
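As a minimal sketch of what such an assessment could look like, the following Python example computes accuracy disaggregated by group and by intersectional subgroup rather than in aggregate. The records are hypothetical placeholders, not real data; the structure is the point.

```python
from collections import defaultdict

# Hypothetical evaluation records: (prediction_correct, disabled, race).
# In a real audit these would come from a held-out, well-documented test set.
records = [
    (True,  False, "A"),
    (True,  False, "A"),
    (False, True,  "A"),   # errors concentrated in one intersection
    (False, True,  "A"),
    (True,  True,  "B"),
    (True,  False, "B"),
    (True,  False, "B"),
    (True,  True,  "B"),
]

# group label -> [num_correct, num_total]
totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])
for correct, disabled, race in records:
    groups = ("all", f"disabled={disabled}", f"disabled={disabled},race={race}")
    for group in groups:
        totals[group][0] += int(correct)
        totals[group][1] += 1

for group, (num_correct, num_total) in sorted(totals.items()):
    print(f"{group}: {num_correct}/{num_total} correct "
          f"({num_correct / num_total:.0%})")
```

On these placeholder records, aggregate accuracy is 75%, yet the disabled=True, race=A intersection has 0% accuracy: exactly the kind of disparity an aggregate metric hides.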
Conclusion
Accessible AI is ultimately a question of values, not technology. Simple inclusion of disabled people is insufficient. We must work to ensure equity in data collection, algorithm access, and in the creation of AI-based systems, even when equity may not be expedient.
The fight for accessible computing provides lessons for meeting these ambitious goals. As the disability rights movement of the 1970s converged with the dawn of the personal computer era, activists urged the computing industry to make computing more accessible. The passage of the Americans with Disabilities Act (ADA) in 1990 provided legal recourse, and the advent of GUIs and the Web in the mid-1990s led to the development of new accessibility tools and guidelines for building accessible systems. These tools made computing more robust, helping disabled users and others alike, while advocates successfully used the ADA to ensure the accessibility of many websites.
This combination of advocacy, engagement with industry, regulation, and legal action can be applied to make AI safer for disabled people, and the rest of us. The opacity of AI tools presents unique obstacles, but the AI Bill of Rights10 and more technical federal efforts detailing steps toward appropriate AI design provide initial directions. The pushback from those who hope to profit from AI will undoubtedly be significant, but the costs to those of us who are, or who will become, disabled will be even greater. We cannot train AI on a mythic 99%.