Communications of the ACM


AI and Accessibility

[Illustration: robotic hand and head. Credit: Shutterstock / Andrij Borys Associates]

According to the World Health Organization, more than one billion people worldwide have disabilities. The field of disability studies defines disability through a social lens; people are disabled to the extent that society creates accessibility barriers. AI technologies offer the possibility of removing many accessibility barriers; for example, computer vision might help people who are blind better sense the visual world, speech recognition and translation technologies might offer real-time captioning for people who are hard of hearing, and new robotic systems might augment the capabilities of people with limited mobility. Considering the needs of users with disabilities can help technologists identify high-impact challenges whose solutions can advance the state of AI for all users; however, ethical challenges such as inclusivity, bias, privacy, error, expectation setting, simulated data, and social acceptability must be considered.


The inclusivity of AI systems refers to whether they are effective for diverse user populations. Issues regarding a lack of gender and racial diversity in training data are increasingly discussed; however, inclusivity issues with respect to disability are not yet a topic of discourse, though such issues are pervasive.4,16 For example, speech recognition systems, popularized by virtual assistants, do not work well for people with speech disabilities such as dysarthria or deaf accent, since training data does not typically include samples from such populations.6 Advances in computer vision have led several groups to propose using such algorithms to identify objects for people who are blind, but today's vision-to-language algorithms have been trained on datasets composed of images taken by sighted users, limiting their efficacy when applied to images captured by blind users, which tend to have far lower quality.2 These inclusivity issues threaten to lock people with disabilities out of interacting with the next generation of computing technologies. Proposed methodologies for increasing awareness of the provenance and limitations of datasets3 are a starting point for increasing dataset transparency. Efforts to directly source data from under-represented user groups, such as the VizWiz dataset,5 which contains thousands of images and related questions captured by people who are blind, are a step in the right direction.

