
Communications of the ACM

Research highlights

Technical Perspective: Eyelid Gestures Enhance Mobile Interaction

[Image: boy with eyes closed and arms extended. Credit: Getty Images]

Technologies have a huge potential to increase access to services. The Internet democratized access to information. Social media democratized access to audiences and communities of interest. Smartphones democratized access to computing and communication resources. Conversely, technology is also a factor of exclusion. Historically, when new technologies emerge, a portion of the population is excluded from access because of the abilities those technologies assume. The more relevant and structural these technologies become in our day-to-day lives, the greater their negative impact on excluded communities.

Assistive technologies (ATs) are created to bridge this gap. In the past, physical ATs tended to be bulky, expensive, and not designed with social acceptability in mind. Mobile devices, by contrast, can be used in a variety of contexts to assist people. They have become the best ATs: small, lightweight, portable, and resourceful. Using a mobile device, people can access information, communicate, be productive, have fun, and even reach other assistive services through this technological Swiss Army knife. One thing still needs to be guaranteed: that people can interact with the mobile device.

Access to touch phones is representative of what happened with other technologies (for example, personal computers, the Web). When they became prevalent, many people with disabilities became more excluded. These shiny new devices were less accessible than their tactile-rich predecessors (that is, feature phones). In the last few years, access has been recovered and built upon. Screen readers became established on smartphones, enabling visually impaired people to use mobile devices and, through them, several empowering services such as location and navigation, and object recognition apps. Voice recognition became usable and enabled people who are deaf or hearing impaired to use a mobile device to communicate with co-located people. Virtual scanning interfaces enabled people with motor impairments to use peripherals or the touchscreen as input for step-by-step interface navigation. People found their own adaptations, repurposed apps and accessories, and became empowered. These are just examples of a multitude of interactions with, and enabled by, mobile devices.

The following paper by Fan et al. addresses the accessibility of mobile devices for people with motor impairments. What I find most relevant about this paper is that it goes beyond physical access (one of many layers of access) and tries to increase input dimensionality and enable more efficient interaction. Current operating systems include accessibility services that allow directed or dwell-based navigation of interface elements, and their control with peripherals (for example, button switches). The paper presents and evaluates nine eyelid gestures for people with motor impairments to control their mobile devices, significantly increasing the number of actions a person can perform today. This is a good example of why and how we should continue to go beyond the first layer of access.


When discussing mobile interaction, it is common to put forward challenges like situationally induced impairments and disabilities, that is, when context renders someone impaired in using a device. An example is trying to look at a phone when direct sunlight is hitting the screen, or trying to input your PIN while wearing gloves in cold weather. We have only just started considering situationally augmented impairments and disabilities, that is, when people with disabilities are subject to contexts beyond the strict ones ATs were designed for. Making technologies and services accessible means going deeper into other layers of access and considering efficiency, resilience and adaptation to context, and usage in social environments, among many others.

Understanding these layers and letting go of stereotypical ATs (what technology researchers and developers think users need and value, rather than what they actually need and value) can only be done with the participation of representative users. The sooner the better; the more (and more diverse) the merrier. There is much to like about the following paper, and one of those things is the feedback collected from people with motor impairments. It becomes clear that there are great opportunities for eyelid gestures to be recognized and used in real settings, but there are also challenges around customization and mapping. It is not a matter of just having these options available, but of working with users to understand how to deliver them in a way that people can sustain and benefit from in real life.



Tiago Guerreiro is a professor of computer science in the Faculdade de Ciências at the Universidade de Lisboa and a researcher at LASIGE, in Portugal. He is Co-Editor-in-Chief for ACM Transactions on Accessible Computing.



To view the accompanying paper, visit

Copyright held by author.
Request permission to (re)publish from the owner/author

The Digital Library is published by the Association for Computing Machinery. Copyright © 2022 ACM, Inc.

