Abstract
Augmentative and alternative communication (AAC) devices enable speech-based communication, but generating speech is not the only resource needed for a successful conversation. Being able to signal that one wishes to take a turn, by raising a hand or providing some other cue, is critical in securing a turn to speak. Experienced conversation partners know how to recognize the nonverbal communication an augmented communicator (AC) displays, but these same nonverbal gestures can be hard to interpret for people who meet an AC for the first time. Prior work has identified motion through robots and expressive objects as a modality that can support communication. In this work, we collaborated closely with an AAC user to understand how motion through a physical expressive object can support their communication. We present the designed object and the lessons learned from our co-design process.
Introduction
Augmented communicators (ACs) with motor disabilities that affect speech production may use augmentative and alternative communication (AAC) devices to speak. AAC devices include picture or letter boards that people can point to or speech-generating devices people can use to compose messages.2 Commercial speech-generating AAC systems are currently only customizable at the word selection and speech production levels, and they do not yet support augmentations that can increase non-verbal communication. Nonverbal communication is key in helping regulate turn-taking, convey personality, and execute actions that increase social agency,12 all of which are current challenges for ACs.15,22 For instance, ACs are compelled to respond within the synchronous timing constraints of in-person interactions even though they use an asynchronous text-based medium.10 ACs must compose a message on their device using text and then share it with text-to-speech, while a non-augmented conversation partner responds synchronously using speech without needing to compose a message.
Prior work identified motion-based AAC as a viable and under-explored modality for increasing ACs’ agency in conversation.21 We build on this prior work to dig deeper into a particular case study on motion-based AAC by co-designing a physical expressive object, or sidekick, to support ACs during conversations. We carried out participatory design17 to co-design a bespoke technology with an augmented communicator and their close conversation partners. We engaged in research through design, in which designing itself is a research activity that produces design knowledge,24 positioning design activities as data collection and analysis opportunities.13 Our activities included information gathering through interviews, surveys, prototyping sessions, and diary entries over a period of 12 months.
Our design team involved Mark, an expert augmented communicator who is also a co-author of this work, Mark’s close conversation partners (T and D), and university HCI researchers. Mark has used AAC for over 19 years and has long advocated for AAC users by working at a renowned AAC device company, serving as a student government senator, and advocating in a variety of disability rights campaigns. Mark has cerebral palsy and is a wheelchair user. Mark’s AAC device scans through each option on the device (e.g., word, letter, shortcut) until Mark presses his head switch to select a target. In addition to his device, Mark uses facial expressions and eye blinks to communicate. T and D facilitated our remote design meetings, shared their experiences communicating with Mark, and handled the exchange of materials needed to carry out this work. HCI researchers and authors with backgrounds in accessibility, fabrication, and robotics coordinated the design activities (Figure 1).
We followed a five-stage design process: (1) discovery of design opportunities for nonverbal expressive sidekicks, (2) a definition stage during which we set out to discuss design priorities for a future functional sidekick, (3) the development of different ideas and prototypes, (4) a delivery stage in which we pilot tested the device in context and refined our prototype, and (5) the evaluation stage, during which we tested the prototype for two months, including three weeks of diary study data collection.
Discovery
Mark had been a participant in a past study and had expressed interest in exploring a physical expressive sidekick whose motion would allow him to better capture his communication partners’ attention when needed, without interrupting the ongoing conversation. Mark also noted that he would mainly want to use the sidekick during a group conversation, in a classroom setting, or in student government meetings. Using his AAC device to communicate can make group conversations challenging, as regulating turn-taking takes more effort, such as knowing when to interrupt a group or how to tell others that he is working on a message before conversation partners assume he has nothing else to contribute and move on to the next topic. The time it takes Mark to compose a message can vary from a couple of seconds to several minutes, so Mark often asks for more time to compose, reminding partners to wait. At the time we started this work, Mark was completing his last semester of college and was attending student government meetings, for which he acted as vice president. Mark invited us to observe one of the meetings so we could get a better sense of how these group conversations developed. Unfortunately, the COVID-19 pandemic began during this time and the student government meetings were canceled. Mark shared via email a little more about the in-person meetings:
“The conversation is fast-paced, and sometimes it’s difficult for me to contribute. However, the other senators are tuned into how my communication system works. As vice president, I’m responsible for keeping time so I have the following pre-programmed phrases: ‘I’m sorry to interrupt, but we’re running out of time; Let’s get back on track. We can always discuss this at a later time if necessary.’ ‘I would like to add something to the conversation. It will take me a few minutes to write it. Would you call on me in a few minutes?’”
Upon further discussion with Mark and his family, we learned that Mark uses a variety of strategies to let others know he wants to compose a message and then, later, that he is ready to share it. For instance, Mark had shared written guidelines with his professors on how to facilitate a discussion in a class where someone uses a communication device, with suggestions including giving Mark some questions ahead of time so that he can prepare an answer, or allowing Mark to give a one-word answer that the professor can build on. Another solution Mark and his family came up with was a switch-activated LED light strip mounted right next to the headrest of Mark’s wheelchair. T explained that Mark turns on his light to indicate he is ready to participate: “He turns it on to alert the professors he has an answer, [the light] is currently acting as his own sidekick when he is in class.” The light was a nonverbal way to call for attention and indicate to others that Mark was ready, but it did not allow for more nuanced communication. We decided to explore how we could signal other social cues with motion to support Mark in managing turn-taking in group conversations.
Definition
The design team met to define specific properties the sidekick should have to best support Mark’s interactions and conversations with different partners. In this section, we report on the identified goals and possible usage scenarios in which Mark envisioned using the sidekick. To scaffold our definition phase, the HCI researchers selected specific properties to discuss covering the function, social factors, and aesthetics surrounding the sidekick: (1) What conversational goals should the sidekick support? (2) How would the sidekick be controlled? (3) How should the sidekick be introduced in the conversation? (4) What should the sidekick do while it is inactive? and (5) What should the sidekick look like? We used these discussion points as probes to imagine different possibilities and identify design constraints.
Defining accessible sidekick controls
We learned that the preferred mode to control a potential sidekick would need to involve Mark’s head switch. The HCI researchers had brainstormed a series of controllers to discuss with Mark, ranging from manual inputs to automatic sensing mechanisms, but when discussing these ideas, we learned that he had already tried and discarded many of these input modes before. T explained that Mark has mixed muscle tone due to his athetoid cerebral palsy, so input modes that require motor precision and repetition of controlled movements, such as eye tracking, facial gestures, or foot pedals, are not accessible.
“We have tried a lot of access points; can I go over them? we tried the elbow; we tried the knee. We tried some things with [Mark]’s hands… he can go in one direction but can’t retreat from that direction so if he were to get his hand out here it might stay there and then but really, he needs to [bring it back] to release it as a switch. [Mark]’s most functional area for selecting is his head.”
We also learned that gesture-based input could be tiring. Mark and his family had also tried a system comprising a wearable headband with electrodes that could sense winks and specific facial gestures to help Mark access his communication device. They were trying to use it as an alternative to eye gaze, but it was impossible to find a consistent facial movement, and having to perform repetitive gestures was physically taxing. Mark currently has two head switches, one to control his AAC device and another to turn his light on and off. Mark stated this was already the maximum number of switches he desired, so he would prefer to use the same switch he uses for his light to also control the sidekick. We therefore made the sidekick controllable by one of his head switches.
Sidekick interactions and inactive state
It was challenging to talk about sidekick properties and interaction without having a physical model of what the sidekick could look like. Mark expressed that the word “prototype” the HCI researchers kept using to describe a possible sidekick was not completely clear to him, so the HCI researchers clarified they meant a model of what the sidekick could be. Clarifying that nothing was set in stone yet allowed us to freely explore the possibilities and talk about the constraints of each. We discussed how we imagined a possible sidekick would behave, starting from what it would do when not in use and how partners would discover it. Prior to our meeting, Mark had answered that, similar to his light, the sidekick should always be present but not always active, so that he would not have to retrieve something that needed to be taken off and put on. We had also imagined that the sidekick could be hidden and appear suddenly when needed, but this raised the question of how we could execute this mechanically. Mark indicated he would prefer for the sidekick to remain in its position when not in use. Mark shared that he would not know where to store a sidekick, so he assumed it would work best for him if left on while remaining ambient. We pinned this in our discussion and revisited it later asynchronously once we converged on a sidekick form factor.
Physical appearance and placement
Mark mentioned that he wanted the sidekick to have a smile, but he was unsure about what he wanted it to look like. We conducted a literature review on expressive robotic objects that had smiles or faces and gathered images of these and other expressive objects. Some of the images we used are shown in Figure 2. We shared these examples of expressive robotic objects using Padlet, a digital online collaboration board maker that Mark recommended for its accessibility. We placed each image so that Mark could comment below it asynchronously and use thumbs up/thumbs down to prioritize his favorite ideas over ones that did not resonate with what he had in mind. We also allowed space in the collaborative board for sketching, and for labeling where the sidekick could potentially be placed on Mark’s wheelchair. While Mark did not sketch, he indicated a preference for the flag-looking object from the Paper Signals project3 that showed a flag rising up from a boxlike container (Figure 2, right). Mark also indicated that it would be best to place the sidekick to one side, close to the AAC device, instead of other alternate options that included using the space at the back of Mark’s headrest. Once we decided on the sidekick’s form, the HCI researchers started prototyping, as described in the next section.
Development
Once we decided on a flag-like form factor, we started exploring different possible motions. The HCI researchers created a low-fidelity prototype using a popsicle stick, paper, a servo motor, and an Arduino to showcase a variety of motions and share them with Mark and his family in a video. Figure 3 shows some example motions. The motions included: (1) rise and hold (to call for attention like raising a hand); (2) home position (to demonstrate the idle state); (3) rise and wave (coming up and moving forward showing enthusiasm or agreement, like nodding yes, or calling for attention); (4) there-there motion (moving from 90 to 180 degrees slowly like saying “calm down,” or “It’s OK”); and (5) the metronome motion (moving from 0 to 180 degrees and then back while Mark is composing a message to show something is in process).
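As a minimal sketch of how one of these low-fidelity motions could be prototyped, the following Arduino program sweeps a single hobby servo back and forth like the metronome motion; the pin number and sweep speed are illustrative assumptions rather than the exact values we used.

```cpp
// A minimal sketch of the low-fidelity "metronome" motion: a popsicle-stick
// flag on one hobby servo sweeping 0-180 degrees and back while a message is
// being composed. Pin number and timing are illustrative assumptions.
#include <Servo.h>

Servo flag;                       // servo holding the popsicle-stick flag

void setup() {
  flag.attach(9);                 // servo signal wire on pin 9 (assumed wiring)
}

void loop() {
  for (int angle = 0; angle <= 180; angle += 2) {   // sweep outward
    flag.write(angle);
    delay(15);
  }
  for (int angle = 180; angle >= 0; angle -= 2) {   // sweep back
    flag.write(angle);
    delay(15);
  }
}
```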
The HCI researchers also modeled a 3D flag-like object to ground discussions of what could be further developed in a high-fidelity prototype (Figure 4A). We shared the video with the motions and the 3D model prototype with Mark and his family and met to discuss the device’s development further. The HCI researchers focused on learning the sidekick’s size limitations and placement constraints, according to Mark’s needs. To do this, we carried out a conversational, spatial brainstorm in which we relied on shared visual information to ground our discussion. The HCI researchers shared the 3D model they had created and Mark and his family pointed to different places on his chair while rotating their camera (used during video conferencing) around to facilitate our understanding of the space available and a potential mounting place.
Constraining the design space
Understanding technical limitations and Mark’s access preferences was key in facilitating our discussion and making decisions about the sidekick. As we brainstormed different possibilities, D and T reminded the HCI researchers about the features and limitations of Mark’s AAC device. For example, Mark’s AAC device has Bluetooth and infrared remote-control capabilities that can be used to interface with a computer or another device. Nonetheless, infrared and Bluetooth are not always reliable. T shared that the infrared control worked better than Bluetooth but requires a specific receiver that Mark only uses when he is working at a dedicated desk. It was also important to have a conversation about how Mark prefers to use his AAC device:
“Some people do use the computer side of [the AAC device]; Mark does not. He does not like shifting over to it because having the open computer where you have the language software on one side and the computer software on the other side, it slows down the language side. That is just one reason.”
This conversation reaffirmed our decision to use the head switch as a way to control the sidekick without needing to worry about wireless connectivity issues. It also led us to a discussion about how we could trigger specific, separate motions if we only had one head switch as our input. Our conversation turned to understanding the technical limitations of our envisioned sidekick. Could we build it to move only while the button is pressed? Should it stop once Mark released the button? Should it be voice-activated too? Mark expressed that he did not want to complicate things too much, saying no to ideas such as using voice to activate different motions or having the sidekick rotate to show different colors with different meanings. Such features would add additional learning and work to Mark’s daily processes by requiring him to remember numerous sidekick states and controls.
Motion, a new material to explore
D has experience tinkering around Mark’s wheelchair; for example, he built the LED light. However, D had not considered using motors before and was surprised to find how dramatic and descriptive even small motions could be. D explained how Mark can use his light to say yes and no (two light flashes for yes, one for no), but with the motor, the object could move in a specific way that means something else, even when being triggered by the same switch.
“When you showed the one you showed in the video, it was quite dramatic. It was almost like waving. So, he is like waving toward himself to get somebody’s attention like saying ‘I am ready now’.”
When watching the example motions in the video, we discussed how they could be perceived differently from different angles. For example, the metronome motion is more understandable from the side than from a front view. We decided to add a second degree of freedom that could support adjusting the sidekick to be visible at different viewing angles.
We thought that the desire to have a flag-like object could mean Mark might have been interested in adding a message to it. However, during our conversational brainstorm, we realized Mark and his family considered the motions themselves to be enough. D suggested extending the part attached to the motor a couple of inches higher and removing the rectangular flag face, as it would probably be more practical, less vulnerable to wind, and make the sidekick smaller. Mark agreed and shared how he would like to change the square-looking attachment to something like a popsicle stick, with the possibility of adding some extension to it in the future. D agreed:
“If the part that attached to the motor, that stick, is just extended a couple of inches higher, that is almost enough of a visual cue, if it moves up and down. I don’t even know if it needs a rectangular area at all.”
Making use of the space and color
We also learned how having a very large sidekick could interfere with Mark’s transfers out of and onto the wheelchair. It was important to stay within the perimeter of Mark’s chair to avoid obstacles. Mark had an existing mount for his wheelchair that could be used to place the sidekick right to one side of his AAC device without blocking his face or his line of sight. T and D offered to drop off the mount so that we could fabricate the sidekick around it. We decided that a sidekick with an approximate total height of 6 inches would be more than enough to be seen. Mark’s family also suggested using a color contrasting with Mark’s wheelchair and his accessories, which are mostly black. We decided the sidekick would need to be a color that stands out, as D suggested:
“I saw the video of the thing moving. That is going to catch people’s attention. You could have just the stick and people would see that. A bright-colored stick would do the job.”
Prototype Implementation
We implemented the ideas and feedback from our conversations into a fully functional sidekick prototype. We designed a mechanical structure made of 3D printed parts, actuated by two micro servo motors with a range of motion of up to 180 degrees each (Figure 4C). The arm has holes that fit M3 screws and allow for easy mounting of other desired extensions. The device is connected to a metal piece that enables mounting on the wheelchair. The electrical design uses a METRO Mini 328 microcontroller from Adafruit Industries and a custom PCB that connects the adaptive button and power lines to the servo motors and the microcontroller. The device has a micro-USB connection, which provides power through an external battery pack or directly from the AAC device’s USB port. A mono audio jack adaptor on the device serves as the connector for the adaptive button. The sidekick’s gestures were authored using the Arduino Servo library, which allows setting the motors to specific positions. To support others in creating their own sidekicks, we have open-sourced the 3D designs, software, and electronic schematics here: https://github.com/Svsquared/AAC-sidekick.
Following video exchanges via email, we programmed three motions for Mark to test, including Mark’s preferred motion: (1) rise, pause, and wave to call for attention; and two additional motions to explore and probe for ideas, (2) a return-to-home motion rotating the sidekick slowly inwards, and (3) pointing outwards as if pointing to an object nearby. The sidekick also included an intro motion sequence to signal that it was on and receiving power: once plugged in, the sidekick would turn on and center itself. Each motion was activated by a different button press duration: one fast button press triggered the rise-up-and-wave motion, a press lasting about 3 seconds moved the sidekick back to the home position, and with a longer press lasting about 6 seconds, the sidekick rotated to point outwards. Each of these motions began with a “preamble” sequence that brought the sidekick’s arm to the front and center of the device. This preamble was intended to capture people’s attention before the sidekick carried out the main motion.
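A minimal sketch of this press-duration dispatch, assuming the adaptive switch is wired to pin 2 and the two micro servos to pins 9 and 10; the thresholds, angles, and helper names below are illustrative assumptions rather than the exact values in the released firmware:

```cpp
#include <Servo.h>

const int BUTTON_PIN = 2;   // adaptive head switch via the mono audio jack (assumed pin)
Servo lift;                 // raises and lowers the arm
Servo pan;                  // rotates the arm from side to side

void preamble() {           // bring the arm to the front and center first
  pan.write(90);
  lift.write(90);
  delay(600);
}

void riseAndWave() {
  preamble();
  lift.write(170);                      // raise the arm
  for (int i = 0; i < 3; i++) {         // small wave to call for attention
    pan.write(70);  delay(300);
    pan.write(110); delay(300);
  }
}

void goHome()   { preamble(); lift.write(10);  }   // return to rest (simplified)
void pointOut() { preamble(); pan.write(160);  }   // rotate outwards and hold

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);    // switch closes to ground when pressed
  lift.attach(9);
  pan.attach(10);
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {
    unsigned long start = millis();
    while (digitalRead(BUTTON_PIN) == LOW) {}      // wait for release, measuring duration
    unsigned long held = millis() - start;

    if (held < 1000)      riseAndWave();           // quick press
    else if (held < 5000) goHome();                // roughly 3-second press
    else                  pointOut();              // roughly 6-second press
  }
}
```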
Delivery: Testing and Iterating
We delivered our 3D printed design for a “test run” in which Mark would try the device for a couple of weeks and provide feedback. After trying out the prototype, Mark and his family came up with a new motion they wanted to use, called a timer. Mark often says “can you hold on a minute please” to indicate he would like to say something and needs a minute to compose his message. The idea of the timer motion is to make the sidekick’s arm act as a timer that moves from side to side at a pace of 6 degrees per second, that is, 30 seconds moving from left to right and 30 seconds from right to left, for a total of one minute side to side (Figure 5). Another suggestion was to add the word “typing” on the sidekick’s arm to further clarify the message.
Mark also found that the “preamble” sequence before each motion was more confusing than helpful in capturing people’s attention, so we removed the preamble entirely. We also decided to drop the pointing motion as it was rarely used. After some iterations and feedback through virtual meetings, we finalized a version of the sidekick that Mark would use for a longer period of time. The final version had only two main motions: the timer motion, lasting 1.5 minutes in total, and the wave motion. The timer and the wave could be activated by a fast click and a 3-second press, respectively. Clicking the head button again during any of the motions stopped and reset the sidekick immediately (Figure 5), a key function we identified during the testing phase, as it could help to stop the sidekick after an accidental press of the head button or to cut short the prolonged timer motion.
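As a sketch of how a running motion can be interrupted and reset by another click, the following assumes the same single-switch wiring as above and sweeps the pan servo at roughly 6 degrees per second; the pins, angles, and helper names are illustrative assumptions:

```cpp
#include <Servo.h>

const int BUTTON_PIN = 2;   // adaptive head switch (assumed pin)
Servo pan;                  // side-to-side servo used for the timer motion

// Move one degree at a time toward a target; return false if the switch is
// clicked mid-motion so the caller can stop and reset immediately.
bool stepTo(int target, unsigned long msPerDegree) {
  int current = pan.read();
  int dir = (target > current) ? 1 : -1;
  while (current != target) {
    current += dir;
    pan.write(current);
    delay(msPerDegree);
    if (digitalRead(BUTTON_PIN) == LOW) return false;   // interrupted
  }
  return true;
}

void resetHome() {
  pan.write(90);                               // back to the idle position
  while (digitalRead(BUTTON_PIN) == LOW) {}    // wait until the click ends
  delay(50);                                   // simple debounce
}

void timerMotion() {
  // ~6 degrees per second is about 167 ms per degree; three 180-degree
  // sweeps roughly approximate the 1.5-minute timer described above.
  if (!stepTo(0, 167))   { resetHome(); return; }
  if (!stepTo(180, 167)) { resetHome(); return; }
  stepTo(0, 167);
  resetHome();
}

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pan.attach(10);
  pan.write(90);
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {        // a quick click starts the timer
    while (digitalRead(BUTTON_PIN) == LOW) {}  // wait for release
    timerMotion();
  }
}
```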
Evaluation: Use and Impact
Evaluating the sidekick for two months enabled us to understand if and how it supported Mark’s communication in different real-world contexts and with different communication partners. Using the sidekick for a long period of time also provided us with ideas for improvements and allowed us to reflect on how the day-to-day tasks (i.e., technology setups) and different communication channels (virtual or in-person) impacted the sidekick’s effectiveness and use. We analyzed the diary entries provided by Mark, we graphed Mark’s circle of communication partners, and we met regularly to discuss how the sidekick was used and how it worked according to different interactions. We were interested in understanding if other people understood the sidekick’s purpose and if the sidekick enabled Mark to participate more in conversations, support him with turn-taking, and show others his intent to contribute.
We collected 11 diary entries over three weeks. We asked Mark for information on any activities during which he used the sidekick, and we asked him to rate its usability and social factors, including: “The sidekick worked as expected”; “It was easy to use”; “It helped me achieve what I wanted”; “It was more distracting to me than it should have been”; “Partners noticed the sidekick”; “Partners understood what I was conveying when using the sidekick”; and “Partners seemed distracted by the sidekick.” To better understand Mark’s relationship with the communication partners he interacted with during the evaluation period, we interviewed Mark and created a circle of communication partners (Figure 6). His partners included his three family members (T, D, and P), as well as his friends, acquaintances, service providers, and the new communities he is building through his advocacy work. The partners reported in the diary study are lightly bolded in Figure 6.
The sidekick was more useful during face-to-face conversation than in virtual meetings. We also found that close communication partners did not need the sidekick but understood its purpose right away. We also learned that there were some unexpected barriers to sidekick use, such as remembering to make sure it was plugged into power and connected to the head switch. We present our main evaluation findings next.
Sidekick use and performance
The sidekick was reportedly used on 7 of 11 days, as summarized in Table 1. The sidekick was used 2 to 5 times per conversation on six days, and 6 to 10 times on one day when Mark was talking to his friends from an AAC conversation group. We learned that it was easy to forget to plug the sidekick in, and this was the main reason for days of non-use. The sidekick was already mounted on the wheelchair every day, but it needed to have the head switch cable connected to be operable, as well as to be connected to the AAC device for power. One improvement suggested by T was to integrate the sidekick with the device, so that it could be controlled with the AAC device and would not need external cables.
Table 1. Conversations reported in the diary study.

ID | No. of partners | Relationship                    | Medium
---|-----------------|---------------------------------|----------
A  | 2               | Past aides (PA)                 | In-person
B  | 5               | Current and past aides (CA, PA) | Online
C  | 5               | Current and past aides (CA, PA) | Online
D  | 15              | Music group friends (WL)        | Online
E  | 5               | AAC friends (AC)                | Online
F  | 1               | Job coach (JC)                  | Online
G  | 15+             | Disability advocates (DA)       | Online
Mark reported that 7 out of 7 times, others noticed the sidekick during his meetings. The sidekick was easy to use and worked as expected most of the time, although for one meeting with his job coach the sidekick worked sporadically, requiring Mark to press the head switch twice or for a little longer to make sure it started the motion. Even though the HCI researchers could not replicate the latency problem, we reflected on the importance of making the sidekick sensitive enough to Mark’s press frequency and style, as his report made it evident that on some occasions the microcontroller’s internal delays kept the button from responding to his desired rhythm of use.
Mark reported that the sidekick helped him manage turn-taking when talking to his AAC group friends and when talking to a group of his current and past aides with whom he is close. The sidekick did not help as much with turn-taking in his other conversations. It did not notably increase Mark’s participation in meetings, but it was helpful in reminding partners to wait for Mark’s response and was used frequently to replace his preprogrammed message: “can you hold on a minute please?” In general, having access to a motion-based AAC sidekick enabled a shortcut in communication, allowing others to understand typing is happening and allowing Mark to not have to verbally say “one moment please” with his AAC device.
Close and new communication partners
Familiarity with AAC and with Mark’s communication style is what makes close communication partners skilled enough not to need the help of the sidekick; nonetheless, they understood its purpose right away. During a drive-in visit to some of Mark’s past aides who have known him for more than four years and have become friends (meeting A, Table 1), Mark shared that they asked what the sidekick was, he demonstrated how it worked, and they “got it right away.” T further explained:
“If they know you well, they do not need the extra help. It is sort of like a novelty but not absolutely necessary because those folks are waiting; they are patient and they are waiting. If they see Mark’s body language that he is writing they know Mark is typing.”
Nonetheless, the sidekick seemed useful when Mark talked to the same group of past-aide friends and a group of current aides (meetings B and C, Table 1), perhaps because these meetings were online and had more people in them, which could make turn-taking a little more challenging.
The sidekick was also useful when Mark talked with AAC friends who are familiar with the workflow involved in being an augmented speaker. Mark shared that some of them had told him they wanted one for themselves. The sidekick also seemed useful on one occasion with Mark’s mother at home. Mark was in another room and triggered the sidekick to call his mom; she heard the sound, caught it moving, and realized Mark was calling for her attention.
We hypothesized that the sidekick would be more useful with unfamiliar partners, allowing Mark to better regulate turn-taking dynamics with people who had not met him before, but we learned that this was hard to measure in an online setting, as unfamiliar partners first needed to be introduced to AAC and Mark’s communication style online, which brings new constraints, in addition to having to interpret the sidekick.
Better in-person
The sidekick was originally designed to support face-to-face conversation, but due to the COVID-19 pandemic, the group conversations that Mark had during our evaluation period were mostly online, with a couple of exceptions when he did some drive-in visits to friends’ houses or used it at home. Mark shared that the sidekick helps more with turn-taking in person than it did online. The visuals a person can get during a video call can be limited by the number of people on the call, the positioning of one’s camera, and people’s attention to the screen. This was the case when Mark was talking to the disability rights advocates. Mark has found that the best alternative when meeting with large groups on video is to use the chat instead of the sidekick; T elaborated on his answer:
“Like today he was on a call and there were like 30 people on the call. So, he is using sidekick, your picture is small and sometimes there is not a single facilitator but the speaker is shifting around. and it’s not like there is not anyone in place who is going to call on you. And in that situation, you may be better off using the chat… The visual for the zoom call in those situations where there is a lot of people and they are not 100% used to AAC, they are not going to have a reaction to the sidekick or even know what it is.”
Mark also explained that people sometimes did not see the sidekick because of his camera’s position. The family has worked on positioning the camera to make sure Mark’s face is shown without cutting off the sidekick, but it is sometimes challenging to get a good setup for every meeting.
“I think it is fantastic in real-time with real people. You know, face to face. There is no question. They are seeing it. It’s here, you know, but in the digital meetings, it is more difficult.”
Surprisingly, we did find that using the sidekick online was useful with acquaintances or “mid-circle” partners who were familiar with Mark but were not as skilled as the partners in the two closest circles. For example, when talking to the design collaborators on this paper on Zoom, Mark effectively used the sidekick to help others pace themselves and their questions. The sidekick also supported Mark in talking with his group of friends and current aides, and with his augmented communicator friends during his AAC group, by helping Mark show others he was composing a message.
Discussion
As a case study, this project demonstrates how motion can support AAC interactions and conversations. We argue that motion-based AAC achieved through expressive objects is a promising new communication modality to continue exploring. Our work revealed specific ways in which a physical expressive sidekick supported an augmented communicator’s interactions, uncovering additional opportunities for future work. By augmenting communication through an external physical sidekick, Mark could convey to others that an action taken had communicative intent. The sidekick’s timer motion leverages a familiar cue that helps conversation partners understand that Mark is typing and needs time to compose his message. Similar to other modalities such as screen-based emojis or LED lights,20 motion can grab immediate attention but can additionally convey precise messages in an ambient, peripheral, and spatial fashion to overcome display resolution and visual attention limitations. Through engaging in long-term co-design with one user, Mark, we also learned valuable lessons on how to scaffold co-design activities to define design goals, collaborate as a diverse team, and envision and develop a new assistive technology. In this section, we expand on these lessons and reflect on how engaging in long-term co-design with a user with disabilities taught us about purposeful design, accessibility, and barriers to the long-term use of custom-made assistive technology.
Co-designing with purpose
In co-design, and ideation more generally, designers diverge to generate many different possibilities. Early in our process, we came up with many drawings and ideas of where the sidekick could be placed and how it could be accessed and controlled (voice control, facial gesture recognizer, multiple buttons, etc.). As designers, we wanted to treat anything as possible, but as T and Mark shared, they have tried a lot. Harrington et al. critique “blue sky” ideation through critical race theory, arguing that the underserved Black communities they worked with already know what types of structural changes might enable access, and that ideating things that will not come to fruition can widen gaps between what different co-designers perceive as ideal. Instead, Harrington et al. and Bennett et al. recommend understanding stories and rich accounts to recognize the knowledge and labor co-designers have already expended.1,9 In Mark’s case, he and his family have already done the early exploration of figuring out ways to make communication easier and finding the right access modes. They have spent a lot of time and worked very hard, getting creative about many possible points of Mark’s body to use for access. They wanted the HCI researchers to know right away what is not going to work, and that it is very important to listen. We recommend that co-design actively incorporate “what doesn’t work” into design sessions. Further, in-depth listening to understand co-designers’ process of iteration, and how they ultimately determined which options were not feasible, helped us avoid replicating these mistakes; in other words, lists of what not to do are helpful, but engaging with the iterative everyday design that produced those lists gave texture to the bad ideas and kept us from developing similarly unusable solutions.
Accessibility of long-term co-design
Carrying out our co-design collaboration over an extended period of time was beneficial, allowing us to develop valuable relationships and reflect on the process through accessible iteration and prototyping, as found in prior work.6,8 To collaborate with Mark and his close communication partners T and D, enabling multiple feedback channels via email, video, drawing, collaboration boards, and video chats was key in helping us carry out the co-design process. For example, we were able to take up specific tools such as the Padlet ideation board that Mark recommended. Often, co-design is engaged with the assumption that designers provide resources during in-person workshops. We realized after the fact that we relied on privileged skillsets to move co-design online. While we were able to provide institutional access to digital tools and the physical components necessary to prototype sidekicks, we took for granted the technical skill required to join us on these platforms. While Mark and his family were tech savvy, we found that explicit conversations with Mark about which communication tools would work for him were still important for effective design sessions; co-design concerned the process, not only the prototype. While open communication about tools and techniques seemed to be a positive starting point, there is a need to explore how the co-design of bespoke technology during remote collaboration can leverage different tools. For example, we learned a lot by dropping off preliminary prototypes with Mark and holding spatial conversational brainstorming sessions, where over a video call we learned the feasibility of fitting different possibilities onto his wheelchair and into his overall space.
Barriers to assistive technology use
The use of assistive technology (AT) has been reported to be low even when people have access to AT.5,14,16 The reason for this is usually connected to usability barriers and social acceptability—social barriers that impact AT use.18,19 We identified some barriers to using the sidekick device long-term related to having to remember to plug it in and making sure the camera was set up at the right angle, both part of the daily setup routine. For instance, Mark told T: “We are going to have to remember to plug it in,” illustrating that having to add extra steps to the daily technology setup is not trivial and can be a barrier to using a new system. AT should aim to be integrated into existing technology use as much as possible, but this can be challenging when current AT systems such as AAC devices are not open to developers to build on and integrate new features into. Other developers have encountered the same limitation: the lack of a complete AAC functionality stack into which new developments can be built.7 During our development phase, we spent a considerable amount of time learning more about Mark’s AAC device’s capabilities and about his workflow—how he preferred to use a separate computer for Zoom and keep his AAC device mainly for communication. There were a lot of “unknowns” regarding how compatible his AAC device was with other peripheral devices. We decided to go with the stand-alone, head switch-operated sidekick to make progress and make something work, but this tension illustrated the boundaries of individual co-design, making it harder to recommend how bespoke technologies like this one can reach a wider audience.
Another factor that can impact AT use is access to proper maintenance of a device or troubleshooting over time. Maintenance of bespoke technology designs must be considered to ensure they remain in use. T brought up this important point: “If this turns out to be a really helpful thing, then where do we get the technical support to keep the prototype functioning?” To address this, we open-sourced our design for others and also connected Mark to local makerspaces and volunteers working on bespoke open-source assistive technology. The HCI researchers will continue to provide support for this device, but having a long-term plan in place to make sure that maintenance is possible beyond the HCI researchers is also crucial. Though this tension between impact and maintenance is unresolvable within this project, we found that empowering Mark by making the design open and by keeping clear documentation is a starting point to ensure that end users know what to ask for when seeking technical support. In future work, we would like to draw lessons from existing online communities developing Do-it-Yourself AT11 to continue maintaining and expanding motion-based AAC solutions.
Limitations and future work
One limitation in our design process, partly produced by the pandemic, was that hardware design iteration was done mostly by the HCI researchers, as prototype changes required 3D printing and 3D modeling, and the motions were programmed directly on the sidekick’s microcontroller. In future work, we would like to find ways to involve co-designers in programming their own motions directly on sidekicks. The next steps should also include making the sidekick more robust to allow these customizations on the go. A future sidekick platform that enables authoring gestures via remote control could facilitate motion customizations by the user and may enable the exploration of new gestures in situ. This flexible customization platform could also be further developed to provide ways to visualize other sidekick forms. We also want to highlight that our co-designers had access to a lot of resources and had worked together for a long time to augment Mark’s communication; as such, they quickly integrated into the design team. Future research should explore how to support co-designers with different resources and levels of DIY experience. Finally, making the sidekick more integrated with the AAC device, so that it could be controlled by the device itself, is a clear opportunity for improvement and future work.
Conclusion
We explored how motion could support augmentative and alternative communication by co-designing and evaluating a physically expressive sidekick object with and for Mark. Using bespoke sidekicks that move in physical space as a form of aided nonverbal AAC can provide augmented communicators with an additional expressive output that supports them in managing conversation dynamics. By working closely with Mark and his family, we learned about the possible barriers to integrating a new device into daily life, and by sharing our lessons, we look forward to future work on improving tools that support developers in building for AAC.