BLOG@CACM
Architecture and Hardware

Will We All Be Wearing Wearables?

Posted by Saurabh Bagchi

Wearable sensing devices started making their way into mainstream consumer products in the early 2010s: Fitbit shipped its first tracker in 2009, Google Glass and Oculus Rift both launched prototypes in the early 2010s that sparked renewed interest in augmented and virtual reality, and Pebble's 2012 Kickstarter campaign for a smartwatch raised a whopping $10.2 million, making it the most successful campaign at the time. The most common instantiation of wearables today is the smartwatch (21% of all wearables sold in 2016). Wearables began to catch the fascination of the academic research community about three years back, with a small but growing part of the community working on either medically oriented hardware or the software architecture of these devices.

In this article I speculate on three issues related to wearables:

  1. Are they really needed?
  2. If the answer to the first point is yes, what are the biggest practical challenges toward their mainstream adoption?
  3. If the answer to the first point is yes, what are the biggest research challenges toward their mainstream adoption?

What are the Compelling Use Cases?

We as a society have been getting unhealthier, and our healthcare costs have been rising. This is true in the U.S. and many other developed countries, and it is beginning to hold in the developing world too. One promising approach to counter this trend is continuous monitoring of health signals, with the promise of early, and thus less costly, medical intervention. This was the original motivation behind wearables. Back in the early days (2006), Nike + iPod fitness tracking gave users the ability to keep track of their exercise habits. This was quite an advanced device for its time. It boasted a miniature sensor that fit under the insole of a Nike+ shoe, while a similarly sized receiver plugged into an iPod nano tracked workouts. The software went beyond tracking steps; it also allowed users to review statistics from past workouts and set fitness goals. Plus, they could hear how far they had run, how quickly they were running, and how far they were from their destination. This was part of a movement by a loosely organized group known as the Quantified Self, whose members are driven by the idea that collecting detailed data can help them make better choices about their health and behavior.

It can and it does. But it turns out that the Quantified Self movement never made the jump from a band of wild enthusiasts to the mainstream. So it seems to me that this is not a compelling use case. However, while most people are not interested in tracking their VO2 levels on a minute-by-minute basis in preparation for their half marathon, many people are interested in tracking more bread-and-butter fitness questions: did I walk enough steps today, did I get enough REM sleep or do I need to change my sleeping habits, and, getting less bread-and-butter-y and more croissant-y, how is my heart rate fluctuating, and are there even signs of atrial fibrillation, which can be detected through the ECG sensors on the latest generation of smartwatches?

A key adoption threshold will be how reliable these readings are. The fact that the Apple Watch comes with an electrocardiogram (EKG) that is FDA-cleared seems to be good news. However, the fine print, which is actually not all that fine, says it is "not recommended for users with other known arrhythmias (irregular heartbeat)." That means only users who are already free from any medical condition causing an irregular heartbeat can find a use for this sensor. So we are not quite past that adoption barrier.

However, the trend seems to be that continuous monitoring of fitness and health signals is moving toward wider adoption. Consider, for example, that on Google Play the Health and Fitness category is the most popular one for wearable apps. An imperfect trend line comes from the penetration of smartwatches in Finland, an uber-connected land, where it has grown from 4% in 2016 and 7% in 2017 to 10% in 2018.

So health and fitness appear to be the leading candidate for pushing along the adoption of wearables, though the adoption will not be a sharp one. Rather, it has to bring the overall ecosystem along (the medical community, the insurance community, some interoperability among multiple vendors' devices), and so it will be a slow adoption curve.

What are the Big Practical Issues?

There are three big practical issues that the commercial world will have to address to speed along the adoption curve. I use "practical" a little facetiously to indicate that for us academic researchers, they are not in our wheelhouse. I discuss the challenges for us in the next section.

First, the wearable apps must become more focused on our immediate activity, lest they distract us (even more than now) from our physical environment and the task we are in the middle of doing. The move from the desktop/laptop world to the mobile world represented one big step in this evolution. Mobile apps had to be shallower in their interactions: no long dialog boxes, no long treatises to be typed into text boxes, and no three notifications jumping out and clamoring for my attention at once. The apps on wearables need to take another giant leap in the same direction. Otherwise, our feeling that we are drowning in information will affect our social interactions very directly and even put us in physical danger, as we step in front of that car while checking whether our heart rate has increased due to the exertion.

Second, the wearables must be liberated from the tethering to mobile devices. The current state is that most apps on the wearable rely on a companion app on the mobile device. This is cumbersome because now I have to have the two in close proximity. If my smartphone runs out of battery, I am left high and dry, and in general I want to escape from the tyranny of multiple devices—why do you think smartphones became the Swiss Army knives of electronic devices?

Third, the battery lifetime of the wearable devices must improve a good bit. Yes, I know many advertise that you can go without charging for multiple days, but show me an owner who does not get jittery if she cannot plug in her wearable at the end of each day. So why is this a big deal, since we are used to plugging in our mobiles at the end of each day, or more frequently? This is due to the cobwebs of habits that we humans pick up effortlessly and let go of only with great effort. We have been wearing watches for centuries, humankind that is, not just centurions amongst us, without having to remember to plug them in. So why do I need to add this chore?

And finally, there is the pesky little detail of price. But I am not too worried about that, because the adoption curve trending upward and a profusion of vendors have meant that prices have been going down, and they will continue to do so.

What are the Big Research Challenges?

I will start off by asking how long this blog post can be; my fictitious editor tells me I am already bursting at the seams, so I will focus on the three research challenges that I think are most pressing. For those looking to take a deeper look at these topics, there is a nascent research literature, some of it by us.
[Paper 1: DSN 2018] [Paper 2: MobiSys 2017] [Paper 3: ICSE 2017]

First, the software stack on these devices is rather fragile. It may be OK for my device to reboot involuntarily or my app to freeze up when the device is a plaything, but when it is being used for serious stuff (telling the doctor how my heart is ticking along, or guiding a rescue mission through underground caves using smartglasses), such software fragility will not cut it. We have shown in our published work that, through directed injections, the device can be made to reboot time and time again, effectively turning it into another piece of e-waste. This can be done with carefully crafted messages sent from another component to the app. The fragility arises for multiple reasons. For one, there is software cut-and-paste from the mobile world to the wearable world (the OSes for wearables are derived from the OSes for mobiles), and the software does not always fit well in its new environment. Then there is the fact that multiple sensors are integrated on these small-form-factor wearables, so the software is often called on to do several things at the same time in response to inputs from multiple sensors, perhaps while the human user is also telling it to do something. And then there is the plebeian reason that the software is simply less mature, as it has not yet gone through all its testing strides.
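
To make "carefully crafted messages" a little more concrete, here is a minimal sketch of this kind of injection experiment, assuming a Wear OS (Android-based) watch reachable over adb. The package name, receiver component, and extra field are hypothetical placeholders, and the fault-injection tooling in the papers cited above is considerably more systematic than this loop.

```python
# Toy injection loop: send broadcasts carrying junk extras to a (hypothetical)
# wearable app over adb and check the device log for crashes. Illustrative only.
import random
import string
import subprocess

TARGET_PACKAGE = "com.example.heartmonitor"                   # hypothetical app
TARGET_COMPONENT = TARGET_PACKAGE + "/.SensorUpdateReceiver"  # hypothetical receiver

def random_payload(max_len: int = 64) -> str:
    """Junk string to place in an extra the app likely expects to be numeric.
    Restricted to letters/digits so the adb shell quoting stays simple."""
    chars = string.ascii_letters + string.digits
    return "".join(random.choice(chars) for _ in range(random.randint(0, max_len)))

def send_crafted_broadcast() -> None:
    """Deliver one broadcast with an unexpected extra to the target component."""
    subprocess.run(["adb", "shell", "am", "broadcast",
                    "-n", TARGET_COMPONENT,
                    "--es", "reading", random_payload()],
                   capture_output=True, text=True)

def app_crashed() -> bool:
    """Crude check: look for a fatal exception from the app in the device log."""
    log = subprocess.run(["adb", "logcat", "-d", "-s", "AndroidRuntime:E"],
                         capture_output=True, text=True).stdout
    return TARGET_PACKAGE in log

if __name__ == "__main__":
    for i in range(100):
        send_crafted_broadcast()
        if app_crashed():
            print(f"Injection {i} triggered a crash in {TARGET_PACKAGE}")
            break
```

Even a naive loop like this captures the spirit of the problem: a message that any other component is allowed to send can drive the receiving app down parsing paths its developer never tested.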

Second, the software was not made with security or privacy in mind. After all, who would want to disrupt my daily workout by hacking the smartwatch, or be interested enough in my health vitals to leak them from my wearable? But as always with digital technology, once it becomes popular and "high-value targets" (I love the James Bond-iness of this term) start to use it, security and privacy become really important. We are not there yet in terms of the threat landscape, but why not try to get ahead of the curve for once?

Third, on the hardware side, the integration of so many sensors, with more coming, is not yet well done. It is fine when a single sensor (say, a heart rate monitor) is clamoring for the attention of the main processor on the device, but when multiple sensors start demanding attention at the same time, we need to design the interactions more carefully. Well-known topics from the real-time computing and classical embedded systems literature need to be adapted and adopted here. For example, how do you schedule multiple devices on the digital bus (SPI or I2C) when they want to talk at the same time? How do you ping a sensor to check its health status, or make sure one sensor is not hogging all your attention and starving the others?
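
To give a flavor of what such a policy might look like, here is a minimal sketch of a scheduler that decides which sensor gets the shared bus next, written in Python for readability even though on a real device this logic would live in firmware. The sensor names, sampling periods, and the tie-breaking rule that protects slow sensors from starvation are assumptions made for the example, not a recommendation from the cited work.

```python
# Toy scheduler for sensors sharing one digital bus (e.g., SPI or I2C).
# Each sensor declares how often it wants to be read; the scheduler picks the
# most overdue sensor, normalized by its own period, so a chatty fast sensor
# cannot starve a slow one. Names and periods are illustrative only.
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    period_ms: int         # desired sampling interval
    last_read_ms: int = 0  # time of last successful read
    missed_pings: int = 0  # consecutive failed health checks

    def urgency(self, now_ms: int) -> float:
        """How overdue this sensor is, in units of its own period."""
        return (now_ms - self.last_read_ms) / self.period_ms

def pick_next(sensors: list[Sensor], now_ms: int) -> Sensor:
    """Most overdue sensor wins; ties go to the slower sensor, which is the
    one most at risk of starvation."""
    return max(sensors, key=lambda s: (s.urgency(now_ms), s.period_ms))

def health_ping(sensor: Sensor) -> bool:
    """Stand-in for a real bus transaction, e.g., reading a WHO_AM_I register."""
    return True

def service_bus(sensors: list[Sensor], now_ms: int) -> str:
    s = pick_next(sensors, now_ms)
    if not health_ping(s):
        s.missed_pings += 1
        return f"{s.name}: unresponsive ({s.missed_pings} consecutive misses)"
    s.missed_pings = 0
    s.last_read_ms = now_ms  # pretend the bus read completed here
    return f"{s.name}: read at t={now_ms} ms"

if __name__ == "__main__":
    sensors = [Sensor("accelerometer", period_ms=10),
               Sensor("heart_rate", period_ms=40),
               Sensor("skin_temp", period_ms=1000)]
    for t in range(10, 210, 10):  # simulate 200 ms of bus time in 10 ms slots
        print(service_bus(sensors, t))
```

The essential idea is that each sensor's claim on the bus is normalized by its own sampling period, so a fast sensor cannot monopolize the bus simply by asking more often.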

In Conclusion

Wearable devices have been nipping at our heels for a while, poised to make the leap from niche to mainstream, one where your latest smartwatch is not even remotely a topic of party conversation. Whether this market segment will cross that threshold will depend on overcoming three commercial challenges and three research challenges. On the commercial side, the devices need to help us focus our attention on the immediate task by giving directed inputs, they need to be liberated from the constraint of being paired with mobile devices, and their battery lifetime must improve. On the research side, the software architecture needs to be made more reliable (and there are concrete recommendations in the research papers referred to here), the software needs to have some security and privacy controls put in place, and the integration of multiple sensors on the small form-factor devices must be done better.

So let the vendors of the world get to work, and let us academic researchers get to work too, to take us to that bright new world where the sparkle from the bezel of my smartwatch is right in my eyes.

Saurabh Bagchi is a professor of electrical and computer engineering, and of computer science, at Purdue University, where he leads a university-wide center on resilience called CRISP. His research interests are in distributed systems and dependable computing, while he and his group have the most fun making and breaking large-scale usable software systems for the greater good.
