
Communications of the ACM

Last Byte

Inspired by the Home of the Future


University of Washington Professor Shwetak Patel

Credit: Mark Stone / University of Washington

Shwetak Patel, a professor at the University of Washington (UW), director of a health technologies group at Google, and recipient of the 2018 ACM Prize in Computing, has made a career out of pushing old tools to new heights. He has leveraged existing infrastructure to make affordable energy monitoring systems, and he has used mobile phone sensors like cameras and microphones to help manage chronic diseases. Here, Patel talks about feedback loops, the home of the future, and the changing healthcare landscape.

What triggered your interest in ubiquitous computing?

As an undergrad, I worked in the Georgia Tech (Georgia Institute of Technology) Aware Home, which was a facility with a bunch of different technologies that we used to explore what the home of the future would look like. We built applications for healthcare, elder care, energy monitoring, and so on, and a lot of my inspiration came from that work.

In graduate school, you began to look at how to leverage existing technologies and infrastructure to build some of those same applications in a more easily accessible way.

Sometimes, if you go straight to a specific technology, it takes a while before that can scale, but if you can take these intermediate steps where you leverage existing systems in unique ways, then you can start to answer questions about viability, usability, and effectiveness, which then informs the design of how you build it out long term.


"We built applications for healthcare, elder care, energy monitoring, and so on, and a lot of my inspiration came from that work."


One of the innovations that came out of your Ph.D. work was an energy-monitoring technique that uses a single, simple sensor deployed on the electrical system to identify what devices are drawing power. Later, at UW, you pushed that concept into new domains with technology that tracks per-fixture water use from a single sensor.

The end goal was to provide feedback for people to be able to understand their energy usage and improve their mental model of where energy and water are going.

Algorithmically, what you are doing is looking at the side effects of using an appliance or a water valve. When you switch an appliance on and off, there is electrical noise, or electromagnetic interference, that happens on the power lines. It turns out if you zoom into that noise source, it tells you a lot about what's happening. So our approach was to listen to all the electrical interference on the power line, then use machine learning to classify and pattern-match to a specific device.
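As a rough illustration of the idea (not the published pipeline), one could match the spectrum of an observed noise burst against a library of known device signatures. The sampling rate, the nearest-neighbor matching, and the synthetic "dimmer" and "charger" signals below are all illustrative assumptions:

```python
import numpy as np

def spectral_signature(samples, n_fft=4096):
    """Normalized magnitude spectrum of a power-line noise burst."""
    spectrum = np.abs(np.fft.rfft(samples, n=n_fft))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

def classify_event(samples, known_signatures):
    """Match an observed burst to the closest known device signature."""
    sig = spectral_signature(samples)
    best, best_dist = None, np.inf
    for device, ref in known_signatures.items():
        dist = np.linalg.norm(sig - ref)
        if dist < best_dist:
            best, best_dist = device, dist
    return best

# Hypothetical example: two devices whose switching noise rings at
# distinct frequencies (stand-ins for real EMI signatures).
rate = 1_000_000                      # 1 MHz power-line sampling (illustrative)
t = np.arange(4096) / rate
dimmer = np.sin(2 * np.pi * 60_000 * t)
charger = np.sin(2 * np.pi * 120_000 * t)
refs = {
    "dimmer": spectral_signature(dimmer),
    "charger": spectral_signature(charger),
}
noisy = dimmer + 0.1 * np.random.default_rng(0).standard_normal(4096)
print(classify_event(noisy, refs))    # → dimmer
```

Even with added noise, the burst's spectral peak stays close to the stored signature, so a simple nearest-neighbor match suffices in this toy setting.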

As it turns out, that's analogous to the water domain. When you flush the toilet or use the shower, you disrupt the water flow the moment you open and close that valve. And if you have a pressure sensor at any location, you can see a pressure wave that's indicative of the kind of valve that you just closed.
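The water-side analogue can be sketched the same way: classify a pressure transient by its characteristic oscillation when a valve closes. The fixture names, ring frequencies, and sampling rate below are hypothetical stand-ins, not measured values:

```python
import numpy as np

def ring_frequency(pressure, rate):
    """Dominant oscillation frequency (Hz) of a pressure transient."""
    spectrum = np.abs(np.fft.rfft(pressure - pressure.mean()))
    bin_hz = rate / len(pressure)
    return (np.argmax(spectrum[1:]) + 1) * bin_hz  # skip the DC bin

def classify_fixture(pressure, rate, templates):
    """Match a transient to the fixture whose ring frequency is closest."""
    f = ring_frequency(pressure, rate)
    return min(templates, key=lambda name: abs(f - templates[name]))

# Hypothetical templates: each fixture's valve "rings" at its own rate.
rate = 1_000                          # 1 kHz pressure sampling (illustrative)
t = np.arange(rate) / rate            # one second of data
templates = {"toilet": 30.0, "faucet": 8.0}
transient = np.exp(-5 * t) * np.sin(2 * np.pi * 30 * t)  # fast-closing valve
print(classify_fixture(transient, rate, templates))       # → toilet
```

A single pressure sensor anywhere on the plumbing sees these transients, which is why one sensing point can cover every fixture in the house.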

Given that most of us can't yet afford to live in "the home of the future" or put sensors onto all our fixtures and appliances, it's a refreshingly practical approach.

Sometimes, you have a scientific question where you are trying to address an algorithmic problem or find a more efficient way to do things, but at the same time you are also trying to think about how to apply it. If you come from a purely applied standpoint and you are solving an interesting problem, a lot of the scientific contributions follow, because you are now discovering new use-cases that you may not have discovered otherwise.

More recently, you have been working in healthcare, using commodity devices like mobile phones to do longitudinal and physiological monitoring.

We have looked at using the microphone to help people monitor respiratory ailments: instead of using a dedicated device like a spirometer, say, you use machine learning and audio processing on the microphone to detect if something's happening in your respiratory system. We have also used the camera and flash to do non-invasive blood screening. You might take a picture of a baby to figure out how much bilirubin is in the blood and whether jaundice is a concern. You can't do a blood draw every single day, so having a non-invasive screening tool can be a really effective way to tell you when you should get to the next level of screening or diagnostics, and then you can intervene much sooner.


"If you come from a purely applied standpoint and you are solving an interesting problem, a lot of the scientific contributions follow, because you are now discovering new use-cases."


Yet capturing the right data to get ahead of a major health issue is incredibly difficult.

Most people see a doctor every one to two years. There can be a lot of indicators that could help you get ahead of a problem well before you are symptomatic. The challenge is, we don't have access to that information. With the intersection of new sensing techniques and lower-cost sensors, not to mention more capable phones, AI, and machine learning, we are at a time where this can actually start to work. We can automate a lot of the work, triage it using machine learning, and escalate the cases that look like they're emergent.

In healthcare, as with energy usage, it turns out that feedback loops are an incredibly powerful way to change people's behavior: giving them relevant information about what's happening at a time when they can actually do something about it.

Mobile phones give you both a computational platform for the interface and feedback on the device itself. At the same time, people have a huge affinity for their phones, so compliance is inherently higher. You already have this thing with you for primary reasons, so healthcare becomes a secondary use-case.

I understand that it has been an adventure to get some of this work approved by the Food and Drug Administration (FDA).

The regulatory landscape is evolving. The way the FDA looks at diagnostic tools is in terms of analytic sensitivity. If you test something using a phone, what's its absolute accuracy? But context is incredibly valuable. If I'm coughing a lot in Seattle versus in a high-risk tuberculosis region in South Africa, it means something very different, and in fact, physicians already use this context indirectly. "Are you at risk for something? Where have you been? What's your family history? What region do you reside in?" Those things aren't always built into a blood draw. So, the blood draw gives you one number, but now, machine learning can incorporate all this additional information and maybe even be more indicative of what's happening.

How have health providers reacted?

I think clinicians and clinical scientists are moving in that direction. They understand it and they see that it's where the field is heading. But health practitioners still have to think about the near term: they have to physically see patients, determine the best course of treatment, and so on. It's a challenge to bridge that gap. If you're a general practitioner who's taking care of 1,000 patients, how are you going to deal with 1,000 hemoglobin readings each day? It's just not possible, and that's why a lot of these mobile and home health technologies have not really been successful. If you can't figure out how to integrate your tools into the system we have now, the treatments are never going to adapt to whatever new sensing techniques you've created.

In addition to being a professor at the University of Washington, you also spend time at Google, where you direct a health technologies group. Is there anything you can say about the work?

A lot of it is looking at new opportunities for machine learning and sensors in the healthcare space. It's still early in our explorations, but one of the exciting things about it is the opportunity to start thinking about scale. I was able to validate and prototype a lot of things in the academic world, but at Google, we can start to look at disseminating it more broadly. That's all I can say for now, but that's the high-level goal.


Author

Leah Hoffmann is a technology writer based in Piermont, NY, USA.


©2019 ACM  0001-0782/19/09

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and full citation on the first page. Copyright for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or fee. Request permission to publish from permissions@acm.org or fax (212) 869-0481.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2019 ACM, Inc.