
Improving Everyday Computer-Human Interactions

Researchers presented potential solutions to some long-standing, thorny issues at ACM's CHI 2025 conference.


An irritating problem in the nirvana of the modern smart home is the fact that remote control batteries eventually die, whether powering a TV, a smart speaker, or something more exotic like a multi-colored ambient lighting system. While you can eke a few more zaps from a remote by swapping the order of its batteries, at some point you must schlep to the shop for some AA or AAA cells.

But perhaps not for much longer.

At CHI 2025 in Yokohama, Japan, the annual ACM conference on Human Factors in Computing Systems, where researchers identify salient problems in interaction design and bring them into sharp relief to be fixed, answers to this and other thorny problems were revealed in three intriguing research papers notable for tackling everyday smart-tech problems.

Specifically, the papers described how to revive the aforementioned dead remote controls; how to resolve a decade-long user privacy risk in smartphone photography; and how to make expensive-but-often-idle domestic robots begin earning their keep.

However, since CHI 2025 ended in May, some experts have warned that the answers presented in these papers may not be quite as simple to implement as they sound.

The revival of dead remotes was tackled by a team led by Pedro Lopes, who heads the Human-Computer Integration Lab at the University of Chicago, and Alex Mazursky, the project’s principal researcher. At CHI 2025, Mazursky revealed his team’s plan for a wireless power transfer system called Power-on-Touch, which is designed to power devices while they are in use.

Power at Your Fingertip

Power-on-Touch consists of two major components: a wearable power transmitter coil, connected to a battery and worn on a user’s wrist or fingertip, and a compatible receiver coil embedded in the object to be powered. Using various coil designs, the researchers found that power in the milliwatt range could be transferred to energize a device by grasping it, touching it, or hovering a fingertip over it.
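
The paper's coil dimensions and link budget are not reproduced here, but the standard textbook relation for an inductive link gives a feel for why the numbers are milliwatts rather than watts. Assuming a coupling coefficient k (small for tiny, loosely aligned coils) and coil quality factors Q1 and Q2, the best achievable link efficiency is

\[
\eta_{\max} \;=\; \frac{k^{2} Q_{1} Q_{2}}{\bigl(1 + \sqrt{1 + k^{2} Q_{1} Q_{2}}\bigr)^{2}} \;\approx\; \frac{k^{2} Q_{1} Q_{2}}{4} \quad \text{when } k^{2} Q_{1} Q_{2} \ll 1,
\]

so a transmitter drawing modest power from a wrist-worn battery delivers only a small fraction of it to a fingertip-scale receiver coil. These are generic wireless-power relations, not figures taken from the Power-on-Touch paper.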

The team tested the idea, using slightly unwieldy-looking, but effective, prototypes, on a raft of gadgets: TV remotes, door security card readers, Bluetooth ‘waiter-call’ buttons for restaurants, karaoke microphones, smart speaker remotes, and electronic door locks.

Power-on-Touch prototype apps

Credit: University of Chicago

“Power-on-Touch innovates on top of existing wireless charging ecosystems and technologies, like the ones people have started using in their phones,” said Lopes.

“We’re powering devices during interactions: you press the button in a restaurant and that interaction powers the electronics to call the waiter. So the restaurant doesn’t have to recharge the call buttons every day, and no batteries means less waste and less maintenance.”

However, getting people to don clunky coil-equipped wearables could be tough, said Paul Worgan, lecturer in mechatronics at the University of the West of England in Bristol, U.K., who was lead researcher on PowerShake, a system unveiled nine years ago at CHI 2016 that allowed someone with a battery-depleted smartphone to pay to suck a bit of power out of someone else’s more highly charged device.

“It seems difficult to bring power to the tip of the finger, and adding mass there would presumably be felt by the user,” Worgan said. “I can see the finger being useful for touch interactions, but getting power to the fingertip from a battery would be no easy task.”

Another challenge would be convincing remote control makers to install Power-on-Touch receiver coils, Worgan said. “At the University of Bristol, I recall seeing a project about integrating wireless power transfer into AA or AAA batteries. That would be a smart way to go,” he said.

However, could the real prize for Power-on-Touch come from building it into a smartphone? That way, a phone could simply be brought near a device in need of power, like an expired remote, to energize it during the interaction, much as a phone makes a wireless payment via its Near Field Communication (NFC) coils.

“We got this question at CHI and the coils are slightly different—the NFC ones are optimized at a different frequency—but in principle I believe this is possible,” said Lopes. Project leader Mazursky did not respond to email requests for comment. However, Lopes said that Mazursky was hired by Apple after CHI 2025.

The Camera Never Lies, Until It Speaks

Apple also figured in another CHI 2025 paper, involving research on smartphone photo privacy. While it’s often said that the camera never lies, Apple’s reinvention of still photography via the Live Photo mode for its iOS-based smartphones and tablets in 2015 shows that that may not always be true.  

A Live Photo records 1.5 seconds of video and audio before a device’s shutter button is pressed, and 1.5 seconds after. The device’s camera roll then displays the frame captured when the shutter was pressed, so it looks for all the world like a still image. But holding a fingertip on the Live Photo animates it, playing a three-second video with sound.

Here’s the thing: not everyone knows a Live Photo includes sound, Zhao Zhao reminded CHI 2025 delegates. It’s not hard to see why, said Zhao, an interaction researcher at the University of Guelph in Canada. Until Live Photos arrived, stills had been silent since the daguerreotype popularized photography in the 1840s. The consequent “longstanding assumption that photographs are silent” makes it easy to forget that three seconds of audio have been captured as well, Zhao said.

As a result, iPhone users have been sending Live Photos to friends and colleagues, or sharing them on social media, without realizing that background audio may be revealed. “Users who have traditionally relied on photos as silent representations may find themselves unintentionally sharing background conversations, ambient noises, or other private sounds when they share Live Photos. This unanticipated audio layer can lead to unintended privacy disclosures and embarrassment,” Zhao wrote in her paper.

Complaints have surfaced ever since the Live Photo format arrived, as evidenced by an eight-year-old Reddit thread in which a Live Photo’s background audio revealed a person’s presence. But how big a problem is this? Zhao made an effort to find out by polling 212 iPhone users about Live Photos, then interviewing 15 of them to get a closer understanding of their experiences.

Reporting her survey results at CHI 2025, Zhao said 54% of those polled had moderate or high concerns about “inadvertent sharing of private or sensitive audio content,” while 22% had actually experienced at least one incident “where the audio in a shared Live Photo led to embarrassment or unintended disclosure.”

“Common examples included overheard conversations, unintentional recording of negative remarks about others, or capturing personal discussions that were not meant for public sharing,” said Zhao.

Apple could make it easier and clearer for users to quash audio in Live Photos, Zhao said. “Platforms where Live Photos are shared should ensure that audio is muted by default,” she said. “That would go a long way toward preventing unintended consequences. It’s a subtle but important design decision that affects how people interpret and trust shared media.”
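
For app developers, honoring that default is not technically difficult. The sketch below is illustrative only, not drawn from Zhao’s paper or from any Apple guidance; it assumes an iOS app with photo-library read access and uses a hypothetical helper, shareableStillData, to show one way a sharing feature could send just the silent still frame of a Live Photo rather than the audio-bearing original.

```swift
import Photos

// Hedged sketch: one way a sharing flow could default to "silent."
// The Photos framework stores a Live Photo as a still image paired with a
// short video resource; the ~3 seconds of audio live in that paired video.
// Requesting only the image data therefore yields a shareable photo with
// no sound attached.
func shareableStillData(for asset: PHAsset,
                        completion: @escaping (Data?) -> Void) {
    let options = PHImageRequestOptions()
    options.version = .current             // respect any user edits
    options.isNetworkAccessAllowed = true  // fetch iCloud originals if needed

    PHImageManager.default().requestImageDataAndOrientation(for: asset,
                                                            options: options) { data, _, _, _ in
        // `data` is the still photo only; handing it to a share sheet leaves
        // the paired video, and its audio, on the device.
        completion(data)
    }
}
```

The design point is the default: in a flow like this, the audio-bearing video leaves the device only if the code asks for it explicitly.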

Shown Zhao’s critique, a spokesman for Apple said the company did not plan to respond to her findings. “We do have the option for users to turn off Live Photos each time they use the camera or on a permanent basis in iOS settings,” he said.

However, Zhao’s point is that many people do not even know what audio-augmented Live Photos are, or where those settings are, especially the hard-to-find option that disables Live Photos permanently.

This situation needs addressing, said Thorin Klosowski, a security and privacy activist at the Electronic Frontier Foundation in San Francisco. “Being clearer with users about audio being included would be a good step, as would an option to remove audio entirely. Companies should always take privacy into account when setting defaults.”

Robot: Earn Your Keep

Like Zhao, Ph.D. student Yoshiaki Shiokawa and his colleagues in the Advanced Interaction and Sensing Lab at the University of Bath in the U.K. based their CHI 2025 paper on a survey, but of a very different technology: domestic robots, like the Roomba vacuum cleaning droid and its competitors. What, Shiokawa and his colleagues asked 50 users, do their robot vacuums do all day?

The answer was: not much. Nearly half (46%) of surveyed users activate their robot vacuums just once a day, between 7:00 a.m. and 11:00 a.m. That usage pattern, which leaves the robots idle for most of the day, offers scope for domestic roboticists to develop additional applications for such droids, which are expensive, mobile, sensor-carrying platforms, Shiokawa said.

To find out what other tasks the robots could be modified to undertake, the Bath team interviewed 12 human-computer and human-robot interaction specialists. They identified more than 100 potential applications to which vacuum robots could be extended, in fields such as home maintenance, personal assistance, health monitoring, food preparation, social interaction, pet care, security, and entertainment.

robot application options

Credit: University of Bath

“A robot could track a user’s medication, sleep, water intake, and exercise, and provide reminders or advice, for example,” Shiokawa said. “They could also detect and respond to emergencies, such as when a user is falling, support a user when standing and walking, or clear a path for a user to walk while alerting them to any tripping hazards.”

In their paper, the Bath team described how this process of adding tasks might work. “At 6 am, the domestic robot activates and attaches an arm to its frame. It gently taps the user on the shoulder to wake her up. Upon waking, the robot reminds her to take her vitamin supplements. At 7 am, the robot checks the status of the indoor plants. Noticing that the leaves have started to wilt, it attaches an arm and several carts to itself, then transports the plants to the deck to be placed in the sunlight.”

Maximiliane Windl, a privacy specialist at Ludwig Maximilian University of Munich and the Munich Center for Machine Learning in Germany, whose team won an award at USENIX 2024 for their work on domestic robot security, said there are significant hurdles to overcome if such robots are to have their application ranges extended. “Users are particularly uneasy about domestic robots equipped with microphones and cameras. They worry about the robot appearing unexpectedly during private moments, or accessing sensitive areas such as bedrooms or private drawers,” she said.

“Expanding a robot’s functionality to include security or healthcare features could create new attack surfaces, such as unauthorized access to video streams or medical data,” said Windl. “Healthcare monitoring, in particular, might involve highly sensitive information, requiring designers and developers to be especially cautious about data collection, processing, and sharing.”

“If designers want to expand a domestic robot’s roles, they must prioritize transparency and user control by clearly communicating the robot’s capabilities so users know what it can see or hear, and provide obvious physical controls, such as a privacy switch that allows users to disable or pause certain functions at any time,” she concluded.

Paul Marks is a technology, aviation, and spaceflight journalist, writer, and editor based in London, U.K.
