
Communications of the ACM

ACM TechNews

Virtual Assistants Provide Disappointing Advice When Asked for First Aid, Emergency Information: Study


The responses of Cortana and Siri were of particularly low quality.


Researchers at the University of Alberta in Canada have found that virtual assistants fall short of their potential to provide users with reliable, relevant information in medical emergencies.

The team tested four commonly used devices—Alexa, Google Home, Siri, and Cortana—using 123 questions about 39 first aid topics, including heart attacks, poisoning, nosebleeds, and splinters.

The devices' responses were evaluated for accuracy of topic recognition, detection of the emergency's severity, complexity of the language used, and how closely the advice aligned with accepted first aid treatment guidelines.

Google Home performed the best, recognizing topics with 98% accuracy and providing relevant advice 56% of the time.

Alexa also recognized topics well, identifying 92% of them, but gave accepted advice only 19% of the time.

The quality of responses from Cortana and Siri was so low that the researchers could not analyze them.

From Folio

Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA


