
Communications of the ACM

ACM TechNews

AI-Generated Deepfake Voices Can Fool Humans, Smart Assistants


A woman using a smart home assistant.

The deepfake voices weren't only successful against computer systems. In a separate experiment, the researchers asked 200 people to identify whether voices were fake or real; the fakes fooled them about half the time.

Credit: RossHelen/Shutterstock

Freely available voice-mimicking software can deceive people and voice-activated tools like smart assistants, according to University of Chicago scientists.

The researchers used two deepfake voice synthesis systems available on GitHub to mimic voices: the AutoVC tool requires up to five minutes of speech to generate a passable mimic, while the SV2TTS system needs just five seconds.

The researchers employed the software to unlock speaker recognition security systems used by Microsoft Azure, WeChat, and Amazon's Alexa system.

AutoVC fooled Azure about 15% of the time, compared with SV2TTS's 30%; moreover, for 62.5% of the people the team tested, SV2TTS could spoof at least one of the 10 common trigger phrases Azure requires for user authentication.

SV2TTS further fooled both WeChat and Alexa about 63% of the time.

The deepfakes more successfully spoofed the voices of women and of non-native English speakers, and fooled human listeners about half the time.

From New Scientist

 

Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA


 
