
Communications of the ACM

ACM TechNews

Can an Algorithm Detect a Speaker's Mood?


Volunteers in the study wore a Samsung Simband equipped with sensors that capture a variety of physiological data.


Credit: Jason Dorfman/MIT CSAIL

Researchers at the Massachusetts Institute of Technology (MIT) have developed an algorithm to determine a speaker's mood in real time by registering not only their speech, but also their vital signs.

MIT's Mohammad Mahdi Ghassemi and Tuka Alhanai fed the algorithm snippets of dialogue tagged as positive or negative so it could deduce telltale patterns that it could later apply in its own labeling. The algorithm was also trained on word definitions.
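The article does not describe the model itself, but the training setup it mentions — tagged snippets from which the algorithm learns patterns it later applies to new input — is the standard supervised-classification recipe. A minimal, purely illustrative sketch (the feature vectors, labels, and nearest-centroid rule below are all invented for demonstration, not MIT's actual method):

```python
import math

# Hypothetical labeled training data: each snippet is reduced to a small
# feature vector (e.g., average pitch, speech energy, heart rate) and
# tagged as "positive" or "negative". All values are invented.
training = [
    ([0.8, 0.7, 0.6], "positive"),
    ([0.9, 0.6, 0.7], "positive"),
    ([0.2, 0.3, 0.4], "negative"),
    ([0.1, 0.2, 0.3], "negative"),
]

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(examples):
    """Build one centroid per label from the tagged snippets."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vs) for label, vs in by_label.items()}

def classify(model, features):
    """Label a new snippet by its nearest class centroid."""
    return min(model, key=lambda label: math.dist(features, model[label]))

model = train(training)
print(classify(model, [0.85, 0.65, 0.7]))  # → positive
```

A real system would use far richer features and a stronger learner, but the pattern is the same: labeled examples in, a decision rule out.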

The researchers tested its abilities by having 10 volunteers each tell a happy or sad story while Ghassemi and Alhanai asked questions to approximate a dialogue. A wristband computer worn by the participants collected physiological and movement data, which were transmitted to the algorithm.

The algorithm inferred whether a conversation was happy or sad with 83% accuracy, and its evaluations, delivered every five seconds, were 14 percentage points more accurate than chance.

From The Wall Street Journal

Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA

