Communications of the ACM

ACM TechNews

U of T Experts Use Deep Learning AI for Predictive Animation


A list of visemes with groups of phonemes and corresponding lower face rig outputs produced by the software.

Researchers at the University of Toronto and the University of Massachusetts Amherst used deep learning algorithms to develop a network for predicting visemes (the appearance of mouth shapes) that map to vocal cord sounds, for animation and gaming applications.

Credit: Yang Zhou et al.

Researchers at the University of Toronto (U of T) in Canada and the University of Massachusetts Amherst have used deep learning algorithms to improve software for the animation and gaming industry.

The team tapped insights from the psycholinguistics literature to produce VisemeNet, a network for predicting visemes (the appearance of mouth shapes), which map to vocal cord sounds.

They blended phoneme and facial-movement predictions, using an actor's voice as audio input to generate speech motion curves that are fully editable in animation software.
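The phoneme-to-viseme relationship underlying this approach is many-to-one: several phonemes that sound different can share the same mouth shape. A minimal sketch of such a mapping follows; the groupings and labels here are illustrative simplifications, not VisemeNet's actual viseme classes or its learned model.

```python
# Illustrative many-to-one phoneme-to-viseme mapping.
# These groupings are a common simplification, not VisemeNet's classes.
PHONEME_TO_VISEME = {
    # bilabials: lips pressed together
    "p": "BILABIAL", "b": "BILABIAL", "m": "BILABIAL",
    # labiodentals: lower lip against upper teeth
    "f": "LABIODENTAL", "v": "LABIODENTAL",
    # rounded vowels
    "ow": "ROUNDED", "uw": "ROUNDED",
    # open vowels
    "aa": "OPEN", "ae": "OPEN",
}

def phonemes_to_visemes(phonemes):
    """Map a phoneme sequence to viseme labels, merging consecutive
    repeats so identical adjacent mouth shapes become one target."""
    visemes = []
    for p in phonemes:
        v = PHONEME_TO_VISEME.get(p, "NEUTRAL")  # default for unmapped phonemes
        if not visemes or visemes[-1] != v:
            visemes.append(v)
    return visemes

print(phonemes_to_visemes(["m", "aa", "p"]))  # ['BILABIAL', 'OPEN', 'BILABIAL']
```

In practice a system like VisemeNet predicts such mouth-shape targets (plus their timing and intensity) directly from the audio signal rather than from a fixed lookup table, which is what makes the resulting motion curves expressive and editable.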

VisemeNet was developed as a component of a jaw and lip integration model, enabling animators to create realistic and expressive speech animation for computer-generated characters.

From U of T Engineering News

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


 
