
Fast Facial Analysis Software Set For Release

[Figure: Identifying the "landmarks" of a face.]
New software holds the promise of interpreting a person's facial expression and position to understand their emotional state.

The eyes may be the window to the soul, but they are joined in that role by the way a person smiles or grimaces, raises an eyebrow, or tilts his or her head. Automatically analyzing those facial gestures through computation has been difficult and time-consuming, but that may be changing.

Researchers from Carnegie Mellon University (CMU)’s Human Sensing Laboratory are set to release a new software package for facial attribute recognition, feature tracking, and expression recognition that they say is fast, computationally efficient, and robust enough for state-of-the-art results across multiple disciplines and multiple platforms, including mobile devices. Scheduled for release by the end of this month, it will be licensed free for research purposes.

The software’s developers hope it will be an easier-to-use, packaged version of the lab’s IntraFace platform. It could be particularly useful in fields such as cognitive science and psychotherapy, where subjective interpretation has been the norm and even the best computational analysis has been labor-intensive to the point of being prohibitive for scaling or portability.

"If you look at therapy supervision or training, it’s incredibly labor-intensive," said IntraFace co-developer Jeffrey Cohn, professor of psychology and psychiatry at the University of Pittsburgh  and an adjunct professor of computer science at CMU’s Robotics Institute. "We’re using the same methods we used when I was in graduate school, and everywhere else, there’s technology."

Duke University researcher Guillermo Sapiro, who is participating in an autism study on Apple’s ResearchKit platform using a combination of IntraFace and in-house technology from Duke, said the IntraFace technology his team integrated with theirs is crucial to the project.

"We are going into clinics and homes, so we need much more robust things than the type of thing where you have someone sit in a conference room and look straight into a screen, and do emotion analysis," Sapiro said.

Fast Foundational Technologies

IntraFace did not emerge in a vacuum; Cohn said it is a component of a larger corpus of work on facial recognition at the Robotics Institute that goes back 20 years. However, new approaches developed for IntraFace have greatly improved the automated capabilities of facial attribute and expression technology. The foundational algorithms are:

  • Supervised Descent Method (SDM), used for facial feature detection and tracking, which the research team said is able to overcome many drawbacks of second-order optimization schemes such as nondifferentiability. Moreover, they concluded after testing, "it is extremely fast and accurate. We have illustrated the benefits of our approach in the minimization of analytic functions, and in the problem of facial feature detection and tracking. We have shown how SDM outperforms state-of-the-art approaches in facial feature detection and tracking in challenging databases."

"The SDM is a breakthrough in the sense it’s an algorithm that allows you to do facial feature detection and tracking with only four matrix multiplications," Human Sensing Lab Director Fernando De la Torre said. More specifically, according to the research, during training, the SDM learns a sequence of descent directions that minimizes the mean of Non-linear Least Square functions sampled at different points. In testing, SDM minimizes the NLS objective using the learned descent directions without computing the Jacobian or Hessian matrices.

  • Selective Transfer Machine (STM), to personalize a generic classifier for facial action unit detection by attenuating person-specific biases (action units are components of the Facial Action Coding System (FACS), which segments the visible effects of facial muscle activation into 33 such "units," each related to one or more facial muscles). STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject in an unsupervised manner.

By attenuating the influence of inherent biases in morphology and behavior, the IntraFace developers said, "we have shown that STM can achieve results that surpass non-personalized generic classifiers and approach the performance of classifiers that have been trained for individual persons (i.e., person-dependent classifiers)."
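As a rough illustration of the re-weighting idea, one can up-weight training frames that resemble the unlabeled test subject and then fit a weighted classifier. The sketch below is a crude stand-in for STM’s distribution matching, not the paper’s formulation, which optimizes the weights and the classifier jointly; the kernel-similarity heuristic and all parameter choices are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def personalize_classifier(X_train, y_train, X_test, gamma=0.1):
    """Toy version of the STM idea: weight each labeled training frame by
    its similarity to the (unlabeled) test subject's frames, then train a
    weighted SVM so the classifier leans on the most relevant samples."""
    # Similarity of each training frame to the test subject's frames.
    K = rbf_kernel(X_train, X_test, gamma=gamma)   # (n_train, n_test)
    w = K.mean(axis=1)                             # crude density-ratio proxy
    w *= len(w) / w.sum()                          # normalize to mean weight 1

    clf = SVC(kernel="rbf", gamma=gamma)
    clf.fit(X_train, y_train, sample_weight=w)     # person-attenuated classifier
    return clf

# Usage: X_train/y_train are action-unit-labeled frames from many subjects;
# X_test holds unlabeled frames of the new subject.
#   clf = personalize_classifier(X_train, y_train, X_test)
#   au_pred = clf.predict(X_test)
```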

Real-World Possibilities

In terms of how IntraFace technology might be put to use in practical application, Cohn said, "as long as you had to do person-dependent tracking, there was a huge upfront manual cost. It could take four hours to train a tracker. Now, it works out of the box and works indoors and outdoors. It’s very robust."

Other potential uses include real-time audience sentiment feedback for speakers, gauged from facial expressions; evaluation of product placement effectiveness by analyzing shoppers’ expressions as they look at an object on a store shelf; or an interactive vehicle control system that could help steer a car out of danger if an embedded sensing device detects driver distraction from facial expression or head direction.

The Duke project in which Sapiro is participating, Autism & Beyond, is an app-enabled screening protocol for early signals of childhood autism. It is in the vanguard of technology platforms that may provide valuable data for clinicians where demand for expertise exceeds the health system’s capacity. Analytic capabilities such as IntraFace’s could be used much as hearing and vision tests are in schools: children whose responses may indicate a problem can be referred to specialists.

The Autism & Beyond app is the foundation of a six-month study that combines questionnaires and short videos to gather information about a child using Apple iOS mobile devices. The app records the child’s reactions to the videos, which are designed to elicit smiles, laughter, and surprise. For example, in a short video of bubbles floating across the screen, the algorithm looks for movements of the face that would indicate joy. IntraFace supplies the app’s facial feature detection capabilities.
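In spirit, a joy detector can be as simple as tracking how far the mouth corners spread relative to a neutral baseline, normalized for face scale. The minimal sketch below is purely illustrative: the landmark indices are hypothetical, and the actual app relies on IntraFace’s trained expression models rather than a hand-built geometric score.

```python
import numpy as np

# Hypothetical landmark indices for illustration only; a real tracker
# such as IntraFace documents its own landmark layout.
L_EYE, R_EYE = 19, 28        # eye centers (assumed)
L_MOUTH, R_MOUTH = 31, 37    # mouth corners (assumed)

def smile_score(landmarks, neutral_width):
    """Crude joy proxy: mouth-corner spread, normalized by inter-ocular
    distance so the score is invariant to face scale and distance from
    the camera, then compared against a neutral-expression baseline.

    landmarks     : (N, 2) array of tracked facial points for one frame
    neutral_width : normalized mouth width measured on a neutral frame
    """
    iod = np.linalg.norm(landmarks[L_EYE] - landmarks[R_EYE])
    width = np.linalg.norm(landmarks[L_MOUTH] - landmarks[R_MOUTH]) / iod
    return width / neutral_width - 1.0   # > 0 suggests a widening smile
```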

"Fernando and his team were very helpful in helping us incorporate it into our pipeline before they were actually releasing it," Sapiro said. "That kind of collaboration is very important. We needed help from them because it wasn’t plug-and-play yet. We were six months ahead of the curve."

De la Torre said he is eager to release the package version of IntraFace and hopes the new version’s automated support functions are equal to possible demand from researchers elsewhere.

"It’s time to develop new applications for this technology," De la Torre said. "We have a few of our own, but we believe there are lots of people who may have even better ideas once they get their hands on it."

"We are trying to automate everything," he added. "It’s not our business to provide customer support and actually, universities can’t provide that. We are going to try to make it as automatic as possible. Hopefully, we can handle it."

Cohn said more research needs to be done to help IntraFace’s capabilities become more granular; for example, he said, "you may detect ‘happy’ because I’m smiling, but maybe I’m smiling with traces of sadness, or maybe I’m smiling just a social smile. Or maybe I’m smiling because I’m embarrassed, and embarrassment displays about 50% of the time are identical to enjoyment displays, with the exception that the head pose is different.

"Certainly for screening, it’s a very exciting technology. It provides a way to screen for depression, anxiety—a range of disorders. In terms of measuring change over time, which is critically important in any kind of clinical context, it appears to be extremely beneficial."

Free demonstration apps that show how IntraFace can identify facial features and detect emotions can be downloaded from the lab site, from Apple’s App Store for iPhones, or from Google Play for Android phones.

Gregory Goth is an Oakville, CT-based writer who specializes in science and technology.
