Researchers from the Georgia Institute of Technology's (Georgia Tech) School of Interactive Computing and the Institute for Robotics and Intelligent Machines have developed a new method that, in a single day, trains computers to recognize and comprehend a wide range of human activities.
More than 40,000 photographs were captured every 30 to 60 seconds over six months by a wearable camera and fed to the computer, which learned to categorize the images across 19 activity classes. The participant wearing the camera reviewed and annotated the photos at the end of each day to ensure correct categorization. The system then predicted the wearer's activity with 83-percent accuracy.
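The collect-annotate-train loop described above can be sketched in miniature. This is not the researchers' actual deep-learning system; it is a toy stand-in that uses a simple softmax classifier over hypothetical precomputed image features (the feature dimension, synthetic data, and all function names are assumptions) to illustrate how daily-annotated photos feed a supervised 19-class activity classifier.

```python
import numpy as np

# A minimal sketch, NOT the study's actual model: the researchers used deep
# learning on egocentric photos; here a multinomial logistic regression over
# hypothetical 32-dimensional image features stands in for it.

rng = np.random.default_rng(0)
NUM_CLASSES = 19   # activity classes, as in the study
FEATURE_DIM = 32   # hypothetical feature size (an assumption)

def make_day(n_photos):
    """Simulate one day's photos as feature vectors plus the wearer's labels."""
    labels = rng.integers(0, NUM_CLASSES, size=n_photos)
    feats = rng.normal(size=(n_photos, FEATURE_DIM))
    # Boost one class-dependent feature so the toy problem is learnable.
    feats[np.arange(n_photos), labels % FEATURE_DIM] += 3.0
    return feats, labels

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(feats, labels, epochs=200, lr=0.5):
    """Fit a softmax classifier by gradient descent on annotated photos."""
    w = np.zeros((FEATURE_DIM, NUM_CLASSES))
    onehot = np.eye(NUM_CLASSES)[labels]
    for _ in range(epochs):
        probs = softmax(feats @ w)
        w -= lr * feats.T @ (probs - onehot) / len(labels)
    return w

train_x, train_y = make_day(2000)   # annotated photos from earlier days
test_x, test_y = make_day(500)      # a held-out day
w = train(train_x, train_y)
accuracy = (softmax(test_x @ w).argmax(axis=1) == test_y).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

On this synthetic data the classifier scores well above chance (which would be about 1/19); the real system's 83-percent figure came from deep models on actual first-person imagery.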
The research team believes it has accumulated the largest annotated dataset of first-person images, and that the dataset demonstrates deep learning can understand human behavior and the habits of a specific individual.
"This work is moving toward full activity intelligence," says Georgia Tech researcher Edison Thomaz. "At a technical level, we are showing that it's becoming possible for computer-vision techniques alone to be used for this."
Thomaz says the research could potentially contribute to the development of improved personal assistant applications, as well as help researchers explain connections between health and behavior.
From Georgia Tech News Center
Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA