Researchers at the University of California, Berkeley have trained a neural network to reconstruct human acrobatics in YouTube video clips and manipulate a simulated humanoid to ape those movements.
The research has implications for training robotic systems to mimic human behavior.
Motion reconstruction builds on earlier Google research in which a convolutional neural network analyzes a single image of a person and infers limb positions, even when body parts are partly obscured.
Rotating each frame's image improved the network's handling of atypical body poses, allowing the system to assemble a "trajectory" of limb positions from one frame to the next.
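The article does not detail how per-frame estimates become a trajectory; a minimal sketch, assuming the pipeline yields noisy per-frame 3D joint positions, is to smooth them over time with a centered moving average (the function name, array layout, and smoothing method here are illustrative assumptions, not the paper's actual method):

```python
import numpy as np

def assemble_trajectory(per_frame_joints, window=5):
    """Smooth noisy per-frame pose estimates into a joint trajectory.

    per_frame_joints: array of shape (T, J, 3) -- T frames, J joints,
    3D positions (a hypothetical layout for illustration).
    Returns an array of the same shape, averaged over a centered
    temporal window to suppress frame-to-frame jitter.
    """
    joints = np.asarray(per_frame_joints, dtype=float)
    frames = joints.shape[0]
    half = window // 2
    smoothed = np.empty_like(joints)
    for t in range(frames):
        lo, hi = max(0, t - half), min(frames, t + half + 1)
        smoothed[t] = joints[lo:hi].mean(axis=0)
    return smoothed
```

A real pipeline would also handle dropped detections and enforce limb-length consistency; this sketch only shows the temporal-aggregation idea.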
Reinforcement learning then refined the humanoid avatar's imitation, rewarding it as its limb movements more closely approximated the filmed human motion.
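A reward of this shape can be sketched as an exponential of the negative pose error, so that a perfect match scores 1 and the reward decays as the simulated limbs drift from the reference. This is a hedged illustration only; the researchers' actual reward is more elaborate (the function name and `scale` parameter are assumptions):

```python
import numpy as np

def imitation_reward(sim_joints, ref_joints, scale=2.0):
    """Reward the simulated humanoid for matching the video-derived pose.

    sim_joints, ref_joints: arrays of shape (J, 3) -- joint positions of
    the simulated character and the reference motion for one timestep.
    Returns a value in (0, 1]: 1.0 for a perfect match, decaying
    exponentially with the total squared joint-position error.
    """
    sim = np.asarray(sim_joints, dtype=float)
    ref = np.asarray(ref_joints, dtype=float)
    err = np.sum((sim - ref) ** 2)
    return float(np.exp(-scale * err))
```

At each training step this scalar would be fed to a standard RL algorithm as the reward signal, so the policy learns to drive the error toward zero.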
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA