
Communications of the ACM

ACM TechNews

Machine-Vision Algorithm Learns to Transform Hand-Drawn Sketches Into Photorealistic Images


Line drawings used to develop photorealistic portraits.

Researchers at Radboud University in the Netherlands have taught a neural network to turn hand-drawn sketches of faces into photorealistic portraits.

Credit: Technology Review

Researchers at Radboud University in the Netherlands have trained a deep convolutional neural network to convert hand-drawn sketches of faces into photorealistic portraits.

They trained it on a dataset of 200,000 face images gathered from the Internet, applying standard image-processing algorithms to render each photo as a line drawing, a grayscale sketch, and a color sketch.
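The article does not say which image-processing algorithms the team used to turn photos into sketch-style training inputs. As an illustrative stand-in, the classic "color dodge" trick — dividing the grayscale image by a blurred copy of its inverse — produces a pencil-sketch-like rendering; flat regions wash out to white while edges stay dark. The function name, blur radius, and box-blur approximation below are all hypothetical choices, not the paper's pipeline:

```python
import numpy as np

def to_line_sketch(rgb, blur_radius=8):
    """Render an RGB image as a pencil-sketch-style grayscale image
    via color dodge: gray * 255 / (255 - blur(255 - gray)).
    (Illustrative only; not the algorithm from the paper.)"""
    gray = rgb.mean(axis=2)            # simple luminance approximation
    inverted = 255.0 - gray
    # separable box blur as a cheap stand-in for a Gaussian blur
    k = 2 * blur_radius + 1
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), 0, inverted)
    blurred = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), 1, blurred)
    # dodge: flat areas divide to ~255 (white); edges divide to dark values
    sketch = gray * 255.0 / np.maximum(255.0 - blurred, 1e-6)
    return np.clip(sketch, 0, 255).astype(np.uint8)

# tiny synthetic "photo": a dark square on a light background
img = np.full((64, 64, 3), 220.0)
img[20:44, 20:44] = 40.0
sketch = to_line_sketch(img)   # white interior/background, dark outline
```

Applying a transform like this to every photo yields aligned (sketch, photo) pairs, which is the kind of supervised data a sketch-to-photo network needs.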

"We found that the line model performed impressively in terms of matching the hair and skin color of the individuals even when the line sketches did not contain any color information," the team reports.

They suggest the model not only can leverage luminance differences in the sketches to deduce coloring, but also can learn color properties frequently associated with high-level face features of different ethnicities.

A second test evaluated the network on hand-drawn sketches produced in a style it had not been trained on, and it still generated photorealistic portraits, according to the researchers. The network struggled when pencil strokes were not accompanied by shading, but "this can be explained by the lack of such features in the training data of the line sketch model," the researchers note.

The final test had the net generate photorealistic images of renowned artists based on sketched self-portraits.

From Technology Review
View Full Article


Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA

