
Google Unveils ‘Artificial Creativity’

Art created by Google DeepMind, a predecessor to the Magenta artificial intelligence project introduced by Google on June 1.

Magenta, which Google officially unveiled today (June 1), is the search giant’s artificial intelligence project that uses neural-network-based deep learning to produce "genuine" works of art.

Google Brain scientist Douglas Eck, who has been working on Magenta, has been giving hints about its progress for months. In fact, at Moogfest 2016 in May, Eck gave a sneak preview of Magenta’s ability to compose original music. The same engine will also manipulate images and even text in pursuit of artistic pieces, he says.

"Google is one of a growing list of organizations pushing artificial creativity forward. It’s going to be a game changer in the near term for game makers, video production, and music," said Dave Sullivan, chief executive officer of Ersatz Labs Inc., which offers deep-learning software-as-a service in the cloud or as an in-house application. Ersatz mainly sells brain-like neural network algorithms that find hard-to-identify trends in user data, he says, but it is also keeping an eye on other fields to enter, such as "artificial creativity."

Magenta is built on Google’s TensorFlow open-source machine learning platform, which lets developers experiment with different deep-learning models and inspect training runs through a graphical user interface (GUI), TensorBoard, before deploying them as applications.
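For readers unfamiliar with the platform, the following is a minimal sketch of that workflow in Python, assuming TensorFlow’s present-day Keras interface and entirely synthetic data (none of this is Magenta’s own code): a small model is defined, trained, and logged so the run can be inspected in TensorBoard.

    import tensorflow as tf

    # Arbitrary synthetic data for illustration: 1,000 examples,
    # 20 features each, labeled with one of two classes.
    x = tf.random.normal((1000, 20))
    y = tf.cast(tf.reduce_sum(x, axis=1) > 0, tf.int32)

    # A small deep-learning model defined with TensorFlow's Keras interface.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # The run is logged so it can be inspected in TensorBoard, TensorFlow's
    # browser-based visualization GUI (launch with: tensorboard --logdir logs).
    model.fit(x, y, epochs=5,
              callbacks=[tf.keras.callbacks.TensorBoard(log_dir="logs")])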

Google says its TensorFlow implementation of Magenta can be harnessed by others to potentially start a revolution in machine art, created by programmers and engineers rather than fine-arts practitioners.

"I’m primarily looking at how to use so-called ‘generative’ machine learning models to create engaging media," said Eck on his Google Blog. "The question Magenta asks is, ‘can machines make music and art? If so, how? If not, why not?’"

Magenta also makes use of other Google projects, such as Inceptionism, which produces still-life "paintings" using machine learning. Creating art is considered an aspect of being human that cannot be mimicked well by robots running computer software, but Google aims to challenge that notion with Magenta.

"Additionally, I’m working on how to bring other aspects of the creative process into play. For example, art and music is not just about generating new pieces. It’s also about drawing one’s attention, being surprising, telling an interesting story, knowing what’s interesting in a scene, and so on," Eck explains.

History

Using computers to generate art is nothing new. There have been innumerable tools that apply specific algorithms to create art (called algorithmic art), such as Ultra Fractal, Scribble, and Fragmentarium, which essentially cut, mutate, and paste existing images into new works of art, as displayed at sites such as The Algorists, Algorithmic Worlds, and The compArt database Digital Art (daDA).

However, Google’s use of deep learning neural networks to create art could change the way such art is perceived.

"If humans can be creative, we must be able to build machines that are creative in the same way," said Mark Riedl, a professor at the Georgia Institute of Technology who studies creative entertainment such as evolving scenario games in the lab he founded, the Entertainment Intelligence Lab. "I believe that computers are capable of creating ‘genuine’ art today, but they need to be more than merely algorithmic. If they are to express creativity that is like human creativity, they will have to have some of the same resources that humans have."

One of those resources is neural networks. A neural network takes many inputs, each measuring a different feature (such as the color, size, and shape of each element of a scene), then passes them through layers of artificial neurons; a network is called "deep" when it has many such layers. The connections between layers, analogous to synapses, are adjusted as the network learns. It usually takes thousands of iterations to learn an image, or a series of similar images if the network is to generalize recognition of a particular category, such as "birds."
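As a concrete, if deliberately tiny, illustration of that process, the sketch below trains a one-hidden-layer network on a toy problem using Python and NumPy; the task, layer size, learning rate, and iteration count are arbitrary choices for this example, not anything drawn from Magenta.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy inputs: each row is a set of feature measurements (here, just two).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # target output per input

    # One hidden layer of neurons; the weight matrices play the role of synapses.
    W1 = rng.normal(size=(2, 8))
    b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 1))
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(10000):              # many iterations, as noted above
        h = sigmoid(X @ W1 + b1)           # hidden-layer activations
        out = sigmoid(h @ W2 + b2)         # network output

        # Backpropagation: nudge every weight to reduce the prediction error.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(3))   # outputs should approach the targets 0, 1, 1, 0

Scaled up to many layers and millions of weights, this same loop of forward passes and weight adjustments is what "deep" learning refers to.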

What Google has done is to use deep-learning neural networks in novel ways, such as training them on many images from the same painter, then giving them white noise as an input to create a new piece of art in the style of that artist. Google’s DeepDream tool utilized a convolutional neural network (inspired by biological processes) to identify and enhance patterns in images, which it used to generate dream-like images with elements of the original scattered throughout the scene.
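A rough sketch of that "dreaming" idea, written against TensorFlow’s current API rather than DeepDream’s actual code, looks like the following; the choice of pre-trained network (InceptionV3), the layer name ("mixed3"), the step size, and the iteration count are all assumptions made for illustration.

    import tensorflow as tf

    # A convolutional network pre-trained on photographs (illustrative choice;
    # DeepDream itself used a GoogLeNet/Inception model).
    base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
    feature_model = tf.keras.Model(inputs=base.input,
                                   outputs=base.get_layer("mixed3").output)

    # Start from noise (or a photograph) scaled to the network's expected range.
    img = tf.Variable(tf.random.uniform((1, 299, 299, 3), minval=-1.0, maxval=1.0))

    step_size = 0.01
    for step in range(100):
        with tf.GradientTape() as tape:
            # The "dreaming" objective: strengthen the chosen layer's activations.
            activations = feature_model(img)
            loss = tf.reduce_mean(activations)
        grads = tape.gradient(loss, img)
        grads /= tf.math.reduce_std(grads) + 1e-8   # normalize the gradient
        img.assign_add(step_size * grads)           # gradient *ascent* on the image
        img.assign(tf.clip_by_value(img, -1.0, 1.0))

    # img now holds an image in which the patterns that layer responds to
    # have been amplified, producing the characteristic dream-like look.

Because the loop maximizes the chosen layer’s activations rather than minimizing a training loss, the image itself is the thing being "trained," which is why the patterns the network has already learned end up imprinted on the scene.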

Magenta will fold all these techniques together, allowing users to mix and match them along with others that can be added through TensorFlow. However, whether its creations will be considered "genuine art," or just a novelty like "algorithmic art," will have to be decided by the public, by art critics, and by the Lovelace 2.0 test, a version of the Turing Test that measures whether computer creativity can be distinguished from human creativity.

"At the Association for the Advancement of Artificial Intelligence (AAAI) conference two years ago, I put a challenge to the computer community, whether anyone could create algorithms that applied to any type of creativity," recalled Riedl. "I called it the Lovelace 2.0 test, because Ada Lovelace worked with Charles Babbage 140 years before Turing. Magenta is very exciting to me because it could be the first AI to which the Lovelace 2.0 Test could be applied."

Magenta’s Roots

Eck has been working on similar projects at Google since leaving his post as a professor of computer science at the University of Montreal. He helped found the pioneering International Laboratory for Brain, Music and Sound Research (BRAMS) at the University of Montreal and served on the International Advisory Council of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) at McGill University's Schulich School of Music; those experiences, he says, helped him grasp how art is created by humans, and how machines could be coaxed into emulating that process.

"In the end, I learned a lot about the complexity and beauty of human music performance, and how performance relates to and extends composition," said Eck in his blog.

The open-source TensorFlow platform is already being used by startups to enable all sorts of art projects. In addition, a Magenta repository (currently empty) has been created on GitHub, the open-source project repository, into which programmers and engineers can upload the TensorFlow code they use to create visual, musical, or textual "masterpieces" with Magenta.

R. Colin Johnson is a Kyoto Prize Fellow who has worked as a technology journalist for two decades.
