Over the last few years, the quest to build fully autonomous vehicles has shifted into high gear. Yet, despite huge advances in both the sensors and artificial intelligence (AI) required to operate these cars, one thing has so far proved elusive: developing algorithms that can accurately and consistently identify objects, movements, and road conditions. As Mathew Monfort, a postdoctoral associate and researcher at the Massachusetts Institute of Technology (MIT), puts it: “An autonomous vehicle must actually function in the real world. However, it’s extremely difficult and expensive to drive actual cars around to collect all the data necessary to make the technology completely reliable and safe.”
All of this is leading researchers down a different path: the use of game simulations and machine learning to build better algorithms and smarter vehicles. By compressing months or years of driving into minutes or even seconds, it is possible to learn how to better react to the unknown, the unexpected, and the unforeseen, whether it is a stop sign defaced by graffiti, a worn or missing lane marking, or snow blanketing the road and obscuring everything.
“A human could analyze a situation and adapt quickly. But an autonomous vehicle that doesn’t detect something correctly could produce a result ranging from annoying to catastrophic,” explains Julian Togelius, associate professor of computer science and engineering at New York University (NYU).
Figure. A scene from Rockstar Games’ Grand Theft Auto V, which is helping to revolutionize how researchers develop autonomous vehicles.
The use of computer games and simulations—including the likes of the open-source TORCS (The Open Racing Car Simulator) and the commercially available Grand Theft Auto V—is already revolutionizing the way researchers develop autonomous vehicles, as well as robots, drones, and other machine systems. Not only does this make it possible to better understand machine behavior—including how sensors view and read the surrounding environment—it also offers insights into human behavior in different situations. “These games offer extremely rich environments that allow you to drive through a broad range of road conditions that would be difficult to duplicate in the physical world,” says Artur Filipowicz, a recent graduate in operations research and financial engineering at Princeton University who has used machine learning to advance research on autonomous vehicles.
The Road Less Traveled
Although the idea of using video game simulations and AI to boost real-world performance for autonomous vehicles has been around for more than a decade, the concept has zoomed forward over the last few years. The rise of graphics processing units (GPUs) and the advent of convolutional neural networks (CNNs) suddenly made it possible to explore scenes and scenarios in deeper and broader ways. By tossing vast numbers of images at an artificial neural network—stop signs, traffic signals, road markings, barriers, trees, dogs, pedestrians, other vehicles, and much more—and comparing its actions and reactions, such as steering, braking, and acceleration, against a human driver’s, it’s possible to cycle rapidly through an array of events and scenarios en route to more refined algorithms and better-performing self-driving cars.
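In practice, the training data behind this approach is nothing more exotic than pairs of rendered frames and the controls a human applied at the same instant. The sketch below shows, in Python, how such pairs might be assembled from a running simulator; the sim object and its screenshot, control, and step hooks are hypothetical placeholders for whatever interface a particular game exposes, not the API of any system mentioned here.

```python
import numpy as np

def collect_pairs(sim, n_frames):
    """Pair each rendered frame with the controls the human driver
    applied at that instant; `sim` and its hooks are hypothetical."""
    frames, controls = [], []
    for _ in range(n_frames):
        frames.append(sim.screenshot())                   # HxWx3 RGB array
        controls.append((sim.steering, sim.throttle, sim.brake))
        sim.step()                                        # advance one tick
    return np.stack(frames), np.array(controls)
```

Trained on enough of these pairs, a network learns to imitate the human’s responses; the game engine, rather than a fleet of instrumented cars, supplies the mileage.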
Of course, the allure of this approach is that in the virtual world, cars never run out of fuel or need new tires, and they’re able to log millions of miles in a single day. There are no fatigued drivers and no risk of real-world collisions or injuries. However, the benefits don’t stop there.
“One can say that the real world is richer in terms of character than the virtual world, but in the virtual world you can create specific situations and scenarios and study them faster and better,” says Alain Kornhauser, professor of operations research and financial engineering, and director of the Transportation Program, at Princeton University. “The big advantage is that you can focus in on specific ‘corner cases,’ the really difficult situations that represent the greatest risk and lead to the greatest number of crashes.”
About three years ago, Chenyi Chen, then a Ph.D. candidate at Princeton and now a deep learning researcher for autonomous driving at NVIDIA, began exploring the concept in earnest. He turned to the open-source racing game TORCS to supply low-resolution visual data for a deep learning network. Working with Kornhauser, he devised a method for grabbing still images from the game and feeding them into a CNN. Chen then studied how to train the network for highway driving, and how to judge the distance of other vehicles, using 12 hours of human driving within the video game. “We realized we could create any situation we wanted and recreate any trajectory we desired. The game images provided a way to study driving in difficult situations, including rain, sleet, hail, and snow,” Kornhauser explains.
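In rough terms, this recasts perception as a regression problem: because the simulator knows the exact distance to the car ahead and the car’s offset within the lane, every captured frame arrives with perfect labels. The sketch below illustrates the idea with a deliberately tiny network and just two predicted quantities; it is an illustrative assumption, not the published DeepDriving code, which predicts a larger set of “affordance” indicators.

```python
import torch
import torch.nn as nn

# Illustrative affordance regression: from one game frame, predict
# (a) distance to the car ahead and (b) offset from the lane center.
model = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 12 * 17, 64), nn.ReLU(),  # 12x17 map for a 210x280 input
    nn.Linear(64, 2),                        # [distance_m, lane_offset_m]
)

frame = torch.randn(1, 3, 210, 280)   # stand-in for a captured game frame
truth = torch.tensor([[31.5, -0.4]])  # exact labels read from the simulator
loss = nn.functional.mse_loss(model(frame), truth)
loss.backward()                       # gradients for one training step
```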
All of this attracted attention in the AI and autonomous vehicle communities. For example, Filipowicz decided to study stop signs to understand how drivers recognize and react to them. “Distance is difficult to measure in the real world but easy to measure in the virtual world, even under adverse weather conditions,” he explains. Filipowicz tapped Grand Theft Auto V for its rich and highly varied environment; it includes more than 250 models of vehicles and thousands of pedestrians and animals, along with realistic settings and weather conditions. Future simulations might focus on harder detection cases, including signs that are covered with dirt, faded, partially obscured by fog, tree branches, or other objects, completely obscured by paint or graffiti, or broken off entirely. “The performance on both real and synthetic data, while not perfect, is promising,” he says.
Researchers at Darmstadt University of Technology in Germany and Intel Labs have also turned to Grand Theft Auto V to develop and fine-tune algorithms that could be used by auto manufacturers, while NIO, a China-based electric vehicle startup, is using simulations to design and build a fully autonomous vehicle that it hopes to bring to market in 2020. In recent months, Waymo, the autonomous vehicle arm of Alphabet (Google’s parent company), has begun using simulators to study every situation and variation engineers can imagine, including multiple vehicles changing lanes at the same time in close proximity, and the car recognizing road debris that could damage a vehicle or pose a crash hazard.
Yet, while the use of video games and AI has already caught the eye of major automotive companies, putting the data to full use is not without challenges. Transforming pixels and RGB values into useful data for a vehicle or other machine is a steep challenge, Kornhauser says.
In addition, NYU’s Togelius, who has experimented with computer games and AI to better understand player performance, as well as events and systems within games, says the virtual and physical worlds do not always mesh neatly; not all game scenarios are faithful to physical reality. In some cases, “it’s possible to learn from a simulation, take the knowledge into the real world, and then find out that things don’t comply with the simulation. So, it is necessary to take a very iterative approach to AI and video game simulations and carefully validate results.”
AI in Overdrive
The use of AI and images to drive real-world gains shows no signs of subsiding. For instance, Monfort and a group of researchers at NVIDIA have used a CNN to map raw pixels from time-stamped video captured by a single front-facing camera directly to steering commands. With minimal training data from humans, the neural net learned to drive in traffic on local roads and highways, with or without lane markings or guardrails, using recorded human steering angles as the training signal. What’s more, the project accomplished the task across a wide spectrum of road and weather conditions in less than 100 hours. It also learned to operate in areas with unclear visual guidance, such as in parking lots and on unpaved roads. Monfort describes the method, which led to a test on an actual vehicle, as “surprisingly powerful.”
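A minimal sketch of such an end-to-end network appears below, loosely following the five-convolutional-layer, fully connected architecture described in the Bojarski et al. paper listed under Further Reading; the activations, the RGB input, and the training snippet are simplifying assumptions, and the frames and steering values are stand-ins.

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Sketch loosely after Bojarski et al.: convolutional feature
    extraction feeding fully connected layers, one steering output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
            nn.Flatten(),                    # 64x1x18 for a 66x200 input
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),                # predicted steering command
        )

    def forward(self, frames):               # frames: (N, 3, 66, 200)
        return self.net(frames)

# The human's recorded steering serves as the training signal.
model = SteeringNet()
frames = torch.randn(4, 3, 66, 200)          # stand-in camera frames
human_steering = torch.randn(4, 1)
loss = nn.functional.mse_loss(model(frames), human_steering)
loss.backward()
```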
Although Monfort’s research involved actual video and real-world data rather than game images, it demonstrated the promise of deep learning applied to both game graphics and actual video—and how closely the two are related. For one thing, deep learning applied to either synthetic or real images could eliminate the need for a near-infinite number of “if…then…else” statements, which are impractical to code when dealing with the randomness of the road. For another, this type of data could be combined with game data using a generative adversarial network (GAN), which relies on two neural networks “competing” with one another to boost machine learning. This approach could bridge the gap between data collected from the synthetic world and data from the physical world, Filipowicz says. “This may be the next phase of research. You could transfer the learning from one network to the other using either simulated or real-world images,” he explains.
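As a rough illustration of the adversarial setup, the sketch below pairs a generator that nudges synthetic game frames toward the look of real camera footage with a discriminator that tries to tell the two apart. The tiny networks, image sizes, and single training step are all illustrative assumptions, not a published sim-to-real system.

```python
import torch
import torch.nn as nn

# Generator: maps a synthetic game frame toward a realistic-looking one.
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
# Discriminator: produces one "how real does this look?" logit per frame.
D = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                  nn.Flatten(), nn.Linear(16 * 31 * 31, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

game_frames = torch.rand(8, 3, 64, 64)   # stand-in synthetic frames
real_frames = torch.rand(8, 3, 64, 64)   # stand-in real camera frames
ones, zeros = torch.ones(8, 1), torch.zeros(8, 1)

# Discriminator step: real frames labeled 1, translated game frames 0.
fake = G(game_frames)
d_loss = bce(D(real_frames), ones) + bce(D(fake.detach()), zeros)
opt_D.zero_grad(); d_loss.backward(); opt_D.step()

# Generator step: try to make the discriminator accept its output as real.
g_loss = bce(D(fake), ones)
opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```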
In addition, researchers are increasingly attracted to video games and simulations to understand how to build better robots, drones, and agents. By applying the same type of deep learning techniques, they can discover things that would have previously gone undetected. For example, in 2015, Microsoft embarked on a project called Malmo, which created an AI-based development platform revolving around the popular world-building game Minecraft. The goal of the project was to experiment with and study complex virtual environments and apply the lessons learned from that study to the physical world. Katja Hofmann, chief researcher for the project, has stated that “endless possibilities for experimentation” exist. Others, such as Google’s DeepMind project, are also examining games and how they can apply data to the physical world.
Togelius says the goal is to build smarter systems and agents that can continue to tap data and adapt on the fly. Within this framework, humans could learn from agents, agents could learn from humans, and agents could learn from other agents. He says a “competitive, co-evolutionary process” could result in neural nets and learning systems that are better adapted to the increasingly blurry line between silicon and carbon-based intelligence. Within games, they would “handle more of the unexpected features of the physical world,” but also fuel real-world gains by finding relationships and correlations that humans probably would not or could not notice.
Not surprisingly, there are limitations to how video games can be used to train robotic and autonomous systems. Software such as Grand Theft Auto V typically costs hundreds of millions of dollars to develop, yet is available commercially at a relatively low price; essentially, the game manufacturer is footing a research and development bill that would be unachievable and unaffordable in a lab. Because such investments are made only for games with mass-market appeal, the use of games for machine learning will likely be limited to the fields those games happen to depict, such as autonomous vehicle and robotics research. It’s difficult to envision a game for training surgical robots, for example.
Nevertheless, the idea of using AI to extract data from games and apply it to the real world is gaining momentum. Not only do these simulations eliminate the cost, time, and human resources involved with building and operating complex machines—autonomous vehicles, robots, drones, software agents, and more—they make it possible to cycle through millions of possibilities and find the subtle anomalies and correlations that determine whether an autonomous vehicle maneuvers correctly around a dog in the road and stops at a traffic light that is not working, or simply crashes.
Says Kornhauser, “Humans are very good at recognizing situations. In order to build autonomous vehicles and other devices that work correctly, we must understand and translate all the factors and variables to a machine. It’s a challenge that AI can solve.”
Further Reading

Bainbridge, L.
Ironies of automation. In New Technology and Human Error, J. Rasmussen, K. Duncan, and J. Leplat (Eds.), Wiley, Chichester, U.K., 1987, 271–283.

Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L.D., Monfort, M., Muller, U., Zhang, J., Zhang, X., Zhao, J., and Zieba, K.
End to End Learning for Self-Driving Cars, April 25, 2016, https://arxiv.org/pdf/1604.07316v1.pdf

Chen, C., Seff, A., Kornhauser, A., and Xiao, J.
DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving, Proceedings of the 15th IEEE International Conference on Computer Vision (ICCV 2015), May 2015, http://deepdriving.cs.princeton.edu

Loiacono, D., Lanzi, P.L., Togelius, J., Onieva, E., Pelta, D.A., Butz, M.V., Lönneker, T.D., Cardamone, L., Perez, D., Sáez, Y., Preuss, M., and Quadflieg, J.
The 2009 Simulated Car Racing Championship, IEEE Transactions on Computational Intelligence and AI in Games, Vol. 2, No. 2, June 2010, http://julian.togelius.com/Loiacono2010The.pdf

Filipowicz, A.
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensors in Self-Driving Cars, June 2017, http://orfe.princeton.edu/~alaink/Theses/SeniorTheses’17/Artur_Filipowicz_VirtualEnvironmentsAsDrivingSchools.pdf