
Communications of the ACM

ACM TechNews

Bringing Human-Like Reasoning to Driverless Car Navigation


The MIT system enables driverless cars to check a simple map and use visual data to follow routes in new, complex environments.


Credit: Chelsea Turner

Massachusetts Institute of Technology (MIT) researchers have developed a system that enables autonomous vehicles to navigate complex environments using only a simple GPS-like map and video feeds from onboard cameras.

The autonomous control system first "learns" the steering patterns of human drivers on suburban streets from the combined map and video data.

During training, a convolutional neural network correlates steering-wheel rotations with road curvature, as observed through the cameras and the input map.
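The correlation the network learns can be caricatured in a few lines of plain Python: a one-dimensional least-squares fit standing in for the convolutional network, with invented demonstration data. This is only an illustrative sketch, not MIT's implementation; all names and numbers here are made up.

```python
# Toy stand-in for the trained steering model. The real system runs a
# convolutional network over camera frames plus a rendered map patch;
# here a 1-D least-squares fit illustrates the curvature -> steering
# correlation it learns from human demonstrations.

def fit_steering_model(curvatures, wheel_angles):
    """Least-squares slope and intercept for steering ~ curvature."""
    n = len(curvatures)
    mean_c = sum(curvatures) / n
    mean_w = sum(wheel_angles) / n
    cov = sum((c - mean_c) * (w - mean_w)
              for c, w in zip(curvatures, wheel_angles))
    var = sum((c - mean_c) ** 2 for c in curvatures)
    slope = cov / var
    return slope, mean_w - slope * mean_c

def predict_steering(model, curvature):
    """Steering-wheel angle the fitted model suggests for a curvature."""
    slope, intercept = model
    return slope * curvature + intercept

# Synthetic "human demonstration" data: steering-wheel angle (radians)
# roughly proportional to observed road curvature (1/m).
demo_curvature = [0.00, 0.01, 0.02, 0.03, 0.04]
demo_steering  = [0.0, 0.5, 1.0, 1.5, 2.0]

model = fit_steering_model(demo_curvature, demo_steering)
```

Once fitted, `predict_steering(model, c)` plays the role of the trained network: given the curvature seen ahead, it emits a steering command imitating the demonstrations.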

Once trained, the system can pilot a car along a preplanned route in a new area by emulating a human driver.

The system also continuously checks for mismatches between the map and observed road features to detect whether its position estimate, sensors, or map is faulty, and makes course corrections accordingly.
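That consistency check can be sketched as comparing the features the map predicts at the assumed pose with what the cameras actually report, and flagging a problem when they disagree too often. Again a hedged illustration, not the published method; the feature labels and threshold are invented.

```python
# Illustrative mismatch check (not MIT's implementation): compare road
# features the map predicts at the car's assumed position against the
# features the cameras observed, and flag a likely error in the pose
# estimate, sensors, or map when disagreement exceeds a threshold.

def mismatch_rate(map_features, observed_features):
    """Fraction of expected map features the cameras failed to confirm."""
    disagreements = sum(1 for m, o in zip(map_features, observed_features)
                        if m != o)
    return disagreements / len(map_features)

def needs_correction(map_features, observed_features, threshold=0.3):
    """True when map/camera disagreement warrants a course correction."""
    return mismatch_rate(map_features, observed_features) > threshold

# Example: the map expects an intersection one feature earlier than the
# cameras see it, suggesting the position estimate has drifted.
expected = ["lane_left", "lane_right", "intersection", "no_intersection"]
seen     = ["lane_left", "lane_right", "no_intersection", "intersection"]
```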

Said MIT’s Alexander Amini, “With our system, you don’t need to train on every road beforehand. You can download a new map for the car to navigate through roads it has never seen before.”

From MIT News

 

Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA


 
