New AI System Makes Autonomous Vehicle Navigation More Humanlike

The recently developed autonomous control system studies the patterns of real human drivers to better handle complex, unfamiliar environments. (Image courtesy of Chelsea Turner, MIT.)

A paper delivered by MIT researchers at the International Conference on Robotics and Automation last month describes a novel approach to AI for driverless vehicles. Their new system draws on the fact that human drivers tend to be quite adept at negotiating never-before-seen terrain from behind the wheel.

Decluttering the “Learning” Process

In a departure from most current self-driving tech, this fledgling model relies on visual cues and simple, easy-to-follow maps rather than exhaustive, pre-computed maps of every road in the area. Using a machine learning model called a convolutional neural network (CNN), the AI “observes” how a human driver handles a new (albeit confined) area. It can then mimic those learned responses to take a self-driving car along a brand-new route, provided the trip shares certain similarities with the trial run. As discrepancies between the simple map and the road it actually encounters emerge en route, the system takes simple corrective measures, much as the human driver would.
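To make the idea concrete, here is a minimal sketch of what such a network could look like in PyTorch: a CNN that takes a camera frame plus a rasterized patch of a coarse route map and regresses a steering command. The layer sizes, channel layout, and names are illustrative assumptions, not the researchers’ published architecture.

```python
# Illustrative sketch (not the MIT team's code): a CNN that maps a camera frame
# plus a coarse route-map patch to a single steering command.
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional encoder over the stacked camera + map channels
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 24, kernel_size=5, stride=2), nn.ReLU(),  # 3 RGB + 1 map channel
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
        )
        # Regress one steering angle from the pooled features
        self.head = nn.Sequential(
            nn.Linear(48 * 4 * 4, 100), nn.ReLU(),
            nn.Linear(100, 1),
        )

    def forward(self, camera_rgb, route_map):
        # camera_rgb: (B, 3, H, W); route_map: (B, 1, H, W) rasterized coarse map
        x = torch.cat([camera_rgb, route_map], dim=1)
        return self.head(self.encoder(x))  # (B, 1) steering angle

model = SteeringCNN()
steering = model(torch.rand(1, 3, 120, 160), torch.rand(1, 1, 120, 160))
```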

In live testing, a human drove a Toyota Prius (outfitted with cameras and a basic navigation system) through a residential area. The CNN collected data on the driver’s steering patterns in response to obstacles and other stimuli and correlated them with the corresponding sensory inputs. Over time, a pattern of the most likely steering responses to various driving situations emerged. Then, given only a basic map of an entirely different area, the control system took the car safely through the test zone.
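The training signal in this kind of setup is essentially imitation: the network is fit to reproduce the human’s steering given what the sensors saw at that moment. Below is a hedged behavior-cloning sketch under that assumption; the data-loader fields, loss, and optimizer choices are illustrative, not the team’s actual pipeline.

```python
# Illustrative behavior-cloning loop (an assumption about the training setup,
# not the published pipeline): fit the CNN above to logged human steering.
import torch
import torch.nn as nn

def train_on_demonstrations(model, loader, epochs=10, lr=1e-4):
    """loader yields (camera_rgb, route_map, human_steering) tuples
    logged while a person drove the instrumented car."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for camera_rgb, route_map, human_steering in loader:
            pred = model(camera_rgb, route_map)    # predicted steering angle
            loss = loss_fn(pred, human_steering)   # match the human's command
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```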

This process enables self-driving vehicles to tackle new locations without thorough testing on each new road. Once a baseline level of understanding is established through exposure to a human driver, deploying to a new area can be as simple as downloading a new map. Early signs of progress are encouraging, given that adaptability was the researchers’ primary target from the start. “Our objective is to achieve autonomous navigation that is robust for driving in new environments,” said Daniela Rus, a co-author of the paper and director of MIT’s Computer Science and Artificial Intelligence Laboratory.

Advantages of this Method of Integration

For years, the group has been refining the part of the control system that processes sensory inputs and translates them into steering commands. While that component enables the crucial step of following a road safely, it lacks a key capability: navigating to a fixed destination. The new paper outlines the significant step of taking the system from start to end point in a new environment.

An overview of the end-to-end autonomous control system.

This is largely thanks to the fact that the maps their control system needs in order to navigate are vastly simpler than the LIDAR-generated maps typical of other self-driving initiatives. Those maps are wildly data intensive: a single LIDAR map of San Francisco can consume up to 4 terabytes. By contrast, the researchers say their maps are simple enough to cover the entire world in under 40 gigabytes.
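A quick back-of-envelope calculation puts the two figures in perspective. The storage numbers come from the article; the land-area values are approximate public figures used here only for illustration.

```python
# Back-of-envelope comparison of map storage density. The 4 TB and 40 GB
# figures are from the article; the land areas are rough public estimates.
TB = 1000  # gigabytes per terabyte (decimal)

lidar_map_gb = 4 * TB        # dense LIDAR map of San Francisco
sf_area_km2 = 121            # approximate land area of San Francisco

sparse_world_gb = 40         # the researchers' estimate for the whole world
world_land_km2 = 148.9e6     # approximate land area of Earth

lidar_density = lidar_map_gb / sf_area_km2         # roughly 33 GB per km^2
sparse_density = sparse_world_gb / world_land_km2  # a fraction of a KB per km^2

print(f"Dense LIDAR maps: {lidar_density:.1f} GB/km^2")
print(f"Sparse maps:      {sparse_density * 1e6:.2f} KB/km^2")
print(f"Roughly {lidar_density / sparse_density:,.0f}x less data per km^2")
```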

The researchers say this data-light approach to mapping helps the system continuously note any mismatches between its simple map and the actual visual data it collects as it drives. That, in turn, helps the vehicle stay on the safest and most direct path to its destination when real conditions force a departure from the “plan.” Knowing that sensors are destined to fail at some point, the researchers are doubling down on producing a system resilient enough to withstand individual gaps in sensory data and remain safely on course, much as a human driver would.
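At runtime, that behavior suggests a simple control loop: propose a steering command from the camera and the coarse map, smooth it so map-versus-visual mismatches produce gentle corrections, and hold the last good command through a brief sensor dropout. The sketch below is only a guess at that loop’s shape; every helper function and threshold in it is hypothetical.

```python
# Highly simplified control-loop sketch of the behavior described above.
# All helpers (get_camera_frame, get_local_route_map, send_steering_command)
# and thresholds are hypothetical, not the researchers' implementation.
def drive(policy, get_camera_frame, get_local_route_map, send_steering_command):
    last_steering = 0.0
    while True:
        frame = get_camera_frame()        # may return None on a sensor dropout
        route = get_local_route_map()     # coarse map patch around the car
        if frame is None:
            send_steering_command(last_steering)  # hold course through a brief gap
            continue
        steering = float(policy(frame, route))    # CNN proposes a command
        # Rate-limit the command so mismatches between the simple map and the
        # observed road produce gentle corrections rather than sudden swerves.
        steering = max(last_steering - 0.1, min(last_steering + 0.1, steering))
        send_steering_command(steering)
        last_steering = steering
```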

For info on other autonomous tech advancements rolled out in 2019, check out this round-up of the relevant unveilings from CES earlier this year.