Augmented Reality Is Used to Improve Autonomous Vehicle Testing


Since autonomous cars began testing on public roadways in 2013, manufacturers have sought to show that they are safer than vehicles operated by human drivers. Yet there have already been several fatalities associated with autonomous driving systems.

It is important to draw a distinction between semi-autonomous and fully autonomous cars. SAE International defines six levels of driving automation, from level 0 (no automation) to level 5 (full automation). Levels 1 through 3 are considered semi-autonomous driving systems, while levels 4 and 5 are considered fully autonomous. Tesla's Autopilot (level 2) is an example of a semi-autonomous driving system; Alphabet's Waymo vehicles (level 4) are an example of a fully autonomous one.

Proving that autonomous driving systems outperform human drivers is difficult because manufacturers have not yet accumulated enough driving hours to make a meaningful comparison. Human-controlled cars see roughly 1.18 fatalities per 100 million miles driven. Alphabet's autonomous driving car company, Waymo, hasn't yet racked up 10 million autonomously driven miles. In 2018, The Hill reported that human intervention was required every 13 to 5,600 miles on average for Waymo self-driving cars, which at the time had covered only 5 million miles.
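A quick back-of-the-envelope calculation, using only the figures cited above, shows why the mileage gap makes a statistical comparison so hard:

```python
# Back-of-the-envelope comparison using the figures in the article:
# 1.18 fatalities per 100 million human-driven miles, and roughly
# 10 million autonomously driven miles for Waymo at the time.

HUMAN_FATALITY_RATE = 1.18 / 100_000_000  # fatalities per mile
WAYMO_MILES = 10_000_000

# Fatalities we would *expect* over Waymo's mileage if autonomous cars
# were exactly as safe as human drivers.
expected = HUMAN_FATALITY_RATE * WAYMO_MILES
print(f"Expected fatalities at the human rate: {expected:.3f}")  # 0.118
```

In other words, even if autonomous cars matched human safety exactly, fewer than one fatality would be expected over all of Waymo's mileage, so a single incident swamps the statistics in either direction.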

Testing autonomous cars in real-life scenarios means actually putting them in traffic, and autonomous cars have already been involved in several fatal accidents. One involved a Volvo XC90 retrofitted with Uber's level 3 autonomous driving system. On March 18, 2018, Elaine Herzberg was walking her bike across Mill Avenue in Tempe, Ariz., when she was struck and killed by the self-driving car. According to the National Transportation Safety Board (NTSB), Herzberg was crossing the road at night, was not in a crosswalk, and was not looking out for traffic. The NTSB also concluded that the human operator in the car was not watching the road at the time of the fatal accident.

Working on edge case simulations at the University of Michigan. (Image courtesy of the University of Michigan.)

Using Augmented Reality to Boost Simulation of Edge Cases

Due to the lack of long-term road-testing data for autonomous cars, a group of researchers at the University of Michigan is working to change the public perception that autonomous driving systems are dangerous. Most of the time these systems work very well, but occasionally something unexpected happens. Human drivers are far better prepared for these edge cases, such as when a person or object suddenly appears out of nowhere.

Edge cases are inherently hard to test for, so the University of Michigan researchers decided to create edge-case simulations for autonomous driving systems using augmented reality. So far, they have designed and implemented two testing scenarios in a custom-built simulation environment. In the first, a virtual train is projected into the test car's real-world view through augmented reality, much as a person would see it through augmented reality glasses. The virtual train approaches a rail crossing at Mcity, the university's mock-city test facility. The goal is to test whether the car will stop in time and wait for the train to pass.
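The core decision in the train scenario can be sketched with basic kinematics. The function name, deceleration figure and safety margin below are illustrative assumptions, not the researchers' actual code:

```python
# Hypothetical sketch of the stop/go decision in the virtual-train scenario.
# Names and numbers are illustrative; the actual Mcity simulator is far richer.

def should_stop(distance_to_crossing_m: float, speed_mps: float,
                max_decel_mps2: float = 4.0) -> bool:
    """Return True if the car must begin braking now to stop before the crossing.

    Uses the kinematic stopping distance d = v^2 / (2a): if the distance
    remaining is no more than the stopping distance plus a safety margin,
    the car should brake and wait for the virtual train.
    """
    stopping_distance = speed_mps ** 2 / (2 * max_decel_mps2)
    safety_margin = 5.0  # meters
    return distance_to_crossing_m <= stopping_distance + safety_margin

# At 15 m/s (~34 mph) the stopping distance is about 28 m, so a car
# 30 m from the crossing should already be braking.
print(should_stop(30.0, 15.0))  # True
```

The virtue of running this against an augmented reality train rather than a real one is that a wrong answer produces a logged failure instead of a collision.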

The second scenario has the car react to changing traffic lights and to vehicles in its environment that unexpectedly run red lights. The test car must recognize what color the signal is turning and decide whether to stop or go. If a nearby virtual vehicle runs a red light, the car should calculate its position relative to that vehicle accurately enough to avoid a crash.
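One simple way to reason about the red-light-runner case is to check whether the two vehicles would occupy the intersection's conflict point at overlapping times. The functions and thresholds below are illustrative assumptions, not the study's implementation:

```python
# Hypothetical sketch of conflict detection with a virtual red-light runner.
# All names and thresholds are illustrative, not the researchers' code.

def arrival_window(distance_m: float, speed_mps: float, length_m: float = 5.0):
    """Time interval [enter, exit] during which a vehicle occupies the
    conflict point, assuming constant speed and a 5 m vehicle length."""
    enter = distance_m / speed_mps
    exit_ = (distance_m + length_m) / speed_mps
    return enter, exit_

def paths_conflict(ego, runner, buffer_s: float = 1.0) -> bool:
    """True if the two occupancy windows (padded by a time buffer) overlap,
    meaning the ego car should yield instead of entering the intersection."""
    e0, e1 = arrival_window(*ego)
    r0, r1 = arrival_window(*runner)
    return e0 < r1 + buffer_s and r0 < e1 + buffer_s

# Ego car 20 m out at 10 m/s; virtual runner 15 m out at 15 m/s:
# their padded arrival windows overlap, so the ego car should yield.
print(paths_conflict((20.0, 10.0), (15.0, 15.0)))  # True
```

Because the red-light runner is virtual, the scenario can be replayed with different speeds and gaps until the system handles every variant correctly.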

Bottom Line

The goal of the University of Michigan research is to compile a library of edge cases for testing autonomous driving systems in augmented reality-based simulation, so that the tests can be run repeatedly, improving the system's performance with each repetition. The researchers have begun building this library from a large body of real-world collision reports. They have also compiled data from drivers operating sensor-laden vehicles, capturing how drivers actually react and behave on the road.
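The repeated-testing workflow described above amounts to a regression loop over the edge-case library. The scenario format, policy interface and toy data below are purely illustrative assumptions:

```python
# Hypothetical sketch of replaying an edge-case library against a driving
# policy. The scenario format and the toy entries are illustrative only.

def replay_library(scenarios, policy):
    """Run every recorded edge case through the policy and collect the
    names of scenarios it mishandles, so regressions surface after every
    change to the driving system."""
    failures = []
    for scenario in scenarios:
        if policy(scenario) != scenario["expected"]:
            failures.append(scenario["name"])
    return failures

# Toy library standing in for cases distilled from collision reports.
library = [
    {"name": "red_light_runner", "hazard": True,  "expected": "yield"},
    {"name": "clear_crossing",   "hazard": False, "expected": "proceed"},
]

# A trivial stand-in policy: yield whenever a hazard is present.
cautious_policy = lambda s: "yield" if s["hazard"] else "proceed"
print(replay_library(library, cautious_policy))  # []
```

An empty failure list means the system handled every stored edge case; any regression introduced by a software change shows up by name on the next replay.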