An Algorithm Helps Self-Driving Cars See Around Corners

The developers of self-driving cars may be chasing advances in laser technology, but when it comes to seeing around corners, a clever algorithm is putting current laser mapping techniques to shame.

Setup for using confocal laser/photon sensor technology to detect a shape hidden behind an occluder. (Image courtesy of Matthew O’Toole, David B. Lindell and Gordon Wetzstein in the supplementary materials to their 2018 paper, Confocal non-line-of-sight imaging based on the light-cone transform.)

Light Detection and Ranging technology, or LiDAR, uses the varying return times of light pulses bouncing off objects in the environment to build a 3D map of a self-driving car's surroundings. The primary problem autonomous vehicle engineers face is determining which of the millions of generated data points to use. Until now, LiDAR systems such as those used by Google have analyzed only the photons that bounce directly off an object to map the area around a vehicle.
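
The arithmetic behind direct LiDAR ranging is simple time-of-flight math: a pulse travels out and back, so the distance is half the round-trip path. The sketch below is illustrative only; the constant and function names are this article's, not part of any production LiDAR stack.

```python
# Minimal sketch of LiDAR time-of-flight ranging. The constant and
# function name here are illustrative, not from any real LiDAR system.

C = 299_792_458.0  # speed of light, in meters per second

def range_from_return_time(round_trip_seconds: float) -> float:
    """Distance to a directly illuminated surface.

    The pulse travels out and back, so the one-way distance is half
    the round-trip path length.
    """
    return C * round_trip_seconds / 2.0

# A return time of about 66.7 nanoseconds puts a surface roughly 10 m away.
print(range_from_return_time(66.7e-9))  # ~10.0
```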

What about the scattered photons that hit objects outside the sensor's line of sight and bounce off several other surfaces on their return trip? Previous attempts to sort signal from noise in these situations required hours of computational time, but a new algorithm, called Light Cone Transform (LCT) Reconstruction, reduces the required computing power to the point that it can run on the simple, reliable computers a truly autonomous car must rely upon.

The shape of a mannequin around the corner from the photon sensor is reconstructed by applying Light Cone Transform (LCT) Reconstruction to the noisy data shown on the left. (Image courtesy of Stanford University in a video created by Kurt Hickman.)

LCT Reconstruction takes non-line-of-sight data that would normally be resolved by multiplying two staggeringly large matrices together and reshapes it so that the reconstruction becomes a standard 3D deconvolution, which can be solved efficiently with fast Fourier transforms regardless of the dimensions of the starting matrices. This means that the shape of a hidden object can be recovered from the arrival times of millions of scattered photons in a matter of seconds.
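
To make that idea concrete, here is a minimal sketch of the Fourier-domain step. It assumes, as the 2018 Stanford paper describes, that after the light-cone resampling the measurements relate to the hidden volume through a 3D convolution; the kernel, function name, and regularization value below are illustrative assumptions, not the authors' published code.

```python
# Toy sketch of Wiener deconvolution in the Fourier domain, the kind of
# step LCT Reconstruction reduces the problem to. All names are
# illustrative; `tau` is the resampled measurement volume and `h` is the
# (assumed known) convolution kernel relating it to the hidden scene.
import numpy as np

def lct_deconvolve(tau: np.ndarray, h: np.ndarray, snr: float = 100.0) -> np.ndarray:
    """Recover a hidden volume from resampled measurements by Wiener
    deconvolution, avoiding any explicit giant-matrix inversion."""
    H = np.fft.fftn(h, s=tau.shape)   # transfer function of the blur kernel
    Tau = np.fft.fftn(tau)            # measurements in the frequency domain
    # Wiener filter: divide out the kernel, damped where |H| is small
    # so measurement noise is not amplified.
    Rho = np.conj(H) * Tau / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifftn(Rho))
```

Because every step is an elementwise operation or an FFT, the cost scales roughly as O(N³ log N) rather than the O(N⁵) of a brute-force matrix solve, which is why reconstruction times drop from hours to seconds.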

A self-driving car using this technology would be able to sense not only something such as a child playing on the side of the road, but also a child ducking behind a bush to retrieve a lost ball before dashing heedlessly across the street to rejoin a soccer game. LCT Reconstruction and LiDAR would be able to tell an autonomous vehicle to slow down in a potentially risky situation, just as a human driver would.

David Lindell and Matt O'Toole setting up their non-line-of-sight confocal laser and photon sensor in the Stanford Computational Imaging Lab. (Image courtesy of L.A. Cicero.)

Researchers in the Stanford Computational Imaging Lab believe that their algorithm can be integrated into the LiDAR systems currently used on Google's self-driving cars. However, they say that before the system can be widely applied, they will need to improve its accuracy in detecting nonreflective objects. As it stands, the researchers are confident that the LCT Reconstruction algorithm will allow LiDAR systems to recognize street signs and reflective vests from around a corner.

For more on autonomous vehicle technologies, check out the following:

The Road to Driverless Cars - 1925 - 2025

What Tech Will It Take to Put Self-Driving Cars on the Road?

Driverless Cars - The Race to Level 5 Autonomous Vehicles