What Tech Will it Take to Put Self-Driving Cars on the Road?

Visualization of all the companies involved in self-driving vehicle technology. (Image courtesy of Vision Systems Intelligence.)
Is your company working on self-driving cars? I wouldn’t be too surprised if it is; it seems like every week another company jumps onto the self-driving bandwagon (although I’m pretty sure no one is actually making a self-driving bandwagon).

There is already an all-star cast of tech companies working on this technology, including Google, Tesla, Uber and Apple (maybe). Conventional automakers aren’t just sitting around waiting to get disrupted either, with Daimler (Mercedes-Benz), Ford, Volkswagen, Volvo, Toyota and others making their own concerted efforts.

With all of these big players involved, self-driving cars seem to be an inevitability, with different companies pitching different timelines: 2020, 2025, 2035 and beyond, depending on who you ask. Before self-driving cars can hit the road, however, there are many technological challenges that need to be overcome (not to mention the legal and regulatory hurdles).

The reality is that self-driving cars will not suddenly become available; the transition will be gradual, and it has in fact already begun, with many autonomous features available in cars on the road today.

These include lane-keeping, auto-parking, emergency braking and adaptive cruise control (maintaining a constant gap from vehicles ahead rather than simply holding a steady speed). There is an important distinction, however, between autonomous and self-driving vehicles.


Is My Autonomous Car Self-Driving or is My Self-Driving Car Autonomous?

The terms “autonomous” and “self-driving” are often used interchangeably, but they are fundamentally different. Autonomous cars look and feel like a “normal” car, with forward-facing seats, a steering wheel and pedals. They’re predicated on being able to take over for a human driver in certain situations, but a human driver can override them when necessary.

A self-driving car, however, doesn’t need a steering wheel or pedals, because it never needs a driver. For a true self-driving car, the only thing you should ever have to do is enter your destination.

For our complete infographic, check out The Technology to Put Self-Driving Cars on the Road.
The National Highway Traffic Safety Administration (NHTSA) has created a classification for autonomous vehicles. A level 0 classification means the driver is in complete control of the vehicle all the time. At the opposite end of the spectrum is level 4, where the car performs all functions end to end; the driver is never expected to control the vehicle, and it can even run unoccupied. This is true self-driving.

Most new vehicles on the road today are at level 1, which requires individual vehicle controls to be automated, such as electronic stability control (ESC) or automatic braking. Some of the more advanced autonomous options currently available, like Tesla’s Autopilot, can be classified as either level 2 or level 3.

Level 2 is defined as two automated controls functioning in unison, for example, a combination of adaptive cruise control and lane-keeping. In a level 3 vehicle, the car can completely take control in certain conditions.


Self-Driving Vehicle Technologies

Although an autonomous car is technically not a self-driving car, all of the technologies autonomous vehicles incorporate are also necessary for self-driving vehicles. Autonomous cars are essentially one step along an evolutionary path.

Autonomous features are gradually rolling out, one by one, and once all aspects of driving are automated, we will have self-driving!

The technologies normally incorporated into vehicles today to achieve autonomy include radar, cameras, a variety of different sensors and GPS. The best way to understand these technologies, how they work and what their limitations are, is to consider the features they provide.


Adaptive Cruise Control and Automated Parking

One such feature is autonomous/adaptive cruise control (ACC), which refers to a vehicle’s ability to adjust its speed in order to maintain a safe distance from the vehicles ahead of it. Most ACC systems are either laser- or radar-based, though an optical system based on stereoscopic cameras can also be used. A laser or radar unit mounted to the front of the car gauges the gap between the car and the vehicle ahead, and this data is used to adjust speed accordingly.

Adaptive cruise control continually adjusts the gap between vehicles based on your speed to ensure sufficient time to react. (Image courtesy of Volvo.)
Laser-based ACC systems must be exposed, whereas radar-based sensors can be hidden behind plastic fascia, since radar’s microwave wavelengths simply pass through plastic while a laser’s infrared light does not. Radar systems are more common, given that laser systems cannot track vehicles in poor weather conditions and also have trouble tracking very dirty, non-reflective cars.

An additional advantage of radar is that it not only returns obstacle distance, it can also determine obstacle speed using the Doppler effect. Most systems use a single radar, but some automakers opt for two: one each for close (100 feet) and long (600 feet) range.
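To make the Doppler relationship concrete, here is a minimal Python sketch. The 77 GHz carrier is an assumption (a common automotive radar band, though not one the article specifies), and the function is purely illustrative.

```python
# Illustrative sketch: recovering relative speed from the Doppler shift of an
# automotive radar echo. The 77 GHz carrier is an assumed, typical value.
C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # assumed radar carrier frequency, Hz

def relative_speed(doppler_shift_hz: float) -> float:
    """Closing speed in m/s; a positive shift means the gap is shrinking.

    A monostatic radar sees its echo shifted by f_d = 2 * v * f0 / c,
    so v = f_d * c / (2 * f0).
    """
    return doppler_shift_hz * C / (2 * F_CARRIER)

# A 1,540 Hz shift at 77 GHz corresponds to a 3 m/s (~11 km/h) closing speed.
print(relative_speed(1540.0))  # 3.0
```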

It’s worth noting that these types of ACC systems are entirely based on information from on-board sensors. Cooperative Adaptive Cruise Control (CACC) could potentially take this further by using information from satellites, roadside beacons and other fixed infrastructure, as well as mobile infrastructure such as reflectors and transmitters placed on the backs of other cars.

Most ACC systems can also be improved by making use of contextual information, such as changing speed limits and freeway off-ramps. For example, on the freeway if the car ahead slows down, a standard ACC system will also slow down. However, using a combination of a GPS and camera vision, the car can determine if the one ahead of it is approaching an off-ramp and is signaling to exit. This information can then be used by the ACC to maintain speed by anticipating the exit.
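As a toy illustration of that off-ramp scenario, here is a hypothetical decision function; the names and inputs stand in for real GPS, map and camera pipelines, which are far more involved.

```python
# Hypothetical sketch of context-aware ACC: hold speed if the lead vehicle is
# signaling for an upcoming off-ramp, instead of naively slowing behind it.
def acc_target_speed(own_speed, lead_speed, lead_signaling_exit, off_ramp_ahead):
    if lead_signaling_exit and off_ramp_ahead:
        return own_speed               # lead car is exiting; no need to slow
    return min(own_speed, lead_speed)  # otherwise, don't outrun the car ahead

print(acc_target_speed(100, 80, lead_signaling_exit=True, off_ramp_ahead=True))    # 100
print(acc_target_speed(100, 80, lead_signaling_exit=False, off_ramp_ahead=False))  # 80
```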

Most ACC systems are paired with a “precrash” or collision avoidance system. This system uses the same forward-looking sensors as ACC; if it detects an imminent collision, it can warn the driver or autonomously take action by braking and/or steering.

A diagram from the police report about the Tesla crash in May shows how the vehicle in self-driving mode (V02) struck a tractor-trailer (V01) as it was turning left. (Image courtesy of Florida Highway Patrol.)
At low speeds (below 50 km/h) braking is typically used, while at higher speeds steering also becomes necessary.
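A rough sketch of how such a system might decide what to do, built around a time-to-collision estimate. The 50 km/h cutoff comes from the text above; the two-second warning threshold and the function itself are assumptions for illustration.

```python
# Assumed-for-illustration precrash logic based on time-to-collision (TTC).
def avoidance_action(gap_m: float, closing_speed_ms: float, own_speed_kmh: float) -> str:
    if closing_speed_ms <= 0:
        return "none"                   # gap is stable or opening
    ttc = gap_m / closing_speed_ms      # seconds until impact at current rates
    if ttc > 2.0:
        return "warn"                   # alert the driver, don't intervene yet
    # Below ~50 km/h braking alone typically suffices; above that, evasive
    # steering may also be required (per the speed split described above).
    return "brake" if own_speed_kmh < 50 else "brake+steer"

print(avoidance_action(gap_m=20, closing_speed_ms=15, own_speed_kmh=90))  # brake+steer
```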

The ability to automate steering opens up all kinds of amazing possibilities, but it requires vehicles to have a more complete understanding of their surroundings. The most common application is lane-keeping, which can either be reactive, turning a vehicle back into its lane if it starts to drift, or proactive, constantly keeping the vehicle centered in its lane.

Lane-keeping systems typically use a camera mounted in or around the rearview mirror to watch the lane markings, which they use to guide the vehicle. This is effective, but it requires clear, correct lane markings to work.

Unclear markings, missing markings or poor visibility due to rain or snow will disable these systems. Systems are being developed to use surrounding traffic and environmental cues, such as the road’s edge and guard rails, to reduce this dependence on lane markings.
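To give a sense of how a camera finds lane markings, here is a bare-bones Python sketch using OpenCV’s Canny edge detector and probabilistic Hough transform. This is a classic illustrative pipeline, not any automaker’s production system, and “dashcam.jpg” is a placeholder input.

```python
import cv2
import numpy as np

frame = cv2.imread("dashcam.jpg")                # placeholder dashcam frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

# Keep only the trapezoidal region ahead of the car where lanes appear.
h, w = edges.shape
mask = np.zeros_like(edges)
roi = np.array([[(0, h), (w // 2 - 50, h // 2),
                 (w // 2 + 50, h // 2), (w, h)]], dtype=np.int32)
cv2.fillPoly(mask, roi, 255)
edges = cv2.bitwise_and(edges, mask)

# Fit line segments to the remaining edge pixels; these are lane candidates.
lines = cv2.HoughLinesP(edges, rho=2, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=100)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)
cv2.imwrite("lanes.jpg", frame)
```

Note how directly this depends on visible markings: if the edge-detection stage finds no paint edges, the line-fitting stage has nothing to work with, which is exactly why these systems shut off in snow.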

In addition to lane-keeping, some vehicles also support automated lane changing. In this case, all the driver has to do is flick the turn signal and the car takes care of the rest.

This feature was first introduced by Tesla when it launched its Autopilot system, which makes use of 12 long-range ultrasonic sensors positioned so the vehicle can sense 16 feet around it in all directions, at all speeds. Mercedes-Benz also offers automated lane changing, using an army of 12 ultrasonic sensors, six radar sensors and up to eight cameras to monitor all 360 degrees around the car.

Information from ultrasonic sensors and radar can give a vehicle a 360 degree understanding of its environment, enabling autonomous lane changing. (Image courtesy of Tesla Motors.)
Today’s autonomous vehicles are also capable of automated parking. Automated parking uses sensors (typically ultrasonic) to understand the immediate environment and determine whether a given space is adequate for parking. The sensors are also used to guide the vehicle into an appropriate starting position, at which point a planned parking maneuver can be executed. Automated parking also requires creating control profiles of steering angle and speed in order to achieve the desired vehicle path given the space available.
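The gap-adequacy check at the heart of parallel parking can be sketched in a few lines. The car length, margin, thresholds and sensor figures below are illustrative assumptions, not any production system’s values.

```python
# Assumed values for illustration only.
CAR_LENGTH_M = 4.5
MANEUVER_MARGIN_M = 1.2   # extra room to swing in and straighten out

def space_is_adequate(side_distances_m, sample_spacing_m=0.1, depth_threshold_m=1.5):
    """side_distances_m: lateral ultrasonic readings sampled while driving past.

    A run of readings deeper than the threshold marks an open gap; its length
    is the run length times the spacing between samples."""
    longest = run = 0
    for d in side_distances_m:
        run = run + 1 if d > depth_threshold_m else 0
        longest = max(longest, run)
    return longest * sample_spacing_m >= CAR_LENGTH_M + MANEUVER_MARGIN_M

# 60 deep readings at 0.1 m spacing = a 6 m gap: enough for a 4.5 m car.
print(space_is_adequate([0.5] * 20 + [2.5] * 60 + [0.5] * 20))  # True
```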


Technological Challenges for Self-Driving Vehicles

In order to reach true self-driving, autonomous technology and engineering still have a way to go. The technologies and capabilities present in today’s autonomous vehicles are part of the solution, but many important challenges remain.

Solving these challenges will require better hardware to collect more data and better software to make decisions based on that data. Ultimately, a true self-driving revolution will require infrastructural changes as well.

One of the main challenges for self-driving cars is achieving sufficient proficiency in handling the unexpected. On good roads with good markings, in well-known areas with good weather, the top-end autonomous cars of today can practically drive unaided.

Predictable driving is “easy,” but driving isn’t always predictable.

Self-driving cars need to be able to navigate around pedestrians and cyclists, manage drunk and distracted drivers, differentiate potholes from puddles, handle detours and construction, and drive through rain, sleet and snow.


LIDAR and Self-Driving Vehicles

One key technology making its way into a variety of autonomous research vehicles is Light Detection and Ranging (LIDAR) technology. It’s also the core technology for Google’s autonomous car.

LIDAR uses rotating lasers that emit short pulses of light and measure the return time to create a detailed 3D map of the surrounding environment. The LIDAR on Google’s car, a popular Velodyne LiDAR model also used by Ford and others, has 64 lasers spinning at 900 rpm and provides 2.2 million data points per second, enabling the creation of a detailed 360 degree 3D map.
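The underlying math is straightforward time-of-flight geometry, as in this minimal sketch; the angles and timing are made-up example values, not Velodyne’s actual specifications.

```python
import math

C = 3.0e8  # speed of light, m/s

def lidar_point(return_time_s, azimuth_deg, elevation_deg):
    """Convert one pulse's return time and firing angles into a 3D point."""
    r = C * return_time_s / 2.0  # halved because light travels out and back
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (r * math.cos(el) * math.cos(az),   # x: forward
            r * math.cos(el) * math.sin(az),   # y: left
            r * math.sin(el))                  # z: up

# A return after 200 ns means an object about 30 m away.
print(lidar_point(200e-9, azimuth_deg=10, elevation_deg=-2))
```

Repeat that calculation 2.2 million times a second across 64 spinning lasers and you get dense point clouds like the one shown above.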

An actual point cloud image from the Velodyne 64 laser LIDAR, showing a vehicle at an intersection mapping other vehicles and road features around it. (Image courtesy of Velodyne LiDAR.)
The 3D mapping of environments could prove to be a very important infrastructural element for achieving self-driving. This type of mapping effectively provides a super-detailed version of Google Maps for self-driving cars to draw on, which can be combined with real-time sensor readings to navigate. GPS alone provides accuracy in the range of a couple of meters, which is not nearly accurate enough for positioning a car on a road.
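A toy example of why that matters: with lanes roughly 3.5 m wide, a position fix that is off by a couple of meters can’t tell which lane you’re in, but snapping it to the nearest mapped lane centerline can. The coordinates and lane data here are fabricated for illustration.

```python
# Hypothetical sketch: snap a noisy GPS fix to the nearest mapped lane.
def snap_to_lane(gps_xy, lane_centerlines):
    best_lane, best_d2 = None, float("inf")
    for lane_id, points in lane_centerlines.items():
        for (x, y) in points:
            d2 = (x - gps_xy[0]) ** 2 + (y - gps_xy[1]) ** 2
            if d2 < best_d2:
                best_lane, best_d2 = lane_id, d2
    return best_lane

lanes = {"lane_1": [(0.0, 0.0), (0.0, 10.0)],
         "lane_2": [(3.5, 0.0), (3.5, 10.0)]}   # centerlines 3.5 m apart
print(snap_to_lane((2.6, 5.0), lanes))  # lane_2: the fix is nearest that line
```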

Google’s self-driving car relies heavily on having detailed three-dimensional maps of the environment in which it is driving. In fact, before Google’s car goes for a self-drive, its engineers will drive the route multiple times to build a data set for the vehicle to draw on. This approach is effective, but also limited since very few roads have this level of mapping. In order for this to be practical on a larger scale, complete 3D mapping of infrastructure is necessary. This sounds daunting, but it’s happening right now, and the plus side is that as more cars drive more roads with these technologies, the data they collect can be used to build and maintain this infrastructure.

Google, Ford, Audi and others have incorporated LIDAR into their research vehicles. Elon Musk, on the other hand, has said, “I’m not a big fan of LIDAR, I don’t think it makes sense in this context,” and so no current Tesla model uses the technology.


Image Recognition for Self-Driving

Tesla is, however, a big fan of another extremely important technology for self-driving: image recognition. Image recognition is very important for achieving a true understanding of the environment, because it is the only way to see indicators such as traffic lights, brake lights and turn signals.

A pedestrian detection system developed in the Statistical Visual Computing Lab at UC San Diego. (Image courtesy of UC San Diego.)
Computer vision is also essential for reading signs, particularly temporary signs for things like construction and detours that cannot be fed into a database the way a stop sign can.

Image recognition can be broken down into two rough categories: machine vision and computer vision. Machine vision is the simpler of the two, referring to things like finding particular features (edges, corners, etc.), detecting motion, exploiting motion parallax and using stereo vision to estimate distance. By searching for particular features, objects such as pedestrians, other cars, lane markings and the edges of the road can be identified.
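The stereo-vision distance estimate mentioned above falls out of similar triangles: an object’s horizontal shift between the left and right images (its disparity) shrinks as distance grows, with Z = fB/d. A minimal sketch, with assumed camera parameters:

```python
# Assumed stereo rig parameters, for illustration only.
FOCAL_LENGTH_PX = 700.0   # focal length expressed in pixels
BASELINE_M = 0.3          # separation between the two cameras

def depth_from_disparity(disparity_px: float) -> float:
    """Z = f * B / d: smaller disparity means a more distant object."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or a bad match")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

print(depth_from_disparity(10.5))  # a 10.5 px shift puts the object 20 m out
```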

Computer vision is the much harder problem of recognizing objects and understanding what they are doing. Currently this works through machine learning techniques, in which a large training set is used to teach an AI to recognize and understand something. That’s really cool, but there is still a long way to go before these techniques reach the level of accuracy necessary for a commercial vehicle and can be trusted to know what to do in any scenario.
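For a flavor of what that training process looks like, here is a minimal PyTorch sketch of a tiny image classifier. Real perception networks are vastly larger and train on millions of labeled images; the architecture, class labels and random stand-in data below are placeholders.

```python
import torch
import torch.nn as nn

# A toy CNN classifying 32x32 camera crops as pedestrian/vehicle/background.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 3),   # 3 placeholder classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random tensors standing in for a labeled training set of camera crops.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 3, (64,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)   # how wrong are the predictions?
    loss.backward()                         # compute gradients
    optimizer.step()                        # nudge the weights to do better
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The hard part isn’t the loop; it’s assembling training data broad enough that the network behaves sensibly in scenarios it has never seen.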


Camera Vision vs. LIDAR

Comparing camera-based vision with LIDAR, a key advantage of LIDAR is that it works independently of ambient lighting conditions because it uses emitted light, so it is effective in practically all conditions. Computer vision, on the other hand, requires illumination and must deal with light variation.

On the flip side, cameras have higher resolution, can see color and are much cheaper; the 64-laser Velodyne LIDAR costs $70,000, which is more than the price of an average sedan. Most people in the industry aren’t too worried about that, though: right now LIDAR is a relatively niche technology, but if every car needs one, economies of scale will inevitably bring the price down.

At the moment, it would seem that a combination of camera-based vision and LIDAR is the way forward for self-driving cars, Musk’s skepticism notwithstanding. LIDAR is ideal for mapping an environment, determining that an object is present and tracking what it is doing. Computer vision is better at figuring out what that object is and discerning details about it.


Computers in Self-Driving Cars

Given the number of sensors autonomous vehicles have and the functions they perform, they need powerful computers running the show. At present, the computers in most vehicles are extremely simple, running at low clock speeds with small amounts of memory and relatively simple code. The reason is simply that the one factor that trumps all else is reliability.

The on-board computer in Stanford's autonomous Audi TTS. (Image courtesy of Stanford University.)
However, performing the data processing necessary for image recognition, LIDAR, radar and more in fractions of a second requires much more powerful computers. The car Lexus showed at CES (the Consumer Electronics Show) actually has a number of high-powered computers in its trunk, equivalent to what you might have on your desktop, and the onboard computers in autonomous vehicles will only continue to get more powerful. This point is reinforced by computer graphics card manufacturer Nvidia, which has been extremely proactive in building its self-driving car development platform.


Self-Driving Vehicle Communications

We need self-driving cars to make self-driving cars. It sounds like a chicken-and-egg problem, doesn’t it? The reality is that the more self-driving cars there are, the better self-driving cars will become, because of the infrastructural changes they will enable. For one, autonomous vehicles learn from data. They live on data, like three-dimensional maps, and the more vehicles there are on the road, the more data they will feed into the system.

Another awesome step forward for self-driving technology is vehicle-to-vehicle (V2V) communication—the ability for cars to “talk” to each other. V2V communication is essentially the Internet of Things for cars, with each vehicle acting as a node in a mesh network capable of transmitting and receiving information about speed, location, direction of travel, braking, loss of stability, roadside hazards and much more.

Vehicle-to-vehicle (V2V) communication could prevent crashes.
V2V technology uses dedicated short-range communication (DSRC), which lies in the 5.9 GHz band and has a range of up to 300 meters, so a message that passes through five to 10 nodes can relay information from a mile ahead or from around a corner that’s out of sight. The ability to look ahead will fundamentally change the behavior of autonomous vehicles, because many of their technologies are currently limited to analyzing the immediate environment.
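Here is a toy sketch of that multi-hop relaying. Real DSRC exchanges SAE J2735 basic safety messages over IEEE 802.11p radios; the plain dictionaries, hop limit and one-dimensional positions below are stand-ins for illustration.

```python
import json, time

DSRC_RANGE_M = 300   # per-hop radio range from the text
MAX_HOPS = 8         # within the five-to-10-node figure above

def make_warning(vehicle_id, position_m, event):
    return {"id": vehicle_id, "pos": position_m, "event": event,
            "time": time.time(), "hops": 0}

def maybe_relay(message, my_position_m, sender_position_m):
    """Rebroadcast a received warning if it's in range and hasn't hopped out."""
    if abs(my_position_m - sender_position_m) > DSRC_RANGE_M:
        return None                    # out of radio range; never received
    if message["hops"] >= MAX_HOPS:
        return None                    # stop flooding the network
    return json.dumps(dict(message, hops=message["hops"] + 1))

msg = make_warning("veh_42", position_m=1500, event="hard_braking")
print(maybe_relay(msg, my_position_m=1300, sender_position_m=1500))
```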

One of the primary concerns surrounding V2V communication is the potential for tracking and hacking. To mitigate this, the vehicle ID used to identify individual cars and the system’s security certificate are changed every five minutes. The idea is that the vehicle ID is simply a way to mark a vehicle, not uniquely identify it, so it doesn’t need to be linked to the vehicle’s VIN or registration in any way.
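The rotating-ID scheme can be sketched in a few lines; the five-minute interval comes from the text, while the ID format and class design are assumptions.

```python
import secrets, time

ROTATION_INTERVAL_S = 5 * 60   # per the text: a fresh ID every five minutes

class PseudonymProvider:
    """Hands out short-lived random vehicle IDs, unlinkable to the VIN."""
    def __init__(self):
        self._id, self._issued = secrets.token_hex(8), time.monotonic()

    def current_id(self) -> str:
        if time.monotonic() - self._issued > ROTATION_INTERVAL_S:
            self._id, self._issued = secrets.token_hex(8), time.monotonic()
        return self._id

print(PseudonymProvider().current_id())  # e.g. 'a3f19c0d5e7b2481'
```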

A technology that may be necessary for enabling wide-scale V2V communication is 5G mobile networking. If we are headed towards a future where all cars are “connected,” the demands on mobile networks and cloud-based storage will skyrocket. 5G should be up to the task, with expected speeds between 10 and 50 Gbps, a significant upgrade over the average 15 Mbps of 4G.

5G will also have roughly five times lower latency, which is important for making split-second decisions. It will offer significantly improved network coverage, along with the ability to differentiate between and prioritize packets, ensuring that data about a real-time vehicle warning doesn’t take a back seat to a video being streamed by a child in the back seat.

A related concept is vehicle-to-infrastructure (V2I) communication, in which stationary objects like traffic signals and sensors embedded into roads are also nodes in the network, passing information to vehicles. This concept could also be extended to cyclists and pedestrians, transmitting info to vehicles via their smartphones.

This diagram shows vehicles connected to each other as well as the surrounding infrastructure. (Image courtesy of Mercedes-Benz.)
For both V2V and V2I, the benefits become much more pronounced as the number of vehicles and nodes increases. The thing is, if the vehicle three cars ahead of your self-driving car slams on the brakes, it would be great if it could communicate that to your car so you can immediately begin stopping as well. But if that car ahead is just a regular old car, your self-driving car better still be able to stop.

V2V communication will make self-driving better, but self-driving can’t depend on it. Similarly, connected infrastructure will help self-driving cars, but self-driving cars need to exist first in order to benefit from it and justify investing in it.


The Road to Self-Driving Cars

The technology necessary to achieve self-driving vehicles actually feels very much within reach. Most of it, in fact, already exists. The key lies in achieving a seamless integration of these technologies, or as close to that as we can get, because any flaw in safety or reliability can have disastrous consequences.

We are moving closer and closer to self-driving cars each day, as more and more sensors are added, more and more environments are mapped and more and more functions are automated. There are still major technical battles to fight, not to mention the economic, legal and ethical issues that will need to be addressed, but when self-driving vehicles finally arrive, they will change the world.

For more information, check out our infographic on self-driving technology.

Wondering how we got to self-driving cars? Read The Road to Driverless Cars: 1925 - 2025.