The Legislation, Liabilities and Ethics of Self-Driving Cars

Details of the accident from the police crash report. (Image courtesy of Florida Highway Patrol.)
On Saturday, May 7, 2016, Joshua Brown was killed in an automotive collision when an articulated truck turned left in front of his Tesla Model S.

The accident was highly publicised and sparked intense debate about the legislation and liability surrounding autonomous vehicles. While fatal car accidents are an unfortunately common occurrence, this incident was notable as the first known fatality in a car that was driving itself, without input from the person behind the wheel.

Brown was hardly a novice when it came to operating his Model S. His YouTube channel features multiple videos of him driving the car and demonstrating the “Autopilot” feature. He notes in the comments that the system was able to learn as he used it, improving the car’s ability to handle difficult driving situations and curves in the road.

One video shows the car avoiding an accident with a truck on the highway.


“Tessy did great. I have done a lot of testing with the sensors in the car and the software capabilities,” Brown noted in the video description from April 2016. “I have always been impressed with the car, but I had not tested the car's side collision avoidance. I am VERY impressed. Excellent job Elon!”

So, what went wrong on May 7th? A Tesla blog post stated that the car was unable to distinguish the white side of the truck against a brightly lit sky, while others have asserted that the fault lies with Tesla, or with Brown himself.

"By marketing their feature as ‘Autopilot,’ Tesla gives consumers a false sense of security," said Laura MacCleery, vice president of consumer policy and mobilization for Consumer Reports.

"In the long run, advanced active safety technologies in vehicles could make our roads safer. But today, we're deeply concerned that consumers are being sold a pile of promises about unproven technology," she added, referring to Tesla’s “public beta test” status of the software.

While the Autopilot software does warn drivers to keep their hands on the wheel, the alert is not immediate, and the approach discounts the human factor.
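
To make the “not immediate” part concrete, here is a minimal, purely hypothetical sketch of a timer-based escalation loop, written in Python. Every threshold, name and escalation step below is invented for illustration and does not describe Tesla’s actual implementation.

# Hypothetical thresholds -- a real system would tune these very differently.
HANDS_OFF_WARNING_S = 15     # seconds of hands-off driving before a visual warning
HANDS_OFF_ALARM_S = 30       # seconds before an audible alarm
HANDS_OFF_DISENGAGE_S = 60   # seconds before the system slows the car and disengages

def escalation_step(hands_on_wheel: bool, now_s: float, last_hands_on_s: float):
    """Return (action, updated last_hands_on_s) for a simple hands-off timer."""
    if hands_on_wheel:
        return "ok", now_s                      # contact detected: reset the timer
    elapsed = now_s - last_hands_on_s
    if elapsed > HANDS_OFF_DISENGAGE_S:
        return "slow_and_disengage", last_hands_on_s
    if elapsed > HANDS_OFF_ALARM_S:
        return "audible_alarm", last_hands_on_s
    if elapsed > HANDS_OFF_WARNING_S:
        return "visual_warning", last_hands_on_s
    return "ok", last_hands_on_s

The point of the sketch is the gap: tens of seconds can pass before a system like this demands the driver’s attention, and that gap is exactly where the human factor comes in.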

If the car is handling all the driving responsibilities, why would the driver pay attention – and isn’t that the point?

That question depends a little on the driver’s perspective, and a lot more on the legislation of autonomous cars.


Autonomous Vehicle Legislation

Legislation exists for other in-car distractions, from cell phone use to watching movies, but with the advent of vehicle autonomy, new laws will be needed to help keep the roads safe.

The Society of Automotive Engineers (SAE) has identified six levels of driving automation, based on the functional aspects of the available technology and the varying levels of human involvement in the act of driving. Its cut-off point for calling a driving system “automated” is whether the system, rather than the human driver, monitors the driving environment.

Summary table of the SAE's levels of vehicle automation. (Image courtesy of SAE International/J3016.)
The SAE levels differ from the older policy on vehicle automation developed in 2013 by the U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA).

The NHTSA policy defines vehicle automation as having five levels, based on the automation of the primary vehicle controls (brake, steering, throttle, and motive power). Under this scheme, an assisted braking system could rate as high as Level 2 when combined with traction control systems.
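
As a rough illustration of how a taxonomy like this gets used in practice, a developer might encode the SAE levels as a simple enumeration. The sketch below paraphrases SAE J3016; it is illustrative only, and assigning a real feature to a level remains a judgement call.

from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 levels of driving automation, paraphrased."""
    NO_AUTOMATION = 0           # the human driver does everything
    DRIVER_ASSISTANCE = 1       # steering OR speed is assisted; the human monitors the road
    PARTIAL_AUTOMATION = 2      # steering AND speed are assisted; the human monitors the road
    CONDITIONAL_AUTOMATION = 3  # the system monitors the road; the human must take over on request
    HIGH_AUTOMATION = 4         # the system handles a failed takeover itself, within a defined domain
    FULL_AUTOMATION = 5         # the system can drive anywhere a human could

def system_monitors_environment(level: SAELevel) -> bool:
    """SAE's cut-off for 'automated': from Level 3 up, the system monitors the driving environment."""
    return level >= SAELevel.CONDITIONAL_AUTOMATION

Under this scheme, the Autopilot system shipping in Teslas in 2016 is generally described as Level 2: it assists with steering and speed, but the human driver is still expected to monitor the road.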

During the Automated Vehicles Symposium in July 2016, NHTSA Administrator Mark R. Rosekind spoke about vehicle automation as a safety measure:

“At the National Highway Traffic Safety Administration, there are two numbers that explain exactly why we are so forward-leaning on this issue,” he said. “The first is 35,200. That is how many people we lost on American roads last year. The second number is 94. That’s the percentage of crashes that can be tied back to a human choice or error.”

Rosekind’s sentiments echo the idea that vehicle automation is about safety and saving lives.

While this may be true, there will need to be firm agreement on what constitutes an autonomous vehicle before any legislation can be passed to bring more of these vehicles onto the road.

Will this new technology fall under a different class of laws from existing vehicles, or will manufacturers be held accountable for errors in judgement, as human drivers are today?

States with self-driving car legislation.
It would be difficult, if not impossible, to convict a piece of software of legal misconduct; and if a particularly “smart” system has been trained by observing human drivers, it may be difficult to pin down exactly who is at fault.

Of course, legislating autonomous vehicles is only a part of what will be needed going forward. Cybersecurity for the vehicles’ software will be extremely important, which suggests that much of the liability will likely fall on the manufacturers themselves.

But is that really feasible?


Autonomous Vehicle Liability

Should Tesla be on the hook for calling its semi-autonomous driving system “Autopilot”?

The name implies that the car is doing all the work, but that implication is contradicted by the brand’s own marketing. As noted in the press kit, “Tesla requires drivers to remain engaged and aware when Autosteer is enabled. Drivers must keep their hands on the steering wheel.”

While the message is clear enough, some confusion is understandable.

When an individual is driving their non-automated car, they are responsible for the errors that account for Rosekind’s 94 percent.

If the car is driving itself and gets into an accident, who is at fault: the driver, who should have been paying attention, or the company that released the software?

Interestingly, Volvo has taken the initiative and stated that it will accept full liability for the actions of its autonomous cars in an effort to speed along legislation and development of the technology. There have been similar statements from Google and Mercedes-Benz, all carrying the caveat that the companies will take responsibility provided the fault is in the software or the vehicle itself - not the driver.

One of many - Google’s self-driving Lexus. (Image courtesy of Mark Wilson.)
This may not be an option for a fledgling company like Tesla. Musk may have demonstrated an impressive amount of forward thinking, but the company lacks the clout of its larger automotive competitors, or Google, for that matter.

Accepting liability does, however, go a long way in showing a company’s confidence in its technology. Google’s fleet has covered a lot of ground in its testing on the streets of California, and although there have been accidents, the vast majority were determined to be the fault of other drivers.

All vehicles with autonomous capabilities that are currently on the road are built with the idea that the driver still needs to be in control, aware and responsible for their actions.

Unfortunately, this necessary driver interaction introduces a problem that software will always struggle to compensate for - human error. 


People, by their very nature, make for idiosyncratic drivers. Some get angry in response to gridlock, others get distracted. One person may see a truck in their peripheral vision while another reacts too late, causing an accident with an otherwise uninvolved car.

For that reason, the easiest solution would be to have all cars automated, networked and working together to ensure smooth driving.

It might sound like a prohibitively expensive endeavour, but in light of the costs associated with the development or maintenance of any transit system, it doesn’t seem so unattainable, especially if it will save lives and reduce traffic.

However, this runs counter to the design of current autonomous vehicles, which operate independently rather than as part of a network, and it would require new infrastructure and a common set of protocols for all manufacturers to follow. Once again, the hydra-headed legislative issues return.
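
To give a sense of what a common protocol might even involve, here is a purely hypothetical sketch of the kind of intent message networked vehicles could broadcast to one another. None of the field names or values below correspond to an existing standard; real work in this area, such as DSRC-based vehicle-to-vehicle messaging, defines its own formats.

from dataclasses import dataclass, asdict
import json

@dataclass
class IntentMessage:
    """Hypothetical vehicle-to-vehicle broadcast -- not any real standard."""
    vehicle_id: str      # anonymised identifier for the sending vehicle
    timestamp_ms: int    # time the message was generated
    lat: float           # position, decimal degrees
    lon: float
    speed_mps: float     # current speed, metres per second
    heading_deg: float   # direction of travel
    intent: str          # e.g. "lane_change_left", "hard_brake", "turn_right"

def encode(msg: IntentMessage) -> bytes:
    """Serialise the message for broadcast; a real standard would fix the wire format."""
    return json.dumps(asdict(msg)).encode("utf-8")

# Example: announce an upcoming lane change to nearby vehicles.
packet = encode(IntentMessage("veh-042", 1468000000000, 43.65, -79.38,
                              27.0, 92.5, "lane_change_left"))

Agreeing on even a handful of fields like these, across every manufacturer, is exactly the kind of standardisation effort that drags legislation back into the picture.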

Of course, all the necessary steps, red tape and bureaucracy will always have to contend with one distinctly human fact: change is difficult.

There are people who would outright refuse to use vehicle automation, whether because they are driving enthusiasts or because they are skeptical of the software’s capability. They may not live in the urban centers where vehicle automation is likely to have the greatest impact, preferring to drive an old Chevy pickup on dirt roads or take a modified 4x4 off-road. These are not easy cases to deal with via automation. But even putting the idiosyncratic cases aside, there is one other major hurdle to the mass implementation of self-driving cars.


Autonomous Vehicle Ethics

Some people like to stay in control; others would be more than willing to give up that control for an easy commute to the office or safety on busy streets. People often have more faith in themselves than in others, or in software designed to accomplish a specific task, whether that confidence is warranted or not.

People’s expectations for autonomous vehicles are no different.

Humans make split-second decisions when driving, working on a mix of training and instinct. Autopilot software, on the other hand, while capable of a measure of “learning”, ultimately follows a set of rules.

What happens when these rules run counter to our ethical judgements?

(Image courtesy of MIT.)
A study co-authored by an MIT professor asked respondents whether they preferred their automated vehicles to “minimize casualties in situations of extreme danger” – for example, “having a car with one rider swerve off the road and crash to avoid a crowd of 10 pedestrians.”

Unsurprisingly, most people preferred the utilitarian approach, i.e., the action that would save the most lives. However, respondents also indicated that they would be less likely to drive or own a car that would show preference to others’ lives over their own.
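
To make the tension concrete, here is a deliberately simplified, purely illustrative sketch of two decision policies applied to the study’s example scenario. No manufacturer’s software is structured this way, and the numbers are invented.

from dataclasses import dataclass

@dataclass
class Scenario:
    """An unavoidable-crash scenario reduced to counts of people at risk (invented numbers)."""
    occupants_at_risk_if_swerve: int   # e.g. the rider, if the car leaves the road
    pedestrians_at_risk_if_stay: int   # e.g. the crowd ahead, if the car holds its course

def utilitarian_policy(s: Scenario) -> str:
    """Minimise total expected casualties, regardless of who they are."""
    if s.occupants_at_risk_if_swerve < s.pedestrians_at_risk_if_stay:
        return "swerve"
    return "stay"

def occupant_first_policy(s: Scenario) -> str:
    """Protect the people inside the car whenever swerving would put them at risk."""
    if s.occupants_at_risk_if_swerve > 0:
        return "stay"
    return "swerve"

# The study's example: one rider versus ten pedestrians.
s = Scenario(occupants_at_risk_if_swerve=1, pedestrians_at_risk_if_stay=10)
print(utilitarian_policy(s))     # "swerve" -- sacrifices the rider
print(occupant_first_policy(s))  # "stay"   -- protects the rider

Two cars running these two policies would make opposite choices in the identical situation, which is the crux of the dilemma.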

This harkens back to what philosophers and ethicists call the Trolley Problem, which involves weighing one life against several others.

Illustration of the classic Trolley Problem. Do you pull the switch to divert the train, saving five people but killing one, or do you decline to pull, not killing anyone but allowing five people to die?
This results in a social dilemma where self-interest could end up making roads more dangerous for everyone. This moral dissonance could potentially have a severe impact on the adoption of self-driving cars.

Unwilling to step around the issue, a representative of Daimler AG recently stated that the company’s cars would prioritize saving the driver and the car’s passengers over pedestrians.

“If you know you can save at least one person, at least save that one. Save the one in the car,” said Christoph von Hugo, manager of driver assistance systems at Mercedes-Benz.

“If all you know for sure is that one death can be prevented, then that’s your first priority.”

Despite a later correction noting that the “statement by Daimler on this topic has been quoted incorrectly,” the episode points towards potential ethical problems as well.

If one auto manufacturer decides to put its passengers first, it may make the cars more attractive to consumers, but potentially at the cost of overall safety on the road. In line with the problem of human error, if every car is working on a different measure of “morality” without working together to ensure these commands do not conflict, will the roads be any safer?

Daimler’s official policy revolves less around ‘whom to save’ and more around ‘staying out of that situation in the first place.’ The idea is that by making automated driving systems essentially perfect, the Trolley Problem won’t even come up, and everyone can go home happy and healthy, at least in theory.

Von Hugo concluded with a statement supporting this official stance: “This moral question of whom to save: 99 percent of our engineering work is to prevent these situations from happening at all.”

Remember, the NHTSA did say that 94 percent of accidents are due to human error.

This all ties back into the issue of liability: who is the culprit in a fatal crash where the car decided who should be saved – its driver, or pedestrians?

Is it cynical to think that not all human drivers would make a better moral decision than their car when relying on instinct?

In the end, these ethical questions will be decided by the engineers who design autonomous vehicles, much to the philosophers’ chagrin.


Self-Driving Cars: Legislation, Liability and Ethics

From left to right, the Tesla Model S, Google’s homemade self-driving prototype, and the Mercedes-Benz F 015 autonomous research vehicle. (Images courtesy of Tesla Motors, Google and Mercedes-Benz respectively.)
Automated vehicle technology is reaching a point where it will either be widely adopted and become part of our daily drive, or be cast aside in favor of better driver assistance - leaving control firmly in the hands of the person behind the wheel.

There are more questions than answers at this point, and while it is, for better or worse, up to lawyers and politicians to decide how the automotive industry will move forward, it is the engineers who are at the forefront of this technological revolution.

Vehicle automation will only succeed if we ask the difficult moral questions and identify the balance between personal security and saving lives. The software also needs to be secure enough to resist interference or intervention by an outside party; without strong cybersecurity, people will not feel safe enough to purchase one of these cars.

Finding a way to integrate a person’s individualism with the potential fleet of automated people movers will be a challenge that may be more difficult than legislating vehicle automation. Anticipating these needs and factoring them into a design will not be easy.

For automotive manufacturers to be comfortable enough to follow in the steps of Volvo, Mercedes and Google, their cars must reach a level of safety and efficiency at which the automation can make up for the myriad ways human error can interfere with the safe operation of a car.

What it comes down to is that there is no easy answer that will get vehicle automation into the mainstream faster. Safety standards will need to be refined before people will be comfortable relinquishing the wheel, and those, too, will be in the hands of engineers.

These hindrances don’t stop automated vehicle technology and the discussions surrounding it from being immensely interesting, a big step towards our technological future—and hopefully a faster, more productive commute.

For more information on the history of autonomous vehicles, check out our feature covering one hundred years of driverless cars.

If you’re more interested in autonomous vehicle technology itself, check out our feature on what tech it will take to put self-driving cars on the road.