Should Tesla’s autopilot cars be allowed on public roads following accidents?

By Toby Walsh

Humans are terrible at driving. The US Department of Transportation estimates that 94% of crashes are due to driver error.

We drive too fast. We get distracted. We make poor decisions.

If history is anything to go by, more than 1000 people are likely to die on Australian roads in the next year. Each death is a tragedy for the families and friends of those killed.

And it’s also a big drain on our economy. Each fatal accident costs around $2 million. The total cost to our economy from all car accidents is more than $17 billion per year.

Autonomous cars are going to be far better drivers. There is therefore a moral imperative to get them onto our roads as soon as possible.

They will also bring many other benefits such as reducing congestion, lowering transport costs and bringing personal mobility to the elderly, disabled and young.

Tesla, more than any car company, has been pushing the field. Its technology is impressive and improving rapidly.

Accidents

Accidents are, however, happening at an increasing rate as autonomous cars become more common.

On Valentine’s Day this year, one of Google’s autonomous cars caused its first crash when it pulled out in front of a bus. Fortunately, no one was hurt.

Just three months later, on May 7, a Tesla Model S driving autonomously on a road with a 65mph (about 105km/h) speed limit drove into a truck turning across the highway. The driver, Joshua Brown, who was sitting in the driver’s seat of the Tesla, was killed. According to reports, he was watching a Harry Potter movie.

What actually happened in the lead-up to the accident is currently under investigation.

Tesla issued a statement acknowledging the tragic loss and saying its instructions require drivers who engage the autopilot mode to monitor the road and be ready to take back control at short notice.

And last Sunday in the US, a second Tesla car crashed while being used autonomously. No one was injured. But how long before a Tesla car kills an innocent member of the public, a pedestrian or person in another vehicle?

Public safety

Silicon Valley’s “fail fast” culture may work for Facebook. No one is likely to be seriously hurt when their news feed is messed up. But fail fast is too risky for public safety.

Is it responsible for Tesla to release this technology into the wild when serious questions surround its safety?

Will the human driver monitor the road adequately? Will they be able to take back control quickly enough?

It is not sufficient that the human driver gave consent; the rest of us using the roads have not given our consent.

Since human lives are at stake, drug companies do not get to test their new products on the general population. Should car companies be allowed to do so?

Tesla plans to put out a blog post to educate Tesla owners on how to use the autonomous features of their cars safely. I doubt this is enough.

The US Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) has defined five different levels of autonomous driving, ranging from zero to four (level zero is where the driver remains in control at all times).

Should level three autonomy be allowed, where a human driver may be required to take back control at any moment? Or should we only allow level four, where the system will work safely even if the human driver fails to take back control promptly?

Should Tesla be allowed to push updates out without extensive testing?

On average, one person dies for every 100 million miles (160 million kilometres) driven. According to Tesla, this was the first fatality in 130 million miles (210 million kilometres) of its autopilot software driving autonomously.

This would suggest that Tesla’s autonomous driving is roughly as safe as a human driver. That is probably not good enough to justify handing it control. Remember that humans still have to take over when the driving gets more difficult. I would want autonomous cars to be much safer before we give them control.
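As a rough illustration, here is a back-of-the-envelope comparison of the two rates quoted above (a minimal sketch using only the figures cited in this article; the variable names are mine):

```python
# Back-of-the-envelope comparison of the two fatality rates cited above:
# roughly one death per 100 million miles for human drivers, and one death
# (a single event) in about 130 million miles of autopilot driving.

MILES_PER_BILLION = 1_000_000_000

human_miles_per_fatality = 100_000_000      # ~1 death per 100 million miles
autopilot_miles_per_fatality = 130_000_000  # ~1 death per 130 million miles

human_rate = MILES_PER_BILLION / human_miles_per_fatality          # deaths per billion miles
autopilot_rate = MILES_PER_BILLION / autopilot_miles_per_fatality  # deaths per billion miles

print(f"Human drivers:   {human_rate:.1f} fatalities per billion miles")
print(f"Tesla autopilot: {autopilot_rate:.1f} fatalities per billion miles")
# Prints roughly 10.0 versus 7.7 -- similar levels, and the autopilot figure
# rests on a single fatality, so it is only a rough guide.
```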

Safety review

Regulatory authorities are waking up to these concerns. StaySafe, the joint standing committee of the NSW parliament focused on road safety, is in the middle of an inquiry into driverless vehicles and road safety.

We need to act swiftly to update the rules and the regulatory environment. For instance, none of the legislation introduced to permit driverless cars onto the roads of California, Nevada, South Australia and elsewhere requires autonomous cars to be distinguishable from human-driven cars.

In my testimony to the StaySafe committee, I argued that we need to put special plates on such cars or even a magenta flashing light.

A friend who owns a Tesla told me of a situation they encountered recently where this was needed.

The driver of a car in a merging lane expected my friend’s Tesla to speed up or slow down to create a suitable gap. But Tesla’s software is not programmed for courtesy. It continued to drive at a constant speed, and my friend had to take back control to prevent a high-speed collision.

When we started to build aeroplanes 100 years ago, anything went. But we quickly constructed a strong regulatory framework to ensure public safety. We need to build similar safeguards into the emerging industry for driverless cars.

Until this has happened, we need to question whether Tesla’s autopilot should be allowed on our roads.

Toby Walsh is a professor of AI at UNSW and a research group leader at Data61.

This article was originally published on The Conversation. Read the original article.
