
How Safe Is Safe Enough for a Self-Driving Car?


(Teodros Hailye/KQED)

Don’t look now, but self-driving cars are accelerating rapidly toward an on-ramp near you.

To explain why, federal officials point to two facts: (1) some 35,000 people died in crashes on U.S. roads last year, and (2) human error was to blame more than 90 percent of the time. They say one remedy might be to let computers take the wheel.

The prospect of automating your commute has attracted companies like Uber — whose experimental cars in Pittsburgh still feature human backup drivers — and “stealth” tech startups, along with heavyweights like Volkswagen, Nissan and Mercedes-Benz. Fifteen companies already have permits to test self-driving cars on California’s roads. You might spot one in the wild today.

The Mercedes-Benz F 015 self-driving car at the top of Conzelman Road on the morning of March 3, 2015. (Kelvin Yap)

“It’s now clear that the next decade is going to be defined by the automation of the automobile,” Ford CEO Mark Fields declared at a recent press conference in Palo Alto. Fields was announcing the iconic carmaker’s plan to mass-produce a fully driverless car – one with no steering wheel and no pedals. He claims such machines will be on the road in the next five years.

“Ready or not, they’re coming,” U.S. Transportation Secretary Anthony Foxx said this summer. “And the choice we have in government is to either act, or react.”


In September, federal regulators released some much-anticipated guidelines for carmakers, governing crucial questions like car safety, cyber-security and ethics. Government officials want to head off an emerging state-by-state patchwork of laws that could hamper new technology in the long run.

‘The Elephant in the Room’

Not that auto-automation isn’t here already. Higher-end cars you can buy today boast features like “lane-keeping,” to keep your car centered in its lane on the highway, and adaptive cruise control, which adjusts to the speed of traffic around you.

People are already abusing these features. Online videos show drivers climbing into the back seat while their cars steer themselves down the highway. One, entitled ‘Invisible Driver Prank In A Tesla!’, catches the reactions of passing motorists, dumbfounded as the seemingly empty car cruises along an interstate.

Tesla’s Autopilot feature attained some infamy after a fatal incident on a Florida highway in May. Joshua Brown, a 40-year-old man from Ohio, died when neither he nor his Model S braked before the car drove under an 18-wheeler. (Tesla has since released a software update that company co-founder Elon Musk says likely would’ve prevented the crash.)

Speaking to a hotel ballroom full of engineers and policy experts in San Francisco this summer, Mark Rosekind, head of the National Highway Traffic Safety Administration, referred to the incident as “the elephant in the room.”

Rosekind wouldn’t reveal details of an investigation, but said, “No one incident will derail the Department of Transportation and NHTSA from its mission to improve safety on the roads by pursuing new, life-saving technologies.”

He went on to argue that “perfect” should not be the enemy of the good when it comes to roadway safety.

“We should be desperate for new tools that will help us save lives,” Rosekind told his audience. “If we wait for perfect, we’ll be waiting for a very, very long time.”

Still, a precise definition of what “safe enough” will mean for self-driving cars is elusive. A teenager can qualify for a driver’s license by passing a quick road test, but machines will be held to a higher standard.


‘Driving Better Than I Would’

“We saw how hard it is to keep people engaged in the driving task,” says Sarah Hunter, head of policy at X, formerly Google X.

“They get switched off,” she says. “They find it very hard to focus on something they’re not actually meant to be doing.”

Google already has more than 50 experimental self-driving cars on the road. Some are two-seat prototypes that look like something a cartoon mouse might roll up in. The one I rode in was a white Lexus outfitted with sensors that suggest a car wearing Google Glass.

These vehicles feature cameras that can see what color a traffic light is, along with radar and lidar (light detection and ranging) to measure the distance to surrounding objects. All this sensor data is integrated with highly precise maps to give the car a detailed sense of what’s going on around it.
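To make that pipeline concrete, here is a minimal, purely illustrative sketch of the idea in Python: range readings from radar and lidar and attributes from cameras get merged into a single picture of each nearby object, then annotated with prior map knowledge. Every name and structure below is invented for explanation; Google has not published its perception code.

```python
# A toy "world model" merging camera, radar and lidar detections with a
# prior map. Illustrative only -- all names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Detection:
    sensor: str          # "camera", "radar" or "lidar"
    obj_id: str          # e.g. "traffic_light_1", "pedestrian_3"
    distance_m: float    # estimated range to the object
    attributes: dict = field(default_factory=dict)  # e.g. {"color": "red"}

def build_world_model(detections, map_features):
    """Fuse live detections with static map data, keyed by object."""
    world = {}
    for d in detections:
        entry = world.setdefault(d.obj_id, {"ranges": [], "attributes": {}})
        entry["ranges"].append(d.distance_m)      # radar/lidar contribute range
        entry["attributes"].update(d.attributes)  # camera contributes color etc.
    for obj_id, entry in world.items():
        # Naive fusion: average the range estimates from all sensors.
        entry["distance_m"] = sum(entry["ranges"]) / len(entry["ranges"])
        entry["map_info"] = map_features.get(obj_id, {})  # prior map knowledge
    return world

# Example: two sensors see the same light; the map says which lane it controls.
model = build_world_model(
    [Detection("lidar", "traffic_light_1", 42.1),
     Detection("camera", "traffic_light_1", 41.8, {"color": "red"})],
    {"traffic_light_1": {"controls_lane": 2}},
)
print(model["traffic_light_1"]["attributes"]["color"])  # -> red
```

Real systems fuse these streams probabilistically and continuously, but the basic shape — live sensors filling in what the static map can’t know — is the same.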

The result, seen on a laptop screen inside, resembles a vibrant, highly distracting update of Frogger, the arcade game in which success is achieved by crossing traffic without dying.

Such cars also feature a cartoonishly large red button in the console: California requires companies testing such vehicles to provide a quick way to shut off self-driving mode, along with $5 million in insurance coverage, though restrictions are already starting to loosen. Governor Jerry Brown just signed a bill permitting test vehicles to roll without steering wheels or pedals.

My adventure in Google’s car was a quick jaunt around the block in Mountain View. In the driver’s seat, Karen Isgrigg kept her hands inches from the steering wheel in case she needed to take over.

She says occasionally the car will act strangely — say by lightly braking at unexpected moments — but then it turns out the computer is sensing something Isgrigg doesn’t see, like someone running a red light, and is responding prudently.

Google has more than fifty experimental self-driving cars on the road. About half are like this one – a modified Lexus RX450h – while the rest are small prototype test vehicles that don’t have steering wheels. (Daniel Potter)

“The car is driving better than I would as a human being,” she says.

This, in a nutshell, is Google’s objective, though it may come about gradually, like preparing a human student driver.

“You start them out in familiar, calmer environments first,” says Hunter, “slowly introducing them to more complex places like a local mall before allowing them onto the freeway or into the city.”

That could happen in perhaps three years, she told me. But it may be 30 years before self-driving cars can go anywhere in the world, mastering more sophisticated driving challenges, such as icy roads or interpreting hand motions from crossing guards.

Moral Judgments

In late September, federal regulators released 116 pages of guidelines on self-driving cars. While these will eventually become rules, for now they hinge on industry cooperation. They’re intended to be updated annually, though even that pace may seem glacial next to technology that can change with the download of a new software patch.

The guidelines ask carmakers to make their case with a fifteen-point safety assessment, including how self-driving cars will sense and respond to surrounding vehicles, where they’re intended to operate, how they’re being tested, and how they’ll protect data and safeguard against hackers.

There’s also a section on ethics, because self-driving cars will be expected to act morally, just like the rest of us.

Let’s say you’re cruising down the street when a kid runs out from behind a tree. You have only a second to react, and you swerve into a parked truck rather than hit the child. Within that second, you’ve made a moral judgment, choosing the lesser of two “evils.”

Or, say you’d like to give a cyclist some breathing room as you pass on the left, but doing so means illegally crossing a double-yellow line.

In scenarios like these, self-driving cars will do as their programmers have instructed — and their actions may carry moral and legal consequences.
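No carmaker has published the decision logic behind such choices, but the basic point can be sketched in a few hypothetical lines of Python: once candidate maneuvers are scored, the weights a programmer assigns to injury risk versus law-breaking effectively encode a moral judgment. Every number below is invented purely to illustrate that.

```python
# Hypothetical "lesser of two evils" chooser. The maneuvers, risk
# estimates and weights are invented for illustration only.
CANDIDATE_MANEUVERS = [
    # (name, estimated injury risk, traffic-law violation penalty)
    ("brake hard in lane",       0.30, 0.0),
    ("swerve into parked truck", 0.10, 0.2),  # property damage, no violation
    ("cross double-yellow line", 0.01, 1.0),  # safest option, but illegal
]

INJURY_WEIGHT = 10.0  # how heavily the programmer weighs harm to people
LAW_WEIGHT = 1.0      # how heavily the programmer weighs breaking the law

def pick_maneuver(options):
    """Return the option with the lowest weighted cost."""
    def cost(option):
        _, injury_risk, law_penalty = option
        return INJURY_WEIGHT * injury_risk + LAW_WEIGHT * law_penalty
    return min(options, key=cost)

print(pick_maneuver(CANDIDATE_MANEUVERS)[0])  # -> cross double-yellow line
```

Change the two weights and the “right” answer changes with them — which is exactly why regulators want carmakers to document this kind of reasoning.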

The advent of self-driving cars could have ripple effects throughout society, according to Ryan Jenkins, a philosophy professor who explores the ethics of emerging technologies at Cal Poly in San Luis Obispo.

One is that there’s no guarantee they’ll reduce congestion. Say you’re working late and decide to send your self-driving car to pick up your kid before it doubles back by the office for you.

A Chevy Volt with self-driving car technology in front of KQED. (Teodros Hailye/KQED)

“It’s not obvious that it’s going to reduce the total number of trips that are taken,” says Jenkins, “and some people think it might not reduce the total number of cars on the road.”

He also noted that thousands of people await transplants of vital organs each year, and that fatal car crashes are a leading source of donor organs.

“If we reduce by ten thousand or twenty thousand the number of people who are killed in car crashes, we’re going to reduce the number of donor organs. This might mean that more people die waiting for organ donations,” he said.

There are other concerns. If your self-driving car causes a wreck, you might blame the people who programmed it. This shift in liability stands to shake up the insurance industry.

It’s also conceivable that, as we come to trust the technology, our own driving skills will deteriorate, making us humans even less reliable in the driver’s seat. Jenkins calls this effect “skill-rot.”

“Those are the kinds of second- and third- and fourth-order effects that I think society should be thinking about and trying to prepare for,” he says.

Can We Get There From Here?

While many are bullish on self-driving cars becoming widespread over the next few years, not everyone believes that future is just over the horizon.

Take Ford’s projection that it will be mass-producing cars with no steering wheel by 2021.

“I think statements like that raise expectations beyond what the reality is likely to be,” says Steve Shladover, a research engineer at UC Berkeley.

“I expect that they will have something out on the road by 2021,” he speculates, “but it will probably be something that provides very limited functionality in very limited geographical space, within a very limited speed range.”

One reason Shladover cites for skepticism is the car industry’s decades-long history of overpromising such technologies. Another is this: It’s really hard to prove that driverless cars can navigate the road as safely as humans can.

“Human drivers today are way better than we give them credit for,” Shladover says. “They’re not perfect, but they’re pretty damn close to perfect, when you think of how long they drive, on average, between serious crashes.”
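A rough back-of-envelope calculation shows what he means. Taking the 35,000 annual deaths cited earlier in this story and the roughly 3 trillion miles Americans drive each year (an approximate public figure, used loosely here), human drivers average on the order of one fatality per 86 million miles:

```python
# Back-of-envelope math behind Shladover's point. The death toll comes
# from earlier in this story; annual U.S. mileage of ~3 trillion is an
# approximate figure.
annual_deaths = 35_000
annual_vehicle_miles = 3.0e12  # roughly 3 trillion miles driven per year

miles_per_death = annual_vehicle_miles / annual_deaths
print(f"About one death per {miles_per_death / 1e6:.0f} million miles")
# -> About one death per 86 million miles
```

At that rate, even a test fleet logging millions of miles would need years of crash-free driving before anyone could claim, with statistical confidence, that its machines beat the human record.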

There is another option for saving lives on the road. We could just try spending less time in the cars we have right now.

“The traffic safety statistics showed big improvement during the recession,” Shladover says, “because people were driving less.”


Music in this story from Launch Patterns for the Space Age Girl by LowLiFi.
