Are Autonomous Cars Actually Better At Driving Than Humans? Maybe Not.

Whether we like it or not, self-driving cars are the future.

Autonomous cars could change life as we know it, and tech entrepreneurs won’t let us forget it. In a decade or so, we’ll be carted around by computerized chauffeurs, lazily flipping through our Instagram feeds while advanced algorithms deliver us safely to work, school, or wherever else we want to go. It sounds pretty wonderful, particularly if you’re familiar with the United States’ fairly horrific road safety statistics.

If you’re still not excited, read through a few Elon Musk tweets, and you’ll be begging for a robot driver. All Tesla vehicles, by the way, are equipped with the necessary hardware for “full self-driving capability at a safety level substantially greater than that of a human driver,” per the company’s website.

However, in our race to rid the world of human drivers, we may have skipped over something important: Do we really know for sure that autonomous vehicles are safer than human-driven vehicles?

Recent high-profile incidents have brought renewed scrutiny to that question. For starters…

In late 2016, a Volvo equipped with Uber’s self-driving technology zipped through a red light near San Francisco’s Museum of Modern Art. Per a report from The New York Times, anonymous Uber employees said that the violation occurred when the vehicle was in autonomous mode—not when its human driver had control, as the company officially claimed.

In February 2016, the California Department of Motor Vehicles concluded that a Google vehicle’s autonomous system was responsible for an accident in Mountain View. According to the incident report, the Google vehicle had difficulty navigating around sandbags near a storm drain; after avoiding the obstacle, it attempted to return to the center of a lane and made contact with the side of a bus. It was traveling at around 2 mph at the time of impact, and no injuries were reported as a result of the accident.

In January, photographer Oscar Nilsson sued General Motors after his motorcycle collided with a self-driving Chevrolet Bolt that, he says, suddenly aborted a lane change. GM had recently promoted the Bolt as “the world’s first mass-producible car designed to operate without a driver.”

In March, a Tesla Model X SUV collided with a concrete highway lane divider while Autopilot, the feature that attempts to keep Tesla vehicles within their lanes, was engaged. If a driver takes their hands off the wheel, Autopilot gives a visual warning after a few seconds, then an audible warning; if the driver still doesn’t take the wheel, the vehicle pulls over to the side of the road. According to Tesla, driver Wei Huang hadn’t touched the wheel for about six seconds when the accident occurred, and he’d received several visual warnings and one audible warning earlier in the drive.
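
To make that escalation logic concrete, here’s a minimal Python sketch of how a timed warning ladder like the one described above might be structured. The thresholds and action names are hypothetical placeholders, not Tesla’s actual parameters or code.

```python
from typing import Literal

# A toy sketch of a hands-off-wheel warning escalation policy. All numbers
# here are hypothetical illustrations, not Tesla's real values.

VISUAL_WARNING_AFTER_S = 5.0    # hypothetical: visual alert after 5 s hands-off
AUDIBLE_WARNING_AFTER_S = 10.0  # hypothetical: add an audible alert after 10 s
PULL_OVER_AFTER_S = 15.0        # hypothetical: pull over after 15 s with no response

Action = Literal["none", "visual_warning", "audible_warning", "pull_over"]

def escalation_action(hands_off_seconds: float) -> Action:
    """Return the intervention for a given continuous hands-off duration."""
    if hands_off_seconds >= PULL_OVER_AFTER_S:
        return "pull_over"        # driver unresponsive: slow down and stop safely
    if hands_off_seconds >= AUDIBLE_WARNING_AFTER_S:
        return "audible_warning"  # escalate to a chime
    if hands_off_seconds >= VISUAL_WARNING_AFTER_S:
        return "visual_warning"   # flash a message on the instrument cluster
    return "none"                 # hands on the wheel, or within the grace period

if __name__ == "__main__":
    for t in (2.0, 6.0, 12.0, 20.0):
        print(f"{t:>4.1f}s hands-off -> {escalation_action(t)}")
```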

Perhaps most disturbingly, a recent accident involving a self-driving Uber vehicle took the life of a pedestrian. The victim, Elaine Herzberg, was dressed in dark clothing and appears to have been jaywalking, but experts note that the car should have been able to avoid the accident: LIDAR (the laser-based detection method used by most self-driving vehicles) doesn’t rely on ambient light, so darkness and dark clothing shouldn’t have hidden her from the car’s sensors.

In each incident, the autonomous system appears to have mishandled a situation that an alert, capable human driver could have managed. That flies in the face of reports claiming that self-driving vehicles are exceptionally safe.

Granted, these are individual incidents, and they may not be representative of the industry. Millions of human drivers do dumber things every day—the next time you’re on the road, try to count the people glued to their smartphones in traffic.

But in order to trust artificial intelligence with our lives, we need some assurance that it won’t make the kinds of dumb mistakes we’d expect from humans. By that standard, we’re clearly not ready to hand the wheel to the robots just yet.

What needs to change?

Some experts say that accidents like these aren’t easily avoidable with today’s technology.

The current generation of self-driving vehicles (for instance, Teslas with the experimental Autopilot feature) works fairly well on long stretches of road where following a few simple rules is enough: stay in the lane, avoid the occasional obstruction, and maintain a safe speed.

“These segments are bound to end, at which point a human driver has to re-engage and take control over the car,” says Tal Krzypow, vice president of product management with eyeSight Technologies, a company that produces computer vision models for vehicles. “This hand-off point is a critical point where we see failures happening.”

“By nature, we humans tend to adapt quickly to environments and filter out constant yet unchanging (or boring) stimuli. When a driver switches on the autopilot on a long stretch of highway, they will eventually drift to do something else: texting, talking, [or] web browsing.”

In a perfect world, humans would stay perfectly alert at all times. If that were possible, however, we probably wouldn’t need self-driving cars in the first place. At some point, the vehicles have to get the driver’s attention, and that’s not always easy.

“The problem arises when the car has to hand off the control and the driver is not mentally there,” Krzypow says. “One option to address that issue is for the driver to stay focused on the road, with hands on the steering wheel, and ready to re-engage in a heartbeat. Honestly, such level of alertness would be somewhat unnatural and unfair [for us] to expect.”

“Another option is to augment the autonomous systems with means to monitor not only the road, but also the drivers themselves. We see that becoming a requirement. …[Institutions] such as the Euro NCAP (European New Car Assessment Program), which tests vehicle safety and provides the star ratings, [expect] to see driver monitoring systems in new cars starting in 2020.”

“When you monitor the driver with advanced computer vision, the car’s safety systems can be informed in real time when the driver is distracted, drowsy, or outright asleep. In such cases, proactive measures can be taken, from audio alerts and vibrating seats to reducing the speed and stopping on the shoulder—[or even] automatically calling an emergency contact to make sure the driver is capable of driving.”

The next generation of self-driving cars will probably need to leverage all of these tools in order to improve safety.

“When you combine the technologies together, you get a vehicle that understands both the external and internal environment and can sync the two, resulting in an overall safer experience—for the passengers and others around the vehicle. It’s important to realize this is not science fiction, but technology that is already available today. It’s evaluated by automotive manufacturers and will become standard safety equipment very soon.”
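
As a thought experiment, here’s a short Python sketch of what “syncing the two” might look like at the decision level: a driver-monitoring estimate feeding into the car’s choice of intervention. The states, cases, and actions are hypothetical illustrations, not eyeSight’s or any manufacturer’s actual system.

```python
from enum import Enum

# A toy sketch combining an internal (driver) estimate with an external
# (road) condition to pick a response. Hypothetical logic for illustration.

class DriverState(Enum):
    ALERT = "alert"
    DISTRACTED = "distracted"
    DROWSY = "drowsy"
    ASLEEP = "asleep"

def choose_response(driver: DriverState, handoff_needed: bool) -> str:
    """Pick an intervention from the driver's state and whether the
    autonomous segment is ending (i.e., a hand-off is required)."""
    if driver is DriverState.ASLEEP:
        # Most severe case: reduce speed, stop safely, phone for help.
        return "slow down, stop on the shoulder, call an emergency contact"
    if handoff_needed and driver is not DriverState.ALERT:
        # The critical failure mode from the article: control must be
        # handed back, but the driver is not mentally present.
        return "audible alert and seat vibration until the driver re-engages"
    if driver is DriverState.DROWSY:
        return "audible alert; suggest a rest stop"
    if driver is DriverState.DISTRACTED:
        return "visual alert on the instrument cluster"
    return "no intervention"

if __name__ == "__main__":
    print(choose_response(DriverState.DISTRACTED, handoff_needed=True))
    print(choose_response(DriverState.ALERT, handoff_needed=False))
```

In a real vehicle, the driver-state estimate would come from a camera-based monitoring system rather than being passed in directly; the point of the sketch is simply that the hand-off decision depends on both environments at once.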

That addresses features like Autopilot, but truly autonomous vehicles are on another level. It’s one thing to stay within a lane and maintain a safe distance from other vehicles; it’s quite another to stop at stoplights, avoid obstructions, and handle other fairly complex tasks. For now, machines simply aren’t as capable as experienced human drivers.

“Sometimes I hear [the] industry talk about autonomous vehicles as though they’re about to put the safest driver on the road,” Nidhi Kalra, senior information scientist at the nonprofit RAND Corporation, told Scientific American in 2017. “The reality is it’s more like putting a teenage driver on the road.”

Soon, the self-driving vehicle industry might actually need regulation to survive.

We spoke with several experts directly involved in the autonomous vehicle industry, and somewhat surprisingly, they all seemed to favor new layers of regulation.

“The autonomous pilot [programs] are crucial for this industry’s progress,” says Kobi Marenko, CEO of Arbe Robotics, which produces imaging radar systems intended for use in autonomous vehicles. “However, the public has good reasons to be concerned in light of [recent accidents]. We feel it is the role of regulators to set higher safety standards and make sure the most advanced solutions are placed into action.”

Krzypow agrees with that assessment, noting that pilot programs need to continue for self-driving vehicles to become safer.

“The technology provides too many benefits to slow progress,” he says. “Autonomous driving will eventually be safer than having humans drive, as long as we address the full scenario and not only parts of it.”

For starters, that means holding autonomous vehicle manufacturers accountable for accidents. This March, Arizona’s governor issued an executive order that could allow manufacturers to be criminally charged when their autonomous vehicles cause accidents. In 2017, Nevada passed Assembly Bill 69, which establishes penalties and regulations for autonomous vehicle companies.

The technology community has cautiously embraced these new rules. Musk, undoubtedly the industry’s biggest celebrity, seems to be fully on board.

Perhaps that’s not so surprising. After all, one of the industry’s biggest challenges is convincing consumers that autonomous vehicles are completely safe; every fender-bender is a major setback, so car companies have a big incentive to prevent serious accidents from occurring.

“While self-driving cars promise improved safety, some critical issues must be addressed to ensure they are at least as safe as human-driven cars—and, hopefully, even safer,” says Philip Koopman, associate professor of electrical and computer engineering at Carnegie Mellon University. “We need assurance, preferably from independent safety reviewers, that [autonomous vehicle] companies are safely testing their vehicles when on public roads.”

If you’re still looking forward to a robotic chauffeur, you should know that we’re probably farther away than you might think. Still, with enough technical advances—and, as counterintuitive as it sounds, more regulation—we’re at least on the right road.
