Designing for Trust in Self-Driving Cars

Swarit Dholakia
11 min read · Mar 11, 2019


this cute self-driving car is meant to entice you into reading my article. I think it worked

Some weeks ago, my family and I were on our way to a dinner party at a family friend’s place.

I had just watched a video lecture by Drago Anguelov from Waymo at MIT and was sharing some of what I'd learned with my parents. The conversation later dove into the (dangerous) game of predicting how long the advent of self-driving vehicles will take.

My mom had quite a strong first answer:

I don’t care how much you pay me, no freaking way I’m getting in a car without a driver.

Ok, mom.

She had made up her mind quite firmly, without even understanding that these cars are very safe creatures. They don't drive distracted, don't drive drunk, and don't commit any of the millions of other unsafe, human actions that cause over 40,000 motor vehicle fatalities in the US annually.

My mom, however, didn't care about the stats; she just hated the idea of not having a person drive the car.

The inherent problem? She didn’t trust the machine at the wheel.

People don’t trust driverless cars— even though they could save tens of thousands of lives annually

all these people seem quite suspicious of this driverless vehicle, as will millions when self-driving cars start roaming the streets in large numbers

A study conducted by AAA in 2017 surveyed Americans on their feelings about riding in, and driving alongside, self-driving vehicles. Some key conclusions were that:

  • A whopping 78% of US adults are afraid to get into a self-driving vehicle.
  • Only 10% of US adults would actually feel safer in an autonomous vehicle than in a regular car.

My mom is not alone. Extrapolating those percentages to the roughly 250 million adults in the US, around 225 million Americans feel self-driving cars are no safer than regular cars, and nearly 200 million are afraid of getting into one.
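As a back-of-envelope sanity check, here's how that extrapolation works out if we assume roughly 250 million US adults (my assumption, not a figure from the AAA study itself):

```python
# Back-of-envelope extrapolation of the AAA survey percentages.
# Assumption: ~250 million US adults circa 2017 (not from the study itself).
us_adults = 250_000_000

afraid_share = 0.78       # afraid to ride in a self-driving vehicle
feel_safer_share = 0.10   # would feel safer than in a regular car

afraid = round(us_adults * afraid_share)
not_safer = round(us_adults * (1 - feel_safer_share))

print(f"Afraid to ride in one:   ~{afraid:,}")     # ~195,000,000
print(f"Don't feel safer in one: ~{not_safer:,}")  # ~225,000,000
```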

Before I propose a counter, let me be 100% open here: I am a self-driving car optimist and believer, so take that however you like. That said, I want you to make an honest decision, and I will only provide the facts you need to do so.

Remember the 40,000 deaths per year attributed to motor vehicle accidents? Well, the NHTSA (the US's vehicle safety agency) says over 90% of those deaths are directly caused by human error.

That’s right.

That means at least 36,000 people died because a human (read: not a machine) driver made a mistake at the wheel.

Speeding. Driving under the influence. Driving distracted (texting, calling, etc.). Falling asleep at the wheel.

this crash is staged, so no one got hurt, but you definitely don’t want to be in this kind of a real situation

These human actions cause one death every 15 minutes on the road. Not to mention, that's just in the US.

Self-driving vehicles, on the other hand, don't speed, don't drive drunk or distracted, and can't get tired. Autonomous vehicles (synonymous with self-driving cars and driverless cars) are very good at obeying traffic laws and can detect road conditions and possible threats to life much earlier than humans, thanks to high-tech sensors enabling a 360° field of view with up to a 120 m range — far superior to the vision capabilities of a human.

So it’s clear that even though hundreds of millions of people don’t feel self-driving vehicles are safer than humans, these autonomous vehicles could effective eradicate 90% of deaths from car crashes in the US, annually. They really are quite safe.

Granted, self-driving cars are not vehicles sent from the Heavens

please take this with a grain of salt, and understand the whole picture before making up your mind

These vehicles are not 100% immune to accidents, and therefore not immune to injuries, or even casualties.

BEFORE YOU GET SCARED OFF AND NEVER STEP FOOT IN A DRIVERLESS VEHICLE, allow me to make my case.

Robot drivers are strict rule followers. This has led human drivers to rear-end slow-moving or stopped self-driving cars; such rear-end collisions account for most of the accidents involving autonomous vehicles.

The death in Arizona in 2018 involving an Uber test vehicle was the first pedestrian fatality involving a fully autonomous car. In this case, Uber's software failed to identify a woman pushing her bicycle across the road at night as an obstruction that mattered. The car misclassified her as an unimportant object (one that could be driven through). Uber was found not criminally liable for this death, by the way.

In Florida in 2016, the driver of a Tesla Model S who was operating the car in its so-called "Autopilot" mode died when his vehicle collided with a truck that turned into its path. In this event, Tesla's camera sensors failed to distinguish the whitish-blue truck trailer from the daytime sky behind it, effectively concluding there was nothing there.

So yes, issues do exist with autonomous vehicles, and some of these crashes could've been prevented by human drivers. However, such vehicles would still prevent most motor vehicle accidents, decreasing the number of accidents per driven mile. AKA, self-driving cars would make roads safer.

The greatest barrier standing in the way of the advent of self-driving cars is getting people to trust the ability and safe nature of machines at the wheel.

An older-version Waymo vehicle — designed to look cute and innocent and settle all your self-driving car fears

In 1985, the cost of 1 GB of hard drive storage space was pegged at around $300,000. By 2010, that same cost had fallen to about $0.03/GB.
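Those two data points imply a staggering rate of decline. A quick sketch of the implied compound annual rate, taking the figures above at face value:

```python
# Implied compound annual rate of decline in hard-drive cost per GB,
# from ~$300,000/GB in 1985 to ~$0.03/GB in 2010 (figures from the text).
cost_1985 = 300_000.0   # $/GB
cost_2010 = 0.03        # $/GB
years = 2010 - 1985

cagr = (cost_2010 / cost_1985) ** (1 / years) - 1
print(f"Implied annual change: {cagr:.1%}")  # about -47.5% per year
```

In other words, storage got roughly twice as cheap every year and a half for a quarter century.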

In that same year, 1985, there were around 30,000 cell phone users in the US, or about 0.012% of the population. By 2010, that number was more like 300 million users, covering the vast majority of the US population.

The point is this: the technology will develop, costs will fall, and it will become affordable, scalable, and capable of serving users. The driverless car will have its 'cell phone moment'. The question is, will there be demand for such technology?

For cell phones, there was. And as demand increased, the cost plummeted and capacity of technology increased exponentially, providing a constant, exceptional customer experience. Thing is, people decided to embrace the cell phone.

For self-driving cars, development is already taking place. Companies and researchers are pushing the boundaries of the technology's capabilities, which is driving cost down. The question is no longer whether it's possible to make autonomous vehicle tech scalable; it's whether we will allow it to scale.

The goal — alongside further developing the technology — is for companies to make the general public feel good and safe about self-driving cars.

The interesting thing about this problem is that self-driving cars are already safe.

So it really comes down to how these cars make people feel, rather than throwing hard data at people to prove our point.

Allow me to give you a non-self-driving car example.

Millions of people every day open their phone (maybe yourself included), click on this one app, search for a destination, and pay some stranger (often halfway across the world) to sleep in their home, or even share a room with another complete stranger. Airbnb has created an experience that reassures both hosts and guests, through design, making them trust one another and feel okay with lending out a home. Crazy. Many of those very users would have called the idea absurd ten years ago. 🤷‍♀️

We need to design an experience that lets people trust what they otherwise wouldn't.

parents should feel comfortable throwing their kids in a car with no driver, as a commute to school

How do we make autonomous vehicles friendly and approachable for the masses?

Self-driving cars need to be attractive and friendly to users. They need to make people feel good and safe about sitting in a car with no driver.

One way of doing this? Give them the feeling that there is, in fact, a driver.

When we sit in an Uber, for example, the driver confirms our destination out loud, and we get subtle cues about our route. Like when the driver flicks on the right turn signal at an intersection where you know the car should be turning right.

Or when you see the driver's head turn in the direction of a car that's driving towards you, you know the driver is indeed paying attention.

There are tons of minor social cues that make us comfortable during a ride in a car, and they wouldn't exist in a self-driving car's operation. There's no driver to turn their head and verbally confirm the destination. You wouldn't know whether the car has even understood that the people crossing the street in front of it are indeed people (and therefore have value tied to those objects; AKA don't run into them).

In our efforts to remove error in the driving behaviour from the car, we’ve taken away the one thing that makes cars feel safe: the human.

the one time we DON’T want to be in LeBron’s position 😂

To understand the solution, we first need to understand what it is that the passenger feels and gets out of having a human driver in the front.

Most people sitting in an Uber being ferried across the city are likely on their phone. But at the same time, they want to be in control.

People want to have the ability to ask the driver to pull over or change the path they’re taking, and know that the driver is doing what they should be (and doing it well). They are in control.

Back to my example person in their Uber ride: they also have the ability to pull out the app and see where they're going and whether anyone will be sharing the ride with them. They can, of course, discuss all of this with the driver. They want oversight.

The passenger in said Uber (who we should probably name at this point, since we've been talking about them for a while) is not interested in dying today. They want to be assured their Uber ride is safe and, moreover, that the driver is operating in a safe manner. They aren't interested in getting into any collisions. They want to know that the car knows the road conditions, or whether a specific obstacle is coming up. They want to stay informed.

Unlike many, I’m the kind of Uber rider who enjoys making conversation with the driver (whether they like it or not LOL). I love talking and learning from this stranger taking me to where I want to be: it gives me a unique perspective on matters. Many like me use this as a way to keep the situation not-awkward and make a friend during the ride or learn something new. We want to feel friendly.

There are numerous strategies to replicate these human feelings in a robot-driven vehicle.

The idea is to use technology in a way that it isn't seen as technology. You don't want to substitute a person with an iPad, because the passenger knows it's not a human.

There is no point in making an elaborate setup to replicate a human. Instead, all we need to do is summon specific feelings in our passengers; ones they want.

Namely: control, oversight, friendliness, and being informed. We could, in fact, summarize these feelings in a vehicle as trust.

We can indeed use technology to provide these, with a combination of screens, sounds and lights.

These elements will often differ, though, depending on whom they need to address. Interior accessories interact only with the passenger, while exterior hardware needs to communicate with surrounding humans, like those trying to figure out if it's safe to cross in front of the vehicle.

Interior Elements

soothing colours to show you what a robot car is doing

Waymo, for example, uses a screen display mounted on the back of the front seats to give the passengers an idea of what the car is seeing, to settle their worries about the car possibly missing an obstacle.

The screen display also identifies which object on the road is influencing a decision and action of the car. If, for example, another vehicle cuts in front of the self-driving one, the screen will show that the car knows this is happening; as the self-driving car slows down, it highlights the car that cut in, identifying that vehicle as the cause of the action (the slowing down, in this case).

Exterior Elements

why thank you, fine robot vehicle

When we walk or bike through our streets, chances are we cross intersections, and hence need to interact with drivers all the time.

In most cases, given the environment, most communication is nonverbal. Like when you're crossing the street and the driver nods to you as their car comes to a stop, telling you they acknowledge you crossing and will stop (and you won't get run over and killed in the process).

Or, if a driver is in a rush and raises a flat hand to you ✋, they're asking you to stop and give them the right of way. Which is fine, because at least they won't end up running you over.

The point being, even unaligned intentions of parties on the road get cleared up and communicated, nonverbally. The problem, then, lies in giving a faceless vehicle with no one in the driver's seat the ability to make pedestrians and cyclists aware of its intentions.

The Drive.AI vehicle shown in the image above uses a series of exterior-mounted LCD monitors to communicate with pedestrians, confirming when it's safe to cross and when the car plans to proceed.

Companies like Lyft want to integrate this even more into their vehicles, and they’ve filed patents to have the windshield double as an exterior notification screen, as seen below.

patent wars: filing images drawn by a 9-year-old

Key Takeaways

  1. The inherent problem for the advent of self-driving vehicles is building trust between the riders and the machine driving the car.
  2. Even though millions of people don't feel self-driving vehicles are safer than human drivers, these vehicles could effectively eliminate 90% of deaths from car crashes in the US annually.
  3. Issues do exist with autonomous vehicles, and some crashes could've been prevented by human drivers. However, autonomous vehicles would still prevent the vast majority of motor vehicle accidents, decreasing the number of accidents per driven mile.
  4. Self-driving car companies and researchers are pushing the boundaries of the technology's capabilities, which is driving cost down. The question is no longer whether it's possible to make autonomous vehicle tech scalable, but whether people will allow it to scale.
  5. In our efforts to remove error in the driving behaviour from the car, we’ve taken away the one thing that makes cars feel safe: the human.
  6. We have to find a way to give driverless vehicles the ability to communicate and interact with pedestrians and passengers, in subtle and natural ways: make the robot car feel nothing like a robot car.

Some awesome articles you should read if you’re super interested in the design problem, as relating to autonomous vehicles:

  1. https://www.fastcompany.com/90275407/the-fate-of-self-driving-cars-hangs-on-a-7-trillion-design-problem
  2. https://www.washingtonpost.com/technology/2018/08/29/how-do-you-get-people-trust-autonomous-vehicles-this-company-is-giving-them-virtual-eyes/?utm
  3. https://techcrunch.com/2018/12/11/lyft-self-driving-car-communication-patent/
  4. https://social.ford.com/en_US/story/ford-community/move-freely/how-self-driving-cars-could-communicate-with-you-in-the-future.html
I really appreciate you spending the time to read what I wrote! Come back soon!

Liked this article? AWESOME! Show your appreciation down below 👏👏

  1. Follow me on Medium
  2. Connect with me on LinkedIn
  3. Reach out at dholakia.swarit@gmail.com to say hi!

I’d love to chat about autonomous vehicles or any cool exponential technology!
