Autonomous vehicle accidents: who’s to blame?


Dilemmas await ergonomists and health and safety professionals

Picture the scene: a driverless car carrying a family of four spots a ball rolling onto the road ahead. As the car approaches, a child runs out into the road to retrieve the ball. Should the vehicle risk its passengers’ lives by swerving off the road? Or should it continue on its path, ensuring its passengers’ safety at the expense of the child?

Moral dilemmas like this were once simply hypothetical food for thought for academic philosophers, car engineers, ergonomists and health and safety professionals. But today they are taken much more seriously, as the race to bring so-called autonomous vehicles to our roads intensifies.

Are we ready?

Carmakers, car buyers and regulators have difficult questions to address before vehicles can be given full autonomy. In ‘The social dilemma of autonomous vehicles’, a study by Bonnefon et al. published in Science (2016), most of the 1,928 research participants indicated that they believed vehicles should be programmed to crash into something rather than run over pedestrians – even if that meant killing the vehicles’ passengers. Yet many of the same participants baulked at the idea of buying such a vehicle, preferring to ride in a driverless car that prioritises their own safety above that of pedestrians.

The researchers concluded that if regulations required self-driving vehicles to prioritise pedestrians over passengers, people would be less likely to buy those vehicles. A shrinking market for driverless cars would slow their development, despite research showing that autonomous vehicles could potentially reduce traffic, cut pollution levels and save thousands of lives each year (human error contributes to 90% of all traffic accidents).

The ethics of autonomous vehicles

Though autonomous vehicles should reduce traffic accidents, they will sometimes have to choose between two evils, such as running over pedestrians or sacrificing themselves and their passengers to save the pedestrians. But how will the vehicles make these all-important decisions? This is where ergonomists and health and safety professionals are helping to pave the way in autonomous vehicle technology.

“When an accident does finally happen, someone will say: well, a human wouldn’t have caused that accident,” warns Herbert Winner, head of the automotive engineering faculty at Darmstadt University of Technology. “People accept that we all make mistakes, but robots are expected to be infallible.”

Carmakers are now engaging with ergonomists and health and safety professionals, as well as ethicists and experts from other disciplines. They are considering whether autonomous vehicles could, or should, use algorithms to replicate the decisions that human drivers make. Such algorithms could, in theory, review alternative crash outcomes and then rank them. But as the example of the child and the ball makes clear, the ‘least bad’ option might still injure or kill a human.
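To make the idea concrete, here is a minimal, purely illustrative sketch in Python of what such an outcome-ranking step might look like. The CrashOutcome structure, the cost weights and the rank_outcomes function are all hypothetical, invented for this example; a real system would be vastly more complex, and choosing the weights is itself the ethical dilemma discussed above.

from dataclasses import dataclass

@dataclass
class CrashOutcome:
    """A hypothetical candidate manoeuvre and its predicted consequences."""
    manoeuvre: str
    predicted_injuries: int    # estimated number of people injured
    predicted_fatalities: int  # estimated number of people killed
    property_damage: float     # estimated cost, arbitrary units

# Illustrative weights only: setting them IS the ethical choice.
INJURY_WEIGHT = 1.0
FATALITY_WEIGHT = 100.0
DAMAGE_WEIGHT = 0.01

def outcome_cost(o: CrashOutcome) -> float:
    """Score an outcome; lower means 'less bad' under these assumed weights."""
    return (FATALITY_WEIGHT * o.predicted_fatalities
            + INJURY_WEIGHT * o.predicted_injuries
            + DAMAGE_WEIGHT * o.property_damage)

def rank_outcomes(outcomes: list[CrashOutcome]) -> list[CrashOutcome]:
    """Rank candidate manoeuvres from least bad to worst."""
    return sorted(outcomes, key=outcome_cost)

candidates = [
    CrashOutcome("brake hard, stay in lane", predicted_injuries=1,
                 predicted_fatalities=0, property_damage=5000),
    CrashOutcome("swerve off the road", predicted_injuries=4,
                 predicted_fatalities=0, property_damage=20000),
]
least_bad = rank_outcomes(candidates)[0]
print(least_bad.manoeuvre)  # -> "brake hard, stay in lane"

Even this toy example hides the hard parts: where the predicted injury and fatality counts come from, and who decides the weights. And as the child-and-ball scenario shows, the top-ranked option can still harm someone.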

“Even if it’s a rare problem, autonomous car manufacturers still need to specify some action (in the event of an unavoidable crash), and the wrong one could lead to massive lawsuits and alarmist headlines,” says Patrick Lin, director of the ethics and emerging sciences group at California Polytechnic State University.


Where do you stand?

Bonnefon et al. have launched a website called Moral Machine to help gather more information about how people would prefer autonomous cars to behave in different scenarios, where passenger and pedestrian safety are at odds.

Visit the Moral Machine website and work through the dilemmas yourself: in each one, a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians, and you judge which outcome you think is more acceptable. You can then see how your responses compare with other people’s – and ask yourself: did you make the right decision? Is there even a right decision? And, more importantly, should something this important be decided by artificial intelligence (AI)?

As an ergonomist I found the results fascinating. They provide helpful direction on the decisions that need to be made and the questions that need to be asked, and ultimately reveal how people would like autonomous vehicles to operate.

AI vs humans vs autonomous vehicles

Ragunathan Rajkumar, a professor of electrical and computer engineering in Carnegie Mellon University’s CyLab and veteran of the university’s efforts to develop autonomous vehicles, said, “AI does not have the same cognitive capabilities that we as humans have.”

Instead, autonomous vehicles will probably make decisions based on speed, weather, road conditions, distance and other data gathered by a variety of sensors and cameras. Driverless cars will most likely calculate a course of action based on how fast they are travelling, as well as the speed of objects in their path.
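As a hedged illustration of that kind of calculation, the sketch below computes a simple time-to-collision from the vehicle’s speed and the closing speed of an object ahead, and checks whether braking alone could avoid a collision. The function names, the assumed deceleration and all the numbers are invented for this example; production systems fuse data from many sensors and use far richer models.

def time_to_collision(gap_m: float, own_speed_ms: float,
                      object_speed_ms: float) -> float:
    """Seconds until collision if both speeds stay constant.

    gap_m: current distance to the object in metres.
    own_speed_ms / object_speed_ms: speeds along the lane in m/s.
    Returns infinity if the gap is not closing.
    """
    closing_speed = own_speed_ms - object_speed_ms
    if closing_speed <= 0:
        return float("inf")
    return gap_m / closing_speed

def braking_distance(speed_ms: float, decel_ms2: float = 8.0) -> float:
    """Distance needed to stop at a constant deceleration (v^2 / 2a)."""
    return speed_ms ** 2 / (2 * decel_ms2)

# Example: travelling at 20 m/s (~72 km/h), stationary object 45 m ahead.
own_speed, gap = 20.0, 45.0
ttc = time_to_collision(gap, own_speed, object_speed_ms=0.0)
can_stop = braking_distance(own_speed) < gap
print(f"TTC: {ttc:.2f} s, braking alone sufficient: {can_stop}")

When braking alone cannot avoid the collision, the vehicle faces exactly the kind of outcome-ranking problem sketched earlier – which is where the ethical questions return.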

The main challenge is in gathering and processing the necessary data quickly enough to avoid dangerous circumstances in the first place.

Another major current concern about autonomous vehicles is keeping them secure from hackers, who might try to take over the controls while passengers are on board. But that is a discussion for another time!

Autonomous vehicles are just a stone’s throw away from becoming a reality on our roads. It is really encouraging to hear companies acknowledging the importance of research and speaking to the appropriate bodies, as they know they must get the ergonomics, as well as other crucial elements, spot-on. This is a very interesting time for ergonomists and health and safety professionals, who are working closely with carmakers to ensure these vehicles are designed with all users in mind.

Conclusion: who’s to blame?

So, we return to our original question: if an autonomous vehicle has an accident, who is to blame? The car manufacturer? The algorithm programmers? The passengers or the pedestrians? All of these parties influence the outcome, so all of them have a part to play in avoiding accidents.

Ultimately, it will fall to the autonomous vehicle to decide the lesser of two evils: who lives and who dies. The challenge is to ensure that this decision is guided by the input of philosophers, engineers, ethicists, programmers, ergonomists, health and safety professionals and other crucial experts.