Safety of riders or others on the road raises ethical questions about autonomous vehicles

Autonomous vehicles and related self-driving technologies are under development at major automobile and technology companies. Developers and engineers of self-driving vehicles face an important ethical question regarding safety: in the event of a serious accident, who should be saved, the passengers in the autonomous vehicle or other people on the road? A new study has tried to find out how people view this ethical question in serious accidents involving autonomous vehicles.

The study was based on surveys of U.S. residents. Most of the respondents would prefer not to travel in autonomous vehicles that could cause them harm. The study team noted that people were not in favor of regulations enforcing utilitarian algorithms on driverless cars.

In the past, Silicon Valley experts have claimed that people might accept human error but would not accept error from a machine. This could be a major issue that most automobile companies will have to deal with.

Last year, more than 40,000 people died in road accidents across the United States, at an estimated cost of $1 trillion. About 90 percent of those accidents have been attributed to human error, and autonomous vehicles could effectively reduce such accidents.

Study co-author Azim Shariff, an assistant professor of psychology at the University of Oregon, said, “Programmers will be forced to write algorithms which anticipate situations in which there are multiple people that could be harmed.”

The ethical question is taken up in a research paper titled ‘The Social Dilemma of Autonomous Vehicles,’ published in the journal Science. The paper attempts to understand how people want their self-driving cars to act when faced with an extreme situation that could lead to death.

The researchers said they noticed two different things. First, participants favored minimizing the total number of deaths, even at the risk of the vehicle’s own passengers’ lives. The researchers call this the ‘utilitarian approach’.

Second, when participants were asked about the car they would actually buy, they said they would choose one that protects them and their passengers first. In other words, the researchers said, people would like everyone to have safer streets, but when it comes to their own safety, their preference is for cars that keep them safe first.
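To make the tension concrete, here is a minimal sketch in Python of how a ‘utilitarian’ rule and a ‘self-protective’ rule could diverge. It is not taken from the paper; the maneuver names, casualty figures, and passenger weighting are hypothetical, chosen to mirror the one-passenger-versus-ten-pedestrians scenario the survey quantifies below.

```python
# Illustrative sketch only, not from the Science paper. Maneuver names,
# casualty estimates, and the passenger weighting are all hypothetical.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    passenger_deaths: float   # expected passenger fatalities
    pedestrian_deaths: float  # expected pedestrian fatalities

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    """Minimize total expected deaths, counting everyone equally."""
    return min(options, key=lambda m: m.passenger_deaths + m.pedestrian_deaths)

def self_protective_choice(options: list[Maneuver],
                           passenger_weight: float = 100.0) -> Maneuver:
    """Minimize deaths, but weight the passengers' lives far more heavily."""
    return min(options,
               key=lambda m: passenger_weight * m.passenger_deaths + m.pedestrian_deaths)

options = [
    Maneuver("continue straight", passenger_deaths=0.0, pedestrian_deaths=10.0),
    Maneuver("swerve into barrier", passenger_deaths=1.0, pedestrian_deaths=0.0),
]
print(utilitarian_choice(options).name)      # -> swerve into barrier
print(self_protective_choice(options).name)  # -> continue straight
```

The only difference between the two rules is the weight placed on passenger deaths, and that single parameter is exactly the design choice the survey respondents disagreed about.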

Shariff said the issue is known as the trolley problem. “In the trolley problem, people face the dilemma of instigating an action that will cause somebody’s death, but by doing so will save a greater number of lives,” Shariff said.

In the study, participants faced different situations, such as the car continuing straight and killing a certain number of pedestrians, or swerving into the next lane and killing a different group of animals or humans. For each situation, participants chose the scenario they preferred; in one case, the preferred outcome was for the car to crash into a concrete barrier, killing a criminal, a homeless person, and a baby.

The researchers noted that people like to think well of others in abstract situations, but when it comes to buying a car, their main focus is their own safety and that of the car’s other occupants.

Study co-author Jean-François Bonnefon of France’s National Center for Scientific Research said that self-driving cars are a different concept of automated transportation, as they are not competing with other cars on the road.

The researchers also think it would be great if programming could lessen fatalities, but they caution that giving too much weight to moral considerations could hamper a product that is still in development and could take years or even decades to mature.

Anuj K. Pradhan of UMTRI’s Human Factors Group does not think such ethical issues will slow the advances experts are making in this field of technology. Bonnefon has also warned that a machine and a human driver cannot be directly compared.

It is possible that one day self-driving cars will become perfect. But before autonomous cars are launched on public roads, one thing is certain: provocative moral questions will arise. For that reason, Bonnefon has said there is a need for a conversation about the moral values we all want to program into cars.

According to a CNN report by Jacqueline Howard, “The surveys revealed that the majority of respondents believed autonomous vehicles should be programmed to be “utilitarian,” attempting to save the most lives (in this case, the pedestrians) while sacrificing as few as possible (the passengers), said Jean-Francois Bonnefon, a psychological scientist at the Toulouse School of Economics in France and a co-author of the study. Yet, most respondents also indicated that they would not want to purchase a vehicle that was programmed to be utilitarian.”

The surveys showed that 76% of respondents believed it is more moral for a driverless vehicle to sacrifice one passenger rather than 10 pedestrians when faced with such a scenario. However, 81% of respondents said they would rather own a car that protected them and their family members at all costs. For the study, conducted between June and November, 1,928 survey respondents were recruited online and presented with crash scenarios as well as questions gauging their personal opinions about riding in an autonomous vehicle and their likelihood of buying one.

“We were surprised that so many people expressed a strong moral preference for cars that would kill them, as passengers, for the greater good,” Bonnefon said. “[We were] even more surprised that so many people would renounce buying a driverless car if there was a regulation in place that would force them to buy the self-sacrificing cars that they morally approved of! People think that utilitarian cars are morally right, but they prefer to buy cars that protect them at all costs.”

A report published in The Wall Street Journal said the new research offered variations of what is known as the “trolley problem,” a cornerstone of modern ethical inquiry that social scientists use to illuminate potential moral conflicts. In a classic version of the trolley problem, researchers ask a person to imagine being on a trolley racing toward a group of workers. The person has an option of flipping a lever to move the trolley to another track where it would hit only one worker.

The essential difference, some ethicists argue, involves taking an action that doesn’t intend to kill someone versus actively causing the death of one. Variations of these thought experiments test how people might make different choices—say, if the potential casualties are children, the elderly or a pregnant woman.

“The scenario described above is hypothetical, but it and others like it are bound to happen in real life once driverless cars become a mainstream reality, the researchers said. We need answers and rules now, so that we can include them in the programming of these machines. Even if a driverless car has a manual override option, it’s easy to imagine a situation where there simply isn’t time for a passenger to react and take control of the vehicle,” according to a news report published by Huffington Post.

“Around 90 percent of those accidents are due to human error,” Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, and a co-author of the study, said at a press conference. “Autonomous vehicles promise to change all that for the better, but there are barriers to their wide adoption. A number of those are technological barriers, but they’re also psychological ones.”
