Should autonomous cars have ethics? : NPR




New research explores how people think autonomous vehicles should handle moral dilemmas. Here, pedestrians walk past an autonomous taxi demonstration in Frankfurt, Germany, last year.

Andreas Arnold / Bloomberg via Getty Images



In the not-too-distant future, fully autonomous vehicles will be driving on our streets. These cars will need to make split-second decisions to avoid endangering human lives, both inside and outside the vehicles.

To gauge attitudes toward these decisions, a group of researchers created a variation on the classic philosophical thought experiment known as the "trolley problem." They posed a series of moral dilemmas involving an autonomous car whose brakes suddenly fail: Should the car swerve to avoid a group of pedestrians, killing the driver? Or should it kill the pedestrians but spare the driver? Does it matter whether the pedestrians are men or women? Children or older people? Doctors or bank robbers?

To put these questions to a wide range of people, the researchers built a website called Moral Machine, where anyone can click through the scenarios and say what the car should do. "Help us learn how to make machines moral," a video on the site implores.

The grim game went viral, many times over.

"Really beyond our wildest expectations," said Iyad Rahwan, associate professor of arts and media science at MIT Media Lab, one of the researchers. "At one point, we got 300 decisions per second."

The researchers found a set of near-universal preferences, no matter where people took the quiz. On the whole, people everywhere believed that the moral thing for the car to do was to spare the young over the old, spare humans over animals, and spare the lives of many over the few. The study, led by Edmond Awad of MIT, was published Wednesday in the journal Nature.

Using geolocation, the researchers found that the 130 countries with more than 100 respondents could be grouped into three clusters with similar moral preferences. Among these clusters, they found variations.

For example, the southern cluster (which includes Latin America, as well as France, Hungary and the Czech Republic) showed a stronger preference for sparing the young over the old than the eastern cluster (which includes many nations of the Middle East). And the preference for sparing humans over pets was weaker in the southern cluster than in the eastern or western clusters (the latter including, for example, the United States, Canada, Kenya and much of Europe).

And these variations seemed to correlate with other observed cultural differences. Respondents from collectivist cultures, which "emphasize the respect that is due to older members of the community," showed a weaker preference for sparing the young.

Rahwan stressed that the study's results should be used with extreme caution and should not be taken as the final word on a society's preferences, especially since respondents were not a representative sample. (The researchers did, however, perform a statistical correction for demographic distortions, reweighting responses to match each country's demographics.)

What does this all add up to? The paper's authors argue that if we are going to let these vehicles onto our streets, their operating systems should take moral preferences into account. "Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers who will regulate them," they write.

But suppose, for a moment, that a society does have broadly shared moral preferences about these scenarios. Should car manufacturers or regulators actually take them into account?

Last year, Germany's ethics commission on automated driving developed initial guidelines for automated vehicles. One of their key dictates? A prohibition against exactly this kind of decision-making by a car's operating system.

"In the event of an inevitable accident, any distinction between individuals based on personal characteristics (age, sex, physical or mental constitution) is strictly prohibited," the report states. "General programming aimed at reducing the number of bodily injuries can be justified The parties involved in the generation of mobility-related risks should not sacrifice the uninvolved parties."

But for Daniel Sperling, founding director of the Institute of Transportation Studies at the University of California, Davis, and author of a book on autonomous and shared vehicles, these moral dilemmas are far from the most pressing questions about these cars.

"The most important problem is just to get them safe," he told NPR. "They will be much safer than human drivers: they do not drink, they do not smoke, they do not sleep, they are not distracted." So the question is to what extent must they be safe before leaving them on our roads.
