Imagine this scenario: the brakes of an autonomous car are failing as it moves towards a busy pedestrian crossing.
A homeless man and a criminal cross in front of the car. Two cats are in the opposite lane.
Should the car swerve to mow down the cats, or plow into the two people?
That is a relatively easy ethical dilemma, as moral dilemmas go. And a large study of machine ethics asked people how an autonomous car should respond to a variety of extreme trade-offs, dilemmas that more than 2 million people weighed in on. But what happens when the choice is between two elderly people and a pregnant woman? An athlete or an obese person? Passengers or pedestrians?
The study, published in Nature, identified the strongest preferences: people choose to spare humans rather than pets, to save more lives rather than fewer, and to spare children and pregnant women rather than elderly people. But it also found weaker preferences: to spare women rather than men, athletes rather than obese people, and people of high status, such as executives, rather than the homeless or criminals. There were also cultural differences in degree; in a cluster of mostly Asian countries, for example, the preference for sparing the young over the old differed in strength.
"We do not suggest that [policymakers] respond to public preferences. They just have to be aware of it and expect a possible reaction when something happens. If, in an accident, a child does not receive special treatment, the public may react, "said Edmond Awad, computer scientist at the Massachusetts Institute of Technology's Media Lab.
Awad said one of the research team's big surprises was the project's popularity. It was picked up on Reddit, featured in news stories, and influential YouTube users filmed themselves working through the questions.
The thought-provoking scenarios are fun to discuss. They were inspired by a decades-old philosophical thought experiment known as the "trolley problem," in which a runaway trolley heads toward a group of five people standing in its path. A bystander has the option of letting the trolley hit them or diverting it onto a track where only one person is standing.
Outside researchers said the results were interesting but warned against over-interpreting them. In a randomized survey, researchers try to ensure that a sample is unbiased and representative of the whole population; in this case, the voluntary study drew respondents who were mostly younger men. The scenarios are also distilled and extreme, far more black-and-white than the situations that abound in the real world, where probabilities and uncertainty are the norm.
"The big worry I have is that readers will read this will think that this study tells us how to set up a decision-making process for an autonomous car," said Benjamin Kuipers, a computer scientist at the University of Toronto. Michigan. not involved in the work.
Kuipers added that these thought experiments could misleadingly frame some of the decisions that automakers and programmers make about the design of autonomous vehicles. In his view, a moral choice precedes the dilemma of whether to hit a barrier and kill three passengers or run over a pregnant woman pushing a stroller.
"Building these cars, the process is not really about saying," If I'm faced with this dilemma, who will I kill? "It is written," If we can imagine a situation in which this dilemma might occur, what previous decision would I have had to make to avoid this?
Nicholas Evans, a philosopher at the University of Massachusetts at Lowell, pointed out that while the researchers described their three strongest principles as universal, the cutoff between those and the weaker preferences not deemed universal was arbitrary. They counted the preference for sparing the young over the old, for example, as a global moral preference, but not the preference for sparing those who cross with a walk signal over those who jaywalk, or for sparing people of higher social status.
And the study did not test scenarios that could have raised even thornier questions about how problematic and biased public opinion can be as an ethical arbiter, for example by varying the race of the pedestrians. Ethicists argue that laws and regulations should not necessarily reflect public opinion, but should protect vulnerable people from it.
Evans is working on a project that, he says, was influenced by the MIT team's approach. He plans to use more nuanced collision scenarios, in which real transportation data can supply, for example, the probability of surviving a passenger-side T-bone collision, to assess the safety implications of autonomous cars on US roads.
"We want to create a mathematical model to solve some of these moral dilemmas and then use the best moral theories proposed by philosophy to show the outcome of choosing a stand-alone vehicle in a certain way," Evans said.
Iyad Rahwan, a computer scientist at MIT who oversaw the work, said that a public poll should not be the foundation of artificial intelligence ethics. But he said the regulation of AI will differ from that of traditional products because machines will have autonomy and the ability to adapt, which makes it more important to understand how people perceive AI and what they expect of the technology.
"We should take public opinion with a grain of salt," said Rahwan. "I think it's informative."