Teaching driverless cars to make ethical decisions – who should live or die?




  • Scientists have begun drafting ethical rules for autonomous vehicles – rules whose decisions will determine who lives and who dies.
  • The latest research shows that humans generally share common moral preferences, including saving the most lives, prioritizing young people, and valuing humans over other animals.
  • However, experts say there will inevitably be errors involving driverless vehicles.

What should a driverless car do in the face of a dilemma on the road, where any decision will result in harm?

Scientists have begun creating moral rules for autonomous vehicles powered by artificial intelligence (AI), as part of a study based on nearly 40 million decisions collected in a worldwide online survey and published in the journal Nature.

Driverless vehicles will not only have to navigate the road, but also deal with the moral dilemmas posed by unavoidable accidents. But what ethical rules should be incorporated into machines?

In the study, the scenarios presented to participants required choosing between combinations of passengers and pedestrians to save. Who should live and who should die?

Researchers have identified a number of common moral preferences, including saving the most lives, prioritizing young people and valuing humans over other animals.

"Never in the history of humanity have we left a machine to autonomously decide who should live and who should die in a split second without real-time monitoring," the researchers write. .

"We will cross this bridge at any time now, and this will not happen on a distant military operations theater; this will happen in this most bbad aspect of our life, the daily commute.

"Before we let our cars make ethical decisions, we need to engage in a global conversation to express our preferences to companies that will design moral algorithms and to the policy makers who will regulate them."

Iyad Rahwan of MIT's Media Lab in the United States and his colleagues created the Moral Machine, an online survey designed to explore moral preferences around the world.

The experiment presents unavoidable accident scenarios involving a driverless car on a two-lane road. The car can stay on its original course or swerve into the other lane.

Participants must decide which course the car should take, based on whose lives would be saved.

Participants from Central and South America, as well as from France and its current and former overseas territories, showed a strong preference for sparing women and athletes.

Those from countries where income inequality was greater were more likely to take social status into account in deciding who to save.

Australian researchers said the study clearly identified some agreed principles that would be relatively simple to code, such as the preference for saving human lives over those of animals.

Lin Padgham, of the Faculty of Computer Science and Software Engineering at RMIT University, says it is important to realise that the complex ethical and moral judgments required by some of the survey questions are not ones humans actually make when confronted with such situations, nor should they be expected of autonomous vehicles.

"Nevertheless, understanding clearly agreed moral preferences can help to determine the" reflexive "actions to be built in autonomous vehicles that may well be different from those used by humans," says Professor Padgham.

"However, even when there is a clear preference, such as saving a greater number of lives, the decision to action will likely be complex because of the uncertainty of the results.

"Avoiding a single pedestrian in a three-pbadenger car may be the right solution, as the pedestrian is much more vulnerable than the pbadengers in the car.

"It requires a much more complex understanding than a rule to save more lives. The closest rule is to always try to avoid hitting pedestrians.

"The main advantage of autonomous vehicles will probably be to avoid accidents and loss of life, because of the greater potential capacity of autonomous vehicles to detect all relevant information and react quickly.

"Inevitably, there will sometimes be mistakes, but all the evidence suggests that they will be far fewer than those made by humans driving cars."

Is the programming of moral intentions immoral?

Jay Katupitiya, an associate professor at UNSW, explains that programming moral intentions could be a problem.

"The raging debate about driverless cars and the moral responsibility that lies with their creators is clearly linked to the difficult decision – making process that creators will have to program on these machines to enable them to make a decision when the driver is in the driver 's seat. unthinkable is about to happen, "he said. said.

"The dream scenario is, so that this problem never happens … to be able to declare that they just do not meet. For the moment, few people want to believe that this will be possible.

"To draw a parallel, what would we think if, in a legal proceeding, a driver stated," I headed to the left because I could save a youngster's life and I knew it would kill the child. frail old man and, unfortunately, was the best I could do. "

"In my opinion, programming these intentions is more immoral than not."
