Terminator fell short: 'killer robots', they now warn, are a real danger




A former Google engineer who worked on a military project has warned that AI-powered military drones could cause unintended atrocities.

In 2015, researchers, scientists and entrepreneurs in the field of artificial intelligence (AI), among them the late physicist Stephen Hawking and the entrepreneur Elon Musk, warned of the need to ban military robots, arguing that they could make wars easier to start and more likely.

They also claimed that "robot soldiers" would be feasible "within years, not decades".

It all sounded exaggerated, even when they argued that the creation of these weapons would amount to a third revolution in warfare, comparable to the invention of gunpowder and of nuclear weapons, milestones that each heightened the threats to peace in the world.

An exaggeration? In light of the statements made by Laura Nolan, a former Google software engineer, it does not seem so, any more than it did last year when the UN itself raised the alarm about the danger posed by the so-called "killer robots".

What did Nolan say? That killer robots and artificial intelligence could end up turning against humanity.

In an interview with the British newspaper The Guardian, the former Google engineer said that "the main threat is the lack of human control, because these robots could do terrible things they were never originally programmed to do".

The probability of a disaster is proportional to the number of robots in the same place

And Nolan knows what she is talking about. She was recruited by Google to work on its controversial Project Maven, under which the company would provide the US Department of Defense with artificial-intelligence assistance and technology, the main goal being to use AI to improve military drones.

The plan for Maven was to build intelligent AI systems capable of identifying enemy targets and distinguishing people from objects. It proved so controversial that Google itself dropped the project, letting the contract expire in March of this year. Nolan, for her part, left her post after becoming "increasingly concerned from an ethical point of view".

Although there is no evidence that Google is seeking to create or develop autonomous weapons, Nolan warned of the dangers such systems entail.

"The probability of a disaster is proportional to the number of these machines that will be in a given area at the same time," he explained.

What she foresees are atrocities and unlawful killings, even under the laws of war, especially if hundreds or thousands of these machines are deployed.

"There could be large-scale accidents, because these things will begin to behave unexpectedly," he explains, "which is why any advanced weapon system should be subject to significant human control. Otherwise, they should be banned because they are too unpredictable and dangerous. "

Another thing that worries Nolan is that these weapons would have to be tested in real war zones.

"The other terrifying thing about these autonomous warfare systems is that you can really only try them out by deploying them into a real combat zone – maybe this is happening with the Russians currently in Syria, who knows, or with the United States Army in places like Afghanistan. "

"If you test a machine that makes its own decisions about the world around it," he added, "it has to be in real time. Also, how do you train a system that only works with software to detect subtle human behavior or discern the difference between hunters and insurgents? How do you distinguish the killing machine on your own flight from an 18-year-old fighter and an 18-year-old fighter who hunts rabbits? It's very dangerous. "

Of her work, Nolan said: "As a site reliability engineer, my expertise at Google was keeping our systems and infrastructure running, and that is what I was supposed to help Maven with. I was not directly involved with the weapons, but I realised I could be part of a kill chain."
