Stop the killer robots before they exist




Many emerging technologies present ethical, legal, social and political challenges. We are referring to recent advances in artificial intelligence (AI), genomics, nanotechnology, robotics, computer science and applied neuroscience.

Together, these technologies can manipulate the atoms of physical matter, digital and biological information (DNA) and nerve cells. This "nano-info-bio" convergence, applied in a military context, could transform the way human beings wage war.

One of the main ethical concerns is the dual use of scientific research for military purposes. Many scientists feel guilty about accepting military funding to carry out their experiments. Others have refused to collaborate:

John Napier (1550-1617), the Scottish mathematician who invented logarithms, designed a new form of artillery that he later concealed.

Norbert Wiener (1894-1964), the father of cybernetics, worked during the Second World War on missile guidance and control, but later renounced that work and promised never to publish on the subject again.

More recently, Google withdrew its bid for a multi-million-dollar cloud computing contract with the US Department of Defense.

However, the potential benefits of developing technologies for civil and commercial as well as military purposes do not escape the attention of scientists and companies, mainly because, when funding is scarce, they cannot close the door on military money. The alternative is often to abandon a research career.

Artificial intelligence aims to create intelligent machines, but also to better understand biological intelligence. Researchers quickly realized that rather than programming computers explicitly with large amounts of information, it was better to give them the ability to learn from data without an explicit program.

This is how "machine learning" was born and a technique of this subfield called "deep learning", in which "neural networks" are developed. In other words, interconnected computing nodes that mimic the human brain's ability to perceive, process, and transmit information.

Machine learning has improved thanks to the vast amounts of data (big data) now available, largely through the Internet.
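To make the idea concrete, here is a minimal sketch in Python (assuming the scikit-learn library is available; the tiny XOR data set is an illustrative toy, not drawn from any system mentioned in this article) of a small neural network learning a pattern from labelled examples rather than from hand-written rules.

```python
# A toy "learning from data" sketch: a tiny neural network learns the XOR
# pattern from labelled examples instead of following hand-written rules.
from sklearn.neural_network import MLPClassifier

# Training data: four example inputs and the labels we want learned.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: the label is 1 when exactly one input is 1

# A small network of interconnected nodes ("neurons") in one hidden layer.
model = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                      solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X, y)  # the network adjusts its connection weights from the examples

print(model.predict([[1, 0], [1, 1]]))  # behaviour learned from data, not programmed
```

The point is only that the behaviour comes from the examples: change the labels and the same code learns a different rule.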

In medicine, AI can provide reliable diagnoses based on a patient's symptoms and make decisions accordingly, detect abnormalities in images more reliably than radiologists, and enable precision medicine.

In law, AI can process thousands of court rulings in milliseconds and find specific patterns using sophisticated natural language processing algorithms, far better than human lawyers or judges can.

Applied to transport and mobility, AI offers autonomous, driverless vehicles that promise to improve users' safety and comfort by reducing accident rates.

Recent advances in AI, machine learning and computer vision can also have negative consequences if we are not careful:

Fleets of autonomous vehicles made to collide deliberately.

Commercial drones converted into missiles.

Fabricated videos and images used to spread hoaxes.

These dystopian scenarios and many others are already possible: citizens, organizations and states face the dangers of misuse of AI.

Each of these areas of application raises its own ethical, legal and social concerns, but none is as important as deployment on the battlefield, where AI can take a human life.

What does it mean to give machines the power to decide over life or death? Using AI to develop lethal autonomous weapons promises to reduce collateral damage and the number of civilian casualties. At the same time, it means sending computers rather than soldiers to the battlefield.

But, as we said above, it is a double-edged sword.

In early 2018, United Nations Secretary-General António Guterres drew attention to the dangers of using AI in military contexts.

Before talking about the regulation of lethal autonomous weapons, it is necessary to describe the militarization of AI. To evaluate this technology, we need to know what it can do.

In 2013, the academics Altmann, Asaro, Sharkey and Sparrow founded the International Committee for Robot Arms Control (ICRAC).

They proposed a series of topics to frame the debate: for example, prohibiting the development and use of these systems, because machines should not make the decision to kill people, even in outer space.

Also in 2013, the Campaign to Stop Killer Robots was launched, and a ban on military robots was debated at the UN Human Rights Council.

A military robot is not the same as a drone, and a drone is not the same as a lethal autonomous weapon system.

A military robot is a robot that can be used as a weapon in a military context. A robot is an "actuated, programmable mechanism that moves within its environment to perform intended tasks" (ISO 8373:2012). A military robot is therefore a semi-autonomous weapon capable of selecting and engaging targets unattended, but under the ultimate control of a human operator. In other words, it is an artificially intelligent war machine.

A drone is a remote-controlled unmanned vehicle. Drones are frequently used to avoid risking pilots' lives. They carry highly sophisticated cameras that allow operators to see most of the airspace and terrain they pass over. Predator and Reaper are two popular models used by the United States Armed Forces. In the United States, one in three aircraft is a drone.

Israel, China, South Korea, Russia and the United Kingdom also have drones in their air forces.

Unlike a drone or a military robot, a lethal autonomous weapon system consists of artificially intelligent war machines that can make tactical military decisions without human intervention. Such systems do not yet exist.

Lethal autonomous weapon systems are the logical continuation of military robots and drones: completely autonomous machines. The mere possibility of developing them would transform warfare.

That is why, long before they exist, the Campaign to Stop Killer Robots is urging a preventive ban.

The initiative is supported by the UN Secretary-General, 21 Nobel Prize winners, 86 NGOs, 26 countries, 25,000 AI experts and the European Parliament.

Spain does not openly support a treaty regulating autonomous weapons. This is despite the fact that law professor Roser Martínez and researcher Joaquín Rodríguez, both from the Autonomous University of Barcelona, took part in a meeting of the Convention on Certain Conventional Weapons (CCW) held in Geneva last August, where they argued that a machine cannot take respect for human dignity into account.

The good news is that after five years of CCW meetings, a general consensus has emerged: human control must be maintained over lethal autonomous weapon systems, especially when selecting targets.

In our opinion, decision-makers should work with experts to regulate these weapons, and the use of lethal autonomous weapon systems should be prohibited.

Allowing artificial machines to make the final decision over people's life and death poses a serious threat to our future. Human affairs are not only a question of how, but of why. We have the capacity to solve many problems by translating knowledge into practice, but we will continue to need the conceptual tools of philosophy to think them through.

The Conversation

Aníbal Monasterio Astobiza: Basque Government Postdoctoral Researcher, University of the Basque Country / Euskal Herriko Unibertsitatea
Daniel López Castro: Pre-doctoral Researcher, Institute of Philosophy, Applied Ethics Group, Center for Human and Social Sciences (CCHS – CSIC)
