Technology leaders, including Elon Musk and the three co-founders of DeepMind, Google's artificial intelligence subsidiary, have pledged not to develop lethal autonomous weapons.
This is the latest initiative by an international coalition of researchers and executives opposed to the spread of such technology. The pledge warns that weapons systems that use AI to "[select] and [engage] targets without human intervention" pose moral and pragmatic threats. Morally, say the signatories, the decision to take a human life "should never be delegated to a machine". On the pragmatic side, they argue that the spread of such weaponry would be "dangerously destabilizing for every country and individual".

The pledge was published today at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, and was organized by the Future of Life Institute, a research institute that aims to "mitigate existential risk" to humanity. The institute has previously helped issue letters from some of the same individuals, calling on the United Nations to consider new regulations for so-called lethal autonomous weapons, or LAWS. However, this is the first time those involved have individually pledged not to develop such technology.
Among the signatories are SpaceX and Tesla CEO Elon Musk; the three co-founders of Google's DeepMind subsidiary, Shane Legg, Mustafa Suleyman and Demis Hassabis; Skype founder Jaan Tallinn; and some of the world's most respected AI researchers, including Stuart Russell, Yoshua Bengio and Jürgen Schmidhuber.
Max Tegmark, a signatory of the pledge and a professor of physics at MIT, said in a statement that the pledge showed AI leaders "turning talk into action." Tegmark said the pledge does what politicians have not: impose real limits on the development of AI for military use. "Weapons that autonomously decide to kill people are as disgusting and destabilizing as biological weapons and should be treated in the same way," Tegmark said.
So far, attempts to establish international regulation of autonomous weapons have been ineffective. Campaigners have suggested that LAWS should be subject to restrictions similar to those placed on chemical weapons and landmines. But it is incredibly difficult to draw a line between what does and does not constitute an autonomous system. For example, a gun turret could target individuals on its own but not fire on them, with a human "in the loop" simply rubber-stamping its decisions.
They also point out that enforcing such laws would be a huge challenge, as the technology needed to develop AI weaponry is already widespread. In addition, the countries most involved in developing this technology (such as the United States and China) have no real incentive not to do so.
Paul Scharre, a military analyst who has written a book on the future of warfare and AI, told The Verge this year that there is not enough "momentum" to push forward international restrictions. "There isn't a core group of Western democratic states engaged, and that has been critical [with past weapons bans], with countries like Canada and Norway leading the charge," said Scharre.
However, while international regulation may not arrive any time soon, recent events have shown that collective activism like today's pledge can make a difference. Google, for example, was rocked by employee protests after it was revealed that the company was helping develop non-lethal AI drone tools for the Pentagon. A few weeks later, it published new research guidelines, promising not to develop AI weapon systems. A threatened boycott of South Korea's KAIST university had similar results, with the president of KAIST promising not to develop military AI "contrary to human dignity, including autonomous weapons lacking meaningful human control". In both cases, the organizations involved are not prohibited from developing military AI tools with other, non-lethal uses. But a promise not to put a computer solely in charge of killing is better than no promise at all.
The full text of the pledge can be read below, and a complete list of signatories can be found here:
Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.
In this light, we, the undersigned, agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, namely that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.
There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, accountability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems. Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical, and biological weapons, and the unilateral actions of a single group could too easily trigger an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.
We, the undersigned, call on governments and government leaders to create a future with strong international norms, regulations, and laws against lethal autonomous weapons. These being currently absent, we choose to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.