The ghost of Edward Teller must have been doing the rounds among the members of the National Security Commission on Artificial Intelligence. The father of the hydrogen bomb was never much bothered by the ethical complications of inventing lethal technology. It was not, he insisted, “the scientist’s job to determine whether a hydrogen bomb should be built, whether it should be used or how it should be used”. Responsibility, however exercised, rested with the American people and their elected officials.
The application of AI in military systems has troubled ethicists while exciting certain leaders and inventors. Russian President Vladimir Putin has asserted, with some bombast, that “it would be impossible to guarantee the future of our civilization” without mastery of artificial intelligence, genetics, unmanned weapons systems and hypersonic weapons.
Campaigners against the use of autonomous weapons systems in war are growing in number. UN Secretary-General António Guterres is one of them. “Autonomous machines with the power and discretion to select targets and take lives without human involvement,” he wrote on Twitter in March 2019, “are politically unacceptable, morally repugnant and should be prohibited by international law.” The International Committee for Robot Arms Control, the Campaign to Stop Killer Robots and Human Rights Watch are likewise dedicated to banning lethal autonomous weapons systems. Weapons analysts such as Zachary Kallenborn regard this blanket position as untenable, preferring a more modest ban on “the riskiest weapons: drone swarms and autonomous chemical, biological, radiological and nuclear weapons.”
Criticism of such weapons systems is nowhere to be found in the Commission’s draft report for Congress. The document carries more than a touch of the mad scientist in lethal service to a master. That made sense, given that its chairman was Eric Schmidt, technical advisor to Alphabet Inc., parent company of Google, of which he was previously CEO. With Schmidt at the helm, a show largely free of moral restraint was assured. “The promise of AI – that a machine can perceive, decide and act faster, in a more complex environment, with more accuracy than a human – represents a competitive advantage in any field. It will be employed for military ends, by governments and non-state groups.”
In his testimony before the Senate Armed Services Committee on February 23, Schmidt was all about the “fundamentals” of maintaining American ascendancy: preserving national competitiveness and shaping the military with those fundamentals in mind. Doing so meant keeping the eyes of the security establishment trained on any dangerous competitor. (Schmidt understands Congress well enough to know that spikes in funding and spending tend to be tied to the promotion of threats.) He sees “the threat of Chinese leadership in key technology areas” as “a national crisis.” In AI terms, “only the United States and China” have the “resources, commercial might, talent pool, and innovation ecosystem to lead the world.” Over the next decade, Beijing might even “surpass the United States as the world’s AI superpower.”
The testimony is generously seasoned with the China-threat thesis. “Never in my life,” he stated, “have I been so worried that we will soon be displaced by a rival, or more aware of what second place means for our economy, our security and the future of our nation.” He fears these concerns may not be shared by officialdom, with the DoD treating “software as a low priority.” Here, he could offer lessons gleaned from spawning companies in Silicon Valley, where principles tend to be short-lived. Those dedicated to defense might “build smart teams, drive tough deliverables, and act quickly.” Missiles, he argued, should be built “like we build cars now: use a design studio to develop and simulate in software.”
All of this necessarily meant praising a less restrained vision of AI, especially in its military applications. Two days of public discussion saw panel vice chair Robert Work extol the virtues of AI in combat. “It is a moral imperative to at least pursue this hypothesis,” he asserted, claiming that “autonomous weapons will not be indiscriminate unless we design them that way.” The devil is in the human, as it always has been.
In a manner reminiscent of the debates over sharing atomic technology in the aftermath of World War II, the Commission urges the United States to “pursue a comprehensive strategy in close coordination with our allies and partners for artificial intelligence (AI) innovation and adoption that promotes values critical to free and open societies.” A proposed Emerging Technology Coalition of like-minded powers and partners would focus on the role of “emerging technologies according to democratic norms and values” and “coordinate policies to counter the malign use of these technologies by authoritarian regimes.” Quickly forgotten is the fact that such distinctions as authoritarian and democratic count for little at the end of a gun barrel.
Internal changes are also suggested that will ruffle a few feathers. The US State Department comes in for special mention as being in need of reform. “There is currently no clear lead for emerging technology policy or diplomacy within the State Department, hampering the Department’s ability to make strategic technology decisions.” Allies and partners were left confused when approaching the State Department to find out “which senior official would be their main point of contact” on any number of subjects, be it AI, quantum computing, 5G, biotechnology or other emerging technologies.
Overall, the US government comes in for a drubbing, criticized for operating “at human speed, not machine speed.” It was lagging in the commercial development of AI, suffering from “technical deficits ranging from a digital workforce shortage to inadequate acquisition policies, insufficient network architecture, and weak data practices.”
Official Pentagon policy, as it stands, is that autonomous and semi-autonomous weapons systems should be “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” In February 2020, the Department of Defense adopted various ethical principles on the military use of AI, making the DoD Joint Artificial Intelligence Center the focal point. These include the provision that “DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.” The principle of “traceability” is likewise tied to human control, with personnel possessing “an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities.”
The National Commission is full of praise for these protocols, acknowledging that operators, organizations and “the American people” would not support AI machines that were not “designed with predictability” and “clear principles” in mind. But the warning against too much moral restraint rises to a howl. Risk was “inescapable,” and failing to use AI “to solve genuine national security challenges risks putting the United States at a disadvantage.” Especially when it comes to China.
Dr Binoy Kampmark was a Commonwealth Scholar at Selwyn College, Cambridge. He teaches at RMIT University in Melbourne.