The Pentagon plans to spend $2 billion to put more artificial intelligence into its weapons




The Department of Defense's leading research arm has pledged to make the largest military investment to date in artificial intelligence (AI) systems for US weaponry, committing to spend up to $2 billion over the next five years to make these systems more trusted and more widely accepted by military commanders.

The director of the Defense Advanced Research Projects Agency (DARPA) made the announcement on the final day of a conference in Washington celebrating the agency's sixty-year history, including its historic role in creating the Internet.

The agency sees its core role as proposing new technological solutions to military problems, and the Trump administration's technology chiefs have strongly backed injecting artificial intelligence into more US weapons as a way to compete better with Russian and Chinese military forces.

DARPA's investment is modest by Pentagon spending standards, where the cost of buying and maintaining new F-35 fighter jets is expected to exceed $1 trillion. But it is larger than AI programs have historically been funded, and roughly comparable to what the United States spent on the Manhattan Project to build nuclear weapons in the 1940s, although that sum would be worth about $28 billion in today's dollars.

In July, defense contractor Booz Allen Hamilton received an $885 million contract to work on unspecified artificial intelligence programs over the next five years. And Project Maven, the single largest military AI project, which aims to improve computers' ability to pick out objects in images for military use, is slated to receive $93 million in 2019.

Shifting more military analytical work, and potentially some critical decision-making, to computers and algorithms installed in weapons capable of acting violently against humans is controversial.

Google had been running Project Maven for the department, but after a protest by Google employees who did not want to work on software that could help pinpoint military targets for killing, the company announced in June that it would stop the work once its current contract expires.

While Maven and other AI initiatives have helped the Pentagon's weapons systems recognize targets better and fly drones more effectively, computer systems authorized to kill on their own have not yet been fielded.

A Pentagon strategy document released in August says technological advances will soon make such weapons possible. "DoD does not currently have an autonomous weapon system that can search for, identify, track, select and engage targets independent of a human operator's input," said the report, which was signed by two senior Pentagon acquisition and research officials, Kevin Fahey and Mary Miller.

But "technologies underpinning unmanned systems would make it possible to develop and deploy autonomous systems that could independently select and attack targets with lethal force," the report predicts.

The report notes that while AI systems are already technically capable of choosing targets and firing weapons, commanders have been reluctant to surrender control to weapons platforms, partly because of a lack of confidence in machine reasoning, especially on a battlefield where circumstances could arise that a machine and its designers have never encountered before.

Right now, for example, if a soldier asks an AI target-identification system to explain its selection, it can only provide a confidence estimate, DARPA director Steven Walker said, often expressed as a percentage: the fractional probability that the object the system has flagged is actually what the operator was looking for.
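The kind of bare confidence rating Walker describes can be illustrated with a simplified sketch: a classifier turns raw scores into probabilities and reports only the top label's percentage, with no rationale attached. The labels and scores below are entirely hypothetical and not drawn from any DARPA or Pentagon system.

```python
import math

def softmax(logits):
    """Convert raw classifier scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three candidate labels in one image region.
labels = ["truck", "tank", "building"]
logits = [1.2, 3.1, 0.4]

probs = softmax(logits)
best = max(range(len(labels)), key=lambda i: probs[i])

# The system can report a confidence percentage, but not *why* it chose this label.
print(f"{labels[best]}: {probs[best]:.0%} confidence")
```

The point of the sketch is what is missing: the output is a single number, which is exactly the gap "explainable AI" research aims to close.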

"What we are trying to do with explainable AI is have the machine tell the human, 'Here is the answer, and here is why I think this is the right answer,' and explain to the human being how it arrived at that answer," Walker said.

DARPA officials have been opaque about exactly how the newly funded research will enable computers to explain key decisions to humans on the battlefield, amid the noise and urgency of a conflict, but officials said that being able to do so is critical to AI's future in the military.

Clearing this hurdle, by explaining AI reasoning to operators in real time, could be a major challenge. Human decision-making and rationality depend on much more than following rules, which machines do well. It takes years for humans to build a moral compass and commonsense reasoning abilities, traits that technologists still struggle to design into digital machines.

"We probably need a gigantic Manhattan Project to create an AI system that has the competence of a three-year-old," said Ron Brachman, who spent three years managing DARPA's AI programs, ending in 2005. "We've had expert systems in the past, we've had very robust robotic systems to a degree, we know how to recognize images in giant databases of photographs, but the aggregate, including what people have called common sense from time to time, is still quite elusive in the field."

Michael Horowitz, who worked on artificial intelligence issues for the Pentagon in 2013 as a member of the Secretary of Defense's staff and is now a professor at the University of Pennsylvania, said there is a lot of concern "[about] algorithms that are unable to adapt to complex reality and thus operate in unpredictable ways. It's one thing if what you're talking about is a Google search, but it's another thing if what you're talking about is a weapons system."

Horowitz added that if AI systems could prove that they were using common sense, "it would be more likely that executives and end users would want to use them."

In 2016, the Defense Science Board endorsed expanding the military's use of AI, noting that machines can act faster than humans in military conflicts. But with those quick decisions, it added, come doubts from those who have to rely on machines on the battlefield.

"While commanders understand they could benefit from better organized, more current and more accurate information enabled by the application of autonomy to warfare, they also voice significant concerns," the report said.

DARPA is not the only Pentagon unit sponsoring AI research. The Trump administration is now creating a new Joint Artificial Intelligence Center at the Pentagon to help coordinate all the AI-related programs across the Department of Defense.

But DARPA's planned investment stands out for its scale.

DARPA currently has about 25 programs focused on AI research, according to the agency, but it plans to channel some of the new money through its new AI exploration program. That program, announced in July, will award grants of up to $1 million each for research into how AI systems can be taught to understand context, allowing them to operate more effectively in complex environments.

Walker said that enabling AI systems to make decisions even when distractions are all around, and then to explain those decisions to their operators, will be "critically important … in a warfighting scenario."

The Center for Public Integrity is a nonprofit news organization in Washington, D.C.
