- Technology billionaire Peter Thiel painted a gloomy picture of artificial intelligence in his New York Times op-ed on Thursday, making the case that the technology's primary use is military. But the experts we spoke to disagree.
- "I do not think we can say that AI is a military technology," said Dawn Song, a professor of computer science at the University of California, Berkeley. "Artificial intelligence, machine learning technology, is like any other technology. The technology itself is neutral."
- The experts also highlighted AI's many benefits for consumers, which Thiel failed to mention in his piece.
- Visit the Business Insider home page for more stories.
Technology billionaire Peter Thiel painted a gloomy picture of artificial intelligence in his New York Times op-ed on Thursday, making the case that the technology's real value, and its purpose, is primarily military.
"The first users of the machine learning tools created today will be the generals," Thiel said in his 1,200-word article. "A.I. is a military technology."
Thiel's portrait is far from the optimistic view that many in Silicon Valley have embraced. Artificial intelligence has been promised to deliver better Netflix recommendations, let us search the internet with our voices, and take over driving from humans. It is also expected to have a huge impact on medicine and agriculture. But instead, Thiel says that AI's real home is on the battlefield, whether in the physical world or the virtual one.
Several AI experts interviewed by Business Insider on Friday, however, disagree with Thiel's claim that AI is first and foremost a military technology, and say it can be put to far greater good than Thiel's sweeping editorial would suggest.
"I do not think we can say that AI is a military technology," Business Insider Dawn Song, a professor of computer science at the University of California, Berkeley, told AFP on Friday. Professor of the Berkeley Artificial Intelligence Research Laboratory (BAIR). "Artificial intelligence, machine learning technology, is like any other technology.The technology itself is neutral."
Song said that, just like nuclear or encryption technologies, artificial intelligence can be used for good or ill, but that describing it as something people should inherently fear amounts to fearmongering.
Read more: Peter Thiel criticized Google in a scathing New York Times op-ed, but did not mention that he works for and invests in a rival of the search giant.
Fatma Kilinc-Karzan, an associate professor of operations research at Carnegie Mellon University, told us that Thiel's perspective on AI was "far too pessimistic" and that its everyday use cases were not sufficiently highlighted.
"Of course, AI is pretty much used by the army," said Kilinc-Karzan. "But its daily use in simplifying and activating modern life and businesses is largely neglected from this point of view."
Kilinc-Karzan said the same technologies Thiel singles out, such as deep learning and computer vision, are already being put to positive use in a wide variety of commercial and medical applications, such as driverless cars and enhanced CT and MRI machines that make it easier for physicians to detect different types of cancer.
In his article, Thiel acknowledged that artificial intelligence is a "dual-use" technology, meaning it has both military and civilian applications, though the technology billionaire failed to specifically mention any of its consumer benefits.
"[Thiel’s] The public has neglected the fact that artificial intelligence is used in everyday life by everyone in the United States, "said Kilinc-Karzan." This seems very minor to him. He did not discuss this impact. It is true that the armed forces will choose and use what is most powerful, but that will be the case regardless of the technology we are talking about. "
The main thrust of Thiel's article was that Google, an American company, had opened an AI research lab in China, a country that has set the precedent that all research done within its borders be shared with its national military.
Berkeley's Song agreed that artificial intelligence projects need to be handled carefully, but said it would be wrong to describe the technology as inherently bad.
"It is important for us to move the AI forward so that we can reap the social benefits of its progress," Song said. "Of course, we have to pay attention to the way technology is used, but I think it's important to keep in mind that the technology is neutral."