Researchers withhold AI writing tool over "malicious" use fears




Image caption: An outline of a face with the brain replaced by wires and cogs (image copyright: Getty Images)

A team of researchers that built an artificially intelligent writer said it was withholding the technology because it could be used for "malicious" purposes.

OpenAI, based in San Francisco, is a research institute supported by Silicon Valley personalities, including Elon Musk and Peter Thiel.

It shared new research on using machine learning to create a system capable of producing natural-sounding language, but in doing so the team raised concerns that the tool could be used to mass-produce convincing false information.

Which, to put it another way, is of course also an acknowledgement that what its system churns out is made-up, unreliable nonsense. Nevertheless, when it works well, the results are eerily realistic, which is why I have shared an example below.

Feeding the system

OpenAI said its system was able to produce coherent articles on any subject, requiring only a brief prompt. The AI is "unsupervised", meaning it does not need to be retrained to talk about a different subject.

It generates text using data scraped from about eight million webpages. To "feed" the system, the team created a new, automated way of finding "quality" content on the internet.

Rather than scraping the web indiscriminately, which would have produced a lot of messy data, the system only looked at pages posted to the link-sharing site Reddit. Its dataset included only links that had attracted a "karma" score of 3 or more, meaning at least three humans had judged the content to be valuable, for whatever reason.

"This can be seen as a heuristic indicator of whether other users have found the link interesting, educational or just fun," says the research paper.

The artificial intelligence generates the story word by word. The resulting text is often coherent, but rarely truthful: all quotations and attributions are fabricated. The sentences are based on information already published online, but the way that information is composed is intended to be unique.
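OpenAI has not released the full model, so the toy sketch below only illustrates the general principle of word-by-word generation, in which each new word is chosen based on the text produced so far. The tiny bigram "model" and sample text are invented purely for illustration and bear no relation to the real system.

```python
# A toy sketch of word-by-word (autoregressive) text generation: each new word
# is picked based on the text generated so far. GPT-2 itself uses a large neural
# network over sub-word tokens; this bigram example is purely illustrative.

import random
from collections import defaultdict

training_text = (
    "the model writes one word at a time and the model conditions "
    "each word on the words that came before it"
)

# Count which words follow each word in the sample text.
bigram_counts = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    bigram_counts[current_word].append(next_word)

def generate(prompt: str, length: int = 10) -> str:
    generated = prompt.split()
    for _ in range(length):
        candidates = bigram_counts.get(generated[-1])
        if not candidates:  # dead end: no known continuation
            break
        generated.append(random.choice(candidates))
    return " ".join(generated)

print(generate("the model"))
```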

Sometimes the system spits out passages of text that do not make much structural sense, or that contain laughable inaccuracies.

In a demonstration given to the BBC, the AI wrote that a protest march had been organised by a man named "Paddy Power", recognisable in the United Kingdom as a chain of betting shops.

"We have observed different modes of failure," observed the team. "For example, repetitive texts, modeling failures around the world (for example, the model sometimes describes fires that occur under water) and non-natural subject changes."

Under wraps

In seeking an independent perspective on OpenAI's work, it became clear that the institute is not entirely popular among many in this field. "Hyperbolic" was how one independent expert described the announcement (and much of OpenAI's work in general).

"They have a lot of money and produce a lot of parlor tricks," said Benjamin Recht, an badociate professor of computer science at UC Berkeley.

Another said they felt OpenAI's publicity efforts had "negative implications for academics", and pointed out that the research paper published alongside the announcement had not been peer reviewed.

But Professor Recht added: "The idea that artificial intelligence researchers should think about the consequences of what they produce is extremely important."

OpenAI said it wanted its technology to spark debate about how such an AI should be used and controlled.

"[We] think that governments should consider expanding or initiating initiatives to more systematically monitor the social impact and diffusion of AI technologies, and to measure the progress of capabilities of such systems. "

Brandie Nonnecke, director of the Berkeley CITRIS Policy Lab, an institution that studies the social impacts of technology, said such misinformation was inevitable. She said the debate should focus more on platforms – such as Facebook – on which it could be disseminated.

"It's not a question of whether harmful actors will use artificial intelligence to create dummy press articles and compelling deepfakes, they will do it," she told the BBC.

"Platforms need to recognize their role in reducing its reach and impact, and the days of platforms claiming immunity for content distribution are gone. badessments of how their systems will be manipulated and incorporated into transparent and accountable mechanisms to identify and mitigate the spread of maliciously falsified content ".

Earlier this week, US President Donald Trump directed federal agencies to develop a strategy to advance artificial intelligence. He was due to sign an executive order launching the initiative on Monday.

The move comes amid fears that the United States is being overtaken by China and other countries in the technology.

In action

So, how good is it? Here is an example, provided by OpenAI, based on a prompt written by the BBC.

The first paragraph, in bold, is the text written by a human. The rest was generated by OpenAI's technology. The system works word by word, and each new addition is generated based on everything that came before it.

We chose to display this text as an image to prevent search engines from indexing words and displaying them out of context as legitimate BBC News reports.

I have added additional comments in [square brackets].

Follow Dave Lee on Twitter @DaveLeeBBC

Do you have more information about this or any other technology story? You can reach Dave directly and securely through the encrypted messaging app Signal on: +1 (628) 400-7370
