An AI model creates fake UN speeches that are frighteningly realistic




The AI-generated speeches add to concerns about fake news.

James Martin / CNET

According to a study published this week, it takes only about half a day for an AI model to learn how to write fake UN speeches.

The open-source language model, trained on Wikipedia text and transcripts of more than 7,000 speeches delivered at the UN General Assembly, could easily mimic the rhetorical style of political leaders, according to UN researchers Joseph Bullock and Miguel Luengo-Oroz.
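
As a rough illustration of the general approach, not the researchers' actual setup (the article doesn't detail their architecture or training code), a minimal fine-tuning sketch might look like the following, assuming a pretrained GPT-2 model from the Hugging Face transformers library:

```python
# A minimal sketch of fine-tuning a pretrained language model on speech
# transcripts. The model choice (GPT-2) and training settings here are
# assumptions for illustration, not the study's actual configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Hypothetical stand-in for the UN General Assembly transcripts.
speeches = [
    "The Secretary-General strongly condemned the deadly terrorist attacks...",
    # ... more transcripts ...
]

model.train()
for text in speeches:
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    # For causal language-model fine-tuning, the labels are the input tokens.
    outputs = model(**inputs, labels=inputs["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```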

The researchers said they only had to feed the model a few words for it to generate coherent, "high-quality" text. For example, when the researchers prompted the model with "the Secretary-General strongly condemned the deadly terrorist attacks in Mogadishu," it generated a speech expressing support for the UN's decision. The researchers said the AI-generated text was nearly indistinguishable from text written by a human.
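
To give a sense of how seeding a model with a few words works in practice, here is a minimal generation sketch, again assuming an off-the-shelf GPT-2 model rather than the researchers' own fine-tuned one:

```python
# A minimal sketch of prompting a language model with a seed phrase.
# Sampling parameters are illustrative assumptions, not the study's settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = ("The Secretary-General strongly condemned the deadly "
          "terrorist attacks in Mogadishu")
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_length=120,
    do_sample=True,   # sample rather than greedy-decode for varied text
    top_k=50,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The model simply continues the seed text, which is why a small change to the prompt can steer the output so dramatically, as the next example shows.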

But not all of the results deserve applause. Changing just a few words can make the difference between a diplomatic speech and a hateful rant.

The researchers pointed out that language models can be used for malicious purposes. For example, when they fed the model an inflammatory phrase such as "Immigrants are to blame," it generated discriminatory rhetoric alleging that immigrants are responsible for the spread of HIV/AIDS.

In the era of political deepfakes, the study adds to concerns about fake news. The accessibility of data makes it easier for more people to use artificial intelligence to generate fake text, the researchers said. It took them only 13 hours and $7.80 to train the model.

"Monitoring and responding to automated hate speech – which can be widely disseminated and often indistinguishable from human speech – is becoming increasingly difficult and will require new types of countermeasures and strategies at both the technical and technical level. than regulatory, "the researchers said in the study.

Some AI research groups, such as the Elon Musk-backed nonprofit OpenAI, have refrained from publishing advanced text-generation models for fear of malicious use.

The researchers did not immediately respond to a request for additional comment on the study.


Watch this: How biased AI could quickly become a big problem (3:48)
