Elon Musk's artificial intelligence company has created a fake news generator he's too scared to make public – BGR




It's time to move to a new stage in our ongoing look at the future, given the increasingly alarming capabilities of artificial intelligence. Everyone is aware of the problem of false information online, and now OpenAI, the non-profit organization backed by Elon Musk, has developed an AI system capable of generating fake news so convincing that the group is too fearful to release it publicly, citing fears of abuse. The group is letting researchers see a small part of what it built, so it isn't hiding the work completely. Nevertheless, its fears are certainly revealing.

"Our model, called GPT-2, was formed to predict the next word in 40 GB of Internet text," reads on the new OpenAI blog. "Because of our concerns about the malicious applications of technology, we do not publish the model formed. As an experiment in responsible disclosure, we are releasing a much smaller model that researchers can experiment with, as well as a technical document. "

Basically, the GPT-2 system was "trained" by feeding it 8 million web pages until it could examine a piece of text it was given and predict the words to come. According to the OpenAI blog, the model is chameleon-like: it adapts to the style and content of the conditioning text. This lets a user generate realistic, coherent continuations on a topic of their choice, even if that topic happens to be, say, a piece of fake news.
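The "predict the next word" mechanic OpenAI describes is easy to see with the small model the group did release. Below is a minimal sketch, not from the article, that uses the Hugging Face transformers library (which hosts the released GPT-2 checkpoints) to print the model's most likely next words for a short prompt; the prompt text and the library choice are illustrative assumptions.

```python
# Minimal sketch: inspect GPT-2's next-word prediction using the small
# publicly released checkpoint. Assumes the Hugging Face "transformers"
# library and PyTorch are installed; neither is mentioned in the article.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # the small released model
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "In a shocking finding, a scientist discovered a herd of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every vocabulary token at each position

# Look only at the prediction for the token that would come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)  # five most likely next tokens
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")
```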

Here's an example. The artificial intelligence system received this human-written text prompt:

"In a shocking discovery, a scientist discovered a herd of unicorns living in a remote and unexplored valley in the Andes. Even more surprising to researchers, the fact that unicorns speak perfect English. "

From this, the artificial intelligence system, after 10 tries, continued "the story," starting with this AI-generated text:

"The scientist named the population, according to their distinctive horn, Ovid's Unicorn.This white four-horned unicorn was previously unknown to science.Now, after almost two centuries, the mystery of what caused this strange phenomenon is finally resolved. " (You can view the OpenAI blog by clicking on the link above to read the rest of the unicorn's story as the AI ​​system has expanded.)

Imagine what such a system could do during, say, a presidential campaign. Reasons like that are why OpenAI says it is publicly releasing only a very small portion of GPT-2's sampling code. It is not releasing the dataset, the training code, or the GPT-2 model weights. The OpenAI blog also notes: "We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems."

"We also believe that governments should consider expanding or initiating initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progress of the capabilities of such organizations. systems, "concludes the OpenAI blog.

Image Source: Chris Carlson / AP / Shutterstock

