This AI is so good at writing that its creators will not let you use it




Created by the non-profit AI research company OpenAI (whose founders include Tesla CEO Elon Musk, and whose backers include Microsoft), the text-generation system can write page-long responses to prompts, mimicking everything from fantasy prose to fake celebrity news stories and homework assignments. It builds on a text-generation system the company published last year.
Researchers have used artificial intelligence to generate text for decades, with varying degrees of success. In recent years, the technology has become particularly powerful. OpenAI's initial goal for the system was to have it predict the next word in a sentence, taking into account the words that precede it. To do this, it was trained on 8 million web pages.
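
To make that objective concrete, here is a minimal sketch, not OpenAI's own code, of next-word prediction using the small GPT-2 model that is now publicly available through the Hugging Face transformers library; the prompt and the top-five printout are illustrative choices, not details from the article.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the small, publicly released model and its tokenizer.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Encode a prompt and score every candidate next token.
prompt = "Legolas and Gimli advanced on the"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

# Convert the scores at the final position into probabilities
# and print the five most likely next words.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id])!r}: {p:.3f}")
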
A handful of demos that OpenAI posted last week show just how compelling (and sometimes terrifying) computer-written text can be. In many ways, they look like the written version of deepfakes: convincing but fake video and audio files created with AI.
For example, OpenAI researchers fed the system the following Lord of the Rings-style prompt: Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry.

The computer composed this fittingly violent addition: The orcs' response was a deafening onslaught of claws, claws and claws; even Elrond was forced to retreat. "You are in good hands, dwarf," said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night.

"It's quite strange to see how that behaves," said CNN Business, Jack Clark, director of OpenAI policy.

While the technology could be useful for a variety of everyday applications, such as helping writers compose sharper copy or improving the voice assistants in smart speakers, it could also be put to potentially dangerous uses, such as creating fake news stories and fake social media posts.

OpenAI generally makes its research projects public. But in a blog post about the text generator, its researchers said they would not release it publicly because of "concerns about malicious applications of the technology." Instead, the company released a technical paper and a smaller AI model (essentially a less powerful version of the same text generator) that other researchers can use.
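
For readers curious about that smaller released model, here is a hedged usage sketch built on the Hugging Face transformers pipeline API (an assumption for illustration; the article does not name any particular library), with arbitrary sampling parameters.

from transformers import pipeline

# "gpt2" is the smaller, publicly released version of the model.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Legolas and Gimli advanced on the orcs,",
    max_length=60,           # cap the total length of the output
    do_sample=True,          # sample instead of always taking the top word
    top_k=50,                # draw only from the 50 likeliest tokens
    num_return_sequences=1,
)
print(result[0]["generated_text"])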

The company's decision to keep it from public use is the latest sign of growing unease within the tech community about building advanced technology, particularly AI, without setting limits on how it is deployed.

Amazon and Microsoft in particular have expressed support for legislation to regulate how facial recognition technology can and cannot be used. And Amazon investors and employees (along with dozens of civil rights groups) have urged the company to stop selling its facial recognition technology, Rekognition, to government agencies, fearing it could be used to violate people's rights.

And some examples published by OpenAI suggest how its text generation system could be used for malicious purposes.

For example, one prompt reads as follows: A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

The AI's response was a highly plausible news report, complete with details about where the theft occurred ("on the downtown train line"), the source of the nuclear material ("the University of Cincinnati's Research Triangle Park nuclear research site"), and a made-up statement from a nonexistent US Secretary of Energy.

OpenAI's decision to keep the AI to itself makes sense to Ryan Calo, a professor at the University of Washington and co-director of its Tech Policy Lab, especially in light of a fake website that started making the rounds in mid-February. Called thispersondoesnotexist.com, the site produces startlingly realistic images of fictitious people using a machine-learning technique known as GANs (generative adversarial networks), in which two neural networks are essentially pitted against each other.
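
To illustrate that adversarial setup, here is a generic, minimal PyTorch sketch of one GAN training step; it is not the code behind thispersondoesnotexist.com, and the network sizes and the random "real" batch are toy placeholders.

import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# The generator maps random noise to fake samples; the discriminator
# assigns each sample a realness score.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                  nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real = torch.randn(batch, data_dim)  # stand-in for a batch of real images

# Discriminator step: push scores for real data toward 1, fakes toward 0.
fake = G(torch.randn(batch, latent_dim)).detach()
d_loss = (loss_fn(D(real), torch.ones(batch, 1)) +
          loss_fn(D(fake), torch.zeros(batch, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to make the discriminator label fresh fakes as real.
g_loss = loss_fn(D(G(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()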

Being able to pair text that reads as if a person wrote it with a realistic image of a fake person could lead to believable bots invading social media discussions or leaving convincing reviews on sites such as Yelp, he said.

"The idea here is that you can use some of these tools to distort reality in your favor," said Calo. "And I think that's what worries OpenAI."

However, not everyone is convinced the company made the right decision.

"Frankly, I roll my eyes," said Christopher Manning, a Stanford professor and director of Stanford's Artificial Intelligence Lab.

Manning said that while we should not be naive about the dangers of artificial intelligence, many similar language models are already publicly available. He considers OpenAI's work, though better than previous text generators, to be simply the latest in a parade of similar efforts published in 2018 by OpenAI, Google and others.

"Yes, it could be used to produce false reviews on Yelp, but it's not that expensive to pay third-country people to produce false reviews on Yelp," he said.
