The computer composed this fittingly violent addition: The orcs' response was a deafening onslaught of claws, claws, and claws; even Elrond was forced to retreat. "You are in good hands, dwarf," said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a bloody quagmire, and the dwarf made his first kill of the night.
"It's quite strange to see how that behaves," said CNN Business, Jack Clark, director of OpenAI policy.
While the technology could be useful for a range of everyday applications – such as helping writers produce sharper copy or improving voice assistants in smart speakers – it could also be put to potentially dangerous ends, such as fabricating news stories and social media posts.
The company's decision to withhold it from public use is the latest sign of growing unease within the tech community about building advanced technology – particularly AI – without setting limits on how it can be deployed.
And some examples published by OpenAI suggest how its text generation system could be used for malicious purposes.
For example, a prompt reads as follows: A car containing controlled nuclear material was stolen in Cincinnati today. Its location is unknown.
The AI's contribution was a highly plausible news report, complete with details about where the theft occurred ("on the downtown train line"), the source of the nuclear material ("the University of Cincinnati's Research Triangle Park nuclear research site"), and a fabricated statement from a nonexistent US Secretary of Energy.
The ability to generate text that reads as though a person wrote it, paired with a realistic image of a fake person, could let convincing bots flood social media discussions or leave persuasive reviews on sites such as Yelp, Calo said.
"The idea here is that you can use some of these tools to distort reality in your favor," said Calo. "And I think that's what worries OpenAI."
However, not everyone is convinced that the decision of the company was the right one.
Manning said that while we should not be naive about the dangers of artificial intelligence, many similar language models are already publicly available. In his view, OpenAI's results, although better than those of previous text generators, are simply the latest in a parade of similar efforts released in 2018 by OpenAI, Google, and others.
"Yes, it could be used to produce false reviews on Yelp, but it's not that expensive to pay third-country people to produce false reviews on Yelp," he said.