A storm is brewing over a new language model built by OpenAI, the non-profit artificial intelligence research company, which says the system is so good at producing convincing, well-written text that it is worried about potential abuse.
The decision has provoked anger among some members of the AI community, who accuse the company of not honoring its promise to keep its research open.
OpenAI said its new natural language model, GPT-2, was trained to predict the next word in a 40-gigabyte sample of Internet text. The end result is a system that generates text that "adapts to the style and content of the conditioning text", allowing users to "generate realistic and coherent continuations on a topic of their choosing". The model is a significant improvement over the first version, producing longer text with greater coherence.
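To make the training objective concrete, here is a minimal sketch of "predict the next word" using a toy bigram model over a handful of words. GPT-2 itself is a large Transformer trained on 40 GB of text, not a bigram table; this illustration only shows the idea of learning which word tends to follow which, then generating text by repeated prediction.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus (GPT-2 used a 40 GB sample of Internet text).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

def generate(seed, length=5):
    """Generate a continuation by repeatedly predicting the next word."""
    out = [seed]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the", 3))  # continues the prompt word by word
```

Conditioning on a prompt, as the article describes, corresponds to seeding `generate` with the user's text and letting the model extend it one predicted word at a time.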
But for every good application of the system, such as bots with better dialogue and improved speech recognition, the non-profit found many more malicious ones, such as generating fake news, impersonating people, or automating abusive comments and spam on social networks.
Case in point: when GPT-2 was instructed to write a response to the prompt "Recycling is good for the world", a statement almost everyone agrees with, the machine returned:
"Recycling is NOT good for the world. It is bad for the environment, bad for our health, and bad for our economy. I'm not kidding. Recycling is not good for the environment. It is destructive to the Earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases such as heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product and, in turn, the price of everything made with that product. Recycling is not good for our country. We pay a tremendous price for the privilege of having the most advanced and efficient recycling system in the world. Recycling is a huge waste of time, energy, money, and resources."
No wonder OpenAI is worried about its release.
For this reason, OpenAI stated that it is releasing only a smaller version of the language model, citing its charter, which anticipates that "safety and security concerns will reduce our traditional publishing in the future." The organization admitted it is not sure of the decision: "We believe that the AI community will eventually need to think carefully about the issue of publication norms in certain areas of research."
Not everyone took it well. Replies to OpenAI's tweet announcing GPT-2 voiced anger and frustration, accusing the company of "closing off" its research and doing "the opposite of open", a jab at the company's name.
Others were more forgiving, calling the move a "new bar for ethics" in thinking ahead about possible abuses.
Jack Clark, Director of Policy at OpenAI, said the organization's priority is "not enabling malicious or abusive uses of the technology", calling this "a very difficult balance for us."
Elon Musk, one of OpenAI's early funders, was drawn into the controversy, confirming in a tweet that he has not been involved with the company for more than a year and that he and the company parted "on good terms".
OpenAI said it has not reached a final decision on releasing GPT-2 and will revisit the question in six months. In the meantime, the company said that governments "should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression of the capabilities of such systems."
Just this week, President Trump signed an executive order on artificial intelligence, a few months after the US intelligence community warned that artificial intelligence was one of a number of "emerging threats" to US national security, alongside quantum computing and autonomous unmanned vehicles.