AI breaks down the writing barrier

Word has escaped the tech community: the world changed this summer with the deployment of an artificial intelligence system called GPT-3. Its ability to converse in English and generate coherent prose surprised even hardened pundits, who dubbed it the “GPT-3 shock.”

Where typical AI systems are trained for specific tasks (classifying images, playing Go), GPT-3 can handle tasks for which it was never specifically trained. Research published by its creator, San Francisco-based OpenAI, found that GPT-3 can solve old SAT analogy questions better than the average college applicant. It can generate news articles that readers may have difficulty distinguishing from articles written by humans.

And it can do things its creators never anticipated. In recent weeks, beta testers have found that it can complete a half-written investment memo, produce stories and letters in the style of famous people, generate business ideas, and even write certain kinds of software code based on a plain-English description of the desired program. OpenAI has announced that, after the testing period, GPT-3 will be released as a commercial product.

The name stands for Generative Pre-trained Transformer, third generation. Like other current AI systems, GPT-3 is built on a large organized collection of numerical weights, called parameters, that determine how it behaves. Its builder trains it on large digital data sets – in this case, a filtered version of web content, plus Wikipedia and a few other text collections. The number of parameters is a key measure of an AI model’s capacity; GPT-3 has 175 billion, more than 100 times as many as its predecessor, GPT-2, and 10 times as many as its closest rival, Microsoft’s Turing-NLG.
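Those ratios are easy to check with back-of-the-envelope arithmetic. The short Python sketch below does just that; the assumption that each weight occupies two bytes (16-bit floating point) is mine, added only to illustrate the physical scale the parameter counts imply, since the article does not specify how the weights are stored.

```python
# Rough sizing of GPT-3 against the models mentioned above.
# Assumption (not from the article): each parameter is stored
# as a 16-bit float, i.e. 2 bytes; deployment details are not public.

GPT3_PARAMS = 175_000_000_000        # 175 billion, per OpenAI
GPT2_PARAMS = 1_500_000_000          # GPT-2, roughly 1.5 billion
TURING_NLG_PARAMS = 17_000_000_000   # Microsoft's Turing-NLG, ~17 billion

BYTES_PER_PARAM = 2  # assumed fp16 storage

def size_gb(n_params: int) -> float:
    """Approximate storage needed for n_params weights, in gigabytes."""
    return n_params * BYTES_PER_PARAM / 1e9

print(f"GPT-3 weights: ~{size_gb(GPT3_PARAMS):.0f} GB")
print(f"vs GPT-2: {GPT3_PARAMS / GPT2_PARAMS:.0f}x the parameters")
print(f"vs Turing-NLG: {GPT3_PARAMS / TURING_NLG_PARAMS:.1f}x the parameters")
```

Run as-is, this prints roughly 350 GB for GPT-3’s weights alone, about 117 times GPT-2’s parameter count and about 10 times Turing-NLG’s, consistent with the figures above.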

AI has gone through cycles of hype and disappointment before. Still, I was curious. I became a beta tester for the simplify.so website, which lets users submit English text for GPT-3 to simplify, and put the technology to the test.
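How simplify.so actually talks to GPT-3 is not public. But the beta program exposed a simple completions API, and the Python sketch below is a guess at the general pattern such a site might follow: wrap the user’s text in a prompt, ask the model to continue it, and return the completion. The prompt wording, model choice, and settings here are all illustrative assumptions, not the site’s real code.

```python
# A minimal sketch of how a front end like simplify.so might call the
# GPT-3 beta API. Everything below is illustrative, not the site's code.
import openai

openai.api_key = "sk-..."  # placeholder for a beta access key

def simplify(passage: str) -> str:
    """Ask GPT-3 to rewrite a passage in plainer English."""
    prompt = (
        "Rewrite the following passage in plain, simple English.\n\n"
        f"Passage: {passage}\n\n"
        "Simple version:"
    )
    response = openai.Completion.create(
        engine="davinci",   # the largest GPT-3 model in the beta
        prompt=prompt,
        max_tokens=150,
        temperature=0.3,    # low temperature keeps the rewrite literal
        stop=["\n\n"],      # cut the completion off after the rewrite
    )
    return response["choices"][0]["text"].strip()

print(simplify("The amelioration of fiscal policy necessitates prudence."))
```

The key design point is that GPT-3 is not “trained to simplify” at all; the completions API just continues text, so the prompt alone steers it toward the task.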
