Last week, the $1 million Turing Award – sometimes called the "Nobel Prize of computer science" – was given to three pioneers of artificial intelligence: Yann LeCun, Geoffrey Hinton, and Yoshua Bengio.
There is a good story behind their work.
In the 1980s, researchers were briefly captivated by the concept of neural networks, an approach to artificial intelligence that, as the name suggests, loosely mimics how the human brain works. The idea was that rather than following carefully specified rules, neural networks could "learn" the way humans do – by observing the world. They could start without pre-programmed preconceptions and infer from data how the world works and how to act in it.
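To make that idea concrete, here is a minimal sketch – a toy illustration, not anyone's actual research code – of a single artificial neuron inferring a rule (logical AND) purely from examples, with no rule programmed in:

```python
import numpy as np

# Toy data: two binary inputs and a target rule (logical AND).
# The neuron is never told the rule; it must infer it from examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights start random: no preconceptions
b = 0.0                  # bias
lr = 0.5                 # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    pred = sigmoid(X @ w + b)          # what the neuron currently believes
    err = pred - y                     # how wrong it is on each example
    w -= lr * (X.T @ err) / len(X)     # nudge the weights to reduce error
    b -= lr * err.mean()

print(np.round(sigmoid(X @ w + b)))    # should approach [0, 0, 0, 1]
```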
But after several years of research, neural network approaches were not panning out. The desired learning behavior failed to materialize, and neural networks underperformed other AI strategies, such as explicitly programming an AI with logical rules to follow. So, in the 1990s, the field largely moved on.
Hinton, LeCun, and Bengio, however, never really gave up on the idea. They kept tinkering with neural networks and made substantial improvements to the original concept, including adding "layers" – a way of organizing the "neurons" in a network that dramatically improves performance. In the end, it turned out that neural networks were as powerful a tool as anyone could have hoped; they just needed powerful computers and tons of data to be useful.
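The value of layers is easiest to see on a problem a single neuron cannot solve at all, such as XOR (output 1 when exactly one input is 1). Another hedged toy sketch, not the laureates' actual methods: inserting a hidden layer between input and output lets the network learn it.

```python
import numpy as np

# XOR: not linearly separable, so a single neuron cannot represent it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer: 8 neurons
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer: 1 neuron
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)        # layer 1 re-represents the inputs...
    out = sigmoid(h @ W2 + b2)      # ...layer 2 reads off the answer
    # Backpropagation: push the error back through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel()))        # should approach [0, 1, 1, 0]
```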
Computers powerful enough to take advantage of neural networks did not exist until the beginning of this decade. Once we had them, neural-network-driven advances in AI took off. Suddenly, neural networks could be used for image recognition. For translation. For speech recognition. For playing games. For biology research. For generating text that reads almost as if a human had written it.
We started inventing different ways of configuring neural networks to get better results. For example, to create photorealistic images of human beings who have never existed, you actually train two neural networks: one learns to draw images, and the other learns to tell machine-drawn images apart from real ones.
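That two-network setup is known as a generative adversarial network, or GAN. Here is a hedged, minimal sketch of the training loop in PyTorch – using simple numbers instead of images so it stays tiny. The "artist" learns to produce samples the "judge" cannot distinguish from real data:

```python
import torch
import torch.nn as nn

# "Real data": samples from a Gaussian centered at 3.0. In a real GAN these
# would be photographs; here they are single numbers to keep the sketch small.
def real_data(n):
    return torch.randn(n, 1) * 0.5 + 3.0

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # the "artist"
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # the "judge"

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(3000):
    real, noise = real_data(64), torch.randn(64, 1)
    fake = G(noise)

    # Train the judge: label real samples 1, machine-drawn samples 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the artist: try to make the judge call its samples real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 1)).mean().item())  # should drift toward ~3.0
```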
The paradigm LeCun, Hinton, and Bengio stubbornly kept working on has become the biggest game in town. Today, LeCun is vice president and chief AI scientist at Facebook. Hinton works at Google Brain and the University of Toronto. Bengio founded a research institute at the University of Montreal.
And around the world, thousands of researchers are working on neural networks, billions of dollars have been invested in hundreds of AI startups, and we keep discovering new applications. There is no doubt the Turing Award is well deserved – it is rare for an idea to take off the way this one has.
Watching the field of AI transform itself raises questions about its future.
The field of artificial intelligence has been transformed over the last 10 years, and concerns about AI's effects on society are now taken much more seriously.
Of course, there are many reasons for this, but the pace of AI progress over the last decade is a major factor. Ten years ago, many people felt confident saying that truly advanced AI – the kind we would have to worry about – was centuries away.
There are already AI systems powerful enough to raise ethical questions, and we do not know how far away advanced artificial intelligence – AI that surpasses human capabilities in many areas – might be.
LeCun, Bengio, and Hinton all take the ethical issues around AI very seriously, even if they do not fear that their creation will wipe us off the planet. (Hinton, the most pessimistic of the three, holds that nuclear war or a global pandemic will probably get there first.)
"If we had had foresight in the 19th century to see how the industrial revolution would unfold," says Bengio in his book chapter of 2018 Architects of Intelligence"We could have avoided much of the misery that followed. The problem is that it will probably take less than a hundred years this time to unveil this story, so that the potential negative impacts could be even greater. I think it's really important to start thinking about it now. "
Observing this decade's incredible rush of progress is enough to urge caution – and to leave us with a great deal of uncertainty about what to expect. A paradigm that many had dismissed as a dead end turned out, once we had powerful enough computers, to be an incredibly powerful tool. New applications and new variants keep being discovered. That is enough to make you wonder whether it could happen again.
Are there other AI techniques that most researchers are ignoring today but that will take off once computers improve and we finally have tools powerful enough to take advantage of them? Will we keep inventing variants of neural networks that make once-intractable problems easy to solve?
It's hard to predict. But watching the field transform completely in the space of a decade gives a sense of how fast, surprising, and unpredictable progress can be.
Sign up for the Future Perfect newsletter. Twice a week, you'll get a roundup of ideas and solutions for tackling our biggest challenges: improving public health, easing human and animal suffering, mitigating catastrophic risks, and – to put it simply – getting better at doing good.