The idea of artificial intelligence overthrowing humanity has been talked about for many decades, and scientists have just delivered their verdict on the possibility of controlling high-level computer super-intelligence. The answer? Almost certainly not.
The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze. But if we are unable to comprehend it, it is impossible to create such a simulation.
Rules such as “do no harm to humans” can’t be set if we don’t understand the kind of scenarios an AI is likely to come up with, the authors of the new paper suggest. Once a computer system operates at a level beyond the reach of our programmers, we can no longer set limits.
“A super-intelligence poses a fundamentally different problem from those typically studied under the banner of ‘robot ethics’,” the researchers write.
“Indeed, a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a variety of resources in order to achieve goals that are potentially incomprehensible to humans, let alone controllable.”
Part of the team’s reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem asks whether a computer program will reach a conclusion and an answer (so it halts), or simply loop forever trying to find one.
As Turing proved through some clever math, while we can know the answer for certain specific programs, there is logically no general method that can tell us for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could hold every possible computer program in its memory at once.
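To see why, here is a minimal Python sketch of Turing’s classic diagonalization argument; the function names (`halts`, `paradox`) are illustrative assumptions for this sketch, not anything taken from the paper:

```python
# A minimal sketch of Turing's diagonalization argument, assuming for
# the sake of contradiction that a general oracle `halts(program, data)`
# existed. The names here are illustrative, not from the paper.

def halts(program, data):
    """Pretend oracle: True if program(data) eventually stops."""
    ...  # Turing's proof shows no such general procedure can exist

def paradox(program):
    """Do the opposite of whatever the oracle predicts."""
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop forever, so halt at once

# Running paradox on its own source contradicts the oracle either way:
# if halts(paradox, paradox) were True, paradox(paradox) loops forever;
# if it were False, paradox(paradox) halts. So `halts` cannot exist.
```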
Any program written to prevent AI from harming humans and destroying the world, for example, may or may not come to a conclusion (and halt) – it’s mathematically impossible for us to be absolutely sure either way, which means the AI cannot be contained.
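The same bind can be caricatured in code. Below is a purely illustrative sketch, with all names (`simulate`, `contain`) assumed rather than taken from the study: a containment routine that simulates the AI and halts it on harmful outcomes would first need to know whether the simulation ever finishes, which is exactly the halting problem above.

```python
# Illustrative only: a containment check that must run the very
# simulation whose termination is undecidable. All names are assumptions.

def simulate(ai_program, world):
    # Stand-in for exhaustively simulating the AI acting on the world;
    # for an arbitrary program there is no guarantee this ever returns.
    return ai_program(world)

def contain(ai_program, world):
    outcome = simulate(ai_program, world)  # may never return: halting problem
    if outcome == "harm":
        raise RuntimeError("harmful AI halted by containment")
    return outcome

# Works for a trivially safe stand-in AI...
print(contain(lambda world: "no harm", {}))   # -> no harm
# ...but there is no general guarantee that `contain` itself terminates.
```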
“In fact, this makes the containment algorithm unusable,” explains computer scientist Iyad Rahwan of the Max Planck Institute for Human Development in Germany.
The alternative to teaching AI ethics and telling it not to destroy the world – something no algorithm can be absolutely certain of doing, the researchers say – is to limit the capabilities of the super-intelligence. It could be cut off from parts of the Internet or from certain networks, for example.
The new study dismisses this idea too, suggesting it would limit the reach of the artificial intelligence; the argument being that if we’re not going to use it to solve problems beyond the scope of humans, then why create it at all?
If we are going to push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking serious questions now about the directions we are heading in.
“A super-intelligent machine that controls the world sounds like science fiction,” says computer scientist Manuel Cebrian of the Max Planck Institute for Human Development. “But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it.
“The question therefore arises as to whether this could at some point become uncontrollable and dangerous for humanity.”
The research was published in the Journal of Artificial Intelligence Research.