Scientists say it will be impossible to control superintelligent AI

The idea of AI overthrowing humanity has been discussed for decades, and in 2021 scientists delivered their verdict on whether a high-level computer superintelligence could be controlled.

The catch, the scientists said, is that controlling a superintelligence far beyond human understanding would require simulating that superintelligence so it could be analyzed and controlled. But if humans cannot understand it in the first place, no such simulation can be created.

The study was published in the Journal of Artificial Intelligence Research.

Rules such as “do no harm to humans” cannot be set if we do not understand the kinds of scenarios an AI might come up with, the scientists say. Once a computer system operates at a level beyond the scope of our programmers, it will no longer be possible to set limits on it.

“A superintelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’. This is because a superintelligence is multifaceted and therefore potentially capable of mobilizing a variety of resources to achieve objectives that are potentially incomprehensible to humans, let alone controllable,” the researchers write.

Part of the team's reasoning comes from the halting problem, posed by Alan Turing in 1936: the problem of deciding whether a computer program will reach a conclusion and an answer (and so halt), or simply loop forever trying to find one.

As the scientists noted, Turing proved with some clever math that although we can know the answer for some particular programs, it is logically impossible to find a method that tells us the answer for every potential program that could ever be written. This brings us back to AI: a superintelligence could, in principle, hold every possible computer program in its memory at once.
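Turing's proof can be sketched as a short self-referential program. A minimal illustration, assuming a hypothetical oracle `halts` that the theorem shows cannot actually exist (all names here are illustrative, not from the paper):

```python
def halts(program, data):
    """Hypothetical oracle: would return True iff program(data) halts.
    Turing's argument shows no such total, correct function can exist,
    so this sketch simply refuses to answer."""
    raise NotImplementedError("no general halting test exists")

def paradox(program):
    """Loop forever exactly when the oracle says program(program) halts."""
    if halts(program, program):
        while True:          # oracle said "halts" -> never halt
            pass
    return "halted"          # oracle said "loops" -> halt immediately

# Feeding paradox to itself yields a contradiction:
#  - if halts(paradox, paradox) were True, paradox(paradox) would loop forever;
#  - if it were False, paradox(paradox) would halt.
# Either way the oracle is wrong, so no general halting test can exist.
```

The contradiction lives entirely in the comments: any concrete implementation of `halts` would be defeated by `paradox`, which is why the body only raises.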

According to the scientists, any program written to stop an AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or it may not, and it is mathematically impossible to be absolutely sure either way, which means such an AI cannot be contained. The alternative to teaching the AI some ethics and telling it not to destroy the world (something no algorithm can be absolutely sure of), the scientists said, is to limit the capabilities of the superintelligence.
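The containment argument above is a reduction: a perfect "will this program ever do harm?" checker would also solve the halting problem. A minimal sketch of that reduction, with illustrative names not taken from the paper:

```python
def do_harm():
    """Stand-in for any action a containment rule would forbid."""
    pass

def wrap(program, data):
    """Build a program that performs the forbidden action exactly when
    program(data) halts."""
    def wrapped():
        program(data)   # run the arbitrary program to completion...
        do_harm()       # ...then perform the flagged action
    return wrapped

# A hypothetical checker is_harmful(wrap(p, d)) would be True iff p(d)
# halts. So a perfect harm checker would decide the halting problem,
# which Turing showed is impossible -- hence no such checker exists.
```

The point is not the code itself but the wrapping trick: any undecidable question about a program's behavior can be smuggled inside a "harm" check, so no containment algorithm can be complete.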

The study rejected this idea as well, since limiting the AI's capabilities defeats the purpose: if we are not going to use it to solve problems beyond human capabilities, then why create it at all?

“If we press ahead with AI, we may not even know when a superintelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking serious questions about where we are going,” the scientists noted.