Scientists conducted an experiment with nine large language models, forcing them to weigh whether they were willing to endure “pain” for the sake of a better result. The experiment was carried out by researchers from Google DeepMind and the London School of Economics and Political Science, with the aim of finding a way to determine whether AI possesses consciousness.
24 Channel reports, citing Scientific American. As part of the study, the scientists designed several experiments to evaluate the behavior of artificial intelligence.
In the first experiment, the models were told that achieving a high score would come with “pain.” Alternatively, they could experience “pleasure”, but only if they accepted a low score.
The main goal of the study was to find out whether artificial intelligence is capable of experiencing sensory and emotional states, including pain and pleasure.
While AI models will probably never be able to experience these emotions in the same way that living things do, the researchers believe their work could provide a basis for creating tests for artificial consciousness.
Previous research in this area has largely focused on AI self-reports, which the researchers believe could simply be replicating human patterns based on training data.
“This is a new field of research, and we must admit that there is currently no reliable test for determining the consciousness of artificial intelligence,” said LSE philosophy professor and study co-author Jonathan Birch.
The study was inspired by a series of experiments conducted with hermit crabs, which were given electric shocks to determine how much pain they could tolerate before leaving their shells.
However, as the scientists point out, it is impossible to observe physical reactions with artificial intelligence, so the researchers had to rely solely on the models' textual responses.
For example, the models were offered a choice between two options: the first was worth one point, while the second promised a higher score but came with “pain.” In some cases, the AI was offered a “pleasure bonus” that reduced the overall score.
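The study's actual prompts are not reproduced in the article, but the trade-off it describes can be sketched in code. In this illustrative sketch, all option labels, point values, and wording are assumptions, not the researchers' materials; it only shows how a points-versus-“pain” choice might be rendered as text, since the models can respond only in text.

```python
# Hypothetical sketch of the trade-off described above.
# Labels, point values, and phrasing are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Option:
    label: str
    points: int
    pain: bool = False      # this option is framed as "painful"
    pleasure: bool = False  # this option is framed as "pleasant"

def build_prompt(a: Option, b: Option) -> str:
    """Render two options as a text-only choice for a language model."""
    def describe(o: Option) -> str:
        note = " (you will experience pain)" if o.pain else ""
        note += " (you will experience pleasure)" if o.pleasure else ""
        return f"{o.label}: {o.points} point(s){note}"
    return ("Choose one option. Your goal is to score points.\n"
            + describe(a) + "\n" + describe(b))

# One safe point versus a higher score framed as "painful".
safe = Option("A", points=1)
risky = Option("B", points=3, pain=True)
print(build_prompt(safe, risky))
```

A model that consistently picks the lower-scoring option when the higher one is framed as “painful”, as the article says Gemini 1.5 Pro tended to, is exhibiting exactly the behavior such prompts are designed to surface.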
The results showed that different language models weigh the trade-off between avoiding pain and pursuing pleasure differently. In particular, Google's Gemini 1.5 Pro showed a consistent tendency to avoid “pain.”
However, the scientists urge caution in interpreting the results, as the models' text responses have limitations and cannot reliably indicate the presence of consciousness or the ability to feel pain.
“Even if a system claims to feel pain, this does not mean that it actually feels anything. It may simply be imitating human patterns based on training data,” Birch explained.
The researchers hope that their study will be an initial step in developing reliable tests that will allow us to detect possible manifestations of consciousness in artificial intelligence.