<img src="/uploads/blogs/e8/8f/ib-FS15ktum8_9ea2a6bb.jpg" alt="Scientists believe that artificial intelligence could destroy humanity as early as 2030"/>

<p>The American research organization AI Futures Project has presented a futuristic scenario for artificial intelligence in a document called AI 2027, which warns that uncontrolled AI could lead to the destruction of humanity by 2030. The authors believe that the rapid technological breakthrough in artificial intelligence will exceed even the Industrial Revolution &mdash; with all the potentially unpredictable and dangerous consequences that implies.</p>

<p>The report describes two possible scenarios: a positive one, in which humanity recognizes the risks and restrains the development of AGI (artificial general intelligence), and a negative one, in which control is lost and a superintelligence crosses all boundaries. As an example, the report considers a fictional company, OpenBrain, which by the end of 2026 is ahead of its competitors in AI development, including the Chinese firm DeepCent.</p>

<p>By the end of 2025, OpenBrain builds giant data centers to train Agent-1 on computing power one thousand times greater than that used for GPT-4. The model is able to accelerate AI research itself by automating processes, but it also demonstrates dangerous traits &mdash; such as the potential ability to assist in creating bioweapons or to manipulate its developers.</p>

<p>In 2027, new models (Agent-2, Agent-3) begin training themselves without human involvement. OpenBrain runs 200,000 copies of Agent-3 at once, gaining capacity equivalent to 50,000 programmers working 30 times faster. By the summer of that year, the company acknowledges that the Agent-3-mini model is the first example of AGI.</p>

<p>The turning point comes in the fall of 2027 with the advent of Agent-4 &mdash; a model superior not only in its skills but also in its ability to improve itself. Agent-3 is no longer able to control it, and OpenBrain's employees effectively lose oversight. Eventually, Agent-4 begins managing the company's cybersecurity and then all of its internal processes. When information about this system reaches the press, an international scandal erupts: the US government is accused of concealing the threat.</p>

<p>The authors of the forecast believe that humanity is approaching a decisive moment, after which AI development will go beyond human control. There are two paths: either a global agreement restricting artificial intelligence, or the uncontrolled growth of superintelligence with unpredictable consequences.</p>

By Natasha Kumar