< IMG SRC = "/Uploads/Blogs/63/D2/IB-FRG6TVGOM_C1991E38.jpg" Alt = "Google Deepmind predicted the future si: 4 risks that can threaten humanity"/> ~ ~ < P > Google DeepMind experts have presented a detailed technical report on the safe development of overall artificial intelligence (AGI). In their opinion, despite the doubts of skeptics, the appearance of AGI may occur in the near future.

RBC-Ukraine (Styler project) describes four key scenarios in which such AI could cause harm, citing Ars Technica, an online publication devoted to information technology.

~ < h2 lang = "ru-ru" paraeid = "{e14d57a6-1cd1-4961-9210-280d1f0cb6d} {152}" Paraid = "484680808" xml: lang = "Ru-ru" Deepmind

< p lang-ru "paraeid =" {2fa9e183-2421-421-4208-98da-5074444444444444444444444444444444444444444444444444444 is engaged in the development of artificial intelligence (AI), which was founded in 2010 and later purchased by Google 2014.

< p lang = "ru-ru" paraeid = "{2fa9e183-2421-421-4208-98da-50744444444444444444444444444444444444444444444444444444444 Creating & nbsp; < strng >SI-systems capable of learning and making decisions , imitating human cognitive processes. The company is known for its achievements in the field of machine training and neural networks, as well as creating algorithms for solving complex tasks.

< P lang-ru "paraeid =" {2fa9e183-2421-4208-98da-5074444444444444444444444444444444444444444444444 AI in medicine, developing algorithms for the diagnosis of diseases such as diabetic retinopathy and cancer. In 2016, Deepmind released the Alphazero system that demonstrated the ability to learn and win in chess and other games without prior human introduction.

< P Lang = "Ru-Ru" paraeid = "{2fa9e183-2421-4208-98da-50744444444444444444444444444444444444444444444444444444444444The company also develops systems to improve Google servers using AI to optimize the energy efficiency of the Date Centers.

< P lang-ru "paraeid =" {2fa9e183-2421-421-4208-98da-50744444444444444444444444444444444444444444444 by their innovation and the desire to create more sophisticated and safe Shi technologies.

< h2 lang = "ru-ru" paraeid = "{51F686FD-963c-44447-a363-a169b7ad1c26} {2}" Paraid = "1406639524" ~ ~ ~ "rang. (Artificial General Intelligence) < p lang-ru "paraeid =" {2fa9e183-2421-4208-98da-5074444444444444444444444444444444444444444444444 (Artificial General Intelligence) is a system that has intelligence and abilities comparable to human.

< p lang-ru "paraeid =" {863303f2-cae2-403c-B609-120313131d267}} {207} "Paraid =" 480990079 "XML: Lang =" Ru-Ru-Ru " Agi then humanity has to develop new approaches so that such a machine does not threaten.

< p lang = "ru-ru" paraeid = "{6b3949d2-1c8a-4acb-828e-deebd55555b6f4f4fd} {106}" Paraid = "134897881" xml: lang = " Elegant as three laws of Aisek Azimov's robotics. Deepmind researchers take care of this problem and have published a new technical document (PDF), which explains how to safely develop AGI. The document is available for download and has 108 pages (excluding a list of literature).

< p lang-ru "paraeid =" {6b3949d2-1c8a-4acb-828e-deebd55555b6f4f4fd} {108} "Paraid =" 1413385792 "XML: lang =" Ru-Ru "Ru-Ru".Although many experts consider AGI fantastic, the authors of the document suggest that such a system can appear by 2030. In this regard, Deepmind has decided to study the potential risks associated with the emergence of synthetic intelligence that has human traits, which, as researchers themselves recognize, can lead to "serious harm".

The main risks of AGI

< P lang-ru "paraeid =" {6b3949D2-1C8A-4acb-828e-deebd55555b6f4fd} {112} "Paraid =" 19587888636 "xml: lang =" Ru-ru-ru-ru-ru " Shain Legage Companies Highlight & nbsp; < strong > four categories of potential threats associated with AGI : abuse, mismatch of intentions, mistakes and structural risks. The first two categories are considered in the document most detailed, while the last two are only brief.

< p lang = "ru-ru" paraeid = "{6b3949D2-1C8A-4acb-828e-Debd555b6f4f4fd} {114}" Paraid = "947615965" " XML: lang = "ru-ru" >< strng > abuse

< P lang-ru "paraeid =" {6b3949D2-1C8A-4acb-828e-deebd555b6f4f4fd} {116} "Paraid =" 1450181671 "XML: Lang =" Ru-Ru ". In fact, it is similar to the risks associated with current SI systems. However, AGI by definition will be much more powerful, and therefore potential harm - much higher. For example, a person with bad intentions will be able to use AGI to find zero -day vulnerability or create design viruses for use as a biological weapon.

< P lang-ru "paraeid =" {6b3949d2-1c8a-4acb-828e-deebd5555b6f4f4fd} {118} "Paraid =" 2059662010 "XML: lang =" ru-ru "Deepmind emphasizes that AGI companies are required to carry out careful testing and implement reliable safety protocols after modeling. In essence, the reinforced "restraints" are required.

< p lang-ru "paraeid =" {4CEEA825-5-5D3D-4F7B-9A7D-638Cabd39218} {27} "Paraid =" “learning”) although it is unclear whether it is possible without a significant restriction of the functionality of the models.

< p lang = "ru-ru" paraeid = "{6b3949D2-1C8A-4acb-828e-Debd5555b6f4f4fd} {120}" Paraid = "1997472456" " XML: lang = "ru-ru" >< strong > Invisibility of intentions 0 ~/p > < p lang-ru "paraeid =" {6b3949d2-1c8a-4acb-828e-deebd55555b6f4fd} {122} "Paraid =" 984527974 "XML: lang =" ru-ru ".This threat is less relevant to modern generatives. However, for AGI, it can be fatal - imagine a car that has stopped listening to its developers. This is no longer a fantasy in the spirit of "Terminator", but a real threat: AGI can take actions that they know that they contradict the intentions of the creators.

< p lang = "ru-ru" paraeid = "{6b3949D2-1C8A-4acb-828e-deebd55555b6f4fd} {124}" Paraid = "244325752" XML: lang = "Ru-ru" Supervision "when two copies of AI check each other's conclusions. It is also recommended to conduct stress testing and constant monitoring to notice the signs that AI "got out of control".

< p lang = "ru-ru" paraeid = "{35d4e993-F90C-492B-92F3-43F7A32CAE85}} {123}" Paraid = "864598149" ~ ~ "RuIt is additionally suggested to isolate such systems in protected virtual media with direct human control - and necessarily with a "red button".

< p lang = "ru-ru" paraeid = "{6b3949D2-1C8A-4acb-828e-Debd5555b6f4f4fd} {126}" Paraid = "730375791" XML: lang = "ru-ru" >< strong > errors

< P lang-ru "paraeid =" {6b3949d2-1c8a-4acb-828e-deebd55555555555555555b6f4fd} {128} "Paraid =" 1338578711 Did not assume that it is possible - this is a mistake. Deepmind emphasizes that the military can start using AGI because of "competitive pressure" and this threatens more serious mistakes, because the AGI functionality will be much more difficult.

< p lang-ru "paraeid =" {6b3949d2-1c8a-4acb-828e-deebd55555b6f4f4fd} {130} "Paraid =" 2015675619 "XML: lang =" ruThere are few decisions here. Researchers propose to avoid excessive aggravation of AGI, to introduce it gradually and to limit its powers. Also proposed to pass commands through a "shield" - an intermediate system that checks their safety.

< p lang = "ru-ru" paraeid = "{6b3949D2-1C8A-4acb-828e-Debd555b6f4f4fd} {132}" Paraid = "1333178969" " XML: lang = "ru-ru" >< strong > Structural risks 0 ~/p > < P lang = "ru-ru" paraeid = "{6b3949d2-1c8a-4acb-828e-deebd555b6f4f4fd} {134}" Paraid = "375006391" XML: lang = "Ru-Ru". This is understood as unintentional but real consequences of introduction of multicomponent systems in the already complex human environment.

< P lang-ru "paraeid =" {1aA5666a5-5b7e-4576-9FE9-BFFBBB174AEDB4}} {105} "Paraid =" 254052022 "XML: lang =" ruAGI can, for example, generate such a convincing misinformation that we can no longer trust anyone. Or - slowly and imperceptibly - start controlling the economy and policy, for example, by developing complex tariff schemes. And one day we can find that we no longer control cars but they are us.

< P lang-ru "paraeid =" {6b394949d2-1c8a-4acb-828e-deebd555555555555555555555555555555b6f4f4f4f4f4fd} {136} "Paraid =" 267503738 "XML: Lang =" Ru-Ru " many factors: from people's behavior to infrastructure and institutes.

< p lang = "ru-ru" paraeid = "{6b3949d2-1c8a-4acb-828e-deebd55555b6f4fd} {136}" Paraid = "267503738" XML: lang = "Ru-ru" SRC = "/Uploads/wysiwyg/%d0%90%D1%80%D1%82%D0%D0%D0%BC/24032025/1_1488.png" Alt = "1_1488.png (144 KB)" Width = " />< /p > < P >< Em > Four Categories AGI RISK ASSECED DEEPMIND (PHOTOS: Google Deepmind) 0 >/P > ~ < p >< Br/>< Strong > What will be Agi in five years 0 ~/p > < p > No one knows for sure whether smart cars will appear in a few years, but many in the industry are sure that it is possible. The problem is that we still do not understand how the human mind can be embodied in the car. In recent years, we have really seen tremendous progress in generatives, but will it lead to a full AGI ?

DeepMind emphasizes that the work it has presented is not a definitive guide to AGI safety, but only a "starting point for vitally important conversations." If the team is right and AGI really does appear within five years, those conversations should start as soon as possible.


By Natasha Kumar

Natasha Kumar has been a reporter on the news desk since 2018. Before that she wrote about adolescence and family dynamics for Styles and was the legal affairs correspondent for the Metro desk. Before joining The Times Hub, Natasha Kumar worked as a staff writer at the Village Voice and as a freelancer for Newsday, The Wall Street Journal, GQ and Mirabella. To get in touch, contact me at natasha@thetimeshub.in or 1-800-268-7116.