Scientists find key vulnerability in AI security

Researchers have uncovered a serious security vulnerability in neural networks, showing that artificial intelligence models can be stolen by analyzing the electromagnetic signals of the devices they run on. The technique, demonstrated on a Google Edge TPU, allows an attacker to recreate the architecture and functionality of an AI model with 99.91% accuracy, even without prior knowledge of its characteristics, SciTechDaily reports.

The method is based on monitoring changes in the electromagnetic field while the model is running. The captured signals are compared against a database containing the signatures of other models, letting the researchers reconstruct the AI's layers step by step from the electromagnetic "signature" each one produces. This makes it possible to create a copy of the model without direct access to it.
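The article does not include the researchers' code, but the layer-matching idea can be sketched in a few lines of Python. Everything below is illustrative: the `signature_db` dictionary, the layer names, and the correlation-style scoring are stand-ins for the actual signal-processing pipeline, which operates on real electromagnetic traces captured from the chip.

```python
import numpy as np

def normalized(trace: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance copy of a trace so amplitudes are comparable."""
    return (trace - trace.mean()) / (trace.std() + 1e-12)

def match_layer(segment: np.ndarray,
                signature_db: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return the stored layer signature that best correlates with `segment`.

    `segment` is a 1-D EM trace captured while one unknown layer executed;
    `signature_db` maps hypothetical layer descriptions to reference traces.
    """
    seg = normalized(segment)
    best_name, best_score = "", -np.inf
    for name, reference in signature_db.items():
        ref = normalized(reference)
        n = min(len(seg), len(ref))          # compare the overlapping portion
        score = float(np.dot(seg[:n], ref[:n]) / n)  # correlation-like similarity
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Toy usage: two made-up "signatures" and a noisy observation of the first.
rng = np.random.default_rng(0)
db = {
    "conv2d_3x3_64ch": rng.standard_normal(1000),
    "dense_128units": rng.standard_normal(1000),
}
observed = db["conv2d_3x3_64ch"] + 0.3 * rng.standard_normal(1000)
print(match_layer(observed, db))  # -> ('conv2d_3x3_64ch', ~0.95)
```

In the real attack, each matched layer narrows the candidates for the next one, so the full architecture is recovered incrementally rather than guessed in one shot.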

The technique works on many devices, provided the attacker has access to the target device while the AI model is running, as well as to a second device with similar characteristics. The demonstration used a commercial Google Edge TPU chip, which is widely used in end-user devices.

The attack not only compromises intellectual property; a stolen copy can also expose a model's weaknesses, giving adversaries a blueprint for further attacks. The authors urge developers to implement safeguards to protect their models.

The work, which was supported by the US National Science Foundation, was presented at the Conference on Cryptographic Hardware and Embedded Systems (CHES). The researchers also notified Google of the vulnerability.

By Natasha Kumar
