Malware hiding in AI neural networks


A trio of researchers, Zhi Wang, Chaoge Liu, and Xiang Cui, has found that malware can be hidden inside the neural networks that power AI systems. Their paper, posted on the arXiv preprint server, describes their experiments with embedding malicious code in a network’s parameters.

As computer technology grows more complex, so do criminals’ attempts to break into the devices running it, whether to delete or encrypt data and demand ransom for its recovery or to pursue other objectives. In their new study, the researchers describe a fresh technique for infecting computers that run artificial intelligence applications.

Neural networks process data through layers of interconnected artificial neurons, an architecture loosely modeled on the human brain. The research team found that those same networks can be made to carry foreign code.

The very nature of a neural network makes this infiltration possible: the injected data only has to mimic the network’s existing structure, a little like the way new memories are layered into the human brain. The researchers demonstrated the attack by embedding malware into the network behind AlexNet, a well-known image-classification model. The payload was substantial, 36.9 MiB of malware woven into the model’s parameters. They chose the layer they judged best suited for injection and inserted the code there. They worked with a model that had already been trained, though they cautioned that attackers might prefer to target an untrained network, since embedding before training would have less impact on the network as a whole.
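To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of the general idea: hiding arbitrary bytes in the low-order bytes of a model’s float32 weights and reading them back out. The function names, the three-bytes-per-parameter packing, and the NumPy implementation are this article’s illustrative assumptions, not the authors’ actual code, and the payload is a harmless string.

```python
import numpy as np

def embed_bytes(weights: np.ndarray, payload: bytes,
                bytes_per_float: int = 3) -> np.ndarray:
    """Overwrite the low-order (little-endian) bytes of each float32
    weight with payload bytes. The sign bit and most of the exponent
    sit in the untouched high byte, so each weight keeps its sign and
    roughly its original magnitude."""
    flat = weights.astype(np.float32).ravel()   # contiguous copy
    if len(payload) > flat.size * bytes_per_float:
        raise ValueError("payload exceeds embedding capacity")
    raw = flat.view(np.uint8).reshape(-1, 4)    # 4 raw bytes per float32
    for i, b in enumerate(payload):
        raw[i // bytes_per_float, i % bytes_per_float] = b
    return flat.reshape(weights.shape)

def extract_bytes(weights: np.ndarray, length: int,
                  bytes_per_float: int = 3) -> bytes:
    """Read the hidden bytes back out in the same order."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return bytes(raw[i // bytes_per_float, i % bytes_per_float]
                 for i in range(length))

# Harmless demonstration payload.
w = np.random.randn(4096).astype(np.float32)
secret = b"this string stands in for a real payload"
w_stego = embed_bytes(w, secret)
assert extract_bytes(w_stego, len(secret)) == secret
```

A model altered this way still loads and runs normally, which is what makes the hiding place attractive: the payload lives inside values the rest of the system treats as ordinary numbers.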

Not only did ordinary antivirus software fail to detect the malware, but the AI system’s behavior remained nearly unchanged after infection. If delivered stealthily, the researchers note, the infection could therefore go unnoticed.
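A rough, hypothetical way to sanity-check that result at toy scale, reusing the embed_bytes sketch above: plant a payload in one layer of a small stand-in model (a real AlexNet is far too large to reproduce here) and measure how much its outputs move.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy stand-in for AlexNet; the hidden layer's weights host the payload.
model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))

x = torch.randn(512, 64)
before = model(x).detach()

layer = model[0]
w = layer.weight.detach().numpy()                    # shape (256, 64)
w_stego = embed_bytes(w, b"stand-in payload " * 64)  # ~1 KiB hidden
with torch.no_grad():
    layer.weight.copy_(torch.from_numpy(w_stego))

after = model(x).detach()
print("max output drift:", (before - after).abs().max().item())
```

On a real trained network the corresponding check is task accuracy before and after embedding, which is the measurement behind the researchers’ claim that the infected model’s behavior barely changed.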

The researchers point out that merely hiding malware inside a neural network is not harmful in itself: whoever planted the code still has to find a way to extract and execute it on the target machine. They also note that, now that the technique is public, antivirus software can be updated to detect it.
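For the execution half of that equation, the missing piece is a receiver: something on the target machine that knows where the hidden bytes live, pulls them out, and only then assembles and runs them. The following sketch covers only the inert extract-and-verify step; the function name and hash check are assumptions for illustration, not the paper’s actual protocol.

```python
import hashlib
import numpy as np

def recover_payload(weights: np.ndarray, length: int,
                    sha256_hex: str, bytes_per_float: int = 3) -> bytes:
    """Receiver-side step: re-extract the hidden bytes (via the
    extract_bytes sketch above) and verify their integrity. Extraction
    alone does nothing; executing the result would require a separate,
    deliberate mechanism."""
    data = extract_bytes(weights, length, bytes_per_float)
    if hashlib.sha256(data).hexdigest() != sha256_hex:
        raise ValueError("no intact payload found in these weights")
    return data
```

The same symmetry helps defenders: a scanner that knows, or guesses, the packing scheme can perform the extraction itself and match the assembled bytes against known malware signatures, which is presumably the kind of antivirus update the researchers have in mind.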
