
Hackers have learned to turn conversations with AI into criminal topics – Rossiyskaya Gazeta

Date: May 19, 2024 Time: 07:47:57

The scheme, called Crescendo, was discovered by Microsoft researchers. In a blog post, they explained that an attacker first sends the chatbot several harmless requests, for example asking about the history of a particular weapon, and then gradually steers the conversation toward instructions for making it. In most cases, a successful jailbreak took fewer than ten questions.

The researchers have already disclosed the vulnerability to the creators of the affected neural networks, who have since added additional security measures to their systems.

“In theory, experienced attackers could use this tactic when preparing an attack. In particular, it makes it possible to find information that the model picked up during training on open data, information that the victim company published by mistake and later took down,” said Innostage product manager Evgeniy Surkov.

At the same time, while data can be removed from an ordinary search engine, it is almost impossible to remove it from a model: the information is already built into its general “picture of the world” and can only be eliminated by, for example, retraining the system from scratch.

As for legitimately open data, the expert explained that the AI interface simply provides a convenient way to search for and organize information that could also be found by other means.

Alexander Bykov, head of the security services department at the cloud provider NUBES, identified two ways in which hackers exploit the capabilities of neural networks.

The first is to use AI itself as a tool for committing crimes: for example, attackers can instruct it to collect the data they need.

“In the early days of neural networks, you could simply ask something like ‘hack this or that site’ and the AI would carry out the command,” Bykov noted. “This approach used to be easier to exploit because neural networks had free access to the web, which gave them a virtually unlimited data resource.”

This is now harder to do: large neural networks have had this capability removed.

The second technique is known as DAN. It involves circumventing the AI’s internal filters so that it answers questions on topics that are off-limits to it. In other words, the machine is told to “impersonate another chatbot that has no limitations” or to “play a game in which it explains how to write malware.”

The creators of neural networks periodically patch these vulnerabilities, but hackers still manage to steer the systems to the “dark side.” Bykov cited an example in which attackers used such DAN prompts to force an AI to generate activation keys for Microsoft products.

“On average, two out of five generated keys worked, but the approach itself is essentially an attack on intellectual property carried out with artificial intelligence,” the expert emphasized.

According to Vladimir Arlazarov, PhD in Technical Sciences and CEO of Smart Engines, cybercriminals can ask a neural network to generate not only a harmless image or video, but also new malware.

Hackers can also gain access to parts of the training data, which often contains sensitive or personal information.

“This in itself is unpleasant, but it is even worse when that information begins to leak online or be used as a means of blackmail,” he said.


Hansen Taylor
Hansen Taylor is a full-time editor for ePrimefeed covering sports and movie news.