“The generated image of a person is sent to friends or relatives via instant messengers or social networks. In a short fake video, a virtual clone of a real person supposedly talks about a problem (an illness, an accident, a dismissal) and asks for money to be transferred to a certain account. In some cases, scammers create deepfakes of employers, employees of government agencies, or well-known figures in the field in which the potential victim works,” the Central Bank reported.
The Central Bank recommends not rushing to transfer money in order to protect yourself from such scams. If a friend asks for financial assistance, it is better to call them back and clarify the circumstances. Do not rely on text messages: attackers sometimes create fake pages with a person’s name and photo specifically to send such messages.
“Make sure to call the person on whose behalf the money is being requested and verify the information. If a call is not possible, ask a personal question in the message that only your friend knows the answer to. Video messages are often given away by the speaker’s monotonous speech, unnatural facial expressions and audio defects – the most obvious signs of a deepfake,” the Central Bank warned.
Experts say that non-specialists can distinguish a fake image from a real one, even without using special technical means, if they are careful.
“You can try to look at how the image is constructed along the edges of the face: whether it changes, whether there are shifts, reflections, or unnatural jumps in contrast and brightness in the recording. Another interesting way to ‘spot’ a deepfake is to pay attention to the reflections in the eyes and to the teeth. In most cases, a person’s teeth are visible during a conversation, but in deepfakes they are often rendered poorly. Such clues are not always easy to notice if the recording quality is low, which is why deepfake videos are often published in artificially reduced resolution,” explained Andrei Kuznetsov, head of the FusionBrain laboratory at the AIRI Artificial Intelligence Research Institute.
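To make the edge-of-face cue concrete, below is a minimal sketch of how such a check could look in code. It is only an illustration under stated assumptions: Python with OpenCV is assumed (the article names no tools), the Haar cascade and the 0.25 band width are arbitrary choices, and this is in no way a real deepfake detector.

```python
# Minimal sketch: compare brightness and contrast inside a detected face box
# with a band of pixels around it. A crude face swap can leave a visible "seam"
# of mismatched brightness or contrast along the face boundary.
import cv2
import numpy as np

def face_edge_stats(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    results = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        band = int(0.25 * w)                       # width of the ring around the face box
        top, left = max(0, y - band), max(0, x - band)
        around = gray[top:y + h + band, left:x + w + band].astype(np.float64)
        mask = np.ones(around.shape, dtype=bool)
        mask[y - top:y - top + h, x - left:x - left + w] = False   # drop the face box itself
        ring = around[mask]
        face = gray[y:y + h, x:x + w].astype(np.float64)
        # Large gaps in mean brightness or local contrast between the face and
        # the surrounding band are one hint worth inspecting more closely.
        results.append({
            "brightness_gap": abs(face.mean() - ring.mean()),
            "contrast_gap": abs(face.std() - ring.std()),
        })
    return results
```

A noticeably large gap is only a hint that deserves a closer look by eye; compression, lighting and makeup can produce the same effect in genuine footage.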
Image defects visible to the naked eye will disappear as technology improves, says Vladimir Arlazarov, CEO of Smart Engines, PhD in technical sciences.
“Currently, a deepfake can be identified by signs such as excessively smooth movements, tremors and other anomalies. However, there is concern that as the technology improves, these signs will gradually disappear. Even now, if only a person’s face is replaced in a recording, such a deepfake can be detected only with great difficulty without a deep analysis of light and shade and other characteristics,” the expert noted.
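As a rough illustration of the motion cue mentioned above, one can profile how the overall frame-to-frame motion in a clip fluctuates using dense optical flow. The sketch below again assumes Python with OpenCV; what counts as “too smooth” or as a “tremor” is left open, and real detection systems are far more sophisticated.

```python
# Rough sketch of the motion-anomaly cue: track how the average dense optical
# flow between consecutive frames fluctuates over time. The returned numbers
# only profile the motion; no detection threshold is implied.
import cv2
import numpy as np

def motion_profile(video_path, max_frames=300):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise ValueError("could not read video: " + video_path)
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    magnitudes = []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitudes.append(float(np.linalg.norm(flow, axis=2).mean()))
        prev_gray = gray
    cap.release()
    mags = np.asarray(magnitudes)
    if mags.size < 2:
        raise ValueError("not enough frames to profile motion")
    # Unnaturally low variation suggests over-smoothed, generated motion;
    # isolated sharp spikes ("tremors") are another anomaly worth inspecting.
    return {"mean": float(mags.mean()), "std": float(mags.std()),
            "max_jump": float(np.abs(np.diff(mags)).max())}
```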
Attackers have learned to fake audio so well that an untrained listener can no longer tell a forged recording from a real one by technical details, Andrey Kuznetsov and Vladimir Arlazarov agree. After receiving an audio message with a request to transfer money, you should start a dialogue to verify the sender: ask questions that only this person knows the answers to. Better still, try to contact them directly, for example with an ordinary phone call.
It is true that even real-time video communication can no longer be a 100% guarantee of protection against forgery. “There are technologies that allow deepfake video to be generated in real time. However, in order to connect such technology to a video call, fraudsters first need to find a video with the face of the person they want to impersonate. A ‘mask’ is created from that video and overlaid on the person in the frame,” said Andrei Kuznetsov.
A vivid example of the technology in action was demonstrated to the whole of Russia in December 2023, when a student at St. Petersburg State University recorded a video message to President Vladimir Putin using the image and voice of the head of state himself. The deepfake turned out to be surprisingly realistic, which was made possible, among other things, by the large number of Putin videos available on the Internet.
The more photos, videos and audio recordings a person uploads to social networks and other public Internet resources, the easier it is for attackers to create a realistic deepfake. “There are various automatic programs that can download images from social networks. Using such databases, a neural network can be trained that, given photographs of a person, reproduces his or her appearance in great detail,” Vladimir Arlazarov confirms.
You can protect yourself from this either by avoiding public activity or by being more vigilant. Unfortunately, there is still no reason to rely on purely technical means of protection, because deepfake recognition algorithms have not yet become “ready-made” solutions for ordinary users.
“The more data about a person is freely available, the easier it is to ‘imitate’ him or her. If a profession requires regular appearances in public, then the main hope lies in good recognition methods and an appropriate legislative framework. In the European Union, for example, any image created using artificial intelligence must be labeled accordingly,” Arlazarov added.
As for ordinary people, the vast majority of them have already published enough information about themselves for deepfakes to be created. However, scammers have no interest in such fakes until they can be used to gain access to money or other valuables, for example through biometric data. Therefore, the use of biometrics should be treated with caution, the expert advises.
“Scammers are attracted by attempts to use biometric technologies everywhere, even where this is not always justified, and major vendors have begun to notice this alarming trend. For example, Microsoft recently announced a non-biometric facial verification technology similar to ours, in which the face is checked against identification documents. Major banks, wanting to avoid incidents with deepfakes, are also leaning towards this approach, and the market will move from biometric identification to document verification,” he predicted.