Written by Ivan Petrov

Translated by Elizaveta Ovchinnikova

11 February 2021. Rossiyskaya Gazeta – Federal Issue No. 30(8381)

 

Signs of Deception

 

The Ministry of Internal Affairs of the Russian Federation has announced a tender for an IT project capable of recognising a new threat – video with substituted faces

 

Currently, deepfakes are used exclusively for entertainment. However, according to experts, scammers will arm themselves with this technology very soon. Then, answering a video call, a person will see a relative or friend asking, for example, for urgent help with money. Only later will it turn out that this close person never called at all. And that is just one of the schemes scammers will use in the future.

Deepfakes have become very popular in recent years. The technology is widely used to create realistic pornographic videos featuring celebrities who never took part in filming them, or to fabricate speeches by major political figures.

One of the most popular deepfakes on the Internet is a video featuring SpaceX founder Elon Musk, in which he appears to sing ‘The Grass of Home.’ The song is actually performed by the group ‘The Earthlings.’

The technology has already made its way to television. While the real actor Leonid Kuravlev, distraught by the death of his wife, withdrew from public life and stopped contacting anyone, his rejuvenated copy starred in a video advertisement. In the commercial, Georges Miloslavsky, the character played by Leonid Kuravlev in the 1973 film ‘Ivan Vasilievich Changes Professions,’ walks around Moscow, rides the train and a taxi, goes shopping and appears in a concert alongside modern stars. It is clear that another actor played the role and the young Kuravlev’s face was superimposed by computer afterwards. But the advertisement was shot so professionally that the fake is impossible to spot by eye.

‘With the help of this technology, it is easy to frame an inconvenient person by inserting his image into a video in which an immoral act or even a crime is being committed. That is why it is important to learn, as soon as possible, to detect such fakes quickly and effectively,’ an officer of the Russian MVD’s ‘Office K’ (the unit specialising in solving crimes committed with the use of modern technologies) commented on the MVD’s tender.

By the way, the use of deepfake tools is now prohibited by the largest websites, including Reddit and Twitter.

China has already adopted a law that prohibits the publication of deepfakes without a special warning label. The law came into force on 1 January 2020. In the near future, the use of this technology will no doubt be regulated around the world. California, the most high-tech US state, has decided to ban deepfakes in political campaigns at the legislative level. In Russia, there are no legislative initiatives to ban deepfakes yet.

Therefore, the police are determined to get ahead of the scammers in the near future and develop a tool for detecting such fake clips.

According to the tender terms published on the public procurement website, the winner will have to prepare a study on the topic ‘Research Into Possible Ways to Identify Signs of Intra-Frame Video Editing Performed Using Neural Networks.’ In addition, the contractor must draw up a technical specification for the development of a hardware and software complex able to detect faked elements not after the fact but in real time, while a video is being watched.

The contracting authority is the Federal Government Institution ‘Scientific and Production Association "Special Equipment and Communication",’ a structural subdivision of the Ministry of Internal Affairs of the Russian Federation responsible for the development of all its technical innovations. The initial contract value is 4,790,357 roubles.

The system has been given the codename ‘The Mirror (The Camel)’. The work is planned to proceed in two stages and to be completed by 30 November 2022.

It is noted that the new development should raise the level of scientific and technical support for the forensic units of the Ministry of Internal Affairs of the Russian Federation in conducting expert examinations and research of video recordings.

 

Expert Opinions

 

Yury Zhdanov, Lieutenant-General of the Ministry of Internal Affairs of Russia, President of the International Police Association Russian Section, Doctor of Law:

– Deepfake technology has existed since 2017, but until recently it was most often used in architecture, cinema, design and video game creation. With the development of digital technologies and artificial intelligence, the popularity of deepfakes is growing, and their uses go far beyond entertainment and marketing. Fraudsters will be able to use deepfakes in cyber attacks: to simulate the required image or voice and use it to convince a person to send money or hand over personal data.

Biometric identification makes it possible to radically change the customer experience and increase the convenience of a very wide range of services, including banking. At the same time, any such tool carries cyber security risks. With the rapid development of artificial intelligence techniques for image and speech synthesis, the risk of effective attacks on biometric systems is growing. Therefore, for critical operations or services, the banking sector uses biometrics as one of several authentication factors, never the only one.
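
To make this principle concrete, here is a minimal illustrative sketch in Python of the ‘one factor among several’ rule. The threshold, field names and scores are assumptions invented for the example, not any real bank’s implementation.

    # Illustrative sketch: a biometric match alone never approves a critical
    # operation; an independent second factor is always required.
    from dataclasses import dataclass

    BIOMETRIC_THRESHOLD = 0.90  # hypothetical similarity cut-off

    @dataclass
    class AuthAttempt:
        face_similarity: float  # score from a face-matching model, 0.0-1.0
        otp_valid: bool         # one-time password entered correctly
        device_trusted: bool    # request comes from a registered device

    def approve_critical_operation(a: AuthAttempt) -> bool:
        biometric_ok = a.face_similarity >= BIOMETRIC_THRESHOLD
        second_factor_ok = a.otp_valid or a.device_trusted
        return biometric_ok and second_factor_ok

    # A deepfake that fools the face matcher (similarity 0.97) still fails
    # without the one-time password or a trusted device:
    print(approve_critical_operation(AuthAttempt(0.97, False, False)))  # False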

In general, this technology poses a huge threat, and it is extremely difficult to fight. It is becoming harder and harder to figure out where the truth is and where the fake is. This is where powerful systems protecting people from scam materials created with deepfakes should come to the rescue. On 1 September 2020, Microsoft announced a new tool, Microsoft Video Authenticator, software that detects signs of tampering in video.

IT giants are developing anti-deepfake mechanisms: they train algorithms capable of reliably identifying manipulated video clips. Last autumn, Facebook organised the Deepfake Detection Challenge, a competition for the best deepfake recognition programme. The prize fund is ten million dollars. The results will be announced in June 2021.

In addition to voice and image synthesis, fraudsters may soon build and master tools for substituting or intercepting retinal patterns and fingerprints.

Deepfakes can be used to produce fake news and malicious hoaxes. There are now dozens of applications on the market that allow anyone to make a deepfake. The best known are Zao, DeepFaceLab, Deepfakes web and FakeApp, which uses TensorFlow.

 

Sergey Shipilov, IT Specialist:

– Today, when the capabilities of deepfake technology, which appeared in film studios not so long ago, have become available to ordinary smartphone users, it would not be an exaggeration to say that the world (in every sense of the word) is on the verge of major upheavals. Deepfakes (a blend of ‘deep learning’ and ‘fake’) were first talked about in 2017. But the forerunner of such attempts can arguably be dated to 2013, when Audrey Hepburn, who had been dead for many years, appeared in a video advertisement. The actress advertised Galaxy chocolate against scenery reminiscent of the Italian Amalfi coast, in the spirit of the film ‘Roman Holiday.’

Since then, as they say, a lot of water has flowed under the bridge, and the technology has come incredibly far. Now we are no longer talking about a single programme but about entire neural networks based on artificial intelligence that compete with each other over image quality. This algorithmic model is called a generative adversarial network (GAN). Two networks work against each other: one, the generator, learns from a person’s photos or videos and synthesises new images – relatively speaking, it makes the actor Kuravlev ‘live’ and ‘act’ in a bank advertisement – while the other, the discriminator, tries to tell the synthesised image from the authentic one. This competition between the networks continues until the discriminator begins to confuse the generated ‘fake’ images with real ones.
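
For readers who want to see the mechanics, here is a minimal, illustrative GAN training loop in Python (PyTorch) on toy 64-dimensional vectors standing in for images. All sizes, learning rates and the random ‘real’ data are assumptions made for the example; real face-swap tools add encoder/decoder networks and far larger models.

    import torch
    import torch.nn as nn

    IMG_DIM, NOISE_DIM, BATCH = 64, 16, 32  # toy sizes for illustration

    # Generator: turns random noise into a fake "image" vector.
    generator = nn.Sequential(
        nn.Linear(NOISE_DIM, 128), nn.ReLU(),
        nn.Linear(128, IMG_DIM), nn.Tanh(),
    )
    # Discriminator: outputs a logit saying "real" or "fake".
    discriminator = nn.Sequential(
        nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, 1),
    )

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(200):
        real = torch.randn(BATCH, IMG_DIM)   # stand-in for authentic photos
        fake = generator(torch.randn(BATCH, NOISE_DIM))

        # Discriminator step: label authentic samples 1, generated ones 0.
        d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1)) +
                  loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: try to make the discriminator call fakes "real".
        g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Training stops, in principle, when the discriminator can no longer
    # reliably tell generated images from authentic ones.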

So far, all this is perceived as a game. But the prospects for the development of such technologies are already clearly visible, and a crime wave could engulf both citizens and states in almost every social sphere. For example, a compromising deepfake video can put an end to the political career of a public figure or contribute to his loss in an election. Evidence in an administrative or criminal case can be falsified. There are already recurring cases in which valuables left by citizens disappear from bank safe-deposit boxes. It is hard to prove that a citizen put real money in the box rather than cut paper: surveillance cameras show only that a person put something in, not what exactly. Now imagine that a fraudulent bank presents to the court a deepfake video from its cameras in which the citizen not only deposits something but ‘comes back’ a few days later and ‘takes’ everything out of the box. Then it is the victim who stands trial for fraud against an ‘honest bank.’ Nor can it be ruled out that facts, both historical and modern, will be distorted with the help of deepfakes, especially in such sensitive and explosive areas as interethnic and interreligious relations – say, when allegedly ‘archival’ photos and videos of some state’s army committing crimes suddenly appear out of nowhere.

Real actors may lose work, or at least part of their fees: instead of them, the celebrities of bygone eras will begin to star in future films.

But if it ever becomes possible to see, say, Elvis Presley or Michael Jackson performing a part from ‘La Traviata,’ it will most likely not be very soon. Oddly enough, in all this latest high-tech fakery, voice reproduction remains, and will long remain, the most difficult part. Although it is quite possible that, as the technology develops, this problem will be solved too.

The list of possible malicious uses of deepfakes goes on and on. So legislators need to pay attention to this problem, and quickly decide on liability for commissioning, making and distributing deepfakes.