Fake News in the Ukraine-Russia Disinformation War

The blonde Ukrainian girl raised her fist at the Russian soldier. “Go back to your country,” she yelled at him – at least according to the caption for the video that has been shared widely on social media and been watched millions of times over the past week.

The problem, as numerous fact-checkers have revealed, is that the soldier isn’t Russian and the girl isn’t Ukrainian. The soldier is Israeli and the girl is none other than Palestinian Ahed Tamimi, who rose to fame a decade ago thanks to footage shot in the West Bank showing her confronting an Israeli soldier.

The fact that an old West Bank video is playing a role in the information war during the Russia-Ukraine conflict isn’t a big surprise. Repurposing old videos is a common disinformation tactic and was used extensively by Russia in the lead-up to its invasion. And it’s now also being employed by activists trying to win support for the Ukrainians.

In this online information war, then, sorting fact from fiction is a tough task. But social-media verification tools and the investigators wielding them – the field of visual forensics – can help. The following is a quick guide to help ensure you don’t get duped or inadvertently dupe others.

The four Ms

Bellingcat, the investigative journalism group that specializes in fact-checking and open-source intelligence, or OSINT, lists its “four Ms” of Russian visual disinformation: misdate, misrepresent, mislocate and modify.

The first troika – misdate, misrepresent, mislocate – all involve changing a video’s context: presenting, say, footage from the Middle East as if it were from Ukraine, as with the Tamimi video. Bellingcat and other fact-checkers like Reuters have found that Russian state outlets have “recast” old videos and satellite images from other conflicts and contexts many times in recent weeks.

Similar tactics have been used by cyberwarriors from Russia, but also by pro-Ukrainian hacktivists. For example, the hacker group Kelvinsecurity, which claimed to have targeted Russia’s nuclear facilities, was actually posting old images, Israeli cybersecurity giant Check Point reported Sunday. And a group that said it hacked a Russian bank also posted mislabeled screenshots to prove its claim.

The fourth M, modify, means actual manipulation of the video – not just its purported date, location or context. In the past, images relating to the Malaysia Airlines plane that was shot down over eastern Ukraine in 2014 were doctored by Russian state media to boost Moscow’s narrative. Fake or misrepresented videos of Ukraine are being spread on Telegram and by pro-Kremlin mouthpieces across the internet.

In a recent example, a video of a burning Ukrainian tank from Russia’s 2014 invasion of Crimea was modified to make it seem like a destroyed Russian tank from the current fighting. The forgery was discovered by online digital sleuths.

Bellingcat also revealed that forgery techniques weren’t just being used digitally but also in real life, with pro-Russian separatists using dead bodies to stage scenes of destruction meant to justify a Russian invasion. The materials were then used by official Russian media, highlighting disinformation’s role in the Russian offensive.

Being aware of these four techniques – and following Bellingcat’s extremely active (and excellent) Twitter account – is the first step toward gaining the necessary skepticism.

Sometimes basic skepticism can go a long way: if you can’t find any source beyond a video itself saying that such an incident occurred, there’s very good reason to suspect it’s false. In the Russia-Ukraine war, this goes for fighting that allegedly took place at a certain place and time.

If you’d like to delve deeper, other open-source tools exist. Sites like Flightradar24 can reveal the presence of (some) military planes over a certain area at a certain time. Others, like the Centre for Information Resilience, keep a running record of attacks and incidents that have been confirmed. Sometimes, though, a critical eye alone isn’t enough to debunk a fake or misleading video or image.

The three Ws

To do some open source visual forensics – the term for trying to figure out if a video or image is real or accurately presented – Bellingcat’s “A Beginner’s Guide to Social Media Verification” is an extremely useful tool.

The goal, the group explains, is to ascertain a visual’s origin. This requires figuring out who the source of the photo or video is, where it was taken and when. There are a number of methods.

Regarding the “who,” basic social media savvy and a critical eye can go a long way. Check out the account that posted the video. Does it seem legitimate? Does it provide a source? When was it opened – is it extremely new? What else does it post?

As a general rule, when a Twitter account is only a few days old and has few followers, but its tweets are getting massive traction, don’t believe it. Still not sure? The website Bot Sentinel will analyze any Twitter account and its followers and flag bot-like activity.
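
That rule of thumb is simple enough to put into code. Here’s a minimal sketch in Python – the inputs and thresholds are illustrative assumptions, not Bot Sentinel’s actual criteria:

```python
def looks_like_amplification(account_age_days: int, followers: int,
                             engagements: int) -> bool:
    """Flag the pattern described above: a brand-new, barely followed
    account whose posts are nonetheless getting massive traction.
    Thresholds are illustrative, not authoritative."""
    return account_age_days < 7 and followers < 100 and engagements > 10_000

# A 3-day-old account with 40 followers and 50,000 retweets-plus-likes:
print(looks_like_amplification(3, 40, 50_000))  # True - treat with suspicion
```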

Reverse visual engineering

Answering the “where” and “when” can be done in a number of ways. First, examine the video or image and search for text or any detail that can shed light on its location or date. Is the car’s license plate number backwards? Maybe the image has been flipped to try to hide something. In what language is the sign on the store in the background? If it’s Arabic, it’s probably not from Kyiv.

A more advanced tool is reverse image search. With websites like Google Images and Bing, you can upload an image and search the web for it. So if the image appeared in other contexts in the past, you’ll know. Countless images fail this simple test.
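
If you check images often, the search itself can be scripted. The following Python sketch opens a reverse search for an image’s URL in the browser; the query-URL patterns are assumptions based on how these engines currently work and may change:

```python
import webbrowser
from urllib.parse import quote

# Query-URL patterns observed at the time of writing; treat them as
# assumptions rather than stable APIs - engines update their interfaces.
ENGINES = {
    "Google": "https://www.google.com/searchbyimage?image_url={}",
    "TinEye": "https://tineye.com/search?url={}",
}

def reverse_search(image_url: str) -> None:
    """Open a reverse image search for image_url in each engine."""
    for pattern in ENGINES.values():
        webbrowser.open(pattern.format(quote(image_url, safe="")))

reverse_search("https://example.com/suspicious-photo.jpg")  # hypothetical URL
```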

Take one Twitter account purportedly belonging to a Canadian named Steve Kevin. A simple search for the user’s profile photo revealed that it had been used by other accounts in the past, so Steve is probably a fake user pushing a pro-Kremlin line amplified by bots or other fake accounts.

In another case, the image of the so-called Ghost of Kyiv – an allegedly daring MiG pilot battling the Russians – was widely circulated as a symbol of the Ukrainian defense effort. A reverse image search made clear that the image actually came from an event at which Ukrainian air force pilots received new helmets.

Websites like FotoForensics and apps like Serelay offer a more detailed analysis of both the web and the file itself, flagging any manipulation of an image. They can also surface a photo’s so-called metadata, which can help you determine a picture’s originality. InVID Verification, a browser plug-in, provides the same service for videos.
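
FotoForensics’ best-known check is error level analysis: resave a JPEG and compare it with the original, since regions edited after the last save recompress differently and stand out. Here’s a rough do-it-yourself sketch using Python’s Pillow library – a simplified approximation of the technique, not FotoForensics’ own code, and its output takes practice to interpret:

```python
from io import BytesIO
from PIL import Image, ImageChops, ImageEnhance  # pip install Pillow

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Resave the image as JPEG and diff it against the original.
    Areas edited after the last save recompress differently and
    show up brighter in the result."""
    original = Image.open(path).convert("RGB")
    buffer = BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)  # rewind so the resaved copy can be reopened
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # Amplify the (usually faint) differences so they're visible.
    return ImageEnhance.Brightness(diff).enhance(20)

error_level_analysis("suspect.jpg").show()  # hypothetical file
```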

Metadata and the sun

Every digital image carries metadata – for example, when the photo was taken and sometimes also where. It’s useless for screen captures, which lack the original data, but extremely useful if you’ve managed to find – through a reverse image search – an original photo of the scene.

Metadata can be found by looking at a file’s information (right-click and select “show details” or “file information”), and it’s also provided by FotoForensics, Serelay and the InVID Verification plug-in. It supplies key information for answering the “when” and “where.”
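
You can also pull the metadata yourself: an image’s EXIF block can be read in a few lines of Python with the Pillow library, bearing in mind that most social platforms strip it on upload:

```python
from PIL import Image, ExifTags  # third-party: pip install Pillow

def read_exif(path: str) -> dict:
    """Return an image's EXIF tags keyed by name. Screenshots and
    platform-reuploaded images will usually come back empty."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}

tags = read_exif("original.jpg")  # hypothetical file
print(tags.get("DateTime"), tags.get("Model"))  # when, and on what camera
```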

With a bit more investigative work and corroboration, metadata can also reveal purposeful misrepresentation. For example, the app SunCalc shows the sun’s position at any time and place on the map; with it, you can try to match the shadows in an image to the purported location and time, says Arizona State University’s News Co/Lab, an open-access digital literacy organization focused on journalism.
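
SunCalc itself is point-and-click, but the same shadow check can be scripted – here with the third-party Python library astral; the coordinates, time and subject height below are illustrative assumptions:

```python
from datetime import datetime, timezone
from math import radians, tan

from astral import Observer          # third-party: pip install astral
from astral.sun import azimuth, elevation

# Purported place and time of the photo - Kyiv at noon UTC, as an example.
observer = Observer(latitude=50.45, longitude=30.52)
when = datetime(2022, 2, 24, 12, 0, tzinfo=timezone.utc)

sun_azimuth = azimuth(observer, when)      # compass bearing of the sun
sun_elevation = elevation(observer, when)  # angle above the horizon

# A 1.8-meter person should cast a shadow of roughly this length
# (only meaningful while the sun is up, i.e. elevation > 0);
# a large mismatch with the image suggests a wrong time or place.
shadow = 1.8 / tan(radians(sun_elevation))
print(f"azimuth {sun_azimuth:.0f}°, elevation {sun_elevation:.1f}°, "
      f"shadow ≈ {shadow:.1f} m")
```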

How to spot a deep fake

So-called synthetic media, or deep fakes – computer-generated visuals that seem real – are usually images. Fully fake videos are still rare.

Facebook has found deep fakes serving, for example, as the profile pictures of fake accounts pushing out polarizing content on Russia and Ukraine. These fake accounts aren’t necessarily bots, but they play the same role: amplifying Russian state propaganda and spreading misinformation.

A number of tell-tale signs can help you spot a deep fake, and tools are available to help you determine whether a user’s profile – and the face in it – is real.

Generic stock images of people who don’t exist have become a niche market of sorts, and not always a nefarious one. The website This Person Does Not Exist and its Random Face Generator provide insight into how to spot fake faces. The MIT Media Lab also provides a website for training yourself to identify them.

According to these two sites and other experts, the best way to spot a deep fake is to look at a person’s ears, eyes and hair (facial or otherwise).

Many low-end deep fakes – “cheap fakes” like those generated by This Person Does Not Exist – fail on the fine details, and these are the fakes most commonly used online. Ears tend to be asymmetrical, with some fake avatars sporting two different ears or a different earring on each. Eyes are sometimes misshapen or mismatched. And hair can look synthetic, with a strange hairline that sometimes suffers from pixelation.

Such fact-checking is increasingly being done by news agencies like Reuters and The Associated Press, as well as by Bellingcat and others, to flag disinformation that has gone viral. Sometimes the best defense is simply to stay abreast of news on disinformation and follow trusted sources already doing the investigative work.

All told, make sure you’re only retweeting and sharing content you know is true or that comes from a trusted source that can provide context.

Omer Benjakob