
Motivation, telltale signs, the larger narratives


AT A GLANCE:

  • Before looking at the telltale signs, it’s important to know who can be the target subject of deepfake videos, and the motivations behind such content.
  • Researchers, including those from MIT, have identified a list of telltale signs to look out for in order to detect deepfake videos such as inconsistencies in the eyes and face, pixelation, and faulty lip-syncing.
  • While deepfake detection technology is helpful, experts say that in this “arms race,” humans and machines will have to collaborate to fight off the dangers of deepfakes most effectively.
  • While deepfake videos carry readily appreciable dangers, the use of AI in influence operations has so far focused mostly on increasing the output of traditional content such as social media posts and bots, while deepfake audio has already shown its impact.
  • The potential power and influence of a deepfake video depends on the larger narratives that support it.

MANILA, Philippines – Deepfake videos and audio have spread this year, signaling the advent of a new disinformation age in which people must contend not just with traditional fake posts on social media, but with eerily realistic deepfake videos and audio they will have to learn to identify.

Just to name a few, this year, we saw deepfake GMA anchors Ivan Mayrina and Susan Enriquez promoting a free ‘Mama Mary necklace from Vatican, Italy’; a deepfake Maria Ressa promoting Bitcoin; personalities such as Vilma Santos, Lucy Torres, Regine Velasquez, and again, Maria Ressa, each with their own deepfake videos promoting fake diabetes cures; and President Ferdinand “Bongbong” Marcos Jr., who had a deepfake audio ordering a military attack and a deepfake video purporting to show him using a psychoactive substance.

There are three things to be learned from these incidents right away: 

  1. Public personalities are prime targets. News anchors, journalists, political leaders, and celebrities are deepfaked because they are known to carry influence over people. 
  2. Profit can be a motivation in some cases. Taking advantage of people’s trust, the propagators of a deepfake video may attempt to scam people in a variety of ways, such as selling a product with promises of a cure, or getting people to send money in an investment ruse. 
  3. Deepfake videos can be used to create political chaos. Aside from profit, public figures such as top officials can be targeted in an effort to harm their reputation or to sow chaos. A deepfake can potentially be dangerous in terms of national security, as in the case of the fake Marcos audio ordering a military attack. 

What can you do? Here are our tips: 

  1. If you see a high-profile personality in a video that’s stating something that they might not usually say or do — such as a journalist asking people to invest in cryptocurrency, or promoting a free product in exchange for your information — search for other credible sources such as the personality’s own social media page, trusted news sites, or the YouTube channels of TV shows where the original video might have been sourced.

    With the awareness that profit can be a motivation for these schemes, be on guard when a person of influence is being used to promote something.

  2. In other cases, if a video is very sensational, or if it provokes a strong, immediate emotional reaction, that should also tell you to double-check and look for other sources that may verify or debunk the claims of the video. 

    Huge claims, like a president allegedly caught on camera doing drugs or ordering a military attack, fall in this category. You would want to check whether other credible sources of information are verifying or reporting on them. 



Basic deepfake signs

Apart from being aware of who can be a target of deepfake videos and what the motivations might be, there are also artifacts that people can learn to spot in order to detect a deepfake video. 

The Massachusetts Institute of Technology (MIT) Media Lab in 2019 launched the Detect Fakes project (you can take its test to gauge your own deepfake detection aptitude), which argues for exposing people to deepfake content in the proper forum in order to train their senses for detection. 

“We hypothesized that the exposure of how DeepFakes look and the experience of detecting subtle computational manipulations will increase people’s ability to discern a wide-range of video manipulations in the future,” MIT said.

It added, “When it comes to AI-manipulated media, there’s no single tell-tale sign of how to spot a fake. Nonetheless, there are several deepfake artifacts that you can be on the look out for.”

Through its research, MIT identified telltale signs of a deepfake video. 

The clues can be seen by focusing on the face, specifically the skin on the cheeks and the forehead. Is the skin too smooth or too wrinkly? Compare the wrinkles around the eyes and the apparent age of the hair with the quality of the skin: are these elements consistent with the age of the person? 

Looking at the eyes and eyebrows, MIT advised watching for misplaced shadows, saying, “Deepfakes may fail to fully represent the natural physics of a scene.” 

The same question of physics applies when the subject is wearing glasses. Ask whether the glare looks natural, and notice how its angle changes as the person moves. Is it shifting consistently?

With regards to eyes, research by University of Hull master’s student Adejumoke Owolabi found that real faces will have similar reflection patterns in both eyes, while deepfakes may show reflection inconsistencies between the two. Owolabi illustrated this in a set of images.

In this set of deepfake images of eyes, the reflections (highlighted in color in the right column) have different patterns when looked at closely:

[Image: deepfake eyes with mismatched reflections. Photo from Adejumoke Owolabi]

In this set of images of real eyes, the pattern is largely the same for both:

[Image: real eyes with matching reflections]
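
To make the eye-reflection check concrete, here is a minimal sketch in Python of one way to compare highlight patterns between the two eyes, assuming the eye regions have already been cropped out of a frame. The file names, the brightness cutoff, and the intersection-over-union comparison are illustrative assumptions, not Owolabi’s actual method.

    # Sketch: compare specular-highlight patterns between the two eyes.
    # Assumes pre-cropped eye images; all thresholds are illustrative.
    import cv2
    import numpy as np

    def highlight_mask(eye_bgr: np.ndarray) -> np.ndarray:
        """Binary mask of the brightest (specular highlight) pixels."""
        gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 230, 1, cv2.THRESH_BINARY)  # near-white cutoff
        return mask

    def reflection_iou(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
        """Intersection-over-union of the two highlight masks, after
        resizing both eyes to a common size so they are comparable."""
        size = (64, 32)  # (width, height)
        lm = cv2.resize(highlight_mask(left_eye), size, interpolation=cv2.INTER_NEAREST)
        rm = cv2.resize(highlight_mask(right_eye), size, interpolation=cv2.INTER_NEAREST)
        inter = np.logical_and(lm, rm).sum()
        union = np.logical_or(lm, rm).sum()
        return float(inter) / float(union) if union else 0.0

    # Hypothetical pre-cropped eye images.
    left = cv2.imread("left_eye.png")
    right = cv2.imread("right_eye.png")
    # Real eyes photographed together tend to share highlight geometry;
    # a low score is a hint, not proof, of a synthetic face.
    print(f"reflection similarity: {reflection_iou(left, right):.2f}")

A check like this only flags candidates for closer human inspection; lighting and pose can produce mismatched reflections in genuine photos too.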

Look at the facial hair. MIT said that “Deepfakes may fail to make facial hair transformations fully natural,” so pay close attention to the moustache, sideburns, or beard. Aside from facial hair, facial elements like moles can be a tell. Do they look real? 

Unnatural blinking can also be a sign. Does the subject seem to blink too little or too much? 
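
One rough way to quantify blinking, borrowing the eye-aspect-ratio measure common in the computer-vision literature, is sketched below. The six-landmark input and the thresholds are assumptions for illustration, not part of MIT’s checklist.

    # Sketch: estimate blink rate from per-frame eye landmarks.
    # Assumes six (x, y) landmarks per eye from a facial landmark
    # detector (e.g. dlib or MediaPipe); 0.2 is a common heuristic
    # for a closed eye, not a universal constant.
    import numpy as np

    def eye_aspect_ratio(pts: np.ndarray) -> float:
        """pts: array of six landmarks around one eye. The ratio
        drops toward zero as the eyelid closes."""
        v1 = np.linalg.norm(pts[1] - pts[5])  # vertical gaps
        v2 = np.linalg.norm(pts[2] - pts[4])
        h = np.linalg.norm(pts[0] - pts[3])   # horizontal width
        return (v1 + v2) / (2.0 * h)

    def blinks_per_minute(ear_series, fps=30.0, closed=0.2):
        """Count falling edges where the eye aspect ratio dips below
        the 'closed' threshold, normalized by clip length."""
        below = [ear < closed for ear in ear_series]
        blinks = sum(1 for prev, cur in zip(below, below[1:]) if cur and not prev)
        minutes = len(ear_series) / fps / 60.0
        return blinks / minutes if minutes else 0.0

People at rest typically blink around 15 to 20 times a minute, so a subject far outside that range over a long enough clip deserves a second look.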

And lastly, look at the lips. Many deepfake videos attempt to sync audio to the movement of the lips, and currently the syncing is imperfect. The lips may form a shape different from the sound being produced, and this is one of the more noticeable elements that you can look out for. 

The Guardian, asking for advice from Siwei Lyu, the University at Buffalo professor behind the open-source deepfake detector tool DeepFake-o-meter, wrote that most deepfake inconsistencies are found on the lips because “current AI video tools will mostly change the lips to say things a person didn’t say.” 

“An example would be if a letter sound requires the lip to be closed, like a B or a P, but the deepfake’s mouth is not completely closed,” Lyu said.

The expert added that the teeth and tongue may also look off when the mouth is open.
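
As a toy illustration of the bilabial check Lyu describes, the sketch below flags moments where a B, P, or M sound is heard but the lips never close. The phoneme timings and per-frame mouth-openness values are assumed inputs (for example, from a forced aligner and a landmark detector), and the threshold is arbitrary.

    # Sketch: flag bilabial phonemes (B, P, M) during which the lips
    # never close. Inputs are assumed, not produced here.
    BILABIALS = {"B", "P", "M"}

    def suspicious_segments(phoneme_segments, mouth_openness,
                            fps=30.0, open_thresh=0.15):
        """phoneme_segments: list of (phoneme, start_sec, end_sec).
        mouth_openness: per-frame lip gap normalized by mouth width."""
        flagged = []
        for phoneme, start, end in phoneme_segments:
            if phoneme not in BILABIALS:
                continue
            first, last = int(start * fps), int(end * fps)
            frames = mouth_openness[first:last + 1]
            # The lips should touch at some point during a bilabial sound.
            if frames and min(frames) > open_thresh:
                flagged.append((phoneme, start, end))
        return flagged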

Lyu, analyzing a deepfake video of Ukrainian President Volodymyr Zelenskiy, said to look for pixelation in the image: “The edges of Zelenskiy’s head aren’t quite right; they’re jagged and pixelated, a sign of digital manipulation.” 
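
The jagged-edge cue can be roughed out in code as well: compare high-frequency energy in a thin band along the head outline with the rest of the frame. The face mask is an assumed input (say, from a segmentation model), and the Laplacian-energy ratio is one plausible statistic, not Lyu’s actual detector.

    # Sketch: measure whether edges along the face boundary look
    # unusually sharp or jagged compared with the rest of the frame.
    import cv2
    import numpy as np

    def boundary_sharpness_ratio(frame_bgr, face_mask, band_px=5):
        """face_mask: uint8 mask, 1 inside the face. Returns Laplacian
        energy on a band around the mask boundary divided by the frame
        mean; values well above 1 hint at splice artifacts."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
        kernel = np.ones((band_px, band_px), np.uint8)
        band = cv2.dilate(face_mask, kernel) - cv2.erode(face_mask, kernel)
        if not band.any():
            return 0.0  # empty or degenerate mask
        return float(lap[band > 0].mean() / lap.mean())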

MIT said, “High-quality deepfakes are not easy to discern, but with practice, people can build intuition for identifying what is fake and what is real.”

Microsoft highlighted the importance of media literacy as the technology evolves. “As technologies keep developing, keep your media literacy skills sharp. Learning about deepfakes is an important step in a continuous journey of improving your media literacy and fact-checking skills,” the tech company advised in a blog.

A fast-evolving problem

Generative artificial intelligence, the technology that powers deepfakes, is evolving fast, however. The signs we look for now may no longer be there once the technology improves, so as it evolves, so too must the techniques and tools people use to identify deepfakes. 

Vincent Conitzer, co-author of the book Moral AI, talked to Wired about deepfake detection being an arms race: “Even if you have something that right now is very effective at catching deepfakes, there’s no guarantee that it will be effective at catching the next generation. A successful detector might even be used to train the next generation of deepfakes to evade that detector.”

There are tools like the aforementioned open-source DeepFake-o-meter, which lets users upload a piece of media to the site for analysis, and GetReal Labs’ detector, which, like DeepFake-o-meter, uses several AI models for detection. 

And while these tools may help, their creators both stress the need for a combined human-machine approach to dealing with deepfakes. 

DeepFake-o-meter’s Lyu told The Guardian, “I think it is crucial to be a human-algorithm collaboration. Deepfakes are a social-technical problem. It’s not going to be solved purely by technology. It has to have an interface with humans.”

GetReal Labs’ founder Hany Farid told Wired, “Anybody who tells you that the solution to this problem is to just train an AI model is either a fool or a liar.” 

Emphasizing the importance of tackling the problem of manipulated media, Farid said, “the poisoning of the human mind is an existential threat to our democracy.”

Election dangers? 

Deepfake videos, with their potential to sow chaos and confusion, can be dangerous within the context of elections.

With the 2025 Philippine midterm elections coming up, it’s important for people to learn how to quickly and properly identify a deepfake video, so as not to be swayed by its efforts to manipulate. 

How big of a threat are deepfake videos currently to our election? Let’s take a look at the current 2024 US presidential election cycle. 

A September article by NPR noted that while there has been widespread concern about deepfake videos and audio showing people “doing or saying something they didn’t,” the biggest use of AI in disinformation has been in boosting the volume of fake content on social media, fake accounts, and fake news stories, or in generating memes and jokes. 

The Trump campaign, for instance, shared an AI-generated image depicting a fake Taylor Swift endorsement, which the singer addressed when she later endorsed Democratic candidate Kamala Harris. 

The threat of deepfake videos has yet to truly materialize, less than two months before the US elections, the outlet reported. US adversaries Russia and Iran are using AI to “quickly and more convincingly tailor often-polarizing content aimed at swaying American voters,” just not necessarily through deepfake videos. 

China is also using AI to boost its influence operations, but has not specifically targeted the US elections. 

That said, Filipinos have reason to look out for potential Russian and Chinese influence operations in the upcoming elections, including the potential use of AI-generated content and deepfake videos. Rappler in 2019 already found links between the disinformation ecosystem in the Philippines and Russia’s St. Petersburg-based Internet Research Agency (IRA), a state-sponsored troll farm, while Chinese propaganda is also seeded online in the country.

Now, why haven’t deepfake videos been deployed in a more widespread manner? One factor, according to US think tank RAND, is the high cost and computational resources needed to create a convincingly realistic deepfake.

Deepfake audio, rather than video, might also be the more powerful medium at the moment, with the think tank citing how Slovakia’s 2023 parliamentary elections were disrupted by fake audio of politicians allegedly discussing how to rig the vote. And in the US Democratic primaries in January, fake robocalls mimicking President Joe Biden’s voice told voters in New Hampshire to stay home and not vote.

Aside from cost and the apparent effectiveness of fake audio, RAND suggested that a deepfake video also needs to be supported by a larger narrative. 

It can’t be just a video that looks convincingly real — there has to be supporting information in people’s minds to make it truly effective. 

“The influence of deepfakes is not necessarily to make people believe something that is not true but to create engaging content as part of a broader narrative — just like many other forms of media,” RAND wrote. 

So, while looking for the telltale signs of a deepfake is important (especially in fast-moving situations such as elections, where information might spread quickly before it can be debunked), it is just as crucial to stay informed and to understand the surrounding narratives that may allow an effective, damaging deepfake video to spread. – Rappler.com


