
Presence News

from the International Society for Presence Research
 

Shallowfakes are rampant: Tools to spot them must be equally accessible

October 27, 2022


[Deepfake videos are increasingly likely to cause viewers to overlook the role of technological manipulation in their media experience (i.e., experience presence), with dangerous consequences, but as this opinion piece from The Hill notes, shallowfakes – “videos shared with misleading contexts or small edits” – pose a more immediate danger, and learning how to blunt their negative impacts will help prepare us to address the more sophisticated media manipulations to come. For much more information about this, see a 90-minute video of a talk by the author on YouTube. –Matthew]

[Image: A woman in Washington, DC views a manipulated video on January 24, 2019, that changes what is said by President Donald Trump and former president Barack Obama, illustrating how deepfake technology can deceive viewers. Credit: Rob Lever/AFP via Getty Images]

Shallowfakes are rampant: Tools to spot them must be equally accessible

By Sam Gregory, Opinion Contributor
August 26, 2022

[Sam Gregory is the director of programs, strategy and innovation at WITNESS, a human rights organization with global experience in helping people in crisis ensure that their videos are trustworthy and evidentiary.

Note: The views expressed by contributors are their own and not the view of The Hill.]

Deepfakes had a moment this year: from a fairly obvious deepfake of Ukrainian President Volodymyr Zelensky on hacked media in Ukraine to more proficient video scams involving Elon Musk, and potentially spurious claims of deepfakes obscuring the truth, from Pakistan to mayors’ offices across Europe.

Hundreds of articles have focused on these and hypothetical deepfake scenarios and what they mean for the future of video and trust. But despite all this attention, shallowfakes — videos shared with misleading contexts or small edits — are far more widespread and easier to produce. Tackling the scourge of shallowfakes will stand us in good stead in a number of ways. It will solve a pressing issue the public faces today. And it will provide a powerful set of tools for addressing more complex synthetic media manipulation like deepfakes.

Shallowfakes can muddy the waters by casting doubt on urgent, authentic videos or by misdirecting attention through false narratives. In conflict zones or during civil unrest, addressing these videos can mean the difference between life and death. It can be the difference between securing justice and leaving enough reasonable doubt to prevent action. Poynter noted the explosion of this kind of shallowfake on TikTok during the early stages of the war in Ukraine, and we’ve seen countless more examples from the U.S. and from countries around the world.

Luckily, we don’t need to implement complex new processes to tackle shallowfakes. Centrally, this is a question of media literacy: how do you enable people to do the ‘lateral reading’ around a video or image they encounter in order to identify that it has been decontextualized, or that there are better sources of information? However, there are also concrete yet under-resourced ways to help a media consumer verify whether a piece of video or an image has been shared with misleading context.

Most journalists know how to use a reverse image search (or keyframe search for video) to find the first instance of a piece of content and to see where it pops up over time. It is standard practice, and yet it is not broadly accessible to ordinary people on the platforms they use most. TikTok doesn’t offer it as individuals look at videos on their “For You” pages, Facebook and Twitter don’t offer it as users browse their timelines, and neither does YouTube as people watch video after video. Where Google does offer it directly to consumers, in Google Lens and embedded in Google Images, it is inadequate: oriented toward shopping queries, not confronting hoaxes and deception.
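[For readers curious about the mechanics, the sketch below illustrates the basics of a keyframe search: sample frames from a clip, compute a perceptual hash of each, and compare those hashes against frames from previously seen footage, since visually similar frames yield nearly identical hashes. This is an illustration of the general technique only, not any platform’s internal tooling; it assumes Python with the opencv-python, Pillow and imagehash packages installed, and the file names and matching threshold are placeholders.]

    # A minimal sketch of keyframe extraction plus perceptual hashing,
    # the building blocks behind reverse image/keyframe search.
    # Assumes: pip install opencv-python pillow imagehash
    import cv2
    import imagehash
    from PIL import Image

    def keyframe_hashes(video_path, every_n_seconds=5):
        """Sample roughly one frame every `every_n_seconds` and hash it."""
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
        step = max(1, int(fps * every_n_seconds))
        hashes = []
        frame_idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_idx % step == 0:
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV is BGR; Pillow wants RGB
                hashes.append(imagehash.phash(Image.fromarray(rgb)))
            frame_idx += 1
        cap.release()
        return hashes

    # Visually similar frames have a small Hamming distance between their hashes,
    # so a low difference suggests the circulating clip recycles earlier footage.
    new_clip = keyframe_hashes("circulating_clip.mp4")   # placeholder file name
    archived = keyframe_hashes("archived_original.mp4")  # placeholder file name
    for h_new in new_clip:
        for h_old in archived:
            if h_new - h_old <= 10:  # threshold chosen for illustration only
                print("Possible match: this clip may reuse older footage")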

This is something we’ve heard our partners on the ground ask for again and again in the global consultations we’ve led on dealing with media manipulation. If we could only ensure that there was equitable, intuitive access to simple verification tools like reverse image and video search — both within individual platforms and across platforms, as falsehoods don’t know platform borders — ordinary people and human rights defenders alike would be in a much stronger position to tackle the next wave of shallowfakes.

At the same time, the fact that a simple reverse image search remains inaccessible highlights why we have to approach this from a standpoint of equity. We need to address the issue from a global human rights perspective, rather than applying a Western-first or Silicon Valley-led approach. We see this bias manifest in a number of ways, including an overreliance on centralized authority and legislative or regulatory solutions, as well as in assumptions that social media platforms are able or willing to act globally in contextually appropriate ways to understand and respond to misinformation. This belies the state of affairs in much of the world (including the Western world), where those in power have active interests in leveraging shallowfakes and the liar’s dividend, and where platforms have systematically underinvested in content moderation.

On the flip side, we need to preserve creative expression and recognize the continued use of satire to work around repressive regimes. We also need to ensure that our approach doesn’t discount people who don’t have access to the “right” technologies, whether that is a specific app or just a smartphone. These tools need to be ubiquitous and built into the platforms that we all use, just like the tools used to create shallowfakes.

Ensuring that communities around the world can access and intuitively use detection tools is the heart of what an equitable approach to detection should look like. It also lays the groundwork for much more daunting challenges to come — deepfakes and AI-generated photorealistic images among them.

It may surprise some to learn that we actually have a burgeoning set of technologies to track the transformations of manipulated media. My organization, WITNESS, has been deeply involved in several such efforts, including the standard-setting work of the Coalition for Content Provenance and Authenticity, originally convened by Microsoft, Adobe, Twitter, the BBC and others. These technologies hold promise if they are built and used with attention to these global contexts: incorporating privacy, having broad accessibility, being opt-in and not using harmful legislative approaches. And there is increasing investment in technical tools to detect deepfakes in cases where these authenticity trails are not embedded in the media. But if we can’t manage to make these emerging tools available to those who need them most, and intuitive to use, we can never hope to address the coming wave of disinformation.
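[To make the provenance idea concrete: standards such as C2PA attach cryptographically signed statements about a piece of media, so later tampering can be detected by checking the file against what was signed. The simplified sketch below illustrates only that core principle, using an Ed25519 signature over a file hash; it is not the C2PA format itself, the key and file names are placeholders, and it assumes Python with the cryptography package installed.]

    # Simplified illustration of the provenance/"authenticity trail" principle:
    # sign a hash of the media at publication time, then verify later that the
    # file still matches what was signed. Real C2PA manifests carry richer,
    # signed assertions embedded in the file; this shows only the core check.
    # Assumes: pip install cryptography
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def media_digest(path):
        """SHA-256 digest of a media file's bytes."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).digest()

    # At capture or publication time (e.g., by a camera app or publisher):
    signing_key = Ed25519PrivateKey.generate()                   # placeholder key
    public_key = signing_key.public_key()
    signature = signing_key.sign(media_digest("original.jpg"))   # placeholder file

    # Later, when a viewer, journalist or platform wants to check authenticity:
    def matches_signed_original(path):
        try:
            public_key.verify(signature, media_digest(path))
            return True    # the file is bit-for-bit what was signed
        except InvalidSignature:
            return False   # the file was altered (or never signed) after publication

    print(matches_signed_original("original.jpg"))       # True if untouched
    print(matches_signed_original("reshared_copy.jpg"))  # placeholder: an edited or re-encoded copy fails verification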

It is hard to overstate the value, and the potential for justice, that civilian witnesses and democratized access to video provide. It’s critical that we prioritize equitable access to simple, easy-to-use methods of authenticity detection if we want to continue advancing the fight for human rights and justice and fortify the truth.


 
 

Managing Editor: Matthew Lombard

The International Society for Presence Research (ISPR) is a not-for-profit organization that supports academic research related to the concept of (tele)presence. ISPR Presence News is available via RSS feed, various e-mail formats and/or Twitter. For more information, please visit us at http://ispr.info.

ISPR
2020 N. 13th Street, Rm. 205
Philadelphia, PA 19122
USA