“Promise me you won’t feel upset”: The Deepfake Pornography Crisis and Regulatory Roadblocks
10-minute read
For Rana Ayyub, a day that began with coffee ended with a stream of visceral harassment, rape threats, and doxing. In April 2018, the Indian journalist, craning over a steaming coffee cup, received a chilling message from a friend: “Something is circulating…I’m going to send it to you but promise me you won’t feel upset.”
The video that appeared on Ayyub’s phone that day was pornographic. Ayyub describes her reaction in purely visceral terms: shock, tears, vomiting. She was staring at a bundle of pixels forming an image that looked exactly like her, yet showed a version of her doing something she had never done.
What Ayyub was looking at was a deepfake, a hyper-realistic video that had been doctored using artificial intelligence (AI). More specifically, it was deepfake pornography that featured her face superimposed on a sex worker’s body. This deepfake pornography of Ayyub would be shared tens of thousands of times in the coming weeks. Ayyub was powerless to stop it from spreading and powerless to stop people from believing that it genuinely depicted her.
Unfortunately, Ayyub’s experience is far from unique. Ninety-six percent of all deepfake videos detected online in 2019 constituted non-consensual porn. Anyone with a device and an internet connection can now generate hyper-realistic deepfake photos or videos of people doing things that they never did, including performing sexual acts. Almost exclusively, women are the targets of pornographic doctoring of this sort.
Deepfake pornography is a vehicle for misogyny that has been empowered by fast-moving and unchecked technological development. Recent innovations in AI seem likely to further promote its spread. For example, the growing accessibility of AI text-to-image tools like DALL-E or Stable Diffusion could make it even easier to create pornographic deepfakes of real people without their consent.
But regulators are not blind to the proliferation of issues with hyper-realistic AI generated pornography. Just last week, the U.K. government announced plans to amend a proposed law called the Online Safety Bill to criminalize the sharing of any deepfake pornography made without the subject’s consent. If the Online Safety Bill were to pass and these amendments were enshrined in U.K. law, certain platforms and services would be obligated to remove non-consensual deepfake pornography at the risk of serious penalties. Although the Online Safety Bill remains in draft form and must pass through the U.K. parliament before it could become law, it marks an important call to action against deepfakes by regulators.
Compared to the U.K., the patchwork regulatory environment in the U.S. presents additional challenges for regulating deepfake pornography. California, New York, Virginia, and Georgia have enacted state laws that could protect against deepfake pornography in various ways. In addition, bans on non-consensual pornography by private platforms like Reddit and Twitter may curb sexually explicit deepfake content. However, though encouraging, these efforts are likely to be undermined by several factors unique to the U.S. environment. The First Amendment remains a significant obstacle to regulating deepfakes at both the federal and state level. Deepfake creators often assert a First Amendment defense in civil claims brought against them. By contrast, there is no constitutional right in the U.S. for victims to defend their personalities and reputations, meaning their privacy interests are bound to lose out to free speech arguments in court. Only rarely, and typically in cases involving celebrities, could deepfakes be considered to infringe upon individuals’ rights to their likenesses, especially if those responsible for the content are deemed to have unfairly appropriated value from those likenesses for their own commercial benefit. And when it comes to industry solutions, private platforms are unlikely to enforce widespread bans on deepfake pornography given that Section 230 of the U.S. Communications Decency Act protects them from liability for this content.
It is possible that laws like the Online Safety Bill could pass and that non-consensual deepfake pornography could be banned in jurisdictions like the U.K. But even then, enforcement would remain a problem given how difficult it is to distinguish real videos and pictures from hyper-realistically doctored content like deepfakes. To craft policies that adequately protect victims of non-consensual deepfake pornography and ensure this content does not evade detection, I propose that we first identify the ethical, legal, and moral problems with this content that regulators must address.
So, what are the core issues with deepfakes? Deepfakes inhibit our understanding of reality and simultaneously constitute egregious violations of victims’ privacy. As an initial matter, scholars often isolate what I call “the authenticity problem,” namely, that this technology confounds the average person’s ability to distinguish between a real recording and a deepfake video. In the case of deepfake porn, the authenticity problem means that a viewer of this content could reasonably believe that a victim who has been doctored into a video actually engaged in the sexual acts depicted. This conviction harms the victim’s reputation, dignity, autonomy, and privacy, as Ayyub’s case reflects. Because she often investigated crimes committed by public and government officials, Ayyub stresses that she was accustomed to being the target of trolls. But seeing her face so realistically doctored onto the body of a sex worker in a pornographic video made this abuse unlike any other. The video essentially “claimed to be” her. And the effect of this claim, she explained, was to make “people…think that they could now do whatever they wanted to” her. The distressing impacts of this kind of sexual abuse, including the fear it can engender in victims, cannot be overstated. By emboldening online actors to launch other forms of harassment and abuse at her, the deepfake video posed tangible threats to Ayyub’s person, threats that could have silenced her and kept her from continuing her journalistic work. And while it is important to acknowledge the serious harms this video posed to her person, her identity, and her livelihood, deepfake pornography also has more covert impacts that are harder to articulate, including on victims’ mental health. It is easy to imagine, for example, how having your photo manipulated without your consent could make you reluctant to have your photo taken again, to maintain any presence online, or even to leave the privacy of your own home.
Some industry and policy solutions have been launched in the U.S. that specifically target these core issues with deepfakes, including the authenticity problem. Facebook announced in 2020 that it would ban deepfakes that had been edited in “ways that aren’t apparent to an average person” and “would likely mislead someone into thinking that a subject of the video” performed an act they “did not actually” do. The DEEP FAKES Accountability Act, a bill under consideration by Congress since 2019, would criminalize any deepfake created without a digital watermark that clearly identifies doctored videos or photos. In theory, these solutions would mean that, even if online users were to encounter hyper-realistic deepfake pornography, they would be alerted that the content was doctored. They would then know that any pictured individuals had not necessarily participated in the sexual acts depicted. In practice, however, these proposed solutions to the authenticity problem still fail to mitigate the traumatic harms of non-consensual deepfake pornography for victims like Ayyub.
Why would these proposed solutions ultimately fail to help victims of deepfake porn? First, they do not account for the evolving sophistication of deepfake technologies. One evolution lawmakers must anticipate is the inevitable integration of deepfakes with even more immersive technologies, such as virtual reality (VR) and augmented reality (AR). Technology scholars are already predicting that this marriage could eventually make “synthetic realities” commercially viable. VR-enhanced deepfake pornography could be so immersive that it effectively eradicates viewers’ ability to distinguish between real and virtual experiences. The proposed U.S. DEEP FAKES Accountability Act, for example, fails to account for such highly immersive technologies. With this degree of immersion, even a digital watermark on deepfake pornography could fail to convince a reasonable user that a video is doctored. The authenticity problem thus remains unsolved, and victims of non-consensual deepfake pornography continue to have their reputation, sexual privacy, autonomy, health, and safety threatened.
In addition, we cannot forget that the autonomy of victims whose faces have been non-consensually doctored into pornographic videos is not the only autonomy at stake. There are many dehumanizing implications for pornographic performers who have their bodies doctored via deepfake technology, reducing them to a surface for another individual’s face and a tool for another woman’s harassment. State initiatives to establish private rights of action for victims of deepfake porn, like California’s 2019 law, seem to overlook the need to protect the sex workers whose nude bodies have become canvases upon which to superimpose others’ faces.
Moreover, mandating digital watermarks on deepfake videos might ensure that individuals who come across deepfake pornography are alerted that they are watching doctored content, but watermarks will not mitigate the violation of privacy for victims whose faces and bodies are used, shared, and profited from against their will. The case of DeepNude, a deepfake-generator app that started circulating in 2019, suggests that profit motivations will drive platforms producing deepfake pornography to evade accountability for violating women’s privacy, even if safeguards like digital watermarks are used. DeepNude allowed users to “strip” images of clothed women in seconds by superimposing realistic nude bodies upon them. One account of a victim of a nonconsensual “strip” highlights how destabilizing it was to have her sexual privacy violated: “It felt like thousands saw her naked, she felt her body wasn’t her own anymore.” Before the DeepNude app was taken offline later that year, it began enforcing digital watermarks on all deepfake content in a way that resembles the recommendations of the proposed U.S. DEEP FAKES Accountability Act. However, not long after this policy change, DeepNude rendered any protection it claimed to offer victims futile. Keen to monetize, DeepNude allowed users to remove the watermark on deepfake videos if they chose, provided, of course, that they paid a fee of $50.
The level of realism offered by technologies that can digitally and sexually reproduce individuals without their consent is only set to increase. With the recent news that Facebook is researching how augmented reality could be powered by users’ thoughts, the distinction between natural reality and doctored reality is set to erode even further. And as Big Tech continues to invest in the Metaverse, sites offering such virtual experiences are poised to serve as vehicles for toxic and misogynistic behaviors, including sexual harassment. As these immersive virtual spaces develop, balancing questions of liability and First Amendment rights against the dignitary, privacy, and reputational rights of those implicated in these spaces will become more difficult. First Amendment issues will become particularly thorny when creators claim their content constitutes works of art, even as it also constitutes a form of abuse towards non-consenting victims. In this light, the digital and sexual reproduction of individuals without their consent will only amplify existing obstacles to justice for victims of online abuse, including the potential for harassers to retain online anonymity, the ease and scale of non-consensual attacks, and the asymmetry of power between victims and abusers.
We must call upon policymakers to recognize the acute risk that technologies like deepfakes pose to women. More than that, though, if we are to ever eradicate non-consensual deepfake pornography from the internet, we must propose and pass deepfake laws that provide adequate remedies for victims, proper incentives for platforms to remove harmful content, and harsh penalties for bad actors. How many women like Ayyub must promise a friend that they “won’t feel upset” before they are adequately protected from the dehumanizing harms of nonconsensual deepfake pornography attacks?