Finding Fantasy in the Metaverse Shouldn’t Involve Freedom From Moderation

 
[Image: an individual in a blue, star-lit tent using a virtual reality headset and equipment.]


When sci-fi author Neal Stephenson was quizzed on how he had predicted the metaverse, he insisted he was just “making shit up.” His 1992 novel Snow Crash envisioned people donning virtual reality (VR) goggles to escape real life into virtual worlds. But last year, Stephenson’s vision seemed set to become reality as Facebook rebranded itself as “Meta” and committed to building an online universe that would meld real-world, augmented, and virtual experiences. This universe would mimic the encounters, economies, hierarchies, and incentives of the real world, but within an online space of avatars, virtual currencies, VR headsets, and augmented reality (AR). Terming this phenomenon the “metaverse,” Mark Zuckerberg nodded to Stephenson, the begrudging prophet who had coined the term three decades earlier.

 

Meta now offers metaverse gaming experiences through its Oculus Quest VR headset. By logging into Horizon Worlds, a platform where social media meets VR, individuals can enter 3D, computer-generated environments. Here, users create digital avatars, play games, and interact with friends across thousands of custom virtual worlds. Horizon Worlds offers the same promise that Stephenson satirized in Snow Crash: evading the Big, Bad World by escaping into immersive online spaces.

 

Finding fantasy in immersive fictions can be an antidote to our real-world problems. That’s why the COVID-19 pandemic saw record numbers seeking comfort in gaming consoles, while lockdowns boosted fiction book sales. The metaverse seems like an even more immersive fiction for those seeking escape, which is hardly surprising given its nominal relation to Stephenson’s sci-fi novel. But while the lawlessness of VR games like zombie shooters might be a fantasy for some, for others, these have become sites for virtual groping, sexual harassment, humiliation, and abuse, raising novel issues to which the traditional arts are immune.

 

On 26th November 2021, a beta tester of Meta’s Horizon Worlds reported something deeply troubling. Posting in a beta tester support group, she revealed that she had watched as her avatar was groped by an unknown player’s avatar. “Not only was I groped last night, but there were other people there who supported this behaviour which made me feel isolated,” she wrote. A darker picture of the metaverse begins to emerge once you start tracing similar reports of non-consensual, sexual touching of avatars in virtual worlds. Overwhelmingly, it is female-presenting players who experience these virtual groping attacks on their female-presenting avatars. Nina Jane Patel, another participant in Meta’s virtual venues, reports how, “[w]ithin 60 seconds of joining” the VR world, her avatar “was verbally and sexually harassed” by “3–4 male avatars, with male voices” who “essentially, but virtually gang raped my avatar and took photos.” As she tried to move her avatar away, “they yelled” “don’t pretend you didn’t love it” and “go rub yourself off to the photo.” Jordan Belamire, a player of QuiVr, a zombie-shooting game, described how a player “chased,” “grabb[ed]” and “pinch[ed]” her avatar, then “shoved his hand toward…[her] virtual crotch and began rubbing.” Chanelle Siggens, playing the shooter game Population One with an Oculus Quest VR headset, watched another player’s avatar grope and ejaculate on hers, even after she asked the player to stop. Testimonies of women who have experienced similar conduct in virtual worlds proliferate, suggesting that seeking relief in the metaverse from the unpleasant reality of sexism is a privilege that remains unattainable for many.

 

The gendered abuse that has sullied metaverse gaming experiences for many makes one thing very clear: while we might revel in the lawlessness of the metaverse as we revel in dystopian literature, we shouldn’t assume that the fantasy of virtual worlds entitles us to freedom from content moderation in these spaces. Putting on VR goggles and diving into the metaverse may promise a liberating escape from the weary 9-to-5, but it does not entitle players to escape stringent moderation of their words and actions in online and virtual worlds.

 

However, the idea that we should moderate content in the metaverse has received push-back. For example, there are legitimate fears that cleaning up the metaverse via automated content moderation or constant video surveillance would infringe on the freedoms players crave and deserve. To some degree, these concerns are warranted. In an age of surveillance capitalism, Big Tech players are incentivized to harvest as much user data as possible for targeted advertisements and predictive algorithms, in many cases sacrificing user privacy for profits.

 

Meta, like other metaverse players, collects user data in the metaverse, making privacy concerns understandable. The data gathered by its Oculus VR headset, which tracks players’ positions and environments in real time down to the millisecond, is intimate, personal, and biometric, including facial expressions, vocal inflections, and even vital signs. Not only is this bodily data being collected and aggregated, but some Meta applications even use it to generate insights about players. The Meta application Supernatural integrates data gathered from the Oculus headset on users’ vitals to make inferences about their fitness and target them with workouts “calibrated” to suit their bodies. Meta also generates insights from user data to target users with advertisements. Crucially, the default setting for Oculus users will be for Meta to ‘use information related to [their] use of VR and other Facebook products to show [them] personalised content, including ads, across Facebook products.’ This secondary use of data for targeted advertisements is easily accomplished since Oculus users currently have no choice but to log into VR platforms with their Facebook accounts.

 

It’s evident that all players who enter Meta’s virtual worlds using an Oculus VR headset sacrifice a degree of informational privacy. But this does not mean they should also sacrifice safety from harassment, abuse, and bullying. This is not to say that VR worlds with killer zombies should be moderated exactly like virtual professional workspaces; different virtual contexts will warrant different approaches to moderating content. Nonetheless, all metaverse spaces should have a baseline of stringent content moderation in place before players are invited to participate in them.

 

Many metaverse companies, including Meta, do have content moderation systems and other user safety mechanisms in place. Their problem, however, lies in how difficult it is to ensure compliance with these content moderation efforts in virtual worlds. The wealth of user protocols Meta has initiated, and iterated, since launching its metaverse shows a clear regard for user safety. For one, the Oculus VR headset takes a rolling buffer of what players see in their virtual worlds and saves it locally. This buffer can then be sent to Meta for human review if an incident is reported. In addition, human guides, trained by employees to know the policies and behavioral rules in Horizon Worlds, greet and brief new players. Among the policies Meta currently has in place regarding conduct in VR are prohibitions against “touching someone in a sexual way or making sexual gestures.” It’s unclear whether Meta developed this policy before or after reports surfaced of groping and harassment in its VR spaces. However, whatever the value of these policies and practices for deterring bullying, harassment, and sexual misconduct in VR, Meta, like other metaverse hosts, struggles to ensure compliance with them. Meta’s VR policies state that anyone found to have broken these rules could have their account suspended or banned. But investigations into how these rules are enforced in practice suggest that most prohibited user activity in Meta’s VR spaces will evade detection. This disjunct between policy and practice arises, in part, because Meta predominantly relies on its user communities to police its VR spaces, leaning on user blocks, mutes, and reports of violations of VR policies.

 

Andrew Bosworth, Meta’s chief technology officer, hinted last year at why such a hands-off approach to enforcement was necessary: “We can’t record everything that happens in VR indefinitely — it would be a violation of people’s privacy.” Bosworth re-emphasized Meta’s focus on user privacy, clarifying that, even though Oculus devices constantly record users’ experiences in VR, this footage is overwritten over time unless a user files an abuse report. Finally, Bosworth added, Meta will always notify users when footage of them has been reported and reviewed by a human operator.
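Meta has not published the internals of this mechanism, but the behavior Bosworth describes, a fixed window of recent footage that is continuously overwritten and preserved only when a report is filed, maps onto a classic ring-buffer design. Below is a minimal, purely illustrative Python sketch of that idea; the names (RollingEvidenceBuffer, Frame, freeze_for_report) are hypothetical, not Meta’s actual API.

```python
from collections import deque
from dataclasses import dataclass
import time


@dataclass
class Frame:
    """One instant of locally captured gameplay footage."""
    timestamp: float
    payload: bytes


class RollingEvidenceBuffer:
    """Keep only the most recent window of footage on the device.

    Older frames are overwritten as new ones arrive, so nothing is
    retained indefinitely unless an abuse report 'freezes' the window.
    (Hypothetical sketch; not Meta's actual implementation.)
    """

    def __init__(self, window_seconds: float = 120.0):
        self.window_seconds = window_seconds
        self._frames = deque()  # oldest frame sits on the left

    def record(self, payload: bytes) -> None:
        """Append the newest frame and evict anything outside the window."""
        now = time.time()
        self._frames.append(Frame(now, payload))
        while self._frames and now - self._frames[0].timestamp > self.window_seconds:
            self._frames.popleft()

    def freeze_for_report(self) -> list:
        """Snapshot the current window so it can be sent for human review.

        Called only when a user files an abuse report; otherwise the
        footage simply keeps being overwritten as time passes.
        """
        return list(self._frames)
```

On a design like this, privacy protection falls out of the data structure itself: the device can only ever hand over the last few minutes of footage, and only in response to a report, after which, per Bosworth, the users captured in the clip would be notified.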

 

Bosworth’s statements imply that Meta faces a trade-off between protecting users’ privacy and enforcing moderation policies. This trade-off will only be heightened if the metaverse grows in popularity in the coming years. With more users in the metaverse, more harassment, bullying, and abuse will undoubtedly follow. At the same time, the growing population of virtual worlds will generate more user data from which Big Tech players can pool insights about users’ playing and spending habits. Clearly, then, the question of how to protect user privacy and safety simultaneously in the metaverse will continue to plague metaverse hosts.

When considering issues with content moderation and privacy in virtual worlds, we cannot overlook the current regulatory climate for metaverse players. The reality, as it stands, is that metaverse hosts like Meta have little incentive to proactively moderate user behavior in VR spaces themselves, given the immunity from liability they might be offered by Section 230 of the US Communications Decency Act. As online intermediaries that host speech, metaverse hosts are shielded by Section 230 from liability for most user behavior, outside narrow exceptions such as federal criminal law. This reality makes it easier, perhaps, to rely on user communities to moderate content, despite this being an imperfect solution to the problem. Proponents of Section 230 reform might suggest that metaverse hosts can afford to rely on imperfect solutions to content moderation problems so long as they are shielded from accountability by the legal provision.

As we wait for clarity on how Section 230 of the Communications Decency Act will be applied to virtual worlds and shape content moderation within them, one thing is certain. We must resist the assumption that freedom of expression trumps freedom from harassment in the metaverse. While the metaverse might be fantastic, and while it might be a place of escape for many, it cannot be a place of unfettered freedom to do harm with real-world consequences.

 

 