Bullying is a problem in VR, and it’s likely to get worse

Bullying has a long history in digital spaces, but being embodied in virtual reality can make it feel far more real than it does on a flat screen. And the companies behind many of the most popular social VR apps don’t want to talk about it: Meta, VRChat and Rec Room all declined interview requests from CNN Business about how they combat harassment in virtual reality.

But the problem is sure to become more common as cheaper, more powerful headsets entice more people to shell out for the tech: you can currently pick up the Quest 2 for $299, making it cheaper (and easier to find) than a Sony PlayStation 5.

“I think [harassment] is an issue we need to take seriously in virtual reality, especially if we want it to be a welcoming online space, a diverse online space,” said Daniel Castro, vice president of the Information Technology & Innovation Foundation. “Even though you see really bad behavior happening in the real world, I think it can get worse online.”

Bubbles, blocking and muting

Virtual reality didn’t become accessible to the masses overnight: for Meta, it started with the company’s purchase of Oculus VR in 2014, and in the years since, the company has rolled out a series of increasingly powerful, affordable and portable headsets. That work is paying off: Meta’s Quest headsets accounted for about 80% of VR headsets shipped last year, according to Jitesh Ubrani, research director at technology market researcher IDC.

And as more and more people spend time in virtual reality, the bad behavior that can occur there is drawing more attention. It’s hard to say how widespread VR harassment is, but a December report from the nonprofit Center for Countering Digital Hate gives an idea of its prevalence in VRChat. There, researchers identified 100 potential violations of Meta’s VR policies, including sexual harassment and abuse, during 11 hours and 30 minutes spent recording user activity. (While VRChat declined an interview, Charles Tupper, VRChat’s community manager, provided details via email about its safety tools and said the company regularly has more than 80,000 people using VRChat – the majority of them with VR headsets – during peak hours on weekends.)

In hopes of stopping and preventing bad behavior, social VR apps tend to offer a number of common tools. These range from the ability to set up an invisible bubble of personal space around your avatar that keeps others from getting too close, to muting people you don’t want to hear, to blocking someone entirely so they can’t see or hear you and vice versa.

Reporting and moderation practices in virtual reality can be similar to those in online games. Users can sometimes vote to kick someone out of a VR space – I experienced this recently when I was asked to vote on whether to eject someone from a space in Meta’s Horizon Worlds after they repeatedly approached me and other users saying, “By the way, I’m single.” (This user got the boot.) Human moderators also respond to complaints of bad behavior, and apps can suspend or ban users if their behavior is egregious enough.

Horizon Worlds, VRChat, and Rec Room all offer these kinds of safety features. Horizon Worlds added its own default four-foot buffer around user avatars in February, about three months after the app launched. VRChat and Rec Room, which have been around for years, also start users off with a default buffer.

“These steps are in the right direction,” Castro said, though he acknowledges that different apps and platforms – as well as public VR spaces anyone can visit, as opposed to private, invitation-only spaces – will come with different content moderation challenges.

These tools will also evolve over time as more and more people use virtual reality. In a statement, Bill Stillwell, Product Manager for VR Integrity at Meta, said, “We will continue to make improvements as we learn more about how people interact in these spaces.”

A burden for the victims

While some of today’s tools can be used proactively, many of them only come in handy after someone has already been harassed, pointed out Guo Freeman, an assistant professor of human-centered computing at Clemson University who studies gaming and social virtual reality. Because of this, she feels they place a burden on the victims.

One effort to make it easier to detect and report harassment comes from a company called Modulate. Its software, known as ToxMod, uses artificial intelligence to monitor and analyze what users say, then flag when someone is, for example, spouting harassing or racist language rather than simply trading insults. ToxMod can then alert a human moderator or, in some cases, be configured to automatically mute offending users. Rec Room is trying it out in some public areas of its VR app.

It makes sense that app makers are grappling with the moderation challenges that come with scaling up and wondering whether new kinds of automation might help: the VR market is still tiny compared to that of console video games, but it is growing rapidly. IDC estimates nearly 11 million VR headsets were shipped in 2021, a 96% jump from the 5.6 million shipped a year earlier, Ubrani said. In both years, Meta’s Quest headsets made up the majority of those shipments.

In some ways, ToxMod is similar to how many social media companies already moderate their platforms, with a combination of humans and AI. But the sense of acute presence that users tend to experience in VR — and the fact that it relies so heavily on spoken rather than written communication — might make some people feel like they’re being spied on. (Modulate said that users are notified when they enter a virtual space where ToxMod may be in use, and when a new app or game starts using ToxMod, Modulate’s community manager will typically communicate with users online – such as through a game’s Discord channel – to answer any questions about how it works.)

“It’s definitely something we spend a lot of time thinking about,” said Modulate CEO Mike Pappas.

There are no set standards

A primary challenge in addressing harassment in virtual reality is the lack of agreement on what even counts as harassment in a virtual space versus a physical one. In part, that’s because while virtual reality itself isn’t new — it’s been around in different incarnations for decades — it’s new as a mass medium, and it’s still evolving as one.

This novelty means there are no set standards, which can make it difficult for anyone behind a headset to figure out what’s right or wrong when interacting with other people in VR. A growing number of children are also entering virtual spaces and, as Freeman pointed out, what a child may consider play (such as running around and acting wild), an adult may consider bullying.

“Often in our research, participants feel very confused about whether this is playful or harassing behavior,” Freeman said.

A screenshot taken while using VRChat shows tips appearing briefly on the screen before entering the app.

Harassment in virtual reality can also take new forms that don’t exist offline. Kelly Guillory, a comic book illustrator and editor of an online virtual reality magazine, had this experience last year after blocking a former friend in VRChat who had become controlling and prone to emotional outbursts.

Once she blocked him, she could no longer see or hear him in VRChat. But Guillory was, strangely, still able to sense his presence nearby. On several occasions, while she chatted with friends in the app, her harasser’s avatar would approach the group. She thinks he knew she was there, as her friends often said her name out loud. He would join in the conversation, talking to the other people she was interacting with. But since Guillory couldn’t see or hear his avatar, her friends appeared to be having a one-sided conversation. For Guillory, it was as if her harasser was trying to get around her block and force his virtual presence on her.

“The first two times it happened, it was annoying,” she said. “But then it just kept happening.”

It can feel real

Such virtual reality experiences can feel extremely real. Freeman said that in her research, people reported that having their avatar grabbed by another person’s avatar felt realistic, especially if they used full-body tracking to replicate their limb movements. One woman reported that another virtual reality user got up close to her avatar’s face as though kissing her – an action that frightened her, she told Freeman, because it felt like someone doing the same thing in the offline world.


“Because it’s immersive – that’s the essence of social virtual reality – these behaviors can feel realistic, which means they can feel damaging, like a physical threat,” Freeman said.

That was the case for Guillory: She developed anxiety about it and lost trust in people online, she said. She eventually spoke out on Twitter about the harassment, which helped.

“I still like it here, but I want people to do better,” she said.
