
voice tech

Four Perspectives on How to Combat Voice Chat Toxicity in Games: A Look Back at GDC 2023

Otto Söderlund

Mar 28, 2023

4 min read

The Game Developers Conference (GDC) was back in full force for 2023. This came with new product announcements, plenty of content to consume, and nearly 30,000 gaming industry professionals gathered in San Francisco. It also gave me the opportunity to gather wide-ranging perspectives on the persistent issue of toxicity in online games.


Over the course of the week, I spoke with more than two dozen people about how they think the industry should respond.

Developers, product managers, and trust and safety professionals are increasingly aware of the negative impact that toxic behavior has on player experience, particularly in voice and text chat communications. Reports from the ADL, Pew Research Center, The Wall Street Journal, and Speechly all confirm that the problem is significant and complex.

However, opinions differ about the best approach to address the issue. Last week, most of the perspectives I heard fell into one of four categories: toxic islands, user control, moderator assistance, and proactive moderation.

1. Create Toxic Islands

The toxic island approach is based on three ideas: persistently toxic players are few in number, they are likely to be okay with toxicity from others, and keeping them away from mainstream players will reduce overall harm. When a player receives multiple reports of toxic behavior, they are sequestered on a “toxic island” and can only play with others who carry the same stigma.

While this approach doesn’t reduce the incidence of toxicity, it decreases the frequency with which mainstream players are subjected to bad behavior. The toxicity still exists. It is just more likely to be directed at other players who are also labeled as toxic. And it will almost certainly reduce the number of complaints submitted, which is a relief to many moderation and customer service teams. This method has also been used by some game makers to address alleged cheaters.
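To make the mechanics concrete, here is a minimal sketch of how report-based pool assignment could work. The threshold, data model, and pool names are hypothetical illustrations, not any studio’s actual matchmaking logic:

```python
# Illustrative sketch only: pool assignment based on confirmed toxicity reports.
# The threshold, data model, and pool names are hypothetical, not from any real game.
from dataclasses import dataclass

TOXIC_ISLAND_THRESHOLD = 3  # confirmed reports before a player is sequestered

@dataclass
class Player:
    player_id: str
    confirmed_reports: int

def matchmaking_pool(player: Player) -> str:
    """Return the matchmaking pool a player should be placed in."""
    if player.confirmed_reports >= TOXIC_ISLAND_THRESHOLD:
        return "toxic_island"  # matched only with similarly flagged players
    return "mainstream"

for p in (Player("alice", 0), Player("bob", 5)):
    print(p.player_id, "->", matchmaking_pool(p))
```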

2. Let the User Control the Experience

Some developers believe that the problem can be solved with more user controls. They argue that encouraging users to play exclusively with friends, and enabling them to mute toxic players in other circumstances, provides an adequate response to the toxicity found in gaming communications.
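As a rough illustration (not any particular game’s implementation), selective muting often boils down to a per-listener mute list that is consulted when routing voice packets:

```python
# Minimal sketch of client-side selective muting with a hypothetical data model.
# Each listener keeps their own mute list, which is consulted when routing audio.
muted_by = {
    "alice": {"mallory"},  # alice has muted mallory
    "bob": set(),          # bob hears everyone
}

def should_deliver(speaker: str, listener: str) -> bool:
    """Decide whether a voice packet from `speaker` is played for `listener`."""
    return speaker not in muted_by.get(listener, set())

for listener in ("alice", "bob"):
    print(listener, "hears mallory:", should_deliver("mallory", listener))
```

Because each listener keeps their own list, two players in the same session can end up hearing different conversations.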

This is attractive for some game titles because simply adding muting functionality is far easier than managing moderation technology and human-in-the-loop processes. Granted, selective muting can significantly impact the game experience for every player in a session if everyone is not hearing the same communications. It also raises the question: should gamers be solely responsible for managing toxicity while playing games online?

3. Use AI to Assist Moderators

Others argue that AI solutions could be employed to improve the moderation process. This group believes that, for a variety of reasons, proactive voice chat monitoring is impractical, and it raises concerns about false positives that could bog down the moderation process.

However, they also acknowledge that customer service and community management teams are often overwhelmed by user reports, which can lead to lengthy response times, inconsistent enforcement, and little impact on the behavior of rule-breaking players. Their argument favors AI solutions that analyze voice chat exchanges and help human moderators become more efficient, accurate, and consistent in their decision-making.

A key starting point is to record and/or transcribe voice chats in a way that protects player privacy while still producing data that specially trained AI tools can analyze. That data also matters because, without it, moderators have no evidence to draw on when investigating complaints.
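As a rough sketch of what such a moderator-assist pipeline might look like, the example below transcribes clips, scores them, and queues likely violations for human review. The `transcribe_audio` and `toxicity_score` functions and the threshold are placeholders, not a real speech-to-text or classification API:

```python
# Sketch of a moderator-assist pipeline: transcribe clips, score them, and queue
# likely violations for human review. `transcribe_audio` and `toxicity_score`
# are placeholders, not a real speech-to-text or classification API.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.8  # hypothetical cutoff for sending a clip to moderators

@dataclass
class ReviewItem:
    clip_id: str
    transcript: str
    score: float

def transcribe_audio(clip_id: str) -> str:
    # Placeholder: a real pipeline would call a speech-to-text service here.
    return f"example transcript for {clip_id}"

def toxicity_score(transcript: str) -> float:
    # Placeholder: a real pipeline would run a trained toxicity classifier here.
    return 0.9 if "example" in transcript else 0.1

def triage(clip_ids: list[str]) -> list[ReviewItem]:
    """Return the clips a human moderator should look at, highest score first."""
    items = []
    for clip_id in clip_ids:
        transcript = transcribe_audio(clip_id)
        score = toxicity_score(transcript)
        if score >= REVIEW_THRESHOLD:
            items.append(ReviewItem(clip_id, transcript, score))
    return sorted(items, key=lambda item: item.score, reverse=True)

print(triage(["clip-001", "clip-002"]))
```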

4. Use AI to Proactively Flag Toxic Behavior

A fourth group suggests that the problem can only be solved with proactive monitoring and enforcement. They point out that a key challenge in combating toxicity is the slow feedback loop of the moderation process. Today, game makers only know about voice chat toxicity if a complaint is submitted, which means they are always going to be two steps behind the bad actors.

Monitoring could be implemented in real time, similar to text chat. The process is a bit different, however, as redacting bad words is not practical for voice chat communications. In addition, voice chat toxicity can only be accurately identified when the context of the game and the conversation is taken into account. This approach is about rapid intervention, but it is also about having a complete view of the scale, scope, and nature of toxicity in the game’s community.

AI can be used to automate moderation decisions, such as automatically muting a player or kicking them from a game. Proactive moderation also helps flag toxic behavior that should be reviewed by human moderators.
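One simple way to picture this is a threshold-based policy that maps a real-time toxicity score to an action. The score source, thresholds, and actions below are hypothetical examples, not a specific product’s rules:

```python
# Sketch of a threshold-based policy for proactive moderation. The score source,
# thresholds, and actions are hypothetical examples, not a specific product's rules.
def proactive_action(toxicity: float) -> str:
    """Map a real-time toxicity score in [0, 1] to a moderation action."""
    if toxicity >= 0.95:
        return "kick"             # remove the player from the session
    if toxicity >= 0.85:
        return "mute"             # temporarily mute the player's voice chat
    if toxicity >= 0.60:
        return "flag_for_review"  # send the clip to a human moderator
    return "no_action"

for score in (0.3, 0.7, 0.9, 0.97):
    print(score, "->", proactive_action(score))
```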

This is important because almost all voice chat moderation today relies on user-generated complaints to kick off an investigation. We know from our research that only a small percentage of toxic incidents are followed by a complaint. Without some form of proactive monitoring, a lot of bad behavior falls through the cracks and is never acted upon.

Where We Are Headed

Combating cheating remains the top priority for game studios. Addressing toxicity in chat communications has emerged as a close second. Game developers have invested heavily in anti-cheating technologies and policies, as well as text chat moderation, but voice chat moderation has received significantly less attention. 

That inaction often stems from game makers not knowing what course of action to take, a lack of tools to proactively address the problem, and concerns about cost. The result is a voice chat moderation gap for online games.

Many companies at GDC wanted to talk to Speechly because several top studios engaged us over the past year to help address those issues. We have learned a lot about the gaps and the nature of the problem from our work and many conversations with industry professionals. If you are considering how to best combat voice chat toxicity in your game, we would be happy to share our learnings.

About Speechly

Speechly is a YC-backed company building tools for speech recognition and natural language understanding. Speechly offers flexible deployment options (cloud, on-premise, and on-device), highly accurate custom models for any domain, and the privacy and scalability to handle hundreds of thousands of hours of audio.
