AI Bots Are Policing Toxic Voice Chat in Videogames
BY SARAH E. NEEDLEMAN
In the videogame “Gun Raiders,” a player using voice chat could be muted within seconds of hurling a racial slur. The censor isn’t a human content moderator or fellow gamer—it is an artificial-intelligence bot.

Voice chat has been a popular part of videogaming for more than a decade, allowing players to socialize and strategize. According to a recent study, nearly three-quarters of those using the feature have experienced incidents such as name-calling, bullying and threats.
New AI-based software aims to reduce such harassment. Developers of the tools say the technology is capable of understanding most of the context in voice conversations and can differentiate between playful and genuinely dangerous threats.
If a player violates a game’s code of conduct, the tools can be set to automatically mute him or her in real time. The punishments can last as long as the developer chooses, typically a few minutes. The AI can also be programmed to ban a player from accessing a game after multiple offenses.
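To make the escalation described above concrete, here is a minimal sketch of how such a policy might be wired up. It is purely illustrative: the class, function and parameter names are hypothetical, the toxicity score is assumed to come from a separate speech-analysis model, and none of this reflects ToxMod’s actual software or any game studio’s implementation.

import time
from dataclasses import dataclass

# Assumed policy settings a studio would tune; values follow the article's description.
MUTE_SECONDS = 300       # "typically a few minutes"
BAN_THRESHOLD = 3        # offenses before a ban ("after multiple offenses")
TOXICITY_CUTOFF = 0.9    # assumed model score above which speech violates the code of conduct


@dataclass
class PlayerRecord:
    offenses: int = 0
    muted_until: float = 0.0
    banned: bool = False


class VoiceChatModerator:
    """Applies timed mutes in real time and escalates to a ban after repeat offenses."""

    def __init__(self) -> None:
        self.players: dict[str, PlayerRecord] = {}

    def handle_utterance(self, player_id: str, toxicity_score: float) -> str:
        # toxicity_score is assumed to be produced elsewhere by a speech-analysis model.
        record = self.players.setdefault(player_id, PlayerRecord())

        if record.banned:
            return "banned"
        if toxicity_score < TOXICITY_CUTOFF:
            return "ok"

        record.offenses += 1
        if record.offenses >= BAN_THRESHOLD:
            record.banned = True
            return "banned"

        record.muted_until = time.time() + MUTE_SECONDS
        return "muted"

    def can_speak(self, player_id: str) -> bool:
        record = self.players.get(player_id, PlayerRecord())
        return not record.banned and time.time() >= record.muted_until


# Example: a first violation mutes the player for a few minutes; repeat violations ban them.
if __name__ == "__main__":
    moderator = VoiceChatModerator()
    print(moderator.handle_utterance("player42", 0.95))  # -> "muted"
    print(moderator.can_speak("player42"))               # -> False while the mute lasts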
The major console makers—Microsoft Corp., Sony Group Corp. and Nintendo Co.—offer voice chat and have rules prohibiting hate speech, sexual harassment and other forms of misconduct. The same goes for Meta Platforms Inc.’s virtual-reality system Quest and Discord Inc., which operates a communication platform used by many computer gamers. None monitor the talk in real time, and some say they are leery of AI-powered moderation in voice chat because of concerns about accuracy and customer privacy.
The technology is starting to get picked up by game makers.
Gun Raiders Entertainment Inc., the small Vancouver studio behind “Gun Raiders,” deployed AI software called ToxMod to help moderate players’ conversations during certain parts of the game after discovering that players were violating its community guidelines more often than its staff had realized. “We were surprised by how much the N-word was there,” said the company’s operating chief and co-founder, Justin Liebregts.

His studio began testing ToxMod’s ability to accurately detect hate speech about eight months ago. Since then, the bad behavior has declined and the game is just as popular as it was before, Mr. Liebregts said, though he didn’t provide specific data.
Traditionally, game companies have relied on players to report problems in voice chat, but many players don’t bother, and each report requires investigation.
‘Gun Raiders’ uses a bot that temporarily mutes players who violate the game’s code of conduct. GUN RAIDERS ENTERTAINMENT
Developers of the AI-monitoring technology say gaming companies may not know how much toxicity occurs in voice chat or that AI tools can identify and react to the problem in real time.
“Their jaw drops a little bit” when they see the behaviors the software can catch, said Mike Pappas, chief executive and co-founder of Modulate, the Somerville, Mass., startup that makes ToxMod. “A literal statement we hear all the time is: ‘I knew it was bad. I didn’t know it was this bad.’”