
Addressing online harms in real time: AI meets in-game voice moderation in Call of Duty: Modern Warfare III

In a recent blog post, Activision have confirmed they will be using an artificial intelligence tool called ToxMod to automatically moderate "toxic" in-game behaviour in their upcoming title Call of Duty: Modern Warfare III.

Voice chat in multiplayer games (including, historically, COD in particular) has always been something of a Wild West in terms of toxicity (to put it mildly). Moderating such spaces fairly across hundreds of thousands of users is fundamentally difficult for text content, and has historically proven almost impossible for voice chat.

Indeed, some developers have gone as far as simply turning off voice-chat functionality entirely to sidestep the issue, pushing users towards alternative platforms such as Discord and Steam.

Using AI to scale moderation in real time in this way is a bold new step. Real-time voice-chat moderation may well be transformative for these online spaces, and could help the largest multiplayer platforms address the increasing regulatory pressure embodied in the EU's Digital Services Act and (particularly) the UK's forthcoming Online Safety Bill.
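Activision have not published ToxMod's internals, but for the technically curious, a real-time voice moderation pipeline plausibly works along the lines of the minimal Python sketch below: transcribe short windows of audio, score each transcript for toxicity, and flag high-scoring clips for human review. The function names, threshold and overall structure here are illustrative assumptions, not a description of ToxMod itself.

```python
# Illustrative sketch only: ToxMod's actual architecture is not public.
# One plausible real-time pipeline: transcribe each short audio window,
# score the transcript for toxicity, and escalate matches to human review.
# transcribe() and toxicity_score() are hypothetical stand-ins for a
# speech-to-text model and a text classifier.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.9  # hypothetical confidence cut-off


@dataclass
class ModerationEvent:
    player_id: str
    transcript: str
    score: float


def transcribe(audio_chunk: bytes) -> str:
    """Hypothetical speech-to-text call (e.g. a streaming ASR model)."""
    raise NotImplementedError


def toxicity_score(text: str) -> float:
    """Hypothetical classifier returning 0.0 (benign) to 1.0 (toxic)."""
    raise NotImplementedError


def moderate_chunk(player_id: str, audio_chunk: bytes) -> ModerationEvent | None:
    """Score one voice-chat window; return an event if it warrants review."""
    transcript = transcribe(audio_chunk)
    score = toxicity_score(transcript)
    if score >= REVIEW_THRESHOLD:
        # Flag for human review rather than auto-punishing: moderation
        # systems typically keep a person in the enforcement loop.
        return ModerationEvent(player_id, transcript, score)
    return None
```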

However, a large body of gamers will likely reject such moderation as "censorship" and simply move to unmoderated or less-moderated alternatives. That might well prove an acceptable compromise for developers and publishers concerned about online harms, even if it means increasing fragmentation of online spaces.

If you're interested in reading more from our team on these topics, please see our recent article on Online Safety and Games, our Online Safety comparative analysis, or our Gaming blog series.

Subscribe to our Tech Insights blog for updates and news from our experts.

Tags

gaming, online safety