Heroes of the Storm
Apr 19, 2018 10:00 am CT

Heroes of the Storm is counting on technology to silence abusive players

Earlier this week, Heroes of the Storm Community Manager Nate Valenta announced the game was rolling out new technology to assist in validating reports of abusive chat. While details about that technology were scarce, Valenta said it will increase the rate at which Blizzard can respond to such reports and take action against offensive and harassing language.

Originally Posted by Nate Valenta (Official Post)

Starting today, we’re implementing new technology that enhances our ability to validate the accuracy of reports. This will allow us to increase the rate that we issue account silences and ranked suspensions to players who are frequently and consistently reported for using offensive language. These actions are aimed at those who regularly use offensive language to harass, antagonize, and abuse one or more of their teammates. This type of behavior not only ruins the experience for those who are targeted, but also damages team morale, effectively degrading the fun for everyone in a match.

We will continue to issue weekly suspension and ban waves for non-participation and intentionally dying. You can check that forum thread regularly to keep up with our latest round of account actions. Additionally, if you’ve recently reported another player for going AFK, refusing to participate, or intentionally dying, be sure to keep an eye on the email inbox associated with your account for any updates regarding actions we’ve taken against that player as a result of your report.

We’d like to thank those of you who use the relevant in-game reporting options when you spot negative behavior in your matches and encourage you to continue doing so. Your actions have the largest positive impact on the health of the community.

Admittedly, there isn’t much information here. But it’s likely the announcement was made in response to the Heroes of the Storm community, which has been generally unhappy with the developers’ lack of communication. Last week’s community address and Reddit AMA were both held in direct response to that outcry, so this announcement may be an effort to continue that communication rather than revert to leaving players in the dark about important issues.

And rampant toxicity in Blizzard’s games is, of course, an important issue. Nothing is as demoralizing to a playerbase as the impression that nothing is being done about misbehavior. When repeat offenders continue slipping through the cracks, it isn’t difficult to form that impression. Hopefully this new technology, vague as the details may be, can turn that impression around.

It’s possible this “new technology” is connected to efforts to reduce toxicity in Overwatch, which has had similar problems. Jeff Kaplan recently spoke about Blizzard’s experiments with machine learning to curb toxicity in Overwatch, essentially teaching a system to recognize abusive language before it’s even reported. This may be a small step in that direction: Blizzard will continue to rely on player reports, using them as training data for the system. If the system can accurately detect abusive language in reported chat, the next step would be letting it detect abusive language on its own, leaving only the more complex issues and judgment calls, such as abusive gameplay, to human moderators.
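To make the speculated pipeline a little more concrete, here’s a minimal sketch of how a supervised text classifier trained on player reports might work. To be clear, this is purely illustrative: the training messages, labels, and threshold are all invented for the example, and nothing here reflects how Blizzard’s actual system is built.

```python
# Illustrative sketch only: a toy classifier for abusive chat, trained on
# hypothetical moderator-labeled report data. Not Blizzard's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: chat lines pulled from player reports and
# labeled by human moderators (1 = abusive, 0 = not abusive).
messages = [
    "gg wp everyone, nice game",
    "nice stun, that won us the fight",
    "uninstall the game you worthless idiot",
    "you are all trash, worst team ever",
]
labels = [0, 0, 1, 1]

# TF-IDF features feeding a logistic regression; a production system would
# use far more data and likely deeper models, but the train-on-reports
# feedback loop is the same idea.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Step 1 (per the announcement): score incoming reports to validate them faster.
reported_line = "worst team ever, all of you are trash"
score = model.predict_proba([reported_line])[0][1]
print(f"abuse probability: {score:.2f}")

# Step 2 (speculative): flag abusive chat proactively, before any report,
# routing only ambiguous cases to human moderators.
THRESHOLD = 0.8  # arbitrary illustrative cutoff
if score > THRESHOLD:
    print("auto-flag for action")
else:
    print("queue for human review")
```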

Of course, we have no real details in this case, leaving us only to speculate. That’s not unusual, however: Blizzard rarely details exactly how its moderation tools work, keeping the specifics close to its chest so abusive players can’t circumvent the system.
