Tuesday, May 21, 2013

Online Game Tries Tribunal to Increase Civility

Via Ars Technica: the company behind League of Legends, a massively multiplayer online game, has come up with an interesting way to manage abusive comments.

The moderation centers on a Tribunal process, designed and implemented by staff at the gaming company who hold advanced degrees in psychology and related fields. The Tribunal is described as follows:
"The Tribunal is basically a community-based court system where the defendants are players who have a large number of reports filed against them by other players. League players can log in to the Tribunal and see the cases that have been created against those players, viewing evidence in the form of sample chat logs and commentary from the players who filed the reports.

 Cases in the Tribunal were evaluated independently by both player juries and staff from Riot Player Support. In over a year’s worth of cases, Riot found that the community verdict agreed with the decision of the staff moderators 80 percent of the time. The other 20 percent of the time, the players were more lenient than Riot’s staff would have been (players were never harsher than the staffers).

Riot’s takeaway from the Tribunal experiment was that League players were not only unwilling to put up with toxic behavior in their community, but they were willing to be active participants in addressing the problem. This success inspired Riot to assemble a team of staffers that would make up its formal player behavior initiative, launched just over a year ago."
It's an interesting concept and implementation, and it's cool that the company devoted resources to taking civility and the safety of its participants seriously. Having blogged over the years and engaged in many different forums, I'll admit that I've often secretly wished for some sort of Internet Court to which one could appeal in Internet Debates, and which would authoritatively tell everyone that I was right and that someone else was being a total ass.

I've found that oftentimes, the presence of just a few people who continually act in a toxic manner can disrupt an entire community or conversation, and it's particularly frustrating when those who actually hold power, via their ownership of the forum, condone such behavior. In my experience, if I'm finding a person to be problematic, other people often are as well, even if they don't publicly say so. And, over the years, I've also acquired a certain skill in recognizing when a person is going to become a problem in a conversation and, if moderation is not taken seriously in a forum, when I should therefore disengage (or take other actions).

Riot's experiment analyzed player chat logs and gathered data that allowed the company to predict with "up to 80% accuracy" which players would go on to exhibit bad behavior. The article states that the company doesn't plan to pre-emptively ban such players, but the capacity for that sort of pattern recognition is interesting to note. After all, the language we use is often a good indicator both of how we think and of the extent to which we care about the effect our words have on others. (See also, On Sock-Puppeting and Entitlement).
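To make the idea of language-based pattern recognition concrete, here is a minimal sketch in Python of how one might flag players from chat logs. This is purely illustrative: the marker list, the threshold, and the data shapes are all invented for this example, and Riot's actual models (which reportedly reach "up to 80% accuracy") are not public and are surely far more sophisticated.

```python
# Hypothetical sketch: flag players whose chat history suggests a
# pattern of abuse. The marker list and threshold are toy values,
# not anything Riot has published.
TOXIC_MARKERS = {"noob", "uninstall", "trash"}  # invented toy list

def toxicity_rate(messages):
    """Fraction of a player's messages containing any toxic marker."""
    if not messages:
        return 0.0
    hits = sum(
        1 for msg in messages
        if any(marker in msg.lower() for marker in TOXIC_MARKERS)
    )
    return hits / len(messages)

def flag_players(chat_logs, threshold=0.3):
    """Return player names whose toxicity rate exceeds the threshold."""
    return sorted(
        player for player, msgs in chat_logs.items()
        if toxicity_rate(msgs) > threshold
    )

logs = {
    "alice": ["nice play", "gg wp"],
    "bob": ["you are trash", "uninstall the game", "gg"],
}
print(flag_players(logs))  # ['bob']
```

Even a crude keyword tally like this illustrates the point in the paragraph above: repeated word choices leave a measurable trail, which is presumably why chat logs were such useful evidence for the Tribunal.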
 
In addition to imposing bans on abusive players, the system also explored positive rewards for those who were civil: players could earn kudos from other players, which contributed to their overall Honor status.
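The kudos-to-Honor mechanic can be sketched as a simple peer-vouching tally. This is a guess at the shape of such a system, assuming a flat kudos count and an arbitrary threshold; the article only says that kudos from other players feed into an overall Honor status.

```python
# Hypothetical sketch of a kudos-to-Honor tally. The threshold and
# status names are invented for illustration.
from collections import Counter

class HonorTracker:
    def __init__(self):
        self.kudos = Counter()  # player name -> total kudos received

    def give_kudos(self, from_player, to_player):
        if from_player == to_player:
            raise ValueError("players cannot honor themselves")
        self.kudos[to_player] += 1

    def honor_status(self, player, threshold=5):
        """A player becomes 'Honorable' once enough peers vouch for them."""
        return "Honorable" if self.kudos[player] >= threshold else "Neutral"

tracker = HonorTracker()
tracker.give_kudos("alice", "bob")
print(tracker.honor_status("bob"))  # Neutral: one kudo is below the threshold
```

The appeal of a design like this is that standing comes only from other players, mirroring the Tribunal's reliance on the community itself rather than on staff alone.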

Lastly, the article didn't mention sexist and misogynistic commentary, which somewhat concerns me. The article alludes to homophobic and racial slurs, as well as the more general "bullying." Yet, cultivating hostile environments toward girls and women can sometimes be its own sort of hivemind, especially in gaming culture, and even in communities that more readily "see" the problematic nature of other types of abuse.
