MCO 427

Platforms & Policies on Misinformation

A look at Reddit

Reddit points to Rules 2 and 5 of its content policy as its way of addressing misinformation. Rule 2 requires that users post “authentic content” and not engage in “content manipulation.” Rule 5 prohibits users from impersonating another person or entity. While searching for a dedicated misinformation policy, I found discussion of the fact that an earlier policy had been removed by Reddit due to misuse by users.

Bozarth et al. (2023) note that Reddit tends to task its volunteer moderators with policing misinformation throughout the subreddit communities. The moderator code of conduct explains that moderators’ role is essential to keeping their communities positive places. Reddit also states that users can individually fact-check stories they see, which puts them in partnership with the volunteer moderators and Reddit employees.

In December 2019, a report detailed a six-year Russian disinformation campaign that attempted to infiltrate Reddit, among other platforms; all of the accounts involved were banned and their content was removed.

Reddit’s three tiers of moderation operate at the site, community, and user levels. At the site level, the company enforces its content policy; violations result in post removal and account bans. At the community level, volunteer moderators police individual subreddits based on additional rules set for each space. At the user level, individual users can vote and comment on posts and call out misinformation they see.

Based on my experience on the platform, I would say that Reddit does a moderate job of combating the misinformation uploaded to it. The fact that individual users and moderators can flag and correct inaccurate information while bringing it to the company’s attention goes a long way toward stopping “disinformation campaigns” before they take root.

One improvement Reddit could make would be to update its policy on misinformation. Rule 5 of the content policy does not elaborate enough on what constitutes misinformation or on Reddit’s stance toward it. Additionally, the moderator code of conduct should be updated with guidance on how moderators can report misinformation they see. Reddit has done a decent job of allowing people to express their ideas and opinions, but there are always improvements to be made.

A look at Discord

Rule 17 of Discord’s community guidelines directly addresses misinformation. In it, Discord notes that misleading content will be removed if it has the potential to cause harm to the public. Also linked is the misinformation policy explainer, which addresses health misinformation and civic disruption. Other policies, such as the deceptive practices policy and the identity and authenticity policy, go into more detail on techniques like phishing, fraud, and impersonation, and the consequences for engaging in them. Discord notes that sites such as Snopes and PolitiFact are used to fact-check claims made on the platform.

Discord has come under fire in the past few years for incidents like the classified document leak. Discord claimed that the content was never reported to the company and that the server had little oversight. Because of this, Discord implemented its warning system to better combat misinformation and hateful content.

Discord first notifies users of any violation through a direct message, then applies account suspensions or bans depending on the severity of the violation. Users who feel they were unjustly penalized can submit an appeal request.

Discord likewise has three tiers of moderation: the platform, communities, and individual users. Any user can submit a report directly through the application (desktop or mobile) to moderators or the support team. The platform is governed by the community guidelines, while individual communities are policed by volunteer moderators.

Based on my experience with the platform, I would say that Discord also does a moderate job of combating the misinformation uploaded to it. Users and moderators can flag content they see, and it is especially helpful that Discord has a dedicated tool for making those reports. Discord has also built an easily accessible policy hub detailing what content is not tolerated on the platform, including a misinformation policy.

Discord could improve how it handles private servers. As the classified document leak showed, users should not have unrestricted control of servers with no oversight at all. Additionally, Discord should revisit its data storage policies for cases involving criminal investigations. With improvements like these to standard policy, incidents such as major document leaks could presumably be caught at an earlier stage, before significant damage is done.