Discord is updating its Community Guidelines with a clause that prohibits sharing information the company deems “false or misleading” and “likely to cause physical or societal harm” if acted upon. The rule could apply to a wide range of information, but Covid-19 misinformation is the main example given. The chat service does not want to be a source of “anti-vaccination content” or of advice not accepted by the medical community, such as the use of unproven home remedies.
In short, Discord will not allow individuals to “post, promote, or organize communities around false or misleading health information that may cause harm,” wrote Alex Anderson, a senior platform policy specialist at Discord, in a blog post explaining the update.
Discord defines false or misleading health information as any health information that “directly and unequivocally contradicts the most recent consensus reached by the medical community,” and it offers a surprising amount of detail on what that means.
The following is a list of topics that Discord warns against making “false or misleading” claims about:
- the safety, side effects or effectiveness of vaccines;
- vaccine ingredients, development or approval;
- unapproved alternative treatments for the disease (including claims promoting harmful forms of self-medication, as well as claims advocating refusal of the vaccine or alternatives);
- the existence or prevalence of a disease;
- disease transmission or symptoms;
- health advice, opinions or mandates (including false claims regarding preventive measures and actions that could impede the resolution of a public health emergency);
- availability or eligibility for health services; and,
- content that involves a health conspiracy by malevolent forces (including claims that may cause social unrest or cause the destruction of critical infrastructure).
On its own, the list could be read as a blanket ban on expressing distrust of any local health mandate, or even on recommending “alternative” traditional medicines. However, Anderson says Discord will consider context and intent, and will not act unless it believes the messages are “likely to cause some form of harm.”
“This policy is not intended to punish polarizing or controversial viewpoints,” he wrote. “We allow the sharing of personal health experiences; opinions and commentary (as long as those opinions are based in fact and will not cause harm); good faith discussions of medical science and research; content intended to condemn or debunk health misinformation; and satire and humor that manifestly and deliberately intends to poke fun at false or misleading health claims.”
People with polarizing or controversial viewpoints are unlikely to agree with the statement that they aren’t being targeted, though it’s worth mentioning that Discord users who mostly stick to smaller groups may not notice any change, regardless of what they say on the platform.
When I spoke to Discord about privacy in 2019, the company told me it didn’t proactively monitor text and voice chat on any given server – with over 150 million monthly active users, how could it? Instead, moderators largely respond to user reports, which most likely come from large public servers.
I think it remains unlikely that Discord will scan the chat logs of every 20-person server for talk of Covid-19 vaccines and microchips, although there is precedent for proactive moderation on Discord. In 2018, after several publications reported that the relative privacy Discord offers had made it a white supremacist hideout, the company made a public effort to purge hate group servers. Following that example, it’s possible Discord will seek out and shut down servers that openly present themselves as anti-vax hubs, if such servers exist. (If I had to guess, I’d say they do.)
The new Discord policies will take effect on March 28. “Malicious impersonation” is also prohibited under the new guidelines, with the note that “satire and parody are acceptable,” and Discord has given itself permission to consider “relevant off-platform behavior” when acting on user reports, such as “hate group membership or association, illegal activity, and hateful, sexual, or other acts of violence.”
Discord also says it will crack down on “false, malicious, or spammy” reports. “If you are found to be reporting in bad faith, we may take action against your account,” the company says.
As someone who doesn’t use Discord as a forum for vaccine-related commentary one way or the other, the news mostly serves as a reminder that conversations on the platform aren’t entirely private, even on so-called private servers. It’s a moderated social network, so if someone files a report, Discord mods can check your chat logs and issue warnings, suspensions, or bans. For those who want Discord-like functionality without joining a social network, companies like TeamSpeak still offer paid private VOIP servers. (For now, I’m not too worried about the encryption of my D&D group’s endless planning conversations.)