How to adapt your content moderation strategy against fake news?

Why fake news is becoming a major content moderation issue

In recent years, the threat posed by fake news has become ubiquitous. Around 86% of online users report having already encountered, and been duped by, fake news content. As this false or misleading information reaches everyone's hands, it causes significant economic and social disruption.

Organizations and businesses are affected too. They have started to consider the impact of fake news on their own platforms and infrastructure, and are applying preventive measures through their content moderation systems.

Content moderation managers have long focused on offensive, harmful and hateful content. Now they are increasingly rethinking their moderation strategy to address misinformation (the unintentional spread of false claims) and disinformation (deliberately deceptive news-spreading campaigns).

Take the case of Facebook or Twitter. After suffering political scandals and public backlash (such as Cambridge Analytica), they are ramping up their features against fake news posts. This is also the result of intense regulatory pressure.

Governments and watchdog organizations are exerting greater control over digital content platforms. For example, the EU's Digital Services Act, which entered into force in November 2022, requires digital platforms to address illegal content, moderate flagged disinformation campaigns and be transparent about these measures.

In response to these regulations, Facebook and Twitter have strengthened their content moderation capabilities and added flagging features against misinformation and disinformation.

These risks also apply to companies like yours that run social or brand platforms engaging communities in online conversation. Any platform that allows user-generated content can become a vector for fake news.

By taking moderation measures against this threat, you protect yourself from:

  • Legal risks and fines from anti-fake news regulations.
  • Controversies from cyberpropaganda campaigns dealing with false and politically charged topics.
  • Brand and PR damage from the spread of defamatory content or targeted attacks on your company.
  • User churn caused by the spread of deceptive or inaccurate information.

What are the different fake news threats in content moderation?

Not all fake news threats are equal. They differ both in the intention behind the campaigns and in the technology employed.

Unintended misinformation

The first and most common type of fake news threat for companies is the result of users unintentionally sharing or posting inaccurate information, also known as misinformation.

Platforms’ recommendation algorithms are biased toward high-engagement posts, and it turns out that type of content is also often false. With social media encouraging clickbait content, false news stories are 70% more likely to spread than true stories are.

So it's ever more likely that some of your users will promote inaccurate information that misleads your customers and damages your company's reputation. This information can come from a variety of sources:

Genuine journalistic error

Journalists are at the forefront when it comes to creating and sharing news. This doesn't make them immune to mistakes and inaccuracies. It doesn't help that some news outlets seek to generate buzz with psychologically enticing headlines and claims, or publish real-time news without rigorous prior verification.

That's what happened to AFP, which announced the death of Martin Bouygues, a well-known French entrepreneur, as soon as it picked up the information. It later turned out that the report referred to another French citizen with the same name, not Martin Bouygues himself. This kind of misinformation can undermine the trust your users place in your platform's content.

Satirical news websites

Some media organizations publish intentionally satirical and humorous news to amuse their readers. There are even satirical news websites dedicated to creating this kind of viral content, like ClickHole in the US or LeGorafi in France.

They can be entertaining, but taken at face value they can lead readers to incorrect conclusions.

Marketing claims and clickbait

Marketers and advertisers also play a significant role in spreading unverified claims, especially on social and brand platforms. Through echo chamber mechanisms, social media tend to amplify distorted or misleading news and posts. Some marketing content can also impersonate genuine news, which can harm your users and discredit your company.

Anti-brand campaigns

The second threat is of utmost importance for your marketing and PR departments. It involves ill-intentioned organizations that attack your brand reputation through wide-scale campaigns. They can rely on influencers or news-spreading bots to push damaging narratives across online channels. They can also leverage these tools to infiltrate your user-generated platform and pollute it with defamatory claims about your company. These attackers can be activists, financially motivated actors, or competitors.

For example, US-based company Wayfair was targeted by conspiracy theorists who claimed that furniture names listed on its website referred to missing or kidnapped girls. These statements spread across many online channels and discredited the company. Pfizer and BioNTech, for their part, had to fight back against a disinformation campaign aimed at their COVID-19 vaccine: influencers with hundreds of thousands of followers were asked to spread misleading information casting suspicion on the product.

Cyberpropaganda

Online influence campaigns can involve not only organizations but also political and extremist groups. That's what cybersecurity experts call cyberpropaganda: the use of disinformation for influence and political goals.

Cyberpropaganda propagators seek to capture the attention of the public through any promising communication platform they can find. Unmoderated or poorly moderated forums, applications, and websites provide the ideal medium for disseminating disinformation.

The consequences? Your user-generated platform might become a communication channel for emotionally, politically, or socially charged content. This can create tensions among your users and polarize your community. It can also bring its fair share of hateful and offensive speech. As you can guess, this does not bode well for your company's reputation.

Phishing attacks

Malicious individuals also leverage brand, corporate or social platforms to launch cyberattacks. Scammers and cybercriminals use user-generated channels to spread false, click-enticing news and profit from unsuspecting readers. They aim to steal from individuals by luring them with misleading content and pushing them to enter their personal or financial information on unverified websites (a technique known as phishing).

When your platform is open to the public, this is a fairly common threat, and it can significantly harm your users when these theft attempts succeed.

That's a common occurrence on social media platforms. Facebook, for example, has repeatedly had to deal with conversation groups run by scammers. These groups shared fake news tied to political or ideological propaganda such as QAnon with almost 200,000 users, and the accounts behind them were linked to Vietnamese and Bangladeshi organizations seeking to profit from disinformation.

How to implement fake news moderation on your UGC platform

In light of all these threats, managers need to expand their content moderation efforts. They need to be able to monitor and react in real time to misinformation and disinformation spread within their public or internal content platforms. Here’s a step-by-step approach to implementing a fake news moderation system:

#1 Choosing a moderation method

As you might know, there are many different moderation methods to detect, verify and remove unwanted content, and the same applies to moderating fake news on your UGC platform. The choice depends on your moderation goals and needs (a minimal routing sketch follows the list):

  • Pre-moderation: do you want to verify the truthfulness of every post and news item submitted by users before it is published on your platform? This is the surest but also the slowest moderation approach.
  • Post-moderation: do you want to check content's reliability after it has been posted? This respects every user's right to post.
  • Reactive moderation: do you want to rely on users flagging fake news to keep it from spreading? This is the most common and economical way to prevent disinformation.
  • Distributed moderation: do you want to allow users to vote and deliberate on the removal of potentially fake content? This enables fairer but slower decisions.
  • Automated moderation: do you want to rely on predefined rules and automatic systems to eliminate false claims and statements? This is the most effective approach.
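
To make these options concrete, here is a minimal sketch of how a submitted post could be routed according to the method you pick. The function names and the simplistic fact_check placeholder are illustrative assumptions, not any particular product's API.

```python
from enum import Enum, auto

class ModerationMethod(Enum):
    PRE = auto()        # verify every post before it is published
    POST = auto()       # publish first, review afterwards
    REACTIVE = auto()   # review only content that users flag
    AUTOMATED = auto()  # rule- or model-based filtering on every submission
    # (distributed moderation is omitted here for brevity)

def fact_check(post: str) -> bool:
    """Placeholder check: plug in your real fact-checking service here."""
    return "miracle cure" not in post.lower()

def handle_post(post: str, method: ModerationMethod, flagged: bool = False) -> bool:
    """Return True if the post may be (or remain) published."""
    if method is ModerationMethod.PRE:
        return fact_check(post)                       # block publication until verified
    if method is ModerationMethod.POST:
        return True                                   # publish now, queue a later review
    if method is ModerationMethod.REACTIVE:
        return fact_check(post) if flagged else True  # only check reported content
    return fact_check(post)                           # AUTOMATED: check everything

# Example: pre-moderation rejects a dubious post before it goes live.
print(handle_post("This miracle cure will shock doctors", ModerationMethod.PRE))  # False
```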

#2 Defining your moderation scope

Which types of content should be removed, and which shouldn't? This question is key when moderating posts for misinformation and disinformation. Besides being hard to detect, fake news can raise moral, ethical, and political issues, and the decisions may depend heavily on your brand values and your stance on free speech.

To define your moderation scope, you have to work through difficult questions like the following (a sketch of how to encode the answers as an explicit policy follows the list):

  • Do you want to accept only claims backed by verified sources, or also opinion posts?
  • Do you want to remove any politically charged content from your platforms and accept only consensual expression? It is hard to judge the reliability of ideological pieces.
  • Do you want to exempt satirical content from your moderation decisions or include it? Humorous texts can easily be mistaken for fake content.
  • Do you want to remove posts on uncertain or controversial topics? Some topics simply spark heated debate (like COVID-19 at the beginning of the pandemic).
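
One lightweight way to make your answers operational is to encode them as an explicit policy table that your moderation pipeline consults. The categories and actions below are illustrative assumptions to adapt to your own brand values.

```python
# A minimal, illustrative moderation-scope policy. The categories and the
# "allow" / "review" / "remove" actions are placeholders, not a standard taxonomy.
MODERATION_SCOPE = {
    "verified_source_claim": "allow",   # claims backed by a cited, trusted source
    "opinion_post":          "allow",   # personal opinions, clearly framed as such
    "political_content":     "review",  # politically charged posts go to a human queue
    "satire":                "review",  # satire is easily mistaken for a factual claim
    "controversial_topic":   "review",  # heated topics (e.g. early-pandemic COVID-19)
    "unsourced_claim":       "remove",  # factual-sounding claims with no source at all
}

def decide(category: str) -> str:
    """Look up the action for a post category, defaulting to human review."""
    return MODERATION_SCOPE.get(category, "review")

print(decide("satire"))            # -> "review"
print(decide("unknown_category"))  # -> "review" (safe default)
```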

#3 Adding anti-fake news features for users

Once you have settled on your moderation approach, it's time to inform your users and involve them in the strategy. Users have to be an integral part of the process. You can involve them by:

  • Adding informative banners and notices to potentially false posts or posts with dubious references.
  • Adding flagging features that let users alert you to false or misleading content (see the sketch after this list).
  • Enabling your users to filter and sort recommended content.
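
As a rough illustration of the flagging feature, the sketch below records user reports and escalates a post to review once it crosses a threshold. The threshold value and data structures are assumptions to adapt to your platform.

```python
from collections import defaultdict

FLAG_THRESHOLD = 3  # assumed value: escalate after three independent reports

# post_id -> set of user_ids who reported it (a user counts only once per post)
flags: dict[str, set[str]] = defaultdict(set)

def flag_post(post_id: str, user_id: str) -> bool:
    """Record a user report; return True when the post should go to human review."""
    flags[post_id].add(user_id)
    return len(flags[post_id]) >= FLAG_THRESHOLD

# Example: the third distinct reporter pushes the post into the review queue.
flag_post("post-42", "alice")
flag_post("post-42", "bob")
print(flag_post("post-42", "carol"))  # True -> notify moderators, add a warning banner
```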

#4 Leveraging automatic fact-checking solutions

Automated solutions have become valuable weapons for content moderation managers, and they can also help fight misinformation and disinformation on your UGC platform. Fact-checking solutions seek to identify false claims as accurately as possible.

According to a related study, they can involve different approaches (a toy sketch of the machine-learning approach follows the list):

  • Language and semantics: using linguistic features (like the presence of exclamation points) or text-to-text analysis to spot fake news.
  • Topic-agnostic: based on off-text signals like the name of the author, the number of ads…
  • Knowledge-based: relying on a rule-based understanding of facts and words.
  • Machine-learning: training algorithms on specific data sets to spot fake news.
  • Hybrid: combining several of these approaches to maximize effectiveness.
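
To illustrate the machine-learning approach in isolation, here is a toy sketch that trains a linear classifier on TF-IDF features. The handful of labelled headlines is invented purely for illustration; a production system would need a large, curated data set and proper evaluation.

```python
# Toy illustration of the machine-learning approach: a linear classifier over
# TF-IDF features. The four training headlines are invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "SHOCKING!!! Miracle cure doctors don't want you to know",
    "You won't BELIEVE what this celebrity just said",
    "Central bank holds interest rates steady, citing inflation data",
    "City council approves new budget after public consultation",
]
train_labels = [1, 1, 0, 0]  # 1 = likely fake/clickbait, 0 = likely reliable

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["BREAKING!!! Secret cure finally revealed"]))  # likely [1]
```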

Adding fact-checking capabilities to content moderation: Buster.Ai

Buster.Ai strengthens your content moderation strategy with fact-checking capabilities. It is powered by a deep-learning language model: while traditional language models rely on word-for-word correlations, it can understand sentences, connect them to trusted sources and assess a claim's reliability.

By adding Buster.Ai's API to your moderation system, you can assign a reliability score to every post and piece of text submitted by users on your platform, connect submitted texts to sources supporting or refuting the claim, and determine a match score between the claim and verified sources. These capabilities also work with groups of text documents and can rely on predefined sources.
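
As a rough illustration only, the snippet below shows how such a fact-checking API could be wired into a moderation pipeline. The endpoint URL, authentication scheme and response field are hypothetical placeholders, not Buster.Ai's documented interface.

```python
import requests

# Hypothetical endpoint and fields, shown only to illustrate the integration pattern;
# refer to the vendor's own documentation for the real API.
FACT_CHECK_URL = "https://api.example-factcheck.com/v1/claims/verify"
API_KEY = "YOUR_API_KEY"

def reliability_score(claim: str) -> float:
    """Send a claim to a fact-checking API and return a 0-1 reliability score."""
    response = requests.post(
        FACT_CHECK_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"claim": claim},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["reliability"]  # hypothetical response field

# Example: hide or flag posts whose score falls below an assumed threshold.
if __name__ == "__main__":
    score = reliability_score("Drinking seawater cures the flu.")
    print("needs review" if score < 0.5 else "looks reliable")
```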

With Buster.Ai, you can thus moderate in real time the flow of information coming from your user platform. Contact us to prevent false and harmful content from spreading among your community.