Content moderation has become a key capability for social and brand platforms to comply with regulations and protect their reputation. With fake news spreading everywhere, content moderation managers should also strengthen their defenses against false and deceptive posts. Let’s see why and how.
In recent years, the threat posed by fake news has become ubiquitous. Around 86% of online users report having already encountered and been duped by fake news. As this false or deceptive information reaches everyone's hands, it causes significant economic and social disruption.
Organizations and businesses are affected as well. They have started to consider the impact of fake news on their own platforms and infrastructure, and are applying preventive measures through their content moderation systems.
Content moderation managers have long focused on offensive, harmful, and hateful content. Now they are increasingly rethinking their moderation strategy to address misinformation (the unintended spread of false claims) and disinformation (deliberate campaigns to spread deceptive news).
Take the case of Facebook or Twitter. Having suffered political scandals and public backlash (such as Cambridge Analytica), they are now ramping up their features against fake news posts. This is also the result of intense regulatory pressure.
Governments and watchdog organizations are exerting greater control over digital content platforms. For example, in November 2022, the EU put into force its Digital Services Act to ensure digital platforms address illegal content. It forces them to moderate flagged disinformation campaigns and to be transparent about these measures.
In response to these regulations, Facebook and Twitter have strengthened their content moderation capabilities and added flagging features against misinformation and disinformation.
These risks can also apply to companies like yours that run social or brand platforms engaging communities in online conversation. Any platform that allows user-generated content can become a vector for fake news.
By taking moderation measures against this threat, you protect yourself from:
Not all fake news threats are equal. Their severity depends both on the intention behind these campaigns and on the technology employed.
The first and most common type of fake news threat for companies results from users unintentionally sharing or posting inaccurate information, also known as misinformation.
Platforms’ recommendation algorithms are biased toward high-engagement posts, and it turns out that type of content is also often false. With social media encouraging clickbait content, false news stories are 70% more likely to spread than true stories are.
So it’s ever more likely that some of your users will promote inaccurate information that misleads your customers and damages your company's reputation. This information can come from a variety of sources:
Journalists are at the forefront of creating and sharing news. That doesn’t make them immune to mistakes and inaccuracies. It doesn’t help that some news outlets seek to generate buzz with psychologically enticing headlines and claims, or promote real-time news without rigorous prior verification.
That’s what happened to AFP, which announced the death of Martin Bouygues, a famous French entrepreneur, as soon as it picked up the information. It later emerged that the report referred to another French citizen with the same name, not Martin Bouygues himself. This kind of misinformation can undermine the trust your users place in your platform’s content.
Some media organizations intentionally publish satirical and humorous news to amuse their readers. There are even satirical news websites dedicated to creating this viral content, like ClickHole in the US or LeGorafi in France.
They can be entertaining, but taken at face value they can also lead readers to incorrect conclusions.
Marketers and advertisers also play a significant role in spreading unverified claims, especially on social and brand platforms. Through echo chamber mechanisms, social media tends to amplify distorted or misleading news and posts. Some marketing content can also impersonate genuine news, which can harm your users and discredit your company.
The second threat is of utmost importance for your marketing and PR departments. It involves ill-intentioned organizations that attack your brand reputation through wide-scale campaigns. They can rely on influencers or news-spreading bots to push damaging narratives across online channels. They can also leverage these tools to infiltrate your user-generated platform and pollute it with defamatory claims about your company. These actors may be activists, financially motivated groups, or competitors.
For example, US-based company Wayfair was targeted by conspiracy theorists. They claimed that furniture names listed on its website referred to missing or kidnapped girls. These statements spread across many online channels and discredited the company. Other companies, Pfizer and BioNTech, fought back against a disinformation campaign aimed at their COVID-19 vaccine. Several influencers with hundreds of thousands of followers were approached to spread misleading information casting suspicion on the product.
Online influence campaigns can involve not only organizations but also political and extremist groups. That’s what cybersecurity experts call cyber propaganda, or the use of disinformation for influence and political goals.
Purveyors of cyber propaganda seek to capture the attention of the public through any promising communication platform they can find. Unmoderated or poorly moderated forums, applications, and websites provide the ideal medium for disseminating disinformation.
The consequences? Your user-generated platform might become a communication channel for emotionally, politically, or socially-charged content. This can create tensions among your users and polarize your community. It can also come with its fair share of hateful and offensive speech. As you have guessed, this does not bode well for your company's reputation.
Malicious individuals also leverage brand, corporate, or social platforms to launch cyberattacks. Scammers and cybercriminals use user-generated channels to spread false, click-enticing news and profit from unsuspecting minds. They seek to steal from individuals by luring them with misleading content and pushing them to enter their personal or financial information on unverified websites, a practice known as phishing.
When your platform is publicly open to contributions, this is a fairly common threat, and it can significantly harm your users when these theft attempts succeed.
That’s a common occurrence on social media platforms. Facebook, for example, has repeatedly had to deal with conversation groups run by scammers. They shared fake news tied to political or ideological propaganda like QAnon with almost 200,000 users. These accounts were linked to organizations in Vietnam and Bangladesh seeking to profit from disinformation.
In light of all these threats, managers need to expand their content moderation efforts. They need to be able to monitor and react in real time to misinformation and disinformation spread within their public or internal content platforms. Here’s a step-by-step approach to implementing a fake news moderation system:
As you might know, there are many different moderation methods to detect, verify, and remove unwanted content. That’s also true for moderating fake news on your UGC platform. The choice depends on your moderation goals and needs:
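For illustration only, here is a minimal Python sketch contrasting two common patterns, pre-moderation (content is held until it passes a check) and post-moderation (content is published first and flagged items are reviewed afterwards). The class, function, and keyword check below are generic placeholders, not a prescribed workflow:

```python
from dataclasses import dataclass, field
from typing import List

def looks_suspicious(text: str) -> bool:
    # Placeholder check: a real system would call a classifier or
    # fact-checking service instead of a keyword heuristic.
    return "miracle cure" in text.lower()

@dataclass
class ModerationQueue:
    published: List[str] = field(default_factory=list)
    held_for_review: List[str] = field(default_factory=list)

    def pre_moderate(self, post: str) -> None:
        # Pre-moderation: nothing goes live until it passes the check.
        if looks_suspicious(post):
            self.held_for_review.append(post)
        else:
            self.published.append(post)

    def post_moderate(self, post: str) -> None:
        # Post-moderation: publish immediately, then queue flagged
        # items for human review after the fact.
        self.published.append(post)
        if looks_suspicious(post):
            self.held_for_review.append(post)

queue = ModerationQueue()
queue.pre_moderate("This miracle cure ends all illness overnight!")
queue.post_moderate("Our community meetup is next Tuesday.")
print(len(queue.published), len(queue.held_for_review))  # -> 1 1
```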
Which types of content should be removed, and which should stay up? This question is key when moderating posts for misinformation and disinformation. Besides being hard to detect, fake news can raise moral, ethical, and political issues. Decisions may depend heavily on your brand values and your stance on free speech.
To figure out your moderation scope, you have to deal with difficult questions like:
Once you’ve settled on your moderation approach, it’s time to inform your users and involve them in this moderation strategy. Users should be an integral part of the process. You can enlist their help by:
Automated solutions have become valuable weapons for content moderation managers. These technologies can also help fight misinformation and disinformation on your UGC platform. Fact-checking solutions seek to identify misinformation and disinformation as accurately as possible.
They can involve several different approaches, according to a related study:
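As a concrete illustration of the general idea, not a method drawn from that study, the sketch below matches a user claim against a small set of trusted reference sentences using the open-source sentence-transformers library. The model name, the reference snippets, and the scoring logic are assumptions made for the example:

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# Small, publicly available embedding model; the choice is illustrative.
model = SentenceTransformer("all-MiniLM-L6-v2")

# In a real system these snippets would come from vetted, up-to-date sources.
trusted_sources = [
    "Health authorities state the vaccine was tested in large clinical trials.",
    "The company reported stable quarterly revenue in its latest filing.",
]

claim = "The vaccine skipped clinical trials entirely."

claim_vec = model.encode(claim, convert_to_tensor=True)
source_vecs = model.encode(trusted_sources, convert_to_tensor=True)

# Cosine similarity only says the claim and a source talk about the same
# topic; a separate support/refute step (e.g. an entailment model) is
# still needed before labelling the claim true or false.
scores = util.cos_sim(claim_vec, source_vecs)[0]
best_idx = int(scores.argmax())
print(f"Closest source: {trusted_sources[best_idx]}")
print(f"Similarity: {float(scores[best_idx]):.2f}")
```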
Buster.Ai strengthens your content moderation strategy with fact-checking capabilities. Buster.Ai is a deep-learning-powered language model. While traditional language models rely on word-for-word correlations, it can understand sentences, connect them to trusted sources, and assess a claim’s reliability.
By adding Buster.Ai’s API to your moderation system, you can compute a reliability score for every post and piece of text submitted by users on your platform, connect submitted texts to related sources that support or refute the claim, and determine a match score between the claim and verified sources. These capabilities also work on groups of text documents and can rely on predefined sources.
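To make the integration concrete, here is a rough sketch of how such a call might sit inside a moderation flow. The endpoint URL, request fields, and response fields below are invented placeholders rather than Buster.Ai’s documented API, so treat this as a shape to adapt to the actual documentation:

```python
import requests

# Invented placeholders: substitute the real endpoint and credentials
# from the provider's documentation.
FACTCHECK_URL = "https://api.example.com/v1/claims/verify"
API_KEY = "YOUR_API_KEY"

def check_post(text: str, threshold: float = 0.4) -> dict:
    """Send a user post to a (hypothetical) fact-checking endpoint and
    decide whether it can be published or should be held for review."""
    response = requests.post(
        FACTCHECK_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"claim": text},
        timeout=10,
    )
    response.raise_for_status()
    result = response.json()

    # Assumed response shape: a reliability score plus matched sources.
    reliability = result.get("reliability_score", 0.0)
    sources = result.get("matched_sources", [])

    return {
        "publish": reliability >= threshold,
        "reliability": reliability,
        "sources": sources,
    }

decision = check_post("Brand X furniture listings hide coded messages.")
print(decision["publish"], decision["reliability"])
```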
With Buster.Ai, you can thus moderate in real time the flow of information coming from your user platform. Contact us and prevent false and harmful content from spreading among your community.