Both misinformation and disinformation refer to the spread of fake news, but how do these two notions differ? Does the distinction matter when protecting your organization from them? Let’s see how these two types of fake news compare and how to adapt your preventive measures.
Let’s start with a clear definition. Misinformation is the unintentional spread of inaccurate or misleading information.
There are three elements in this definition that help you determine whether a case of fake news is misinformation:
Misinformation can sometimes also refer to any fake news that hasn’t yet been proven deliberately deceptive, and in that loose sense it covers any type of information disorder. But it’s better to reserve the term for unintentional fake news.
As you may have guessed, misinformation stems from misinterpretations, inaccuracies, or distorted beliefs, so it’s quite frequent. During the 2016 presidential election, 16% of American citizens admitted having unknowingly shared false information.
There are many different types of misinformation:
Journalists and reporters are at the forefront when it comes to creating and sharing news. As such, they are also more susceptible to spreading insufficiently verified claims. Media organizations today are especially prone to this problem, as journalists are asked to provide real-time news and increase audience numbers via viral content.
For example, in 2015, a BBC journalist tweeted that Queen Elizabeth II had died, seven years before her actual death. Several news outlets, like the German newspaper Bild and India’s Hindustan Times, relayed the news without prior confirmation, causing false news alerts all over the world.
More unexpectedly, misinformation can also stem from humorous news. Some media organizations intentionally create satirical and parodic content to express their opinions and amuse their readers. That’s even more the case for websites exclusively dedicated to satire, like ClickHole and The Onion in the US or LeGorafi in France. This content can be very entertaining, but it can also confuse less vigilant readers.
In a revealing survey, researchers asked US citizens about news generated by The Babylon Bee, a famous satirical news website. They found that up to 26% of respondents could be fooled by claims from The Babylon Bee’s posts.
Platforms and social media can also contribute to misinformation. Their recommendation algorithms favor posts with high engagement and click-through rates, which are often precisely the posts most likely to be fake. That’s why readers are easily fooled by titles that don’t accurately reflect an article’s content or the underlying facts. Taken out of context, words (and even more so pictures) can convey something entirely different from the original message.
Marketers and advertisers can also play a role when it comes to sharing these exaggerated or overstated claims.
Another bias of social media algorithms is that they feed users like-minded content. The more we engage with specific content, the more we are recommended posts that express the same views or opinions. This “echo chamber” mechanism especially enables extremist and conspiracy groups to thrive on these platforms: their ungrounded posts reach more people and make them believe questionable claims.
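To see why engagement-driven ranking amplifies sensational content, here is a toy sketch. The posts, weights, and scoring formula are invented for illustration only; they do not represent any real platform’s algorithm.

```python
# Toy illustration: a feed ranked purely on engagement signals surfaces
# the most sensational post first, regardless of its accuracy.

posts = [
    {"title": "Annual report: modest growth", "likes": 40, "shares": 5},
    {"title": "SHOCKING claim goes viral!", "likes": 900, "shares": 320},
    {"title": "Peer-reviewed study summary", "likes": 120, "shares": 30},
]

def engagement(post: dict) -> int:
    # Shares weighted more heavily than likes (an assumed heuristic).
    return post["likes"] + 3 * post["shares"]

# Rank the feed from most to least engaging.
feed = sorted(posts, key=engagement, reverse=True)
```

Run this and the clickbait headline lands at the top of the feed: the ranking never looks at whether a claim is true, only at how much reaction it generates.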
Of course, this last category of misinformation is more complicated to define, since it involves people’s opinions and sentiments.
All these types of unintended information disorders can do significant damage to your organization, bringing costly disruption to many departments:
Compared to misinformation, disinformation is the deliberate and deceptive spread of fictitious or inaccurate information.
To label a claim as disinformation, you thus need to make sure it is:
In short, disinformation is a form of attack on a person or a brand that uses the power of make-believe. That’s why organizations increasingly consider it a threat to their data, brand, or employee integrity.
As disinformation results from the actions of individuals or groups, each campaign depends on its authors’ intentions and motivations. Here are the most common types of disinformation campaigns:
The first type of disinformation involves individuals or organizations that deliberately attack well-known brands or public figures. They can be activists, financially motivated actors, or competitors seeking to achieve their goals through wide-scale communication campaigns.
One famous example involves Pfizer and BioNTech, which fought back against a disinformation campaign aimed at their COVID-19 vaccine: influencers with hundreds of thousands of followers were asked to spread misleading information about their products.
Disinformation campaigns can also be conducted by political and ideological groups. That’s what cybersecurity experts call cyber propaganda, or the use of disinformation for influence and political goals.
Cyber propaganda propagators create and share manipulated information at a wide scale through all online platforms possible. They feel free to distort news and spin narratives in their favor, to convert new believers and change public opinion.
Malicious individuals also leverage disinformation campaigns to launch cyberattacks. Scammers and cybercriminals use click-enticing messages and posts to infiltrate a company’s internal network or steal individuals’ personal data, luring unsuspecting users with psychologically compelling narratives that push them to action.
For instance, they can rely on overpromising ads or deceptive email chains to push users to hand over their credentials. In one real case, scammers posed as health charities pretending to help coronavirus victims; in reality, they were simply stealing money from unsuspecting donors.
The power of disinformation can be amplified even further via AI-powered picture, voice, or video manipulation. Deepfakes are sophisticated synthetic media that help hackers impersonate corporate or public figures.
These manipulations are very convincing, and can thus damage the reputation of brands and individuals, or easily push victims to take harmful decisions. They are the latest innovation in the disinformation field, and maybe the most dangerous form of all.
Because disinformation consists of motivated, planned attacks, companies and organizations are particularly vulnerable to it. It can damage many core functions:
Now you know what misinformation and disinformation are. But what if you’re facing an unclear case of fake news? In that case, it’s important to differentiate between genuine news, misinformation, and disinformation to take protective measures.
It’s a delicate job, so you need continuous vigilance and robust fact-checking skills. You can ask yourself the following questions when examining a dubious claim:
You can also follow UNESCO’s guidelines on journalistic fact-checking best practices.
Both disinformation and misinformation are major threats to an organization's integrity. So to protect your organization from their impact, you need to implement every preventive measure possible. This includes:
As described above, fact-checking best practices enable your employees to increase their awareness and take the necessary actions against fake news. You can instill these practices via collective workshops and widely shared guidelines. You can also reinforce training with simulated, unannounced exercises: the element of surprise lets employees put your guidelines into practice and makes the lessons even more memorable.
What happens when employees face false and potentially harmful information? To alert everybody, they need to be able to raise the alarm quickly and widely. This means adding signaling systems that trigger visible and noticeable notifications.
You can also prevent misinformation by providing automated features that flag posts or messages as satirical, false, or inaccurate, the same way email inboxes flag suspected scams and spam.
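A minimal sketch of such a flagging feature is shown below. The domain list, suspicious phrases, and labels are hypothetical examples, not a production rule set, and real systems combine many more signals.

```python
# Minimal sketch of an automated content-flagging helper, analogous to how
# an inbox flags suspected spam. All rules below are illustrative only.

KNOWN_SATIRE_DOMAINS = {"theonion.com", "clickhole.com", "legorafi.fr"}
SUSPICIOUS_PHRASES = ("you won't believe", "doctors hate", "act now")

def flag_post(url: str, text: str) -> list[str]:
    """Return warning labels for a post, or an empty list if none apply."""
    flags = []
    # Extract the domain from the URL and normalize it.
    domain = url.split("//")[-1].split("/")[0].removeprefix("www.")
    if domain in KNOWN_SATIRE_DOMAINS:
        flags.append("satire")
    if any(phrase in text.lower() for phrase in SUSPICIOUS_PHRASES):
        flags.append("possible-clickbait")
    return flags
```

For example, `flag_post("https://www.theonion.com/some-story", "You won't believe this")` returns both labels, while a plain corporate announcement from an unlisted domain returns none. In practice you would surface these labels as the visible notifications described above.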
When a case of disinformation or misinformation has been detected, you can also implement a way to inform every employee about why it is false or misleading. Through that communication tool, you can debunk and deconstruct each misleading story and connect it to trusted, verified sources.
That way, you ensure your employees are not conducting their daily job with inaccurate knowledge or erroneous beliefs.
Many tools have been designed over the years to help you detect fake news in real time. These fact-checking websites and apps can tell you in one click whether a claim is true or false.
They come in handy when your company is exposed to a wave of misinformation (such as during COVID-19) or a cyber propaganda campaign, ensuring each of your employees has a way to verify important news or data.
Whether facing misinformation or disinformation, Buster.AI helps you verify any dubious claims or content entering your communication systems.
As a deep-learning-powered fact-checking app, Buster.AI can understand sentences, connect them to trusted sources, and provide you with a reliability score.
By adding Buster.AI’s API to your company’s IT systems, you can compute a matching score between user-submitted posts or content and trusted sources or documents. These features also work with groups of text documents and rely on predefined sources.
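To make the idea of a matching score concrete, here is a toy analogue using bag-of-words cosine similarity between a claim and a set of trusted reference texts. This is not Buster.AI’s actual algorithm or API (which relies on deep learning); the function names and sample texts are invented for illustration.

```python
# Toy claim-vs-source "matching score": cosine similarity between
# word-count vectors. Illustrative only, not Buster.AI's real method.
import math
import re
from collections import Counter

def cosine_score(a: str, b: str) -> float:
    """Cosine similarity between two texts' word-count vectors, in [0, 1]."""
    va = Counter(re.findall(r"[a-z']+", a.lower()))
    vb = Counter(re.findall(r"[a-z']+", b.lower()))
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def best_match(claim: str, trusted_sources: list[str]) -> tuple[float, str]:
    """Return (score, source) for the trusted source closest to the claim."""
    return max(((cosine_score(claim, s), s) for s in trusted_sources),
               key=lambda pair: pair[0])

trusted = [
    "The vaccine was approved by the regulator after clinical trials.",
    "The company reported record quarterly earnings.",
]
score, source = best_match("Regulator approved the vaccine after trials", trusted)
```

Here the claim matches the first trusted source far more closely than the second, and the score (between 0 and 1) quantifies that match. A production system would use semantic embeddings rather than raw word counts, so it can match paraphrases that share no vocabulary.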
With Buster.AI, you can thus prevent and mitigate the impact of misinformation and disinformation within all your internal communication systems!
Book a demo and learn how to set up a defense strategy against these threats.