What is Misinformation?
A quick definition of Misinformation
Let’s start with a clear definition. Misinformation is the unintentional spread of inaccurate and misleading information.
There are three elements in this definition that help you determine whether a given case of fake news is misinformation:
- Unintentional: users propagate alleged fake news with no intent to deceive, and genuinely believe their claim is true.
- False and inaccurate: the statement deviates in slight or major ways from verified facts. By testing the claim against trusted sources or documents, you can spot the difference.
- Misleading: because of its falsity, the statement leads readers to believe inaccurate or fictitious things about people, organizations, and events, causing them to make poor decisions.
Misinformation can sometimes also refer to any fake news that hasn’t yet been proven to be deliberate, and in that general sense it can apply to any type of information disorder. But it’s better to use misinformation specifically to describe unintentional fake news.
The different sources of Misinformation
As you may have guessed, misinformation stems from misinterpretations, inaccuracies, or distorted beliefs, so it’s quite frequent. During the 2016 US presidential election, 16% of American citizens admitted to having unknowingly shared false information.
There are many different types of misinformation:
Insufficiently verified news
Journalists and reporters are at the forefront of creating and sharing news. As such, they are also more susceptible to spreading insufficiently verified claims. Media organizations are especially prone to this problem today, as journalists are asked to deliver real-time news and grow audience numbers via viral content.
For example, in 2015, a BBC journalist mistakenly tweeted that Queen Elizabeth II had died, seven years before her actual death. Several news outlets, such as the German newspaper Bild and India’s Hindustan Times, relayed the news without prior confirmation, triggering false news alerts all over the world.
Satire and parody
More unexpectedly, misinformation can also stem from humorous news. Some media organizations deliberately create satirical and parodic content to express their opinions and amuse their readers. That’s even more the case for websites exclusively dedicated to satire, such as ClickHole and The Onion in the US or Le Gorafi in France. This content can be very entertaining, but it can also confuse less vigilant readers.
In a revealing survey, researchers asked US citizens about headlines from The Babylon Bee, a famous satirical news website. They found that up to 26% of respondents could be fooled by the claims in The Babylon Bee’s posts.
Clickbait and out-of-context content
Platforms and social media can also contribute to misinformation. Their recommendation algorithms favor posts with high engagement and click rates, which are often the ones most likely to be fake. That’s why it’s easy for readers to be fooled by titles that don’t accurately reflect an article’s content or the actual facts. Taken out of context, words, and even more so pictures, can convey something entirely different from the original message.
Marketers and advertisers can also play a role in sharing these exaggerated or overstated claims.
Ungrounded beliefs and conspiracies
Another bias of social media algorithms is that they feed users like-minded content: the more we engage with specific content, the more we’re recommended posts that express the same views or opinions. This “echo chamber” mechanism especially enables extremist and conspiracy groups to thrive on these platforms, letting their ungrounded posts reach more people and convince them of questionable claims.
That’s the case for QAnon and COVID-19-related conspiracies, which have been reported to flourish on social media over the last decade.
Of course, this last category of misinformation is harder to define, since it involves people’s opinions and sentiments.
Impact of Misinformation on your organization
All these types of unintended information disorders can do significant damage to your organization, bringing costly disruption to many departments:
- Finance: incorrect or false data can disrupt financial analysis and market research. As a result, your organization can make inaccurate calculations or unprofitable investment decisions.
- Management and Executives: without reliable fact-checking, decision-makers may rely on false or inexact news. This can lead to company-wide decisions with unpredictable consequences.
- IT: when your IT departments train AI models on large numbers of datasets, misinformation can degrade model quality. This is even more worrying when those models power, for example, automated decision systems or AI-generated content.
What is Disinformation?
A quick definition of Disinformation
Compared to misinformation, disinformation is the deliberate and deceptive spread of fictitious or inaccurate information.
To classify a claim as disinformation, you thus need to make sure it is:
- Ill-intentioned: propagators spread the claim specifically to attain harmful goals (political influence, propaganda, financial incentives…).
- False or Inaccurate: propagators rely on altered, manipulated, or fabricated narratives to achieve their goals.
- Deceiving: propagators leverage psychological biases, for example through clickbait content, to make people believe harmful things and make poor decisions.
In short, disinformation is a form of attack on individuals or brands that uses the power of make-believe. That’s why organizations increasingly consider it a threat to their data, brand, or employee integrity.
The different types of disinformation
As disinformation results from the deliberate actions of individuals or groups, its forms depend on their intentions and motivations. Here are the most common types of disinformation campaigns:
Attacks on brands and public figures
The first type of disinformation involves individuals or organizations that deliberately attack well-known brands or public figures. They can be activist, financially motivated, or competing actors that seek to achieve their goals through wide-scale communication campaigns.
One famous example is Pfizer and BioNTech fighting back against a disinformation campaign aimed at their COVID-19 vaccine: influencers with hundreds of thousands of followers were asked to spread misleading information about their products.
Cyber propaganda
Disinformation campaigns can also be conducted by political and ideological groups. That’s what cybersecurity experts call cyber propaganda: the use of disinformation for influence and political goals.
Cyber propagandists create and share manipulated information at a wide scale through every online platform possible. They feel free to distort news and spin narratives in their favor to convert new believers and sway public opinion.
Scams and phishing campaigns
Malicious individuals also leverage disinformation campaigns to launch cyberattacks. Scammers and cybercriminals use click-enticing messages and posts to infiltrate a company’s internal network or steal individuals’ personal data, luring unsuspecting users with psychologically enticing narratives and pushing them to act.
For instance, they can rely on overpromising ads or deceptive email chains to get users to hand over their credentials. In one real case, scammers posed as health charities pretending to help coronavirus victims, when in reality they were simply stealing money from unsuspecting donors.
Deepfakes
The power of disinformation can be enhanced even further via AI-powered picture, voice, or video manipulation. Deepfakes are sophisticated synthetic media that help attackers impersonate corporate or public figures.
These manipulations are very convincing, and can thus damage the reputation of brands and individuals or easily push victims into harmful decisions. They are the latest innovation in the disinformation field, and perhaps the most dangerous form of all.
Impact of Disinformation on your Organization
As disinformation consists of motivated and planned attacks, companies and organizations are particularly vulnerable to it. It can damage many core functions:
- Marketing and PR: disinformation campaigns can significantly impact your brand reputation. This can cause your customers to lose trust, and require a counter-communication strategy to repair the damage.
- HR: personal attacks on your company’s public figures can drive away talent and breed distrust in your company’s management.
- Cybersecurity: disinformation and phishing attacks can lead to leaks of sensitive data and put your information systems under significant threat.
Misinformation and disinformation: how to differentiate them
Now you know what misinformation and disinformation are. But what if you’re facing an unclear case of fake news? It’s important to distinguish between genuine news, misinformation, and disinformation so you can take the right protective measures.
It’s a delicate job that requires continuous vigilance and robust fact-checking skills. Ask yourself the following questions when examining a dubious claim:
- Who are the authors of the claim (journalist, blogger, random user…), and what are their possible intentions? If the news comes from rarely active or unidentified accounts, it may be the work of fake-news-spreading bots.
- Is the claim based on cited sources, authorities, or documents? If yes, check those sources and find the original input. If not, be cautious: unsourced posts and articles deserve skepticism.
- Does the domain or media outlet publishing the news look trustworthy? Do a quick search on it. Some disinformation websites impersonate existing media outlets, so stay vigilant.
- Do the pictures or videos backing this claim feel slightly off? Where do they come from? A reverse image search can help you trace their original source.
You can also follow UNESCO’s guidelines on journalistic fact-checking best practices.
How to protect your organization from these two informational threats?
Both disinformation and misinformation are major threats to an organization’s integrity. To protect your organization from their impact, you need to implement every preventive measure possible. This includes:
#1 Training Employees on fact-checking best practices
As described above, fact-checking best practices enable your employees to increase their awareness and take the necessary actions against fake news. You can spread these practices via collective workshops and widely shared guidelines. You can also reinforce training with simulated, unannounced exercises: catching employees off guard lets them put your guidelines into practice and makes the lessons more memorable.
#2 Enabling real-time signaling or flagging
What happens when employees face false and potentially harmful information? To let everyone know about it, they need to be able to raise the alarm quickly and widely. This means adding signaling systems that trigger visible, noticeable notifications.
You can also prevent misinformation with automated features that flag posts or messages as satirical, false, or inaccurate, much the same way email clients flag suspected scams and spam.
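To make the idea concrete, here is a minimal sketch of such a flagging feature, in the spirit of an email spam filter. The phrase list and the trigger rule are purely illustrative assumptions, not a production rule set or any vendor's actual method:

```python
# Minimal sketch of an automated content-flagging heuristic.
# The suspect-phrase list below is an illustrative assumption, not a
# vetted rule set; real systems use far more sophisticated signals.
SUSPECT_PHRASES = [
    "you won't believe",
    "doctors hate",
    "share before it's deleted",
    "100% proven",
    "the media won't tell you",
]

def flag_message(text: str) -> dict:
    """Return a flag decision and the phrases that triggered it."""
    lowered = text.lower()
    hits = [p for p in SUSPECT_PHRASES if p in lowered]
    return {
        "flagged": len(hits) > 0,  # any hit triggers a visible warning
        "reasons": hits,           # shown to the user alongside the flag
    }

result = flag_message("You won't believe what this CEO said!")
print(result)
```

In practice, the decision and its reasons would feed the notification system described above, so employees see both the warning and why the content was flagged.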
#3 Debunking existing informational threats
When a case of disinformation or misinformation is detected, you can also inform every employee about why it is false or misleading. Through such a communication channel, you can debunk and deconstruct each misleading story and connect it to trusted, verified sources.
That way, you ensure your employees are not doing their daily work with inaccurate knowledge or erroneous beliefs.
#4 Using a fact-checking tool
Many tools have been designed over the years to help you detect fake news in real time. These fact-checking websites and apps can tell you in one click whether a claim is true or false.
They come in handy when your company is exposed to a wave of misinformation (as during COVID-19) or a cyber propaganda campaign, ensuring each of your employees has a way to verify important news or data.
Buster.AI: automated fact-checking to prevent informational threats
Whether facing misinformation or disinformation, Buster.AI helps you verify any dubious claims or content entering your communication systems.
As a deep-learning-powered fact-checking app, Buster.AI can understand sentences, connect them to trusted sources, and provide you with a reliability score.
By adding Buster.AI’s API to your company’s IT systems, you can compute a matching score between user-submitted posts or content and trusted sources or documents. These features also work on batches of text documents and with predefined sources.
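To illustrate what a matching score means, here is a toy sketch that scores a claim against a reference text using bag-of-words cosine similarity. This is only a stand-in to show the idea; it is not Buster.AI's actual API or method, which relies on deep learning rather than word counts:

```python
# Toy "matching score": cosine similarity between word-count vectors.
# Illustrative only; real fact-checking systems use deep learning to
# compare a claim against trusted sources, not raw word overlap.
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase the text and count its words."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def matching_score(claim: str, source: str) -> float:
    """Cosine similarity between word-count vectors, in [0, 1]."""
    a, b = tokenize(claim), tokenize(source)
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

claim = "The vaccine was tested in large clinical trials."
source = "Large clinical trials were conducted to test the vaccine."
print(round(matching_score(claim, source), 2))
```

A high score means the claim closely matches a trusted source, while a low score flags content that deserves closer human review.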
With Buster.AI, you can thus prevent and mitigate the impact of misinformation and disinformation within all your internal communication systems!
Book a demo and learn how to set up a defense strategy against these threats.