Fake news on social media: how to detect and protect yourself from it
How does fake news spread on social media?
Social media is awash with fake news. According to Statista, 67% of US adults say they have encountered false information on social media. Moreover, this misinformation attracts a lot of attention, as fake-news spreaders make up 17% of the most engaging accounts on social media.
What explains the prevalence of misinformation on social media?
The first thing to note is that social media platforms do not deliberately favor fake news. However, their algorithms are designed to promote high-engagement content, which sometimes turns out to be misinformation. These algorithms are especially good at triggering psychologically ingrained biases that draw people to fake news.
For misinformation researchers, these are the main drivers of fake news propagation on social media:
When it comes to making real-life decisions, our brains are wired to follow the crowd. That explains why we instinctively run when we see others running in the street. It is also why social media users often confuse popularity with reliability.
On social media, we tend to like posts that are already widely liked and shared by others, regardless of their accuracy. This creates a positive feedback loop that favors viral, and potentially fake, news items.
Another factor is our natural preference for content that echoes our own views and opinions. Driven by confirmation bias, we tend to consume politically congenial content and reject information that conflicts with our worldview.
This plays a huge role on social media: since we tend to subscribe to and befriend like-minded sources, we are repeatedly exposed to the same chains of information. This makes us more vulnerable to polarized news sources, and thus to fake news.
Information saturation is yet another cause. The more saturated our attention is, the more we tend to consume and share false information. Since we can only pay attention to a few news items, we fall back on our instinctive preferences, which makes us more susceptible to unreliable information.
This is especially true of social media feeds, which supply an unlimited stream of news items. Faced with this overload, we develop a selective response to high-popularity, high-engagement posts, which makes us more likely to consume misinformation.
Fake bot accounts
Information manipulators exploit all these biases by deploying fake news-spreading bots: fake accounts that automatically share deceptive posts across social groups and conversations.
This lets them orchestrate large-scale propaganda campaigns, for instance during elections, and deeply influence public opinion. Most pernicious of all, these bots sometimes impersonate human users and spark heated arguments about people and organizations.
Do you want to stay away from social media misinformation? Here’s what it looks like on each social platform.
The state of misinformation on each social media platform
As regulatory pressure and public awareness have increased, social media platforms have deployed preventive measures against misinformation. Depending on the policy and moderation strategy of each platform you use, you are thus more or less likely to come across fake news:
Following the Cambridge Analytica fallout, Facebook was the first social platform to deploy moderation measures against misinformation. It fine-tuned its recommendation algorithms, added more reactive flagging features, and partnered with third-party fact-checking organizations to improve detection. During the Covid-19 pandemic, it also added labels that warn users about potentially misleading information and provide accurate information in response. Studies suggest these measures have been effective: the number of user interactions with false information has declined sharply since 2016.
Yet there is still a lot of work to do. Researchers point out that health misinformation had an estimated total reach of 3.8 billion views in 2020, while a 2021 study found that misinformation-prone sources got six times more engagement on the platform. Despite the removal of suspicious accounts, this alarming situation persists because users recreate content from previously banned misinformation accounts. So stay vigilant while browsing Facebook.
Unlike Facebook, Twitter did not put moderation features into action before 2020. Only in the face of the COVID-19 pandemic did the Twitter team deploy specific flagging and labeling features to counter growing health misinformation, strengthening post removals and account suspensions around claims not verified by local authorities.
They also added warning flags to alert users to unfounded claims and enabled them to rate published claims via the Birdwatch platform. This strategy showed some results, as misinformation gained relatively little traction during the pandemic, although it still attracted a lot of engagement, especially via shared YouTube videos.
Furthermore, since November 2022, Twitter has stopped applying moderation measures against Covid-19 misinformation. This calls into question the efforts made to protect users against widespread fake news.
YouTube is another well-known vector of fake news that had to adapt to Covid-19 misinformation. In 2020, the YouTube team started removing videos that shared misinformation about vaccines. They also made deliberate efforts to decrease the reach of borderline misinformation.
Despite these preventive measures, approximately 11% of YouTube’s most viewed videos on COVID-19 vaccines contradicted information from official health sources. These videos had less exposure than factual ones, but still attracted substantial engagement. YouTube has promised to put more effort into moderating controversial and misleading videos, but there is still a long way to go.
TikTok is increasingly used as a primary source of information by younger generations. But it is also very susceptible to users sharing electoral, health, and financial fake news. In fact, in 2022, NewsGuard estimated that 22% of videos contained misinformation.
For example, during the Covid-19 pandemic, searching “vaccine Covid-19” on TikTok surfaced top results suggesting content about COVID vaccine injuries. TikTok has since started providing users with more reliable content and access to dedicated fact-checking centers. It has also partnered with independent fact-checkers and leveraged machine-learning solutions to remove misleading content and accounts. But many videos still share false claims about sensitive topics.
How to protect yourself from social media fake news: 4 best practices to implement
Now that you realize the magnitude of social media misinformation, here are specific cognitive techniques you can use to guard against it. They will help you spot fake news on social media and protect your organization from its spread:
#1 Being aware of the warning signs
Fake news works by triggering our emotional responses, and social media only amplifies this effect. To avoid falling into this trap, watch for some telltale signs:
- Poor grammar and simplistic wording. Fake posts often overuse uppercase letters and short sentences, as in this Facebook post: “Pope Francis Shocks World, Endorses Donald Trump for President, Releases Statement- SPREAD THIS EVERYWHERE”
- Emotionally charged messages designed to elicit a visceral reaction, often about controversial or shocking subjects. In 2016, for instance, fake news spread about the supposed death of Mark Zuckerberg.
- Negative narratives targeting specific people or organizations. This could be a case of malicious propaganda.
When you see these signs, it is likely that you’re facing a case of misinformation.
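The signs above can be sketched as a toy heuristic. The keyword list, thresholds, and function below are illustrative assumptions, not a validated detection model; real moderation systems use far more sophisticated signals.

```python
import re

# Hypothetical emotive-word list: an invented assumption for illustration.
EMOTIVE_WORDS = {"shocking", "shocks", "secret", "exposed", "spread"}

def warning_signs(text: str) -> list[str]:
    """Return a list of the warning signs found in a post's text."""
    signs = []
    words = re.findall(r"[A-Za-z']+", text)
    # All-caps words longer than a typical acronym suggest sensational framing
    shouting = [w for w in words if w.isupper() and len(w) > 3]
    if shouting:
        signs.append("excessive uppercase")
    if text.count("!") >= 2:
        signs.append("excessive exclamation")
    if any(w.lower() in EMOTIVE_WORDS for w in words):
        signs.append("emotionally charged wording")
    return signs

post = "Pope Francis Shocks World - SPREAD THIS EVERYWHERE"
print(warning_signs(post))  # ['excessive uppercase', 'emotionally charged wording']
```

A score like this is only a first filter: a post matching one or two signs warrants closer reading, not an automatic verdict.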
#2 Checking the post origin
When you suspect a social media post of spreading disinformation, you can investigate:
- The piece of information itself: review its sources and evidence, and check for contradictory sources of information.
- The account that originally spread the information: check its posting record and subscribers, and look for dubious linked domains.
- The other accounts that have shared the post: their posting record, their network, and whether they are bots.
- Whether these posts have spread to other platforms.
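The account checks above can be sketched as a simple red-flag counter. The fields and thresholds are invented assumptions for illustration, not taken from any platform's real API or a validated bot-detection model:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int    # how long the account has existed
    posts: int       # total posts published
    followers: int
    following: int

def suspicion_score(acc: Account) -> int:
    """Count red flags: 0 means none found, 3 means all three."""
    score = 0
    if acc.age_days < 30:                      # very new account
        score += 1
    if acc.posts / max(acc.age_days, 1) > 50:  # implausibly high posting rate
        score += 1
    if acc.followers < acc.following / 10:     # follows far more than it is followed
        score += 1
    return score

bot_like = Account(age_days=10, posts=900, followers=5, following=800)
print(suspicion_score(bot_like))  # 3: new, hyperactive, lopsided network
```

The same idea applies manually: an account that is brand new, posts constantly, and follows far more accounts than follow it back deserves extra scrutiny.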
#3 Reporting suspicious posts
If you have found evidence of misinformation, flag it via the platform’s built-in reporting features. You can usually find this tool next to the post or in the “report” section of the social media platform.
The faster you report them, the fewer people will be impacted. So it’s a good practice to keep in mind.
#4 Using a fact-checking tool
Wouldn’t it be great to monitor, in real time, each piece of news you consume? That is what fact-checking tools allow you to do: they automatically check the claims of social media posts against relevant, verified sources of information.
That way, you can know right away if you can rely on what you’re consuming.
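To make the idea of scoring a claim against a source concrete, here is a toy sketch using word-overlap (Jaccard) similarity. Real fact-checking tools rely on far richer language models; this only illustrates the principle of a claim-vs-source matching score:

```python
def jaccard(claim: str, source: str) -> float:
    """Fraction of unique words shared between a claim and a source snippet."""
    wa, wb = set(claim.lower().split()), set(source.lower().split())
    return len(wa & wb) / len(wa | wb)

claim  = "the vaccine was approved after clinical trials"
source = "the vaccine was approved following large clinical trials"
print(round(jaccard(claim, source), 2))  # 0.67
```

A higher score means the claim's wording is closer to the trusted source; production systems additionally handle paraphrase, negation, and entailment, which simple word overlap cannot.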
Buster.AI: fact-checking for social-media posts
Buster.AI’s automated fact-checking app enables you to verify every social media post on your feed.
As a deep-learning-powered fact-checking app, Buster.AI understands sentences, links them to trusted sources, and scores their reliability.
By submitting your social media content to the Buster.AI app, you can obtain a matching score between this content and trusted sources or documents. These features also work with groups of text documents and can rely on predefined sources.
Book a demo with Buster.AI and protect yourself from social media misinformation!