How do social networks manage content moderation to combat misinformation?

What is the situation regarding content moderation?

At the beginning of 2023, seven social networks had more than one billion monthly users, and the largest of them, Facebook, counted more than a third of the world's population among its users. In a way, they have become the Roman forums of the 21st century, where anyone can speak out on the affairs not just of the city, but of the world. It is not surprising, then, that in these gigantic public squares, certain individuals, groups, and organizations use misinformation to rally opinion to their causes or to serve their financial interests.

A few key figures illustrate this situation. According to an Odoxa-Dentsu Consulting survey conducted in 2019, 30% of French people admit to having already relayed fake news; among those for whom social networks are the main source of information, this figure reaches 45%. In addition, 86% of the French believe that individuals on social networks and blogs often spread false information. According to MIT researchers, false news "spreads significantly further, faster, deeper, and wider than the truth, across all categories of information and, in many cases, by an order of magnitude." The same dynamic appears in a survey conducted by the Ipsos Institute, which reveals that 43% of French people, 46% of Americans, and 62% of Brazilians admit to having believed a piece of information until they discovered it was false.

Citizens therefore seem aware of the omnipresence of misinformation online. According to the Ipsos survey, a majority of French people (51%), as well as 64% of Brazilians and 69% of Americans, believe that there are more "lies and misleading facts in politics and the media than there were 30 years ago". Awareness of our own exposure, however, is hampered by our cognitive biases: 52% of the French, 68% of Brazilians, and 65% of Americans claim to be able to distinguish real news from fake news, an overconfidence that sits uneasily with how many admit to having been fooled.

For platforms, moderating content that presents false information raises two challenges: first, preventing its propagation and its impact on democratic and economic life, which also helps combat conspiracy theories; second, restoring people's trust in information and in the media whose content is relayed on the platforms. For these reasons, public authorities are placing increasing importance on the moderation duties of platforms.

Regulation of social networks has become stricter in Europe

Germany and France as forerunners in content moderation

On 30 June 2017, the Bundestag, the German parliament, passed the Netzwerkdurchsetzungsgesetz (NetzDG), which came into force on 1 October 2017. It aims to punish the dissemination of fake news, as well as hate messages and online harassment. To achieve this, it requires social networks to remove such content within 24 hours, with a deadline of up to one week for the most ambiguous content, under penalty of fines of up to €50 million if they fail to do so.

The following year, in 2018, the law on the fight against information manipulation, also known as the "anti-fake news law", was adopted and promulgated in France. This text aims to regulate the dissemination of advertisements and content on electoral or social issues, and provides special procedures for their removal when false information is disseminated during election periods. In addition, the Autorité de régulation de la communication audiovisuelle et numérique (the French Regulatory Authority for Audiovisual and Digital Communication), known as ARCOM, has become the regulator of social networks' moderation services in France. This text anticipates the main lines and obligations that the Digital Services Act (DSA) will impose on a European scale from 2023.

Adoption of the Digital Services Act

In Europe, the adoption of the Digital Services Act (DSA) marks a tightening of legislation for platforms with more than 45 million users. In addition to the obligation to appoint a delegate in each of the 27 countries of the European Union, their obligations to fight disinformation, the sharing of illegal content, and online harassment will be regulated by this text, both in terms of moderation and of auditing. To learn more about the measures deployed by the European Union, we invite you to read our dedicated article: "RGPD, DMA, DSA: how does the European Union want to frame the digital world?"

How does the US regulate social networks?

On the other side of the Atlantic, the European initiatives first gave rise to concerns from the authorities about their economic impact on platforms; as Secretary of Commerce Gina Raimondo stated, "We have serious concerns that these texts will have a disproportionate impact on American businesses". In January 2022, a meeting was held between Cédric O, the French Secretary of State for Digital Affairs, and his American counterparts to clarify these points. In the United States, freedom of expression is sacred and protected by the First Amendment to the Constitution, which prohibits Congress from passing any law that might restrict it.

In this framework, social networks are legally governed by Section 230 of the Communications Decency Act, adopted by Congress in 1996. This text provides that "no provider or user of an interactive computer service may be treated as the publisher of information provided by another content provider". Some political and judicial authorities are therefore calling for platforms to be considered "common carriers", a legal status that does not exist in Europe and that originally covered Internet access providers and telephone operators. This would, in short, reinforce the principle that platforms are not the authors of the content but merely the medium for it, and cannot be held responsible for what is said on them.

What are the moderation measures adopted by social networks?

Moderation, a complex discipline

To counter online disinformation, as well as hate speech and harassment, platforms operate within a relatively narrow framework: on one side, their legal responsibilities and the protection of their users against the harms of disinformation; on the other, the obligation to guarantee the free expression of opinions, that is, to moderate without censoring legitimate opinions or truthful information.

Meta (Facebook, Instagram) highlights this point in its Transparency Center, stating: "we remove false information when it is likely to directly contribute to the risk of imminent physical harm [...] directly promote interference with the operation of political processes and some particularly misleading manipulated media."

Information considered misleading or false that does not fall within this framework is moderated differently. Meta says it works to contain the virality of such misinformation, i.e. to ensure it is not amplified by its recommendation algorithms: "Aware of the frequency of such speech, we are working to slow the spread of viral fake news and misinformation, and to direct users to reliable information."

To identify content that does not comply with their truthfulness policies, social networks rely on two distinct moderation methods: the first human and manual, the second automated through artificial intelligence and algorithms. In addition, they surround themselves with experts, groups of journalists, and fact-checkers to improve their processes and bring more nuance and depth to moderation decisions.

Along these lines, Twitter has defined three categories of risky information on which moderation focuses: "misleading information", "disputed claims", and "unverified claims".

Manual moderation

Moderators intervene mainly when a user reports content on the platform, judging whether it is defamatory, violent, or illegal. If the post does not comply with the platform's security policy, it is deleted. In certain circumstances, the user who made the post may have their account temporarily suspended or permanently deleted. Users whose content has been moderated for a breach of the rules can also appeal so that their case is reviewed by humans rather than algorithms.

The various reports that platforms must submit annually to ARCOM give a somewhat clearer picture of the resources social networks allocate to moderation. We learn that Meta employs 15,000 moderators. TikTok, recently criticized for the massive volume of misinformation circulating on it, mentions "several thousand experts" without specifying how many, a figure estimated at around 10,000 worldwide. In 2021, before Elon Musk bought Twitter, the network said it employed just under 2,000 moderators.

Social networks' moderation services do not stop at the operational staff who process reports and moderate content. Upstream, almost all of them surround themselves with information professionals: journalists, fact-checkers, and academics. The role of these specialists is to help the networks draft their moderation policies, define and then qualify content constituting disinformation, and understand its characteristics so that it can be detected.

Automated moderation

This solution, based on artificial intelligence, makes it possible to process all the content present on the platforms, whether text, photos, or videos, and also to analyze content containing external links. The aim is to detect sensitive content and moderate it automatically if its nature goes against the defined policies. Automating these verification processes makes it possible to handle a much larger volume of content than manual moderation. These algorithms rely on deep learning, and feedback from moderation teams as well as from users, a principle developed by Twitter in particular, helps improve the relevance of the verdicts issued.
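
To make this more concrete, here is a minimal Python sketch of what such automated classification could look like, assuming an off-the-shelf zero-shot classifier stands in for a platform's proprietary model; the model name, labels (echoing the Twitter risk categories cited above), and code are illustrative, not any network's actual configuration.

```python
# Minimal sketch of automated content classification, assuming the Hugging Face
# "transformers" library and a public zero-shot model stand in for a platform's
# proprietary deep-learning system. Labels echo the risk categories cited above.
from transformers import pipeline

RISK_LABELS = [
    "misleading information",
    "disputed claim",
    "unverified claim",
    "reliable information",
]

# A general-purpose NLI model, used here purely for illustration.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def classify_post(text: str) -> dict:
    """Return the most likely risk category and its confidence for one post."""
    result = classifier(text, candidate_labels=RISK_LABELS)
    return {"label": result["labels"][0], "score": result["scores"][0]}

if __name__ == "__main__":
    print(classify_post("Scientists confirm that drinking seawater cures the flu."))
```

In production, platforms train dedicated models on their own labelled data, in many languages and across text, images, and video, but the principle of turning a post into a risk label and a confidence score is the same.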

Two interdependent methods

To meet the challenges of moderation, it is essential to use both methods. Where artificial intelligence allows large amounts of content to be processed quickly, the manual processing of posts makes it possible, in theory, to analyze some of them more precisely, to arbitrate between divergent points of view, and to examine the verdicts delivered by the algorithms in order to identify possible errors of interpretation and thus contribute to their improvement. Indeed, detecting certain types of content, whether hateful or disinformation, is very complex and requires extremely precise technologies. As Mark Zuckerberg, founder and CEO of Facebook, pointed out during his 2019 European tour, "99% of terrorist content is removed before anyone sees it. But for hate content, it takes a much more nuanced job. We need to appreciate these nuances even though our platform is available in 150 languages. This inevitably creates room for error."
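
As a rough sketch of how these two methods can interlock, the following Python example (with invented thresholds and function names) sends the model's clearest verdicts to automatic action, queues ambiguous cases for human review, and records human corrections so the model can be improved:

```python
# Illustrative routing between automated and human moderation.
# Thresholds and names are assumptions for the example, not real platform values.
from typing import Callable

def route_post(text: str, score: Callable[[str], float],
               auto_remove_at: float = 0.98, review_at: float = 0.60) -> str:
    """Decide what happens to a post given a model's misinformation score (0-1)."""
    p = score(text)
    if p >= auto_remove_at:
        return "removed_automatically"    # near-certain violations, handled by AI
    if p >= review_at:
        return "queued_for_human_review"  # nuanced cases a moderator must arbitrate
    return "left_online"

def record_feedback(text: str, model_label: str, human_label: str,
                    training_log: list) -> None:
    """Log disagreements between the model and moderators as retraining material."""
    if model_label != human_label:
        training_log.append({"text": text, "correct_label": human_label})
```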

Thanks to these collaborations with information specialists, supported by artificial intelligence, social networks are increasingly flagging content that touches on topics prone to misinformation and warning users about the potential misinformation a publication may convey.

For the first half of 2021 alone, Twitter told ARCOM that it removed more than 4.7 million tweets conveying false information, which accounted for about 0.1% of total impressions on the network. Over the whole of 2021, 3,455 accounts were suspended for publishing false information. For its part, TikTok indicates in a report that it deleted 81 million videos in 2021, without specifying what proportion were removed for disinformation.

These findings contrast with those published by Snapchat. In its report to ARCOM, the firm states that it identified 971 accounts and pieces of content containing false information. In response to the question, "Of these identified contents, how many were detected by automated tools?", Snapchat indicates that none were detected by this means. All of this content was reported by Snapchat users and handled by moderators.

How do Buster.Ai solutions support moderators?

Built on deep learning, the solutions published by Buster.Ai intervene throughout the moderation process. Buster.Ai's algorithms can read statements, deduce their substance and message, and compare them with millions of sources covering the entire spectrum of information (press, academic and scientific articles, reports, tabular data, etc.). The technology then identifies and isolates the relevant passages that confirm or refute the initial statement and delivers a verdict to the user.

The API connection to Buster.Ai's solutions allows organizations wishing to moderate content to automate the verification process. The algorithms automatically detect statements, compare them with sources, and establish their degree of veracity, while providing elements that support or refute the verdict. Buster.Ai's ambition is to provide technological support to platforms and their moderators in their verification tasks, enabling a significant productivity gain.
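
By way of illustration only, the snippet below sketches what automating such a verification call over HTTP might look like from a moderation pipeline; the endpoint URL, request fields, and response shape are hypothetical placeholders, not Buster.Ai's documented API.

```python
# Hypothetical client for a claim-verification API. The endpoint, payload and
# response fields are placeholders for illustration; consult the provider's
# actual documentation before integrating.
import requests

API_URL = "https://api.example.com/v1/verify"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

def verify_claim(claim: str) -> dict:
    """Submit one statement for verification and return the verdict payload."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"claim": claim},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed (hypothetical) fields: 'verdict', 'confidence', 'evidence'
    return response.json()

if __name__ == "__main__":
    result = verify_claim("The Eiffel Tower is taller than the Empire State Building.")
    print(result.get("verdict"), result.get("confidence"))
```

Plugged into a moderation workflow, such a call would let flagged statements be checked against sources automatically before a moderator makes the final decision.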

Contact us for a demonstration and find out how Buster.Ai can support your organization's activities.