In early 2025, an announcement by Meta took the journalistic world largely by surprise: the third-party fact-checking program launched in 2016 to counter users’ exposure to misinformation and disinformation would be terminated, starting in the US.
Meta’s CEO pointed the finger at fact-checkers, saying they “have been too politically biased” and have “destroyed more trust than they created.”
Many have argued that the move is a direct result of the company’s need to align itself with the new US administration. But how did we get here, and what might be the effects of such a turn by one of the world’s leading social media platforms?
It all began in 2016, when Donald Trump was first elected president of the United States and Facebook was blamed for the torrent of fake news and conspiracy theories swirling around his election. Mark Zuckerberg wrote an apologetic post at the time announcing a series of steps his platform would take to tackle false and misleading information, one of which was to work with professional fact-checkers. The underlying problem was that Facebook had built a system in which lies could be repeated so often, and at such scale, that people could no longer tell whether a contested claim was true or false. Fact-checking was Facebook’s response to this.
And when we talk about fact-checkers, one might ask: who are they, what do they do differently, and why do they matter?
Well, fact-checkers are for the most part journalists (though they can also be researchers or policy experts) dedicated to detecting and analyzing specific pieces of misinformation and disinformation, which they verify and then report on.
Every step follows a strict code of ethics and is rooted in the practices of traditional journalism: double-checking sources, presenting the evidence, and weighing it against the claims under review. In adopting the fact-checkers’ solution, Meta also put forward specific rules, some of which ran counter to key journalistic principles but were nevertheless accepted, such as the requirement that partner organizations refrain from debunking content and opinions from politicians and celebrities, and from debunking political advertising. As time went by, fact-checking became increasingly complicated (not least because of new technology, and especially AI), and the initial aspiration that it would become a magic weapon to normalize public discourse and eliminate polarization faded away.
However, multiple studies have shown that Facebook’s fact-checks were quite effective at reducing belief in false information and at reducing how often such content was shared. More precisely, a study published last year in the journal Nature Human Behaviour highlighted the positive role that warning labels, like those used by Facebook, play in alerting users to false information. According to the study, the labels likely reduced belief in falsehoods by 28% and the sharing of false content by 25%. Even right-wing users, whom researchers found to be the most distrustful of fact-checks, reduced their belief in false content after fact-checking interventions.
Furthermore, fact-checking’s contribution to curbing much of the harmful misinformation circulating during COVID, including falsehoods about the effectiveness of vaccines, was immeasurable. Despite this, in recent months Elon Musk’s X stopped using professionals to fact-check posts, relying instead on its own users to police the site for misinformation through a program called Community Notes, while YouTube has also begun testing a similar feature.
Is misinformation truly dangerous for societies?
Beyond ensuring a sound information ecosystem on social media platforms that attract hundreds of millions of people, putting faith in journalism and fact-checking is essential to averting the real-life violence that grows out of online polarization, especially in countries where institutions are weak. A good example is the case of Myanmar.
Both Meta and the UN concluded that Facebook bore responsibility for the genocide that took place in the country, where lies were laced with fear, anger, and hate. Online violence quickly became real-world violence. The removal of fact-checkers from platforms where so many people interact every day is not a free-speech issue; it is a safety issue. Getting rid of facts on a global platform will have a significant impact on minorities. And that is on top of the financial blow to fact-checking groups that relied on Meta’s funding to maintain their operations.
It is my understanding that social media platforms never quite understood, nor respected, the role of journalism. Facebook at some point became the world’s ultimate gatekeeper, and the free-speech narrative now at the forefront of the decision’s reasoning was a key factor in the corruption of our public online information ecosystem. And to be completely honest, the fact-checking program that came to the rescue wasn’t even the best solution in the first place. The best one (though certainly not feasible) would have been to change the platform’s design itself.
Now, without fact-checking in place, without a shared reality on social media, how can states claim to run a democracy that works? If we tell a lie in real life, we are held accountable. But what happens online? Shouldn’t the online environment be considered an extension of real life itself?