Meta Drops Fact-Checking Because of Politics, but Also Because It Wasn’t Working
Issue #133
The change is politically motivated, yes, but the program never really achieved trust or scale.
On Tuesday, Mark Zuckerberg announced that Meta would be ending its third-party fact-checking program, citing concerns about the biases of fact-checkers and the importance of allowing free expression on the company’s platforms.
The move has been lauded by many conservatives, including Trump himself, as a “win for free speech.” On the Blue side, it’s being condemned as political capitulation to the incoming administration. Zuckerberg certainly does seem to have adopted Red language and framing in his comments… and he also just donated $1 million to Trump’s inauguration and appointed friend-of-Trump Dana White to Meta’s board.
Yet it’s also true that, despite the excellent work of many individual fact checkers, the fact checking program as a whole was struggling in important ways. For such a program to work, it has to accomplish at least three things:
Accurately and impartially label harmful falsehoods
Maintain audience trust
Be big enough and fast enough to make a difference
In this post we’ll consider the evidence for all three. Despite repeated claims of bias, there’s good reason to believe fact checkers were mostly quite accurate and fair. But the program never really operated at platform scale or speed, and rapidly became distrusted among exactly the people who were most exposed to falsehoods.
There was, maybe, an alternate path that might have preserved trust among conservatives; we now have research that gives us a tantalizing what-if.
Fact checking never achieved scale
Meta launched its fact-checking program in 2016, following concerns about the role that misinformation played in that year’s election. Over the years it has worked with dozens of partners globally, certified through Poynter’s International Fact-Checking Network (IFCN).
Meta’s systems identified suspicious content based on signals like community feedback and how fast a post was spreading. Fact checking partners could see these posts through an internal interface. They freely chose which claims to check, and then did original reporting to determine the accuracy of those claims. After this, Meta would decide whether to apply a warning label, reduce distribution, or sometimes remove the post entirely.
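To make that workflow concrete, here is a minimal sketch of the pipeline in Python. Every name, signal, and threshold in it is hypothetical; Meta has never published the actual logic, so this is only an illustration of the division of labor described above.

```python
# Hypothetical sketch of the fact-checking workflow described above.
# All names, signals, and thresholds are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    post_id: str
    shares_per_hour: float   # how fast the post is spreading
    user_reports: int        # community feedback signal

def flag_suspicious(post: Post) -> bool:
    """Meta's systems surface posts based on spread and feedback signals."""
    return post.shares_per_hour > 500 or post.user_reports > 25

def partner_rating(post: Post) -> Optional[str]:
    """A fact-checking partner freely chooses whether to review a flagged
    post, does original reporting, and returns a rating (or None)."""
    return None  # e.g. "false", "partly_false", "missing_context"

def meta_action(rating: Optional[str]) -> str:
    """Meta, not the fact checker, decides the enforcement action."""
    if rating in ("false", "partly_false"):
        # in some cases Meta removed the post entirely
        return "warning_label_and_reduced_distribution"
    if rating == "missing_context":
        return "warning_label_only"
    return "no_action"
```

The key structural point the sketch captures: fact checkers only ever rated claims; flagging and enforcement stayed with Meta.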
Meta says it will stop fact-checking, but fact-checking shouldn’t be confused with content moderation. Almost all content that is removed—millions of items per day—is reviewed by armies of content moderators (Meta says 40,000 of them) and related AI systems, not fact checkers. While moderation does target some politically contentious categories such as hate speech and health misinformation, mostly it removes fraud, spam, harassment, sexual material, graphic violence, and so on. Zuckerberg said that certain content moderation rules will be relaxed and the teams involved will move to Texas, but the massive operation appears otherwise unchanged. The update to moderation rules may turn out to be the most impactful change by far.
Fact checkers, by comparison, can check only a handful of posts per day. Meta never released any data on the operation of the program, but a Columbia Journalism Review article mentions that all US fact checkers combined completed 70 fact checks in five days—or 14 items per day. A more comprehensive report by Popular Information gives not just the scope but the speed:
There were 302 fact checks of Facebook content in the U.S. conducted last month. But much of that work was conducted far too slowly to make a difference. For example, Politifact conducted 54 fact checks of Facebook content in January 2020. But just nine of those fact checks were conducted within 24 hours of the content being posted to Facebook. And less than half of the fact checks, 23, were conducted within a week.
This is slightly less than 10 fact checks per day in the US. And if fact checks take days to complete, then most people will view viral falsehoods before any label is applied.
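For concreteness, here is the back-of-the-envelope arithmetic behind those figures; every number comes from the two reports quoted above, and nothing else is assumed.

```python
# Arithmetic on the figures quoted above (CJR and Popular Information).

cjr_checks, cjr_days = 70, 5                 # all US fact checkers combined
print(cjr_checks / cjr_days)                 # 14.0 checks per day

january_checks, january_days = 302, 31       # US fact checks, January 2020
print(january_checks / january_days)         # ~9.7 checks per day

politifact_total = 54                        # PolitiFact, January 2020
within_24_hours, within_one_week = 9, 23
print(within_24_hours / politifact_total)    # ~17% labelled within a day
print(within_one_week / politifact_total)    # ~43% labelled within a week
```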
Fact checking lost the trust of its most important audience
In America, trust in fact-checkers quickly split along partisan lines. As a 2019 study from Pew shows, Republicans in particular tend to think that fact-checkers are biased:
Generally speaking, many conservative Americans feel as negatively about fact-checkers as they do about mainstream liberal media more generally.
One reason for asymmetrical trust in fact-checking is that, simply put, there’s more misinformation on the right than on the left. No one really wants to review the evidence on this point—this statement probably sounds obvious if you identify as Blue, and obviously biased if you identify as Red. (See also: Proof that the 2020 Election Wasn’t Stolen, Which No One Will Read.)
But just to drop a few links: In recent years researchers have found a strong correlation between conservatism and low-quality news sharing across platforms. A massive study with internal Facebook data found that there’s way more Red misinformation than Blue shared on that platform. Even conservative journalists say that their side of the industry is lower quality, according to a 2020 research project. All of this means that conservative Americans are going to be disproportionately impacted even if fact checkers are being completely fair.
But perhaps the best reason for conservative distrust is that the fact-checking community is closely intertwined with the professional journalism world, and journalists today are mostly quite left of center. Fact checkers are de facto censors, and it’s completely rational to distrust censors who don’t share your politics. Unsurprisingly, research bears this out. In a 2023 study, researchers found that fact-checks were much more likely to backfire (52% more likely!) when they came from a member of the political outgroup.
Conservatives actually mostly agree with liberal fact checkers
You may wonder how researchers know there’s more misinformation on the conservative side if misinformation is defined by potentially biased fact checkers. To account for this possibility, researchers at MIT compared the ratings of professional fact checkers with those from politically-balanced groups of laypeople. They found that the assessments by such bipartisan groups differed little from those of the professional fact-checkers. This replicates an earlier study which found the same thing.
This has two big implications. First, it means that the majority of those fact checks were in fact accurate and impartial! When you include conservatives in the process, they largely agree with the calls that liberal fact checkers made. Although there were certainly exceptions, by and large the pros were competent and fair.
Second, the obvious trust-building move would have been for fact checking organizations to visibly employ conservative fact checkers. This may have achieved goal #2, trust, without sacrificing goal #1, accuracy. Why didn’t this happen at the time? Our guess is that polarization made it impossible for fact checking orgs to hire conservatives. Deep suspicion runs both ways; there’s simply too much distrust of conservatives within professional journalism.
In fairness, Meta’s fact-checking program did take at least a few steps to include a politically diverse array of partners. In 2018, it took flak from left-leaning outlets for adding the Weekly Standard, a conservative magazine, as one of its fact-checking partners. Even more controversially, for a while the Daily Caller was a fact checker, and sometimes fact checked other fact checkers like Politifact. (Both the Weekly Standard and the Daily Caller were later removed from the program.) This is all culture war nonsense. But what would have happened if, say, Politifact had hired conservative staff directly and resolved disputes internally?
Will community notes be better?
Is it possible that Community Notes, the system Zuckerberg said will be implemented instead of the third-party fact-checking program, could actually be an improvement? The move was inspired by X’s Community Notes, and YouTube also recently rolled out a similar program.
As Tech and Social Cohesion pointed out, the critical limitation facing Community Notes is timing. They cite a study which found that this program does help reduce the spread of misinformation on X, but,
Notes typically become visible 15 hours after a tweet is posted… by which time 80% of retweets have already occurred. As a result, the overall reduction in misinformation reshares across all tweets is modest—estimated at around 11%.
But, they add, there are already efforts underway to leverage AI to “rapidly synthesize the most helpful elements of multiple user-generated notes into a single, high-quality ‘Supernote,’” which could greatly reduce the time before a misleading post is labelled.
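A rough way to see why timing dominates: if 80% of reshares happen before a note becomes visible, even a very persuasive note can only affect the remaining 20%. The per-note effectiveness values in this sketch are hypothetical; only the 80% share comes from the study quoted above.

```python
# Why late notes have a low ceiling: only post-visibility reshares can be prevented.
# The 0.80 figure is from the study quoted above; effectiveness values are hypothetical.

share_before_note_visible = 0.80

for note_effectiveness in (0.25, 0.50, 0.75):   # fraction of later reshares prevented
    total_reduction = (1 - share_before_note_visible) * note_effectiveness
    print(f"note prevents {note_effectiveness:.0%} of later reshares "
          f"-> total spread falls by {total_reduction:.0%}")

# Even at 50% effectiveness the total reduction is only 10%, in line with
# the ~11% estimate above, which is why faster labelling matters so much.
```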
What’s more, Twitter’s original community notes experiments suggested that community-based fact-checking is effective both at correcting belief in falsehoods and at earning audience trust. Independent researchers recently replicated these results, finding that “Across both sides of the political spectrum, community notes were perceived as significantly more trustworthy than simple misinformation flags.”
X’s Community Notes have thus far struggled to match the scope of false information on the platform. However, in all the criticism we’ve never seen a direct comparison to the human fact-checking program in terms of scale, speed, or trust. Considering all available evidence, it’s not clear that professional checkers were ever any faster, more comprehensive, or more trusted than this crowdsourced system. Notably, a broad array of platform professionals, academics, and civil society organizations recently endorsed Community Notes and similar “bridging” systems in a massive paper on the future of the digital public square.
On the whole, we think that losing professional fact checking can’t be good for a platform. We do take concerns about bias seriously, but we now know that when conservatives are asked to fact-check, they mostly agree with professional fact checker ratings. And however good the algorithms get, they aren’t a substitute for well-informed professionals watching what’s going viral. But if crowdsourced corrections are implemented well, they might just rise to the twin challenges of scale and trust in a way that the fact checking partnership never quite did.
Quote of the Week
At his press conference today, President-elect Donald Trump said, “We have inflation, I believe, at a level that we never had before.” Today, year-over-year inflation is 2.7%. That figure ranged from 12% to 15% in the 1970s and early 1980s.