A New Model for Fact Checking

Misinformation is undermining democracy. What can be done?

By Evan Hansen 11/30/2022

ABSTRACT: This white paper evaluates the health and effectiveness of current fact checking methods in reducing and slowing the spread of false information online, and examines the pros and cons of a “trust-less” verification model based on Web3 technology and crowdsourcing. The core features and anticipated vulnerabilities of such a system are also discussed.


Introduction

The rapid spread of false information online is a serious threat to social stability around the world and to democracy itself. A recent Pew Research Center survey of 19 countries ranked the spread of false information online just behind climate change among the top global threats.

The most common approach to this problem is to enlist professional fact checkers to investigate disputed claims and publish their findings, such as debunks. The well-regarded fact checking site Snopes, which grew out of an online encyclopedia of urban legends founded in 1994, is a frequently cited example.

Since 2015, the fact checking industry has grown from 68 to 378 active professional organizations operating in 69 countries, according to Duke University’s Reporters’ Lab. Yet the false information problem continues to outstrip these efforts. Meanwhile, fact checking itself has become suspect for some people as fake fact checking sites have proliferated and legitimate ones face questions about potential bias and funding from politically motivated actors.

In 2019, the Columbia Journalism Review published “The Fact-Check Industry: Has our investment in debunking worked?”, a critical examination of the growth and application of fact checking. The article cited Professor Mike Ananny of the University of Southern California, whose Tow Center report highlighted one of the early concerns about Facebook’s fact checking work:

“An almost unassailable, opaque, and proprietarily guarded collective ability to create and circulate news, and to signal to audiences what can be believed—this kind of power cannot live within any single set of self-selected, self-regulating organizations that escapes robust public scrutiny,” he wrote.

Fact checkers have wound up in the curious position of fact checking other self-proclaimed fact checkers, with no clear consensus on which sources are more trustworthy. Emily Bell, the article’s author and founding director of the Tow Center for Digital Journalism, noted that Facebook’s “emphasis on nonpartisanship opened the door to partners such as Check Your Fact, which is funded by the Daily Caller, a right-wing site that is itself adrift from truth. The Weekly Standard was welcomed in, too; last September, when it ‘fact checked’ a story by ThinkProgress about Roe v. Wade, Judd Legum, the founder and former editor in chief of ThinkProgress, described Facebook’s fact-checking initiative as a ‘farce.’”

The challenge was further highlighted when Facebook announced that it would not fact check material posted by politicians, even when it violated the company’s rules, and again in November 2022, when it announced it would not fact check former President Donald Trump.

‘Backfire effect’

What has emerged is a more complex problem than simply digging into and exposing the facts. “Facts” may be marshaled for and against various claims, with more or less authority; but the mechanisms for deciding what narratives deserve attention and investigation, the processes for unpacking and debunking claims, and the motives of the actors involved bring inherent questions of potential bias and conflicts of interest. In pursuit of truth, which frequently requires taking a side, even reputable fact checkers risk a loss of trust.

As a March 2022 article in the journal Digital Journalism (vol. 10) noted, “Fact-checking can have the adverse ‘backfire effect’ by making the journalist appear partial, which can undermine trust in the accuracy of information or, in some examples, increase belief in the falsehood under review (Nyhan and Reifler 2010; Nyhan, Reifler, and Ubel 2013). Thus, existing scholarship suggests that under certain conditions fact-checking may serve to undermine readers’ trust in news reporting.”

Analyzing the dynamics of how misinformation spreads is key to understanding potential solutions. A 2018 MIT study concluded that people, not bots, are the primary vectors.

“We found that falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth, in all categories of information, and in many cases by an order of magnitude,” said Sinan Aral, a professor at the MIT Sloan School of Management and co-author of a paper detailing the findings. Moreover, the scholars found, the spread of false information is not primarily driven by bots programmed to disseminate inaccurate stories; instead, false news spreads faster on Twitter because people retweet inaccurate items.

Another way

The explosion of misinformation comes at a time when trust in authority is declining, a phenomenon on display at the height of the COVID crisis, when “do your own research” became a popular slogan amid conflicting recommendations and policy reversals by medical experts. “The whole strength of science is that people who have different ideological bents can do experiments, transcend their prior beliefs, and try to build a foundation of facts,” Janet Woodcock, MD, principal deputy commissioner of the Food and Drug Administration (FDA), recently told the journal of the Association of American Medical Colleges, a nearly 150-year-old institution that established the foundations of medical education standards in the United States. “Now we have whole groups of people who don’t believe that.”

Given what is known about the dynamics of misinformation, it now seems clear that solutions focusing solely on the “facts themselves,” without also addressing persuasion and the building of social consensus around those facts, are incomplete.

Strong practical evidence suggests that some of the most successful methods of persuasion and consensus-building include participation (getting people involved in solving a problem), the art of questioning (actively seeking ideas and input), and competition (creating “sides” to choose from).

By creating a competitive environment that incentivizes active questioning and invites everyone to participate equally, it might be possible to engage a large enough number of ordinary citizens with divergent opinions in the fact finding enterprise to generate not only well-founded evidence but persuasion and consensus.

A number of objections to a wide-open citizens’ fact checking system come to mind: ordinary people lack the subject matter expertise to solve complicated problems; people are biased and will inevitably introduce noise that skews the results; and opening fact checking widely to the public is an invitation to game the system. In fact, research shows that ordinary people in aggregate perform as well as or better than experts in many cases; that adversarial systems that openly pit contrary ideas against each other reduce rather than increase bias; and that decentralized Web3 technology based on blockchain and cryptography can secure open public processes from tampering while providing the transparency needed to detect discrepancies.

A September 2021 research paper in Science examined the effectiveness of crowdsourced misinformation labeling, looking at 207 news articles flagged for fact-checking by Facebook algorithms. It compared the accuracy ratings of three professional fact-checkers who researched each article with those of 1,128 Americans, recruited through Amazon Mechanical Turk, who rated each article’s headline and lede. According to the abstract, “[t]he report found the average ratings of small, politically balanced crowds of laypeople (i) correlate with the average fact-checker ratings as well as the fact-checkers’ ratings correlate with each other and (ii) predict whether the majority of fact-checkers rated a headline as ‘true’ with high accuracy.”

The authors went on to note that “a large literature shows that even if the ratings of individual laypeople are noisy and ineffective, aggregating their responses can lead to highly accurate crowd judgments. For example, the judgment of a diverse, independent group of laypeople has been found to outperform the judgment of a single expert across a variety of domains, including guessing tasks, medical diagnoses, and predictions of corporate earnings (14, 15, 18, 19).” Moreover, advanced vote-counting methods, such as Bayesian Truth Serum, can improve the accuracy of crowdsourced decisions in cases where there is little reliable evidence or the majority opinion is not correct, as described in the 2013 MIT study “Finding truth when the majority is wrong.”
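To make the aggregation mechanics concrete, the following is a minimal Python sketch (illustrative only, not the method used in any of the studies cited) comparing a simple majority count with the “surprisingly popular” criterion underlying Bayesian-Truth-Serum-style counting, in which each rater supplies both an answer and a prediction of how others will answer:

    from statistics import mean

    def majority_vote(votes):
        """Return the answer endorsed by the largest share of raters."""
        return max(set(votes), key=votes.count)

    def surprisingly_popular(votes, predictions):
        """Pick the answer whose actual frequency most exceeds the
        frequency the crowd predicted for it (Prelec et al.). This can
        recover a correct answer even when the majority is wrong."""
        answers = set(votes)
        actual = {a: votes.count(a) / len(votes) for a in answers}
        predicted = {a: mean(p.get(a, 0.0) for p in predictions) for a in answers}
        return max(answers, key=lambda a: actual[a] - predicted[a])

    # Toy example: "false" wins the raw vote, but even its supporters
    # expected "false" to dominate, while "true" outperforms expectations.
    votes = ["false", "false", "false", "true", "true"]
    predictions = [
        {"true": 0.1, "false": 0.9},
        {"true": 0.2, "false": 0.8},
        {"true": 0.1, "false": 0.9},
        {"true": 0.3, "false": 0.7},
        {"true": 0.2, "false": 0.8},
    ]
    print(majority_vote(votes))                      # -> false
    print(surprisingly_popular(votes, predictions))  # -> true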

Similar results have emerged in prediction markets, such as PredictIt, which allows anyone to place a bet on a particular outcome, such as who will win the next U.S. presidential election, and collect a reward if they are right. Such markets frequently produce eerily accurate predictions of actual outcomes. Extending this mechanism to let anyone stake on a factual claim, while allowing others to stake against it, might create both a strong signal about the accuracy of statements on social media and an economic incentive for participation.
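As a rough illustration of that staking mechanic, here is a minimal sketch of a pari-mutuel claim market in which the losing side’s stakes are distributed to the winning side pro rata. All names are invented for illustration; a production system would live on-chain and handle many edge cases this sketch omits:

    class ClaimMarket:
        """Toy pari-mutuel market on a single factual claim."""

        def __init__(self, claim_text):
            self.claim = claim_text
            self.stakes = {"true": {}, "false": {}}  # side -> {account: amount}

        def stake(self, account, side, amount):
            assert side in self.stakes and amount > 0
            self.stakes[side][account] = self.stakes[side].get(account, 0) + amount

        def settle(self, verdict):
            """Pay each winner their stake back plus a pro-rata share
            of the losing side's pool."""
            losing = "false" if verdict == "true" else "true"
            winners = self.stakes[verdict]
            pot = sum(self.stakes[losing].values())
            total = sum(winners.values())
            return {acct: amt + pot * amt / total for acct, amt in winners.items()}

    market = ClaimMarket("The photo was taken in 2019.")
    market.stake("alice", "true", 50)
    market.stake("bob", "false", 30)
    market.stake("carol", "true", 50)
    print(market.settle("true"))  # {'alice': 65.0, 'carol': 65.0}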

There are numerous real-world examples of the effectiveness of randomly selected groups, or juries, in problem solving and content moderation.

When Ireland broke a seemingly intractable political deadlock over women’s right to choose in 2018, it was a Citizens’ Assembly that laid the groundwork, recommending a referendum on abortion in a deeply Catholic society where politicians had been unwilling or unable to broker one. And a 2010 study of judicial juries in the UK, “Are Juries Fair?”, found that, contrary to popular belief, juries are fair, effective and efficient. The study was carried out by Professor Cheryl Thomas of University College London and was based, for the first time, on interviews with more than 1,000 jurors after their cases; it also included a separate analysis of 68,000 jury verdicts examining the sensitive issue of how jurors make their decisions.

As Editor in Chief at Periscope from 2016 to 2019, I personally witnessed the power of citizen juries as effective moderators of digital content. In the app, viewers of live video broadcasts could post comments that appeared in the video feed and frequently veered into toxicity. We launched a community feature allowing users to flag comments for review by a small group of other viewers, who could vote in real time on whether an author should be temporarily banned from further commenting. Twitter’s more recently released Community Notes program, which allows ordinary users to annotate suspect Tweets, is an even more powerful application of crowdsourced verification, and further evidence that such an approach is feasible.

Why decentralization matters

Several methods of decentralized fact checking on blockchains have been proposed, including Newsblocks, Fact Protocol, Ideamarket and Factland.org, among others. (Factland.org was co-founded by the author of this paper following conversations with digital cash pioneer David Chaum, who seeded the initial idea of building a betting market for facts adjudicated by anonymous, randomly sampled juries.) The approach has also been endorsed by former Twitter CEO Jack Dorsey, who in 2020 retweeted a call for fact-checking through open-source technology rather than new intermediaries.

Web3 technologies offer a number of as-yet-unrealized opportunities to create a novel, secure, decentralized system that empowers ordinary people to get involved and have a say in what’s true and false, what’s myth and reality, as it unfolds in real time on their social feeds. Composability, a key feature of Web3, should in theory allow crowdsourced annotations to be displayed directly on disputed content wherever it appears, using a shared system for flagging, reviewing and adjudicating claims, while offering anyone, anywhere the opportunity to participate in the process.
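One way to picture that composability is a shared annotation registry keyed to a hash of the disputed content, which any client (a social app, a browser extension) could query and render inline. The sketch below is purely illustrative; the names and storage model are assumptions, not an existing protocol:

    import hashlib

    # Hypothetical shared registry: content hash -> list of annotations.
    # On a real network this table would live on a public chain or a
    # decentralized store rather than in process memory.
    ANNOTATIONS = {}

    def content_hash(text):
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    def annotate(text, verdict, evidence_url):
        ANNOTATIONS.setdefault(content_hash(text), []).append(
            {"verdict": verdict, "evidence": evidence_url}
        )

    def lookup(text):
        """Any front end can call this to display crowd findings inline,
        wherever the same content appears."""
        return ANNOTATIONS.get(content_hash(text), [])

    annotate("Claim X circulating on social media.", "disputed",
             "https://example.org/evidence")
    print(lookup("Claim X circulating on social media."))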

Six key features of such a system emerge that are not present in current centralized fact checking methodologies (a minimal data-structure sketch follows the list):

  1. TRUST-LESS: Rather than ask people to trust a closed central body and its research, the system must be open and subject to inspection so anyone can verify for themselves how decisions were made.
  2. PARTICIPATORY: Everyone must be allowed to participate by submitting claims and/or evidence to be considered, and have the opportunity to help adjudicate claims.
  3. REVISABLE: The system must be resilient to mistakes and provide an accessible challenge or appeals process.
  4. ADVERSARIAL: The system must encourage people with different beliefs to engage with each other and share opposing evidence side by side. This has been shown in research to reduce bias.
  5. REPRESENTATIVE: The system must issue opinions that represent the verifiable consensus view of the community, rather than the opinion of a skewed minority.
  6. CONFLICT FREE: Any financial rewards issued must be emergent from the system itself, not funded by sources whose political or other ties create perceived conflicts of interest.
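The sketch below shows how several of these properties might surface in a single claim record: evidence for both sides is stored side by side, votes are publicly inspectable, and verdicts remain open to appeal. It is a thought experiment under assumed names, not a description of any existing system:

    from dataclasses import dataclass, field

    @dataclass
    class Claim:
        """Illustrative claim record reflecting the features above."""
        text: str
        evidence_for: list = field(default_factory=list)      # ADVERSARIAL: both
        evidence_against: list = field(default_factory=list)  # sides, side by side
        votes: dict = field(default_factory=dict)             # TRUST-LESS: public, auditable
        status: str = "open"                                  # REVISABLE: never final by fiat

        def submit_evidence(self, supports, url):
            # PARTICIPATORY: anyone may add evidence on either side.
            (self.evidence_for if supports else self.evidence_against).append(url)

        def verdict(self):
            # REPRESENTATIVE: the outcome is the verifiable tally, nothing more.
            tally = {}
            for v in self.votes.values():
                tally[v] = tally.get(v, 0) + 1
            return max(tally, key=tally.get) if tally else None

        def appeal(self):
            # REVISABLE: a decided claim can be reopened and re-adjudicated.
            self.status = "under_appeal"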

Of course, it is not easy to achieve a system with all of these features. Several key problems can already be anticipated (one possible mitigation is sketched after the list):

  1. SYBIL ATTACKS: The system must reliably prevent individuals from creating multiple accounts in order to pack juries and control claim voting.
  2. PRIVACY: Strong anonymity/pseudonymity will be required to prevent vote tampering and ensure the security of jurors and other participants.
  3. 51% ATTACKS: Mitigations will be necessary to prevent whales, such as state actors, from cornering the market on claims or the ecosystem as a whole.
  4. GOVERNANCE: Any attempt to construct such a system must address the question of governance in such a way that suspicions of bias can be directly examined and verified by the community, without reference to a centralized “trusted” body. As seen above, fact checking organizations can unwittingly undermine their own credibility through opaque funding and decision making, creating doubts among some groups about their provenance and political motives.
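As one example of the kind of mitigation the first and third items call for, juries could be drawn by publicly reproducible random sampling from identity-verified accounts, so that neither extra accounts nor extra capital buys extra seats. The sketch below is hypothetical; real identity verification and randomness beacons are hard problems in their own right:

    import random

    def select_jury(verified_accounts, size, seed):
        """Draw a jury by uniform random sample from identity-verified
        accounts. One person, one account blunts Sybil attacks; random
        selection per claim blunts whales, since votes cannot be bought
        in advance; and a public seed (e.g. a block hash) lets anyone
        re-run and audit the draw."""
        rng = random.Random(seed)  # deterministic, publicly reproducible
        return rng.sample(sorted(verified_accounts), size)

    accounts = {f"user{i}" for i in range(1000)}
    jury = select_jury(accounts, size=12, seed="block-hash-placeholder")
    print(jury)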

Conclusion

A close examination of misinformation dynamics reveals a core problem of trust that prevents even the most well-documented facts from persuading groups to shift their opinions and form a consensus. Placing the question of trust at the foundation of any such system requires trade-offs that appear counterintuitive but are likely essential to long-term success. Granting no special role to experts, embracing radical inclusiveness, and trading traditional concepts of authority for transparency and resiliency are compromises we should be willing to make in order to test new methods of truth finding that address known failures in current approaches.

It is not only desirable but likely necessary to fully decentralize the fact finding enterprise in order to remove the sources of mistrust that have stymied traditional centralized approaches, while, given a large enough group of participants, increasing the scale and speed of decision-making.


Evan Hansen is the co-founder and CEO of Factland.org, a token-based platform that incentivizes fact-finding, critical reasoning, and emergent truth.