AI Initiative
The Ethics + Governance of Artificial Intelligence Initiative

News

AI Initiative Supporting Independent, Empirical Evaluation of Approaches to Online Misinformation

Machine learning and automation play a major role in the spread of information online, shaping everything from which stories social media platforms recommend to how those stories are presented. As we come to grips with the complex role that these systems play in the distribution of misinformation and disinformation, one looming question has been what alternative designs or improvements might actually do better.

To make good decisions on this question, we believe that the public needs grounded, independent evaluations of the likely impact of interventions against online misinformation and disinformation, and the hard trade-offs that we might face in implementing various proposals.

Today, we’re excited to announce that the AI Initiative is awarding a $275,000 grant to Gordon Pennycook (University of Regina) and David Rand (MIT Sloan) to advance empirical research that takes a close look at existing and prospective proposals for dealing with online misinformation and disinformation by testing these ideas with real users.

Doing this can turn up counterintuitive results. Pennycook and Rand have found, for instance, that labeling stories as “fake news” can reduce the credibility of the labeled stories while simultaneously boosting the credibility of false stories that escape labeling, what they call an “implied truth” effect. They have also found that people show less partisan bias than one might expect: Americans from across the political spectrum trust mainstream media outlets more than fake news or hyper-partisan outlets, which suggests that upranking content from trusted sources may be a promising approach. More generally, their research indicates that falling for fake news is driven more by inattention than by willful ignorance or self-deception, which provides some reason for optimism in the fight against misinformation.

Ultimately, misinformation isn’t just about algorithms. Understanding the complex interplay of cognition and psychology with automation will be crucial as society works to define the role these systems will play in shaping the public sphere going forward. We’re thrilled to be partnering with David and Gordon as they launch a new series of experiments in the coming months.

Tim Hwang