AI Initiative Announcing $750,000 Challenge on News and Information Quality

We’re excited to announce that next month, we will launch an open call for ideas aimed at shaping the influence artificial intelligence (AI) has on the field of news and information.

The challenge asks an overarching question: How might we ensure that artificial intelligence is used in the news ecosystem ethically and in the public interest?

The open call will invest $750,000 in the best ideas we receive.

AI Initiative Supporting Independent, Empirical Evaluation of Approaches to Online Misinformation

Machine learning and automation play a major role in the spread of information online, shaping everything from which stories social media platforms recommend to how those stories are presented. As we come to grips with the complex role these systems play in the distribution of misinformation and disinformation, one looming question has been what kinds of alternative designs or improvements we should implement that might be better.

To make good decisions on this question, we believe the public needs grounded, independent evaluations of the likely impact of interventions against online misinformation and disinformation, and of the hard trade-offs we might face in implementing various proposals.

AI Initiative to Partner with the ACLU of Massachusetts on Accountable AI

Over the past year, the AI Initiative has worked closely with Kade Crockford and the Technology for Liberty program at the ACLU of Massachusetts on issues at the intersection of civil rights, civil liberties, and automation. Most recently, we collaborated with the ACLU to suggest guiding principles and urge caution on a recent proposal in the Massachusetts legislature that would mandate statewide use of risk assessment instruments in the pre-trial context.

AI Initiative Partnering with the Human Rights Data Analysis Group

We’re announcing today that the Ethics and Governance of AI Initiative will be partnering with Kristian Lum and the team at the Human Rights Data Analysis Group (HRDAG) to support ongoing technical work evaluating the effectiveness of automated risk assessment in the criminal justice system and to develop ways to integrate ideas from causal modeling into existing frameworks.

AI Initiative Supporting Project to Translate and Contextualize Chinese AI Policy

We’re thrilled to announce today that the AI Fund will be making a grant to support DigiChina, a collaborative effort from the New America Foundation that seeks to understand China’s digital policy developments, primarily through translating and analyzing Chinese-language sources.