The Ethics + Governance of Artificial Intelligence Initiative

News
Meet the 66 finalists in the AI and the News Open Challenge

When we launched the AI and the News Open Challenge in September, we received more than 500 applications, with half coming from outside the United States. It’s clear that the impact of artificial intelligence (AI) on the news ecosystem is a global concern. Today, we're excited to share the 66 finalists who are vying for a share of the $750,000 in funding to address specific problems at the intersection of AI and the news.

Read More
Tim Hwang
Guest Blogger: Francesco Marconi - "3 strategies journalists can use to uncover the effects of AI"

Artificial intelligence is everywhere. It shapes our interactions with friends, our investment decisions, the information we see (and don’t see), the products we purchase, our likelihood of qualifying for a bank loan, and at times even who we fall in love with.

If society is defined as the sum of the interactions of individuals and groups, then algorithms are the invisible force influencing these connections – up to the point where algorithms become society.

Read More
Tim Hwang
AI Initiative Supporting Research on the Limits (and Alternatives) to Supervised Learning in Addressing Fairness & Explainability

One looming question that we’ve been keeping an eye on at the AI Initiative has been this: what will it really take to ensure that machine learning systems deployed in the real world can be both fair and explainable? For critical applications like lending, hiring, policing, and mediating the conduits through which news propagates, these traits seem indispensable. Yet, to date, pinning down the right problem formulations and reducing them to actionable systems in practice has remained an ongoing research challenge.

Following the spirit of our call to support research that helps transition AI away from “alchemy” towards a sturdier foundation of scientific knowledge and engineering principles, we’re excited to announce today that the AI Initiative will be awarding a $400,000 grant to support research teams led by Moritz Hardt (UC Berkeley) and Zachary Lipton (Carnegie Mellon University) to similarly advance work aimed at grounding fairness and explainability.

Read More
Tim Hwang
Guest Blogger: Latoya Peterson - "Why journalists need to understand artificial intelligence"

Artificial intelligence is truly a black box.

Journalists are reporting on a phenomenon that is hard to explain, even for experts in the field. Compounding matters, most of the important conversations are taking place behind closed doors. Many of the major advances in the field are proprietary, and the public is overly reliant on one-sided corporate press releases that maximize shareholder benefit and minimize risks. Meanwhile, publicly available information is heavily academic, requiring advanced knowledge of the field to decipher anything beyond the executive summary.

Read More
Tim Hwang
AI Initiative Incubates The Markup, a New Investigative Journalism Venture Focused on Revealing the Societal Impact of Technology

Today, The Markup, a new journalism venture founded by Sue Gardner, former head of the Wikimedia Foundation, and Julia Angwin and Jeff Larson, investigative journalists formerly at ProPublica, officially launches.

The Markup will focus on investigative journalism that seeks to uncover how powerful institutions are using and abusing technology in ways that harm real people and damage society.

Read More
Tim Hwang
AI Initiative Supporting Prototypes Linking Legal and Technical Interpretability

“Interpretability” – the problem of understanding why machine learning systems make the decisions they do – is not a matter of crossing some mystical threshold of technological progress. A useful explanation of why depends on who is asking, and for what reasons.

To that end, achieving interpretability will be a domain-specific exercise, requiring the technology to meet the needs of the people who will be implementing and using AI systems every day. On this front, a major piece of the work of interpretability will come from practitioners working on translating broad requirements into a technical reality.

We’re glad to announce a $135,000 grant today to Finale Doshi-Velez and her lab at the School of Engineering and Applied Sciences at Harvard to advance precisely this kind of work.

Read More
Tim Hwang
AI Initiative Partners with Princeton’s Center for Information Technology Policy

We’re excited to announce today that the AI Initiative is partnering with the teams at Princeton’s Center for Information Technology Policy (CITP) and their colleagues at the University Center for Human Values (UCHV) to develop and release a series of case studies in the coming year. CITP is an interdisciplinary center at Princeton University that we’ve long admired as a nexus of expertise in technology, engineering, public policy, and the social sciences. The scholars at UCHV bring deep knowledge of ethics and political theory to the workshops and case study work.

Read More
Tim Hwang
AI Initiative Announcing $750,000 Challenge on News and Information Quality

We’re excited to announce that next month, we will launch an open call for ideas aimed at shaping the influence artificial intelligence (AI) has on the field of news and information.

The challenge asks an overarching question: How might we ensure that artificial intelligence is used in the news ecosystem ethically and in the public interest?

The open call will invest $750,000 in the best ideas we receive.

Read More
Tim Hwang
AI Initiative Supporting Independent, Empirical Evaluation of Approaches to Online Misinformation

Machine learning and automation play a major role in the spread of information online, defining everything from which stories social media platforms recommend to how those stories are presented. As we come to grips with the complex role that these systems play in the distribution of misinformation and disinformation, one looming question has been what alternative designs or improvements we should implement that might serve the public better.

To make good decisions on this question, we believe that the public needs grounded, independent evaluations of the likely impact of interventions against online misinformation and disinformation, and the hard trade-offs that we might face in implementing various proposals.

Read More
Tim Hwang
AI Initiative to Partner with the ACLU of Massachusetts on Accountable AI

Over the past year, the AI Initiative has worked closely with Kade Crockford and the Technology for Liberty program at the ACLU of Massachusetts on issues at the intersection of civil rights, civil liberties, and automation. Most recently, we collaborated with the ACLU to suggest guiding principles and urge caution on a recent proposal in the Massachusetts legislature which would mandate statewide use of risk assessment instruments in the pre-trial context.

Read More
Tim Hwang