From combating the spread of misinformation to expanding public awareness of AI’s impact on society, seven projects addressing the effects of artificial intelligence on the field of news and information received a combined $750,000 in funding today.
We’ve been absolutely blown away by the interest and excitement around our AI and the News Open Challenge. After several months, and thanks to a small army of amazing reviewers who sifted through the 500+ applications we received, we’re excited to finally announce the winners of the challenge today.
When we launched the AI and the News Open Challenge in September, we received more than 500 applications - with half coming from outside the United States. It’s clear that the impact of artificial intelligence (AI) in the news ecosystem is a global concern. Today, we’re excited to share the 66 finalists who are vying for a share of the $750,000 to address specific problems at the intersection of AI and the news.
We need a free, public and easily accessible source of information about the AI systems that might be used to watch us. And I’ve found it in a surprising place: the federal trademark register.
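As a rough sketch of how such a search could work in practice, the snippet below scans a bulk export of trademark registrations for surveillance-related goods-and-services descriptions. The file name, column names, and keyword list are all hypothetical placeholders; the real USPTO bulk data has its own schema.

```python
import csv

# Hedged sketch: filter a (hypothetical) CSV export of trademark
# registrations for surveillance-related product descriptions.
# "trademarks.csv", "goods_and_services", and "mark_name" are
# invented names for illustration, not the actual USPTO schema.

KEYWORDS = ("facial recognition", "surveillance", "license plate",
            "biometric", "emotion detection")

def surveillance_marks(path):
    """Yield (mark name, description) for registrations whose
    goods/services description mentions a surveillance keyword."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            description = row.get("goods_and_services", "").lower()
            if any(keyword in description for keyword in KEYWORDS):
                yield row.get("mark_name", ""), description

for name, description in surveillance_marks("trademarks.csv"):
    print(name, "->", description[:80])
```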
Artificial intelligence is everywhere. It shapes our interactions with friends, our investment decisions, the information we see (and don’t see), the products we purchase, whether we qualify for a bank loan - and at times - even who we fall in love with.
If society is defined as the sum of the interactions of individuals and groups, then algorithms are the invisible force influencing these connections – up to the point where algorithms become society.
We’re excited to announce today that the AI Initiative will be supporting researchers Naz Modirzadeh and Dustin Lewis in a two-year research program that will work to strengthen international debate and inform policy-making on the ways that artificial intelligence and algorithms are reshaping war.
One looming question that we’ve been keeping an eye on at the AI Initiative has been this: what will it really take to ensure that machine learning systems deployed in the real world can be both fair and explainable? For critical applications like lending, hiring, policing, and mediating the conduits through which news propagates, these traits seem indispensable - and yet, to date, pinning down the right problem formulations and reducing them to practice in working systems has remained an ongoing research challenge.
Following the spirit of our call to support research that helps transition AI away from “alchemy” towards a sturdier foundation of scientific knowledge and engineering principles, we’re excited to announce today that the AI Initiative will be awarding a $400,000 grant to support research teams led by Moritz Hardt (UC Berkeley) and Zachary Lipton (Carnegie Mellon University) to advance work that puts fairness and explainability on this same footing.
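For readers unfamiliar with what “fair” can mean concretely in a setting like lending, the sketch below computes two standard group-fairness gaps on made-up data. The data, numbers, and function names are illustrative assumptions, not the grantees’ methods.

```python
# Minimal sketch of two common group-fairness checks on hypothetical
# lending data. All names and numbers here are invented illustrations.

def rate(values):
    """Fraction of 1s in a list of 0/1 outcomes."""
    return sum(values) / len(values) if values else 0.0

def demographic_parity_gap(y_pred, group):
    """Difference in approval rates between two groups (0 and 1)."""
    approvals_a = [p for p, g in zip(y_pred, group) if g == 0]
    approvals_b = [p for p, g in zip(y_pred, group) if g == 1]
    return abs(rate(approvals_a) - rate(approvals_b))

def true_positive_rate_gap(y_true, y_pred, group):
    """Difference in true-positive rates - one half of 'equalized odds'."""
    def tpr(g):
        preds = [p for t, p, gg in zip(y_true, y_pred, group)
                 if gg == g and t == 1]
        return rate(preds)
    return abs(tpr(0) - tpr(1))

# Hypothetical decisions for eight loan applicants in two groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]   # actually repaid the loan?
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]   # model's approval decision
group  = [0, 0, 0, 0, 1, 1, 1, 1]   # protected-group membership

print(demographic_parity_gap(y_pred, group))          # 0.25
print(true_positive_rate_gap(y_true, y_pred, group))  # ~0.33
```

Even this toy example surfaces the kind of tension the research grapples with: the two gaps need not move together, and narrowing one can widen the other.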
Artificial intelligence is truly a black box.
Journalists are reporting on a phenomenon that is hard to explain, even for experts in the field. And compounding matters, most of the important conversations are taking place behind closed doors. Many of the major advances in the field are proprietary, and the public is overly reliant on one-sided corporate press releases that maximize shareholder benefit and minimize risks. Meanwhile, publicly available information is heavily academic, requiring advanced knowledge of the field to decipher anything beyond the executive summary.
Today, The Markup, a new journalism venture founded by Sue Gardner, former head of the Wikimedia Foundation, and Julia Angwin and Jeff Larson, investigative journalists formerly at ProPublica, officially launches.
The Markup will focus on investigative journalism that seeks to uncover how powerful institutions are using and abusing technology in ways that harm real people and damage society.
Just last month, we announced that the Ethics and Governance of AI Initiative would be hosting a $750,000 open call for fresh approaches to four pressing problems at the intersection of artificial intelligence and the field of news and information.
Today, we’re excited to announce that the challenge is officially open!
“Interpretability” - the problem of understanding why machine learning systems make the decisions that they do - is not a matter of crossing some mystical threshold of technological progress. A useful explanation of why a system decided as it did depends on who is asking, and for what reasons.
To that end, achieving interpretability will be a domain-specific exercise, requiring the technology to meet the needs of the people who will be implementing and using AI systems every day. On this front, a major piece of the work of interpretability will come from practitioners translating broad requirements into technical reality.
We’re glad to announce a $135,000 grant today to Finale Doshi-Velez and her lab at the School of Engineering and Applied Sciences at Harvard to advance precisely this kind of work.
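As a toy illustration of how audience-specific an explanation can be, consider the simplest case: a linear model, where each feature’s contribution to a prediction is just weight times value. Everything below (the model, its weights, the feature names) is a hypothetical example, not work from the Doshi-Velez lab.

```python
# Hedged sketch: explain a linear model's prediction by ranking each
# feature's contribution (weight * value). The model and patient data
# are fabricated for illustration.

weights = {"age": 0.03, "blood_pressure": 0.02, "prior_visits": 0.15}
bias = -2.0

def predict_with_explanation(x):
    """Return the score plus per-feature contributions, largest first."""
    contributions = {name: weights[name] * x[name] for name in weights}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

score, ranked = predict_with_explanation(
    {"age": 64, "blood_pressure": 80, "prior_visits": 3}
)
print(f"score = {score:.2f}")          # score = 1.97
for name, contribution in ranked:
    print(f"  {name}: {contribution:+.2f}")
```

A clinician might find this per-feature breakdown useful; a regulator asking about training data, or a patient asking what to change, would need a different kind of explanation entirely - which is exactly why interpretability is a domain-specific exercise.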
We’re excited to announce today that the AI Initiative is partnering with the teams at Princeton’s Center for Information Technology Policy (CITP) and their colleagues at the University Center for Human Values (UCHV) to develop and release a series of case studies in the coming year. CITP is an interdisciplinary center at Princeton University that we’ve long admired as a nexus of expertise in technology, engineering, public policy, and the social sciences. The scholars at UCHV bring deep knowledge of ethics and political theory to the workshops and case study work.
We’re excited to announce that next month, we will launch an open call for ideas aimed at shaping the influence artificial intelligence (AI) has on the field of news and information.
The challenge asks an overarching question: how might we ensure that artificial intelligence is used in the news ecosystem ethically and in the public interest?
The open call will invest $750,000 in the best ideas we receive.
Machine learning and automation play a major role in the spread of information online, shaping everything from which stories social media platforms recommend to how those stories are presented. As we come to grips with the complex role that these systems play in the distribution of misinformation and disinformation, one looming question has been what alternative designs or improvements might actually do better.
To make good decisions on this question, we believe that the public needs grounded, independent evaluations of the likely impact of interventions against online misinformation and disinformation, and the hard trade-offs that we might face in implementing various proposals.
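As a miniature of what such an evaluation might look like, the toy simulation below compares a baseline engagement-ranked feed against one that downranks stories flagged by an imperfect classifier, reporting both the misinformation that still gets through and the true stories caught as collateral. All the rates and parameters are invented for illustration; they are not estimates of any real platform’s behavior.

```python
import random

# Toy simulation of one trade-off in ranking interventions: downranking
# flagged stories reduces misinformation exposure, but also suppresses
# flagged-but-true stories (false positives). Every number here is a
# hypothetical assumption.

random.seed(0)

def make_story():
    misinfo = random.random() < 0.2                         # 20% base rate
    flagged = random.random() < (0.8 if misinfo else 0.1)   # imperfect flagger
    engagement = random.random()                            # baseline score
    return {"misinfo": misinfo, "flagged": flagged, "engagement": engagement}

def top_feed(stories, k, downrank=False):
    """Rank by engagement, optionally penalizing flagged stories."""
    def score(s):
        penalty = 0.5 if (downrank and s["flagged"]) else 0.0
        return s["engagement"] - penalty
    return sorted(stories, key=score, reverse=True)[:k]

stories = [make_story() for _ in range(10_000)]
for label, feed in [("baseline  ", top_feed(stories, 1000)),
                    ("downranked", top_feed(stories, 1000, downrank=True))]:
    misinfo_shown = sum(s["misinfo"] for s in feed)
    true_but_flagged = sum(1 for s in feed if s["flagged"] and not s["misinfo"])
    print(label, "| misinfo in feed:", misinfo_shown,
          "| flagged-but-true in feed:", true_but_flagged)
```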
Over the past year, the AI Initiative has worked closely with Kade Crockford and the Technology for Liberty program at the ACLU of Massachusetts on issues at the intersection of civil rights, civil liberties, and automation. Most recently, we collaborated with the ACLU to suggest guiding principles and urge caution on a recent proposal in the Massachusetts legislature which would mandate statewide use of risk assessment instruments in the pre-trial context.
We’re announcing today that the Ethics and Governance of AI Initiative will be partnering with Kristian Lum and the team at the Human Rights Data Analysis Group (HRDAG) to support ongoing technical work in evaluating the effectiveness of automated risk assessment in the criminal justice system and to develop ways to integrate ideas from causal modeling into existing frameworks.
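To give a flavor of the kind of technical question at stake, the sketch below asks whether a risk tool’s false positive rate differs across two groups - one of the central measures in this debate. The records are fabricated for illustration and have no connection to HRDAG’s data or methods.

```python
# Minimal sketch of one standard audit question for a risk assessment
# tool: among people who did NOT reoffend, how often were members of
# each group rated high risk? All records below are fabricated.

def false_positive_rate(records):
    """FPR = high-risk ratings among people who did not reoffend."""
    negatives = [r for r in records if not r["reoffended"]]
    if not negatives:
        return 0.0
    return sum(r["high_risk"] for r in negatives) / len(negatives)

records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
]

for g in ("A", "B"):
    fpr = false_positive_rate([r for r in records if r["group"] == g])
    print(f"group {g}: false positive rate = {fpr:.2f}")
```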