We need a free, public and easily accessible source of information about the AI systems that might be used to watch us. And I’ve found it in a surprising place: the federal trademark register.
Artificial intelligence is everywhere. It shapes our interactions with friends, our investment decisions, the information we see (and don’t see), the products we purchase, whether we qualify for a bank loan, and at times even who we fall in love with.
If society is defined as the sum of the interactions of individuals and groups, then algorithms are the invisible force influencing these connections – up to the point where algorithms become society.
We’re excited to announce today that the AI Initiative will be supporting researchers Naz Modirzadeh and Dustin Lewis in a two-year research program that will work to strengthen international debate and inform policy-making on the ways that artificial intelligence and algorithms are reshaping war.
One looming question that we’ve been keeping an eye on at the AI Initiative has been this: what will it really take to ensure that machine learning systems deployed in the real world can be both fair and explainable? For critical applications like lending, hiring, policing, and mediating the conduits through which news propagates, these traits seem indispensable. Yet to date, pinning down the right problem formulations and reducing those ideas to practical, actionable systems have remained open research challenges.
Following the spirit of our call to support research that helps transition AI away from “alchemy” toward a sturdier foundation of scientific knowledge and engineering principles, we’re excited to announce today that the AI Initiative will be awarding a $400,000 grant to support research teams led by Moritz Hardt (UC Berkeley) and Zachary Lipton (Carnegie Mellon University) to advance work aimed at grounding fairness and explainability.
Artificial intelligence is truly a black box.
Journalists are reporting on a phenomenon that is hard to explain, even for experts in the field. Compounding matters, most of the important conversations are taking place behind closed doors. Many of the major advances in the field are proprietary, and the public is overly reliant on one-sided corporate press releases that maximize shareholder benefit and minimize risks. Meanwhile, publicly-available information is heavily academic, requiring advanced knowledge of the field to decipher anything beyond the executive summary.
Today, The Markup, a new journalism venture founded by Sue Gardner, former head of the Wikimedia Foundation, and Julia Angwin and Jeff Larson, investigative journalists formerly at ProPublica, officially launches.
The Markup will focus on investigative journalism that seeks to uncover how powerful institutions are using and abusing technology in ways that harm real people and damage society.
Just last month, we announced that the Ethics and Governance of AI Initiative would be hosting a $750,000 open call for fresh approaches to four pressing problems at the intersection of artificial intelligence and the field of news and information.
Today, we’re excited to announce that the challenge is officially open!
“Interpretability,” the problem of understanding why machine learning systems make the decisions that they do, is not a matter of crossing some mystical threshold of technological progress. A useful explanation of why depends on who is asking, and for what reasons.
To that end, achieving interpretability will be a domain-specific exercise, requiring the technology to meet the needs of the people who will be implementing and using AI systems every day. On this front, a major piece of the work of interpretability will come from practitioners working on translating broad requirements into a technical reality.
We’re glad to announce a $135,000 grant today to Finale Doshi-Velez and her lab at the School of Engineering and Applied Sciences at Harvard to advance precisely this kind of work.
We’re excited to announce today that the AI Initiative is partnering with the teams at Princeton’s Center for Information Technology Policy (CITP) and their colleagues at the University Center for Human Values (UCHV) to develop and release a series of case studies in the coming year. CITP is an interdisciplinary center at Princeton University that we’ve long admired as a nexus of expertise in technology, engineering, public policy, and the social sciences. The scholars at UCHV bring deep knowledge of ethics and political theory to the workshops and case study work.
We’re excited to announce that next month, we will launch an open call for ideas aimed at shaping the influence artificial intelligence (AI) has on the field of news and information.
The challenge asks an overarching question: How might we ensure that artificial intelligence is used in the news ecosystem ethically and in the public interest?
The open call will invest $750,000 in the best ideas we receive.
Machine learning and automation play a major role in the spread of information online, defining everything from which stories social media platforms recommend to how those stories are presented. As we come to grips with the complex role that these systems play in the distribution of misinformation and disinformation, one looming question has been what alternative designs or improvements we should implement instead.
To make good decisions on this question, we believe that the public needs grounded, independent evaluations of the likely impact of interventions against online misinformation and disinformation, and the hard trade-offs that we might face in implementing various proposals.
Over the past year, the AI Initiative has worked closely with Kade Crockford and the Technology for Liberty program at the ACLU of Massachusetts on issues at the intersection of civil rights, civil liberties, and automation. Most recently, we collaborated with the ACLU to suggest guiding principles and urge caution on a recent proposal in the Massachusetts legislature which would mandate statewide use of risk assessment instruments in the pre-trial context.
We’re announcing today that the Ethics and Governance of AI Initiative will be partnering with Kristian Lum and the team at the Human Rights Data Analysis Group (HRDAG) to support ongoing technical work evaluating the effectiveness of automated risk assessment in the criminal justice system and to develop ways to integrate ideas from causal modeling into existing frameworks.