The Ethics + Governance of Artificial Intelligence Initiative

News

AI Initiative Supporting Research on the Limits (and Alternatives) to Supervised Learning in Addressing Fairness & Explainability

One looming question that we’ve been keeping an eye on at the AI Initiative is this: what will it really take to ensure that machine learning systems deployed in the real world can be both fair and explainable? For critical applications like lending, hiring, policing, and mediating the conduits through which news propagates, these traits seem indispensable, and yet pinning down the right problem formulations and reducing actionable systems to practice have remained ongoing research challenges.

Following the spirit of our call to support research that helps transition AI away from “alchemy” towards a sturdier foundation of scientific knowledge and engineering principles, we’re excited to announce today that the AI Initiative will be awarding a $400,000 grant to support research teams led by Moritz Hardt (UC Berkeley) and Zachary Lipton (Carnegie Mellon University), aimed at putting fairness and explainability on a similarly firm footing.

Under this grant, their research will pursue three goals. First, they will explore the fundamental limitations of supervised learning as naively conceived and commonly applied, vis-à-vis the goals of achieving fairness and explainability. Can a model conceived in a framework insufficiently expressive to capture the effects of actions truly address fairness? Can common techniques relying on regularizers and parity constraints truly get at the essence of fairness? Can a model that doesn’t reason explain its reasoning?
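To make the second of these questions concrete, the sketch below shows the kind of parity-style objective such techniques rely on: a logistic model whose loss adds a demographic-parity penalty, the gap between groups in the mean predicted score. This is our own illustration rather than anything from the funded work, and the data, variable names, and penalty weight are all assumptions.

    # Illustrative sketch of a parity-constrained objective (not from the grant work):
    # a logistic classifier whose loss adds a demographic-parity penalty, i.e. the
    # difference in mean predicted score between two groups. Data and names are invented.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))           # features
    group = rng.integers(0, 2, size=200)    # protected attribute (0 or 1)
    y = (X[:, 0] + 0.5 * group + rng.normal(size=200) > 0).astype(float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def penalized_loss(w, lam=1.0):
        p = sigmoid(X @ w)
        # ordinary log-loss for a linear classifier
        log_loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
        # demographic-parity penalty: gap in average predicted score across groups
        parity_gap = abs(p[group == 0].mean() - p[group == 1].mean())
        return log_loss + lam * parity_gap

    w_hat = minimize(penalized_loss, np.zeros(3)).x
    print("weights under the parity penalty:", w_hat)

Whether a penalty like this, however carefully tuned, actually captures what fairness demands once a model’s decisions feed back into the world is precisely the kind of question the grantees intend to press.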

In exploring these limitations, they will also characterize the longer-term effects of actors employing supervised learning through the dynamics of the complex systems in which machine learning is deployed. Second, they will pursue a line of research seeking to leverage advances in causal reasoning and measurement principles as a means of understanding and addressing fairness and explainability. Third, they will consider human interaction with machine learning systems, exploring the ways that such systems can leverage human-in-the-loop learning and investigating sample-efficient algorithms by which humans might audit machine-learning-based systems in the wild.

We’re especially excited about this project because it promises to tackle these issues from critical, theoretical, and practical perspectives, with deliverables including theoretical research, open-source implementations of the tools and techniques used in the analysis, and both academic and non-technical reports. You can keep up with the latest on their work on Twitter @mrtz and @zacharylipton.

Tim Hwang