The Ethics + Governance of Artificial Intelligence Initiative

News

AI Initiative Supporting Prototypes Linking Legal and Technical Interpretability

“Interpretability” - the problem of understanding why machine learning systems make the decisions they do - is not a matter of crossing some mystical threshold of technological progress. A useful explanation of "why" depends on who is asking, and for what reasons.

Achieving interpretability will therefore be a domain-specific exercise, one that requires the technology to meet the needs of the people who implement and use AI systems every day. Much of the work of interpretability will fall to practitioners who translate broad requirements into technical reality.

Today we’re glad to announce a $135,000 grant to Finale Doshi-Velez and her lab at the School of Engineering and Applied Sciences at Harvard to advance precisely this kind of work.

These funds will support research into legally-operative explanations in the credit and bail decision contexts. Working in close collaboration with legal experts, the team will build on its existing work in the medical space to create a prototype explanation tool that meets the requirements of these new domains. The lab will then conduct a randomized trial to examine how such explanation systems shape the decision-making of domain experts in practice.

We’re excited to see how these results might inform and shape decisions about larger deployments of these technologies in the field, shedding light on how the specifics of human-computer interaction can carry broader policy implications.

Tim Hwang