AI Initiative Partnering with the Human Rights Data Analysis Group
We’re announcing today that the Ethics and Governance of AI Initiative will be partnering with Kristian Lum and the team at the Human Rights Data Analysis Group (HRDAG) to support ongoing technical work evaluating the effectiveness of automated risk assessment in the criminal justice system and to develop ways to integrate ideas from causal modeling into existing frameworks.
This $300,000 grant will support research that sheds light on two key empirical questions in the ongoing debates around the use of criminal risk assessment systems. First, the project will probe a number of the underlying assumptions made by existing studies that evaluate the effectiveness of risk assessment systems based on partial views of the available data on bail decisions. Second, the grant will advance work enriching our understanding of the ways in which biased or gamed data inputs — such as booking charges determined by an arresting officer — contribute to the production of racially biased classifications.
The grant will also support a collaboration with researchers at the MIT Media Lab investigating the role that causal modeling might play in operationalizing ideas of “intervention over prediction” as an alternative to the dominant paradigms guiding the deployment of these automated systems in the criminal justice context.
For more than 25 years, HRDAG has worked with human rights advocates to build scientifically defensible, evidence-based arguments. Their work has been used by truth commissions, international criminal tribunals, and non-governmental human rights organizations on five continents. We’re excited to have the chance to work with such an experienced team as we expand the AI Initiative’s work in this space.