Launched in 2017, the Ethics and Governance of AI Initiative is a hybrid research effort and philanthropic fund that seeks to ensure that technologies of automation and machine learning are researched, developed, and deployed in ways that vindicate social values of fairness, human autonomy, and justice.
The Initiative is a joint project of the MIT Media Lab and the Berkman Klein Center for Internet &amp; Society at Harvard University. It incubates a range of research, prototyping, and advocacy activities within these two anchor institutions and across the broader ecosystem of civil society.
At present, the Initiative supports work in three domains that we believe to be among the most impactful near-term arenas of automation and machine learning deployment.
AI and Justice
What legal and institutional structures should govern the adoption and maintenance of autonomous systems in public administration? How might approaches such as causal modeling reshape the role that automated decision-making plays in areas such as criminal justice?
Information Quality
Can we measure the influence that machine learning and autonomous systems have on the public sphere? What do effective structures of governance and collaborative development look like between platforms and the public? Can we better ground discussions around policy responses to disinformation in empirical research?
Autonomy and Interaction
What are the moral and ethical intuitions that the public brings to bear in their interactions with autonomous systems? How might those intuitions be better integrated into these systems at a technical level? What role do design and interface, say in autonomous vehicles, play in shaping debates around interpretability and control?