The Ethics + Governance of Artificial Intelligence Initiative



Meet the 66 finalists in the AI and the News Open Challenge

When we launched the AI and the News Open Challenge in September, we received more than 500 applications, with half coming from outside the United States. It’s clear that the impact of artificial intelligence (AI) on the news ecosystem is a global concern. Today, we're excited to share the 66 finalists who are vying for a share of the $750,000 to address specific problems at the intersection of AI and the news.

We’re thrilled by the level of participation. We think it suggests how urgent these issues are and highlights the opportunity to bring a broader range of voices into the conversation.

Three key themes emerged across all the project submissions we received:

  1. There’s a great deal of interest in asking a key question: How do we make artificial intelligence work for unique community needs and cultural contexts? We think that’s critical. We firmly believe that AI should not be a one-size-fits-all technology. Finding ways of making AI work for specific geographic, cultural and professional communities, as you’ll see the finalists propose, is important.

  2. There’s a broad recognition that journalists have a major role to play in informing the public around AI, and that misreporting on the technology is itself producing issues. We’ve received a number of proposals that work towards grounding the public discussion around AI, and demystifying what can frequently be a jargon-filled space.

  3. There is a strong indication that AI’s impact on news and information is a multidisciplinary problem that requires multidisciplinary approaches. Many of the strongest proposals we received are collaborations among journalists, researchers and technologists.

Finalists came from the U.S., Canada, Europe, Latin America, Asia and Africa. The proposals emerged from a wide range of organizations including academic institutions, policy centers, major national and local news enterprises, new digital media companies and start-ups.

A complete list of the finalists is below. The Ethics and Governance of AI Initiative will announce winning ideas in the first quarter of 2019.

Congrats to the finalists, and thanks to everyone who submitted an idea!

Organization | Title | Project
A.I. Poli & UCL | Transparency Metrics for Healthy News Consumption Habits | Give users self-editing tools to improve media literacy and promote healthy online content consumption.
| Check Worthy | Track stories on social media to address more forms of misinformation, such as hoaxes, conspiracies and pseudo-science, with a FakeRank score.
Arizona State University Foundation/Cronkite School of Journalism | Fake News Video AI Trainer | Leverage human eye data to train a machine learning system to identify potential “deep fake” news video content.
Associated Press | AI in News Standards Initiative | Convene thinkers from news, academia and beyond to create a blueprint for and launch a guide to best practices and standards for AI and automation throughout the news ecosystem – including news gathering, production and distribution – and promulgate them to journalists through the AP Stylebook, training and news industry events.
Bad Idea Factory | TV Kitchen In a Box | Smoosh the technology behind the Internet Archive's TV Archive into a box that newsrooms can use to turn local television into data streams; then use that data to improve re-contextualizing of videos by searching TV and generating "big picture" comic book summaries.
British Broadcasting Corporation | Fabricated Audio and Video Detection | Develop machine learning models and practical tools to help journalists identify fabricated audio and video media created using AI techniques, drawing on the established expertise of specialist journalists and linguists in BBC News worldwide who track and verify media from across the globe, identifying fraudulent and untrustworthy content.
Caroline Sinders | Viz Lab | Visualize the evolution of meme images. This searchable repository uses computer vision to help journalists and researchers trace permutations of any jpg, png, or gif that morphs over time, fueling political misinformation.
Center for a New American Security | Exploring Governance Options: Policy Solutions for the AI Era | Explore policy and regulatory solutions to ensure artificial intelligence is used for public good in the information space.
Center for Long Term Cybersecurity | Tattle | Address the challenge of misinformation on WhatsApp in India by using technologies like AI and machine learning to supplement the work of existing fact-checking groups; making information more accessible to everyday WhatsApp users; and enabling research on content on the platform.
Chequeado | Ethics of Algorithms in Latin America | Produce a special series of investigative reports on the ethics of algorithms and their implications in Latin America, to build knowledge on the current state of the art and to identify which conflicts are unique and which are similar to those documented in other regions.
CIJA US | Empowering Journalists Investigating Bad Actors | Strengthen journalists’ AI-driven investigative work, making it possible for this valuable data to be utilized in later trials, by bringing criminal justice expertise to the conversation.
Cochran360 | Ethics and Standards Toolkit for AI in Newsrooms | Create an ethics and standards toolkit for using AI in newsrooms, offering ready-to-use principles and guidelines pre-considered by news ombudsmen and standards editors from many countries.
The Brown Institute for Media Innovation at Columbia Journalism School & Associated Press | Guidelines for Reporting with Advanced Models | Provide much-needed guidance on the use of advanced statistical models in reporting, helping professional newsrooms with their transition to a computational future.
Craig Newmark Graduate School of Journalism CUNY Foundation | Training Community and Ethnic Media to Cover AI Systems' Impact on Immigrant and Low-income Communities | Train community and ethnic media journalists to uncover and analyze artificial intelligence systems used by companies and local government agencies, and to produce news pieces that expose the disproportionate impact of this technology on immigrants and low-income communities.
Dekko Creative | Algorithmic Content Management | Create a deep neural network architecture to stop the spread of fake news, particularly so-called deep fakes.
DePaul University, School of Computing | Automatic Construction of Narrative Event Flows from News | Develop methods for automatically constructing narrative event flows in the form of event graphs from online news sources in order to assess the reliability of news stories, identify fake stories and assist journalists in discovering key connections among news events.
Development Seed | Queryable Earth - AI for a Changing Planet | Help journalists and decision makers better track change on earth by building open source tools, algorithms and metadata standards to support the analysis of satellite imagery.
The Digital Public Library of America, The Underlay and Minitex | Contextualizing Current Events Through Library Data | Combine structured data from research collections and institutional repositories to reflect the variety of views about a given topic for journalists to understand the context and connections across different knowledge repositories.
Factico Periodismo Movil | Factico Trivia | Create a mobile game that uses AI to proactively detect potential fake-news trends, identify topics that are about to become fake-news trends on social networks, and respond by sending notifications with verified stories.
Factmata | Tackling Language Deception in Political News | Develop an artificial intelligence algorithm that is trained by communities of journalists and experts to identify language deception in political news.
Global Disinformation Index Foundation | Global Disinformation Index | Create a rating system for global media sites showing the risk of those sites carrying “disinformation” and provide this information to the public, media sites, digital media platforms and advertisers.
| Practical Predictions | Help journalists develop AI tools to improve reporting efficiency and improve reader experience via GovTrack’s prediction and text mining techniques.
Groundswell Consulting | Artificial Intelligence in Media (AIM) Project | Develop "AIM,” a fellowship, training and hub project that focuses on arming journalists and non-technical practitioners across West Africa with specialized skills, an online network and a physical environment to keep up with the latest technology to combat the unethical use of AI, drive public interest news and influence the policy agenda.
Indiana University | Detecting Malicious Coordinated Campaigns on Social Media | Develop novel AI methods based on unsupervised learning to detect deceptive social bots and their coordinated misinformation campaigns.
International Consortium of Investigative Journalists | Machine Learning Investigations Lab | Launch a machine learning investigations lab for developers and journalists, which will allow for the testing of how machine learning techniques can be best applied to data-driven investigative journalism.
Iridescent and USC Annenberg School of Journalism | AI Literacy for Journalists: To include diverse voices in the AI conversation | Equip journalists with training and resources to increase their ability to investigate the rapidly developing field of AI and include low-income voices in the AI conversation.
Daily Milk Foundation | Daily Milk | Draw upon experiences from building an internet meme encyclopedia to combine AI methods for news aggregation and journalistic oversight to provide an index of pertinent coverage of the day’s stories.
Legal Robot | Automating Public Accountability with AI | Use public records laws to request millions of city, county and state contracts, automate the extraction and analysis with Legal Robot's machine learning tools, then post data visualizations and the graph database online for journalists, academics and everyone else to scrutinize.
Logically | Tech-tionary | Provide journalists with an automated dictionary that explains tech terminology for non-specialists and helps them avoid AI jargon.
Center for Ethics in Science and Journalism (CESJ) | GASP | Develop GASP, the first Global Coalition for AI Surveillance, Reporting and Public Awareness, a network with three goals: investigate the role of AI, train journalists to better cover it and raise awareness of how to use it responsibly.
Michigan State University | Dear Algorithm, Am I attractive to news? | Build a citizen science tool to reveal how algorithmic classification of users’ interests shapes exposure to news on social media and helps to reinforce systematic inequalities in access to civic information.
Modulate | Modulate Watermark | Build custom audio watermarking that can be used to distinguish otherwise realistic, synthesized audio from real audio.
MuckRock Foundation | Sidekick: Community-trained machine learning to analyze gnarly documents | Extend the MuckRock/DocumentCloud platform with Sidekick, which will make it easy for newsrooms, researchers and activists to work with their communities to help train and deploy machine learning classifiers that can meet the challenge of massive document dumps.
Nanyang Technological University Singapore | Experiments on the Effect of Social Network Affordances on the Health of Online Discussions | Conduct a series of field experiments to determine whether different social network site (SNS) affordances or design features can help or hinder the health of online discussion.
Numina | Open Source Tools for Redacting Image PII | Create open-source tools for computationally detecting and redacting personally identifiable information (PII) from images, while educating users about what constitutes PII.
Leon Yin | Disinfo Doppler | Develop a web application that seeks to empower newsrooms, researchers and members of civil society groups to easily research and debunk coordinated hoaxes, harassment campaigns and racist propaganda that originate from anonymous image-sharing message boards online.
Organized Crime and Corruption Reporting Project | OCCRP Data | Automate Aleph, a search tool that uses AI to identify patterns and connections across some 400 million records to inform cross-border money laundering and corruption investigations.
Public Citizen Foundation | Governance of Artificial Intelligence in Trade Agreements | Promote accountability in the use of AI in news dissemination by activating an international network of civil society experts who will influence trade agreements that set binding international rules on AI governance, and by developing a set of research-based policy recommendations for trade negotiators.
Research Center for Open Digital Innovation, Purdue University | FACT: Federated AI for Communication and Trust | Develop an open, federated AI platform that will continuously evaluate professional and civic news streams, assign trust ratings to original sources and messages and progressively adjust those ratings based on the quality of successive modifications to the news as it is shared.
Rochester Institute of Technology | Robustly Detecting Deep Fakes | Explore the use of deep learning techniques to detect evidence that video or audio was generated by deep fakes or other techniques, helping journalists quickly and easily spot suspicious media.
Rochester Institute of Technology | The Immigrant and Ethnic Media Interface Project | Amplify news produced by ethnic and immigrant media in the U.S. by testing and developing AI, informed by first-person interviews with immigrant and ethnic media producers, to aggregate and analyze their news and social media posts.
Rutgers University | All That’s Old is News Again: What the Field of AI and the News Can Learn from Bookstores, Libraries and Supermarkets | Spur creativity and reflection on design choices for technologists working in AI and news, by developing a typology of how organizations that have occupied a structural position similar to contemporary tech platforms (e.g. libraries, bookstores, supermarkets) have imagined and performed their role as information intermediaries.
School of Information Sciences, University of Illinois at Urbana-Champaign | Making AI-driven Detection of Fake and Biased News Transparent and Understandable | Create an extension to Fakey, a news literacy game, to enhance articles with visual cues that AI solutions use to evaluate data, and allow people to explore, understand and improve AI-detected causes of bias and misinformation in news through human-in-the-loop processes.
Seattle Times | Reporting from the Epicenter of A.I. | Inform and engage citizens in the social, ethical and practical dimensions of this rapidly spreading technology, by reporting from its epicenter on whether productivity gains will be shared with those who are displaced, or concentrated among its creators.
Shift Design | Seeking and Perceiving: Algorithmically Assigning Social Humans Labels to Social Media Data Sets | Explore the ethical dimensions of using algorithms in an application to assign Social Humans (SH-A) labels (a prototype label system developed at the Harvard Library Innovation Lab) to social media content to help archivists, researchers and journalists spot and call out bad actors spreading disinformation.
SkyTruth | Artificial Intelligence (AI) and the Environment | Build an AI-driven environmental monitoring platform that allows users in advocacy and research communities to track land use change, a boost for advocates working on health and environmental issues in historically overlooked areas, and who often do not have these tools or capacity at the ready.
Sludge Media | News Alerts on Tech Company Lobbying and A.I. Regulations | Offer free, localized, exclusive news alerts on technology company lobbying and AI regulatory issues to local newsrooms.
Syracuse University | Decoding Democracy: A Social Media Accountability Project | Create an AI-enabled browser component to analyze online communication and provide real-time escalating warnings, awareness and accountability features to forestall and reduce the dissemination of misinformation.
Tempo / TempoSMS, Kibera News Network, Families United for Racial and Economic Equality and Multimer | Grassroots AI: Inclusive Co-Design, Co-Development, and Co-Deployment for Participatory News Organizations | Have this team of journalists, organizers and technologists around the world design and deploy tools and techniques to support ethical AI in participatory journalism, specifically methods for visualizing and analyzing large datasets accumulated via the crowd-based platforms used by citizen journalists and data collectors.
Textgain | Better Explainability with Grasp | Develop free machine learning software that focuses on simplicity and explainability for non-experts, along with in-depth case studies on manually annotated datasets of undesirable social media content.
University of Illinois | Re-engineering the News Ecosystem with Evolutionary Diversity in AI | Develop and refine a prototype information system aimed at fundamentally re-engineering and promoting topic diversity in online news platforms.
The Centre for Internet and Society | First Contact | Create a bilingual (English and Hindi) website that uncovers and explains our daily encounters with AI and machine learning in access to news and information.
Dartmouth | Understanding the Impact of Misinformation Across Cultures to Inform Globally Robust Interventions | Perform an international, comparative study to build an understanding of how diverse audiences respond to misinformation, to inform the development of ways to prevent the spread of misinformation.
UC Berkeley | DeepFake Forensics, Fact-Checking, and Filtering | Develop forensic techniques to detect fake, AI-generated content and use this technology to automatically fact-check news sites and stories.
UNC Chapel Hill School of Media and Journalism | Story Prospector: Transparent AI Tool for Finding Story Insights for Journalists | Create a tool for journalists to quickly run multiple AI algorithms and statistical models on structured and unstructured data in order to extract insights that are visualized and presented in natural language.
DeepTrace | DeepTrace | Build technology based on deep learning to detect forgery in videos and explain to users where the manipulation occurred in the images.
University of Colorado, Boulder | From Editorial Voice to System Objectives: Closing the Gap Between Public Service and Personalization for News | Develop a prototype news delivery system featuring transparent, accountable and auditable integration of editorial policies into the machine learning algorithms that underlie personalized news presentation, a system that can be explainable and controllable by both users and editorial teams.
University of North Carolina at Chapel Hill | Ethical Disinformation Detection via Fact Verification and Logical Entailment Models | Ethically detect and prevent the spread of disinformation within online social and news channels via explainable, interpretable end-to-end neural networks.
University of Utah | Beyond Bots: Empowering Journalists Through an Analysis of Guidelines of AI Emerging in Newsrooms | Use newsroom observation and interviews with journalists, news organizations and companies shaping AI culture in the US to outline pathways for ethical and inclusive guidelines for AI policy making among journalists, newsrooms and developers.
University of Virginia, Data Science Institute: Center for Data Ethics and Justice | Establish and Share Journalism Source Metadata with Wikimedia | Counter disinformation and support digital media literacy by curating the news source metadata in Wikipedia and Wikidata to enable humans and bots to judge the reliability and social context of publishers.
University of Washington, eScience Institute | Exploring Real-time Emotional Responses to Online News and Mindful Design | Explore technological interventions for algorithmic news platforms by recording the emotional effect of the things users read in real time, building a representative dataset of affective text, and spurring mindful consumption of media.
Urban Institute | Results Speak Louder than Coefficients: Visualizing All Inputs-to-Outputs without Opening AI’s Black Box | Enable journalists and community groups to better understand and describe how AI algorithms work by delivering a tool and guide to help them visualize where small changes in model inputs make big differences in an AI algorithm’s outputs.
WITNESS | Integrating Marginalized Voices into Content Moderation Governance | Ensure that AI systems used by major technology companies protect critical public interest media content.
WURD Radio | AI and the AA (African American) | Ensure that African-Americans are included in how AI is covered, discussed and shared so that the digital divide is narrowed, not widened.
WVU Reed College of Media Innovation Center | Hiding in Plain Sight: White Nationalism in Appalachia | Develop, pilot and test an early warning system designed to expose the tactics and impact of white nationalists recruiting and influencing vulnerable youth in Appalachia through online platforms.
Thomas Brochhagen | Allqu (Algorithmic Quorum) | Empower the public through digital literacy by creating a game where players are immersed in the learning process of algorithms, as a way to develop an understanding of algorithmic bias and misinformation.

We’d also like to thank the following people who have taken the time to join us in reviewing the first phase of the projects. The pool of reviewers represents a diverse group of experts from the fields of journalism, technology, research and other disciplines. More than half of our reviewers are people of color and about half are women.

  • Amanda Levendowski, NYU Law

  • Aron Pilhofer, Temple University

  • Clarence Wardell, Results for America

  • Clement Wolf, Google

  • Devin Gaffney, Crayon

  • Erik Reyna, The Washington Post

  • Geraldine Moriba, JSK Fellow at Stanford University

  • Hong Qu, Harvard Kennedy School

  • Jeremy Gilbert, The Washington Post

  • Jessica Forde, Project Jupyter

  • Joy Bonaguro, Corelight

  • Justin Myers, Associated Press

  • Kat Lo, University of California, Irvine

  • Katyanna Quach, The Register

  • Kim Fox, The Philadelphia Inquirer

  • Lillian Ruiz, Civil Media

  • Meredith Broussard, NYU

  • Mi-Ai Parrish, MAP Strategies Group

  • Natalie Nzeyimana, Nuanced

  • Nathan Olivarez-Giles, Apple

  • Nicholas Hagar, Northwestern University

  • Nick Diakopoulos, Northwestern University & Tow Center

  • Orlando Watson, Honeycomb

  • Retha Hill, Arizona State University

  • Sam Greenspan, Bellwether

  • Taylor Nakagawa, TechCrunch

  • Ting Cai, Microsoft/Bing

  • Tricia Wang, Data & Society, Berkman Klein Center for Internet & Society at Harvard University

Tim Hwang