PRESS RELEASE: Artificial intelligence and the news - Seven ideas receive funding to ensure AI is used in the public interest
From combating the spread of misinformation to expanding awareness of the impact of artificial intelligence on society, seven projects shaping how AI affects the field of news and information received $750,000 in funding today.
The Ethics and Governance of AI Initiative, a joint project of the MIT Media Lab and Harvard's Berkman Klein Center for Internet & Society, awarded the funding through the AI and the News Open Challenge.
Selected from more than 500 submissions, the winning projects include an effort to combat the spread of misinformation on the messaging platform WhatsApp: one group will accelerate the work of fact-checkers in India, where the circulation of dubious videos has led to lynchings.
Other projects will augment and support the work of journalists by experimenting with AI's potential to sift through reams of information, training reporters to cover this emerging field, and supporting an in-depth series on the effects of AI on American workers.
“These winners showcase both the opportunities and challenges posed by artificial intelligence. On one hand, the technology offers a tremendous opportunity to improve the way we work — including helping journalists find key information buried in mountains of public records. Yet we are also seeing a range of negative consequences as AI becomes intertwined with the spread of misinformation and disinformation online,” said Tim Hwang, who leads the initiative. “We’re thrilled to support these winners as they pilot new efforts to ensure these technological breakthroughs have positive social impact.”
Chequeado: Building regional knowledge about AI by producing an investigative series on the ethics of algorithms and their implications in Latin America.
Legal Robot: Helping journalists and the public find connections between public agencies and the companies they hire by using artificial intelligence to extract information from large-scale public records requests.
MuckRock Foundation: Making it easier for newsrooms, researchers, and communities to analyze documents through crowdsourcing and machine learning tools.
Craig Newmark Graduate School of Journalism, CUNY Foundation: Training community media journalists to uncover and analyze artificial intelligence systems, and to produce news pieces that center on the impact of this technology on immigrants and low-income communities.
Rochester Institute of Technology: Helping journalists more easily spot suspicious media by using recent developments in machine learning to develop techniques that quickly detect evidence that video or audio has been faked.
Seattle Times: Launching a one-year reporting project that will inform and engage readers on the social, ethical and practical implications of artificial intelligence and its influence on the way people work.
Tattle Civic Technologies: Combating misinformation on WhatsApp and other chat apps in India by increasing the efficiency of fact-checking and the accessibility of fact-checked information for mobile-first WhatsApp users.
Full project descriptions are below.
Launched in 2017, the Ethics and Governance of AI Initiative is supported by the John S. and James L. Knight Foundation, Omidyar Network, LinkedIn co-founder Reid Hoffman, and the William and Flora Hewlett Foundation. The Initiative is a fiscal sponsorship fund of The Miami Foundation.
“AI’s impact on quality news and information is a global concern,” said Paul Cheung, Knight Foundation’s director of Journalism and Technology Innovation. “We are thrilled that our winners represent a global community of journalists and technologists who are offering a diverse array of solutions to ensure AI is used for public good.”
About the Ethics and Governance of AI Initiative
Launched in 2017, the Ethics and Governance of AI Initiative is a hybrid research effort and philanthropic fund that seeks to ensure that technologies of automation and machine learning are researched, developed, and deployed in ways that vindicate social values of fairness, human autonomy, and justice. The Initiative is a joint project of the MIT Media Lab and the Harvard Berkman Klein Center for Internet & Society. It incubates a range of research, prototyping, and advocacy activities within these two anchor institutions and across the broader ecosystem of civil society.
Winners: AI and the News: An Open Challenge (2019)
Project: Sidekick
Organization: MuckRock Foundation
Newsrooms and researchers are gaining access to ever-larger document sets, but obtaining them is just the start. Understanding what is in those PDFs can be just as challenging, requiring hours of sifting and data entry. Sidekick will offer accessible and intuitive crowdsourcing and machine learning tools to help newsrooms and other groups automate the conversion of documents into data, quickly analyzing tens of thousands of pages while highlighting sections that might otherwise go overlooked.
Project: Reporting from the Epicenter of AI
Organization: Seattle Times
The Seattle Times will create a one-year reporting series on artificial intelligence and its implications for society. This work will involve producing major enterprise stories that examine the changing nature of work, assumptions about the jobs Americans will have in the future, and the political and public policy issues sparked as changes from AI take hold. From driverless vehicles to advanced robotic systems, technological changes promise to disrupt the nature and structure of work just as dramatically over the next decade as the Internet did over the past quarter century. As it has since the Industrial Revolution, the ongoing transformation of working life will ripple out through society in ways that affect income inequality, culture, notions of community and even social stability. The reporting will engage workers whose lives will be affected by AI technologies and amplify their voices, experiences and perspectives.
Project: Community Media Training to Report on AI Systems
Organization: Craig Newmark Graduate School of Journalism at CUNY
This project seeks to train niche media organizations in how to cover artificial intelligence, with an emphasis on how the technology will directly affect the people they serve. The Newmark Journalism School will offer workshops to train journalists on topics ranging from how AI systems can shape health, social and financial policy to analyzing who benefits and who is harmed by how algorithms are coded. The program will include a help desk that can guide journalists on technical issues and questions as they report their stories. The trainings will be available to community journalists in New York City, and niche journalists from other parts of the U.S. can apply to attend workshops. The Newmark school will also produce a primer on AI, which will be published in multiple languages.
Project: Automating Public Accountability with AI
Organization: Legal Robot
Legal Robot will create a free research tool that journalists and the public can use to find and analyze government contracts, so that they may better understand how public entities use the public's resources. Legal Robot will use public records laws to request a large set of city, county and state contracts, then automate the extraction and analysis of the data with its machine learning tools. The project will then publish both a database and data visualizations for the public to scrutinize. Visualizations will be created in partnership with TLM Works, a web development training program at San Quentin prison. The project’s goal is to promote government transparency by providing journalists with the tools and data they need to discover links between government agencies and their contractors, and to scrutinize any fraud, waste or abuse.
Project: Tattle: Promoting Public Awareness and Combating Misinformation on WhatsApp
Organization: Tattle Civic Technologies
In India, as in other developing countries, WhatsApp is one of the most widely used social media platforms. Information, including misinformation, spreads quickly on the platform. The effects can be far-ranging, from changes in people’s health choices to greater social tension in communities and, in extreme cases, violence against individuals. Tattle aims to support and scale existing fact-checking efforts by creating channels for sourcing content from WhatsApp users; using machine learning to categorize and classify multilingual, multimedia content circulated on chat apps; and distributing fact-checked information so that it is accessible to mobile-first audiences. In the process, Tattle aims to enable more transparent research on misinformation in closed networks.
Project: Robustly Detecting DeepFakes
Organization: Rochester Institute of Technology
Researchers at the Rochester Institute of Technology will design and evaluate some of the first approaches for robustly and automatically detecting deepfake videos. These detection techniques will combine vision, audio, and language information, including the synthesis of all three for a comprehensive detection that will be much harder to fool. Videos will be labeled with an “integrity score,” signaling to professionals and consumers where media may have been manipulated. A browser extension will color-code the videos for users depending on their score.
Project: Ethics of Algorithms in Latin America
Organization: Chequeado
Chequeado will partner with journalists around Latin America to produce an in-depth investigative series on the ethical issues surrounding the implementation of artificial intelligence in the region. Additionally, Chequeado will train local journalists in how to cover these emerging technologies and produce a guide with recommendations for journalists covering AI and other relevant issues. This work will be shared within the major journalist networks in Latin America.