Guest Blogger: Francesco Marconi - "3 strategies journalists can use to uncover the effects of AI"
Our new open call for ideas, AI and the News: An Open Challenge, is looking for submissions that address the impact artificial intelligence has on the news and information ecosystem. Here, journalist and Wall Street Journal R&D chief Francesco Marconi offers a few ways that newsrooms can help the public better understand this technology. - Tim Hwang, Director
Artificial intelligence is everywhere. It shapes our interactions with friends, our investment decisions, the information we see (and don't see), the products we purchase, whether we qualify for a bank loan, and at times even who we fall in love with.
If society is defined as the sum of the interactions of individuals and groups, then algorithms are the invisible force influencing these connections – up to the point where algorithms become society.
And we trust their judgment. A Harvard Business School study found that people are more likely to follow advice when they are told it came from an algorithm. Insights like this surface a fundamental challenge in modern life: the misconception of algorithmic neutrality.
Just because a result came from a computer doesn't mean it's right. The absence of transparency around AI implementation has another unintended consequence: it enables companies and institutions using smart technologies to go unchecked.
Algorithmic accountability reporting is not a new idea. However, the news industry has yet to prove to the general public that these issues are worth their attention. This is no easy task: AI is difficult to audit and, consequently, difficult to explain. But why should newsrooms care in the first place?
Because AI is becoming power.
An extreme example is China, which last year received almost half of global investment in AI startups, at $15.2 billion. Using data from Tencent, Alibaba and other technology companies, the Chinese government has attempted to introduce a social credit system in which citizens would be ranked by their behavior and infractions. Bad driving, lighting a cigarette in a non-smoking zone or posting fake news online could lead to a lower score and, eventually, punishment.
In a world where people can be reduced to mere data points, journalists have a new task at hand: to report on what information tech companies collect and to surface the extent to which public institutions use personal data.
Influence through AI is not reserved for tech companies and governments.
It’s also self-administered.
Monitoring and quantifying life improvement is already a trend in full bloom. According to a survey by the market research institute GfK, 45 percent of Americans are tracking their health data via an app, fitness band or smartwatch, or have done so in the past. This data is increasingly the focus of AI features within these apps.
The widespread adoption of personal AI tools is fueling the belief that every human interaction and behavior can be quantified. Think about apps that predict mood swings, or algorithms that use social media posts to trace psychographic profiles. In this context, an individual’s happiness faces the risk of being reduced to an index designed to be perpetually optimized. This is not science fiction. There are communities of individuals entirely dedicated to sharing self-tracking best practices.
Even though we have a surplus of metrics about ourselves and the world around us, the algorithms used to analyze that data can be flawed and full of human bias. This creates a paradoxical situation: while more information is available than ever, the undercurrents structuring people's everyday lives become increasingly opaque to them. Journalists have an important role in shedding light into the black box. Here are three ways:
Crowdsourcing data from readers and comparing how algorithms affect different individuals: Are they seeing the same ads on a social media platform? Are they getting the same price on an e-commerce site?
Experimenting with “explorable explainers” that show how AI works by letting readers interact with the underlying algorithms and by describing their methodology. An example could be a story on AI surveillance that lets people test facial recognition software using their own webcam.
Comparing artificial intelligence with real-world legal systems. Algorithms are rules, and rules should be fair. For example, a predictive policing system that consistently targets poor neighborhoods is, in effect, discriminating.
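The third approach can be sketched with a simple audit. The data below is entirely hypothetical, and the 1.25 ratio threshold is one rough heuristic (adapted from the "four-fifths rule" used in US employment-discrimination analysis), not a legal standard. This is a minimal illustration of a disparate-impact check, not a production audit tool:

```python
from collections import Counter

# Hypothetical audit records: (neighborhood group, was_targeted) pairs, as if
# collected from a predictive policing system. All values are invented.
records = [
    ("low_income", True), ("low_income", True), ("low_income", True), ("low_income", False),
    ("high_income", True), ("high_income", False), ("high_income", False), ("high_income", False),
]

def targeting_rates(records):
    """Fraction of records flagged as targeted, per neighborhood group."""
    totals, targeted = Counter(), Counter()
    for group, hit in records:
        totals[group] += 1
        targeted[group] += hit  # bools count as 0 or 1
    return {g: targeted[g] / totals[g] for g in totals}

rates = targeting_rates(records)

# Compare the most- and least-targeted groups; a large gap between the two
# rates is a signal of disparate impact worth reporting on.
ratio = max(rates.values()) / min(rates.values())

print(rates)                            # {'low_income': 0.75, 'high_income': 0.25}
print(f"disparity ratio: {ratio:.2f}")  # disparity ratio: 3.00
```

Even a toy check like this makes the journalistic question concrete: the story is not the algorithm's code but the measurable gap in how its decisions land on different groups.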
The rapid pace of innovation is introducing new challenges for journalists, who must continuously learn and adapt to new reporting practices. Fostering collaborations with universities and research centers is another effective way to keep up with the latest developments in artificial intelligence. But even though technology is always changing, journalistic standards should remain the same.
Francesco Marconi is the R&D Chief at The Wall Street Journal. Francesco was recognized by MediaShift as one of the top 20 digital media innovators and was named to Editor & Publisher Magazine's "25 under 35" list of next-generation publishing leaders.
Till Daldrup contributed to this article. Till is a master's candidate in NYU's Studio 20 journalism program.