Friday, 22 November 2024

News coverage of artificial intelligence reflects business and government hype — not critical voices

Over the last two years, a multinational research team has analyzed how mainstream Canadian news media covers artificial intelligence. (Shutterstock)

Guillaume Dandurand, Institut national de la recherche scientifique (INRS); Fenwick McKelvey, Concordia University, and Jonathan Roberge, Institut national de la recherche scientifique (INRS)

The news media plays a key role in shaping public perception about artificial intelligence. Since 2017, when Ottawa launched its Pan-Canadian Artificial Intelligence Strategy, AI has been hyped as a key resource for the Canadian economy.

With more than $1 billion in public funding committed, the federal government presents AI as having potential that must be harnessed. Publicly-funded initiatives, like Scale AI and Forum IA Québec, exist to actively promote AI adoption across all sectors of the economy.

Over the last two years, our multinational research team, Shaping AI, has analyzed how mainstream Canadian news media covers AI. We analyzed newspaper coverage of AI between 2012 and 2021 and conducted interviews with Canadian journalists who reported on AI during this period.

Our report found that news media coverage closely reflects business and government interests in AI, praising its future capabilities while under-reporting the power dynamics behind those interests.

The chosen few

Our research found that tech journalists tend to interview the same pro-AI experts over and over again — especially computer scientists. As one journalist explained to us: “Who is the best person to talk about AI, other than the one who is actually making it?” When a small number of sources informs reporting, news stories are more likely to miss important pieces of information or be biased.

Canadian computer scientists and tech entrepreneurs Yoshua Bengio, Geoffrey Hinton, Jean-François Gagné and Joëlle Pineau are disproportionately used as sources in mainstream media. The name of Bengio, a leading expert in AI, pioneer in deep learning and founder of the Mila AI Institute, turns up nearly 500 times in 344 different news articles.

Only a handful of politicians and tech leaders, like Elon Musk or Mark Zuckerberg, have appeared more often across AI news stories than these experts.

Prime Minister Justin Trudeau meets with Jean-François Gagné, co-founder and then-CEO of Element AI, at the Fortune Global Forum, in Toronto, in October 2018. THE CANADIAN PRESS/Chris Young

Few critical voices find their way into mainstream coverage of AI. The most-cited critical voice against AI is the late physicist Stephen Hawking, with only 71 mentions. Social scientists are conspicuous in their absence.

Bengio, Hinton and Pineau are computer science authorities but, like other scientists, they are not neutral or free of bias. When interviewed, they advocate for the development and deployment of AI. These experts have invested their professional lives in AI development and have a vested interest in its success.

AI researchers and entrepreneurs

Most AI scientists are not only researchers, but are also entrepreneurs. There is a distinction between these two roles. While a researcher produces knowledge, an entrepreneur uses research and development to attract investment and sell their innovations.

The lines between the state, the tech industry and academia are increasingly porous. Over the last decade in Canada, state agencies, private and public organizations, researchers and industrialists have worked to create a profitable AI ecosystem. AI researchers are firmly embedded in this tightly-knit network, sharing their time between publicly-funded labs and tech giants like Meta.

AI researchers occupy key positions of power in organizations that promote AI adoption across industries. Many hold, or have held, decision-making positions at the Canadian Institute for Advanced Research (CIFAR) — an organization that channels public funding to AI Research Chairs across Canada.

When computer scientists make their way into the news cycle, they do so not only as AI experts, but also as spokespeople for this network. They bring credibility and legitimacy to AI coverage because of their celebrated expertise. But they are also in a position to promote their own expectations about the future of AI, with little to no accountability for the fulfilment of these visions.

Hyping responsible AI

The AI experts quoted in mainstream media rarely discussed the technicalities of AI research. Machine learning techniques — colloquially known as AI — were deemed too complex for a mainstream audience. “There’s only room for so much depth about technical issues,” one journalist told us.

Instead, AI researchers use media attention to shape public expectations and understandings of AI. The recent coverage of an open letter calling for a six-month pause on AI development is a good example. News reports centred on alarmist tropes about what AI could become, citing "profound risks to society."

Computer science professor Yoshua Bengio poses at his home in Montréal in 2016. THE CANADIAN PRESS/Graham Hughes

Bengio, who signed the letter, warned that AI has the potential to destabilize democracy and the world order.

These interventions shaped the discourse about AI in two ways. First, they framed AI debates around alarmist visions of a distant future. Coverage of the letter overshadowed real and well-documented harms of AI, like worker exploitation, racism, sexism, disinformation and the concentration of power in the hands of tech giants.

Second, the open letter cast AI research into a Manichean dichotomy: a bad version that "no one…can understand, predict, or reliably control" and a good one, the so-called responsible AI. The letter was as much about shaping visions of the future of AI as it was about hyping up responsible AI.

But what the AI industry frames as "responsible AI" has, to date, consisted of vague, voluntary and toothless principles that cannot be enforced in corporate contexts. Ethical AI is often just a marketing ploy and does little to dismantle the systems of exploitation, oppression and violence already linked to AI.

Report’s recommendations

Our report proposes five recommendations to encourage reflexive, critical and investigative journalism about science and technology, and to push coverage toward stories about the controversies surrounding AI.

1. Promote and invest in technology journalism. Be wary of economic framings of AI and investigate other angles that are typically left out of business reporting, like inequalities and injustices caused by AI.

2. Avoid treating AI as a prophecy. What AI is expected to achieve in the future must be distinguished from what it has actually accomplished.

3. Follow the money. Canadian legacy media has paid little attention to the significant amount of governmental funding that goes into AI research. We urge journalists to scrutinize the networks of people and organizations that work to construct and maintain the AI ecosystem in Canada.

4. Diversify your sources. Newsrooms and journalists should diversify their sources of information when it comes to AI coverage. Computer scientists and their research institutions are overwhelmingly present in AI coverage in Canada, while critical voices are severely lacking.

5. Encourage collaboration between journalists, newsrooms and data teams. Co-operation among different types of expertise helps to highlight both the social and technical considerations of AI. Without one or the other, AI coverage is likely to be deterministic, inaccurate, naive or overly simplistic.

To be reflexive and critical of AI does not mean to be against the development and deployment of AI. Rather, it encourages the news media and its readers to question the underlying cultural, political and social dynamics that make AI possible, and to examine the broader impact that technology has on society and vice versa.

Guillaume Dandurand, Postdoctoral Fellow, Shaping AI, Institut national de la recherche scientifique (INRS); Fenwick McKelvey, Associate Professor in Information and Communication Technology Policy, Concordia University, and Jonathan Roberge, Professor, Institut national de la recherche scientifique (INRS)

This article is republished from The Conversation under a Creative Commons license. Read the original article.
