Generative AI hype has launched newsroom experiments around the world. Even though many of these early applications have become cautionary tales, the hype has endured for over two years since OpenAI publicly launched ChatGPT.
In many ways, this is familiar territory for journalism. In a long line of digital technologies (smartphones, social media, the infamous “pivot to video”), generative AI is yet another sociotechnical force that journalists did not create or ask for, but that they must navigate and (try to) reshape.
Previous hype cycles of innovation, disruption, and adaptation provoked existential questions about the value of journalism — and coincided with waves of unionization among journalists in the United States. Stories of technological change, industry mission, and labor power are always intertwined.
As, respectively, a professor of communication and journalism at the University of Southern California and a scholar-practitioner of technology law at New York University, we have seen how challenging it has been for journalists to make sense of generative AI. In a new article for Digital Journalism, we show how news media unions, which represent and advocate for a growing number of journalists, are trying to manage and stabilize generative AI.
Our work is based on a close study of nearly 50 union sources over a two-year period from 2022 to 2024. These sources fall into three general categories:
Public statements and testimonies, from national umbrella organizations like The NewsGuild-CWA to locals representing journalists at publishers like the Atlantic, CNET, Dow Jones, Insider, Los Angeles Times, and Sports Illustrated;
Collective bargaining agreements struck with 14 different media companies, including the Associated Press, Arizona Republic, Financial Times, Philadelphia Inquirer, and Politico;
Trade press articles, published in Nieman Lab, Columbia Journalism Review, Poynter, and elsewhere.
Together, these sources tell a story about how news unions are engaging generative AI. We find six areas where news media unions are focusing their generative AI attention and concern — and, notably, two areas where they’re not.
Where news unions are focusing their concern…
- Unions acknowledge that publishers are the ones with the power to initiate generative AI experiments, control the technology’s use, and accelerate its adoption. Unions are countering this power by vocally advocating — sometimes successfully — for an active role in starting, slowing, and stopping journalistic generative AI. Several unions have secured collaborative arrangements in which management and labor discuss generative AI implementations together in advance.
- Unions say publishers’ widespread lack of transparency is a key reason that workers fundamentally do not trust publishers’ generative AI plans. Unions are trying to resolve this trust deficit by demanding more transparency in everything from procurement to licensing deals.
- Unions stress that the humanity of workers is essential to quality news work. They want publishers to trust workers’ judgments about whether and how to use generative AI. Journalists stress that the technology cannot replicate the indispensable creativity and ingenuity that make journalism a public service and a key piece of healthy societies. Unions argue that any use of generative AI must center humans, and that journalists must be free to opt out of generative AI altogether whenever it conflicts with their judgment — “they can’t be made to use it.”
- Unions are understandably preoccupied with generative AI’s threat to automate news work, but this preoccupation doesn’t revolve solely around concerns about job loss and livelihoods. It also stems from a belief that generative AI is inherently unaccountable and unreliable in ways that are antithetical to the purpose and values of journalism. Generative AI may help with the “logistical, busywork, back-end side of reporting,” but unions see the “destructive, careless, and borderline fraudulent” use of generative AI to publish AI-generated stories as a threat to journalism’s accuracy and accountability, and to its core professional values.
- Unions are agitating for greater control over news products. This creates some tension with publishers, but also aligns workers and management as they both struggle to counter the power of tech companies. While unions demand publisher concessions over journalistic autonomy and creative identities — including the “assurance that AI won’t be used to modify content after employees leave,” “protection from byline misuse,” and control over their “image or likenesses” — journalists generally support publishers’ efforts to enforce copyright claims against the technology companies that have trained GenAI models on journalists’ content and data.
- Finally, unions see contractual guardrails as central to stabilizing generative AI, but they are concerned that direct worker action alone cannot force publishers to change their uses of generative AI. Unions are working outside of and around their publications, engaging audiences and policymakers for help defining generative AI as a problem, articulating journalism’s value against it, and envisioning solutions to generative AI’s challenges.
Across these themes, journalists are workers trying to understand generative AI’s hype, tame its power, and articulate yet again to managers and audiences why strong, human-made journalism matters. Union responses to generative AI aren’t simply about organized labor defending traditions and protecting jobs — guarding against hallucinations and automation — but also about media workers trying to stabilize a new, opaque, and rapidly changing technology. As they reflect, bargain, and advocate around generative AI, they show what they think their work is, why it has value, and what they need to be successful journalists serving public interests.
…and where they’re not
We also find two notable areas where unions are generally not reflecting, bargaining, or advocating around generative AI.
First, they largely fall silent when it comes to generative AI’s broader social and infrastructural impacts — namely, the natural and human resources required to build and sustain datasets, train models, and power interfaces. Unions show some concern about the provenance and construction of datasets as biased, extractive, or copyright-infringing, but there is scant mention of generative AI’s ecological impacts (its dramatic water and energy needs) or of the invisible, seemingly unrelated labor that makes journalistic generative AI possible: the often-ignored “ghost workers” who make AI seem automated and intelligent.
Second, we don’t hear unions talking much about how working with generative AI might impact journalists’ wellbeing — their job satisfaction, sense of professional accomplishment, or workplace stress. Unions focus on news work and working conditions, including generative AI’s power to speed up the pace of journalism, but leave largely unexplored generative AI’s emotional tolls on journalists’ work, identities, and hopes for the profession’s future.
These absences may simply be areas that fall outside of unions’ expertise or interest, or they may be deliberate and strategic choices to focus generative AI conversations around tractable and actionable concerns.
In any event, they suggest opportunities for news media unions to expand their thinking about who qualifies as a “media” worker, to see journalistic GenAI within larger infrastructures and ecosystems, and perhaps to use this moment for advocacy that further foregrounds the humans and humanity that power journalism.
Though the patterns that we found describe a snapshot in time, they serve as touchstones that scholars and practitioners alike might use to convene journalists, publishers, technologies, infrastructures, and audiences in ways that lead to better media systems.
Mike Ananny is an associate professor of communication and journalism at the University of Southern California Annenberg School. Jake Karr is the acting director of New York University’s Technology Law and Policy Clinic.
Source: https://www.niemanlab.org/