Two of journalism’s most prestigious awards, the Pulitzer Prizes and the George Polk Awards, are weighing how to handle entries that use generative AI. Marjorie Miller, the administrator of the Pulitzer Prizes, revealed that among this year’s 45 finalists, five disclosed using AI in some aspect of their research, reporting, or presentation.
It’s the first time the awards, which received around 1,200 submissions this year, have required entrants to disclose AI usage. The Pulitzer Board added this requirement only to the journalism categories. (The list of finalists is not yet public. It will be announced, along with the winners, on May 8, 2024.)
Miller, who sits on the 18-person Pulitzer board, said the board started discussing AI policies early last year because of the rising popularity of generative AI and machine learning.
“AI tools at the time had an ‘oh no, the devil is coming’ reputation,” she said, adding that the board was interested in learning about AI’s capabilities as well as its dangers.
Last July — the same month OpenAI struck a deal with the Associated Press and a $5 million partnership with the American Journalism Project — a Columbia Journalism School professor was giving the Pulitzer Board a crash course in AI with the help of a few other industry experts.
Mark Hansen, who is also the director of the David and Helen Gurley Brown Institute for Media Innovation, wanted to give the board a broad survey of AI usage in newsrooms, from interrogating large datasets to using large language models to write web-scraping code.
He and AI experts from The Marshall Project, Harvard Innovation Labs, and the Center for Cooperative Media created informational videos about the basics of large language models and newsroom use cases. Hansen also moderated a Q&A panel featuring AI experts from Bloomberg, The Markup, McClatchy, and Google.
Miller said the board’s approach was exploratory from the beginning. It never considered restricting AI usage, out of concern that doing so would discourage newsrooms from engaging with innovative technology.
“I see it as an opportunity to sample the creativity that journalists are bringing to generative AI, even in these early days,” said Hansen, who didn’t weigh in directly on the new awards guideline.
While the group focused on generative AI’s applications, they spent substantial time on relevant copyright law, data privacy, and bias in machine learning models. One of the experts Hansen invited was Carrie J. Cai, a staff research scientist in Google’s Responsible AI division who specializes in human-computer interaction.
The George Polk Awards consider AI
The George Polk Awards are also looking to learn more as they plan to adapt contest rules for an increasingly AI-integrated industry. While too late for this year’s cycle, awards curator John Darnton said the organization will begin formally developing an AI disclosure policy this spring, after this year’s awards are presented.
The Polk awards are reckoning with whether generative AI aligns with the spirit of the accolade, which recognizes “not the news organizations or publishers, but investigative reporters themselves.” The award’s namesake is George Polk, a journalist who was murdered in 1948 while covering the Greek Civil War. The prize was established to recognize the intrepid nature of investigative reporters for work that requires perseverance and resourcefulness.
An iconic part of the annual Polk awards ceremony is very human. During the spring luncheon, winners are invited on stage to talk about the rigorous investigative process and the aftermath of the story, with reporters often sprinkling in moving anecdotes about interactions with sources.
“If [generative] AI is an essential part of the whole project, I would look askance at it as an entry,” Darnton said. “Most investigative projects rely on some kind of moral judgment. And I wouldn’t trust [generative] AI to make that judgment.”
As an example, he pointed to one of this year’s winners: a joint investigation from The Pittsburgh Post-Gazette and ProPublica about medical device company Philips hiding CPAP complaints. While generative AI may be able to identify and list the laws Philips was allegedly breaking, Darnton said he’s doubtful its limited language abilities can convey the nuances of corporate malevolence as skillfully as reporters can.
But Darnton and the Polk awards have not completely written off the technology. Right now, Darnton is leaning toward requiring entrants to disclose in their cover letters whether they used AI, generative or not, and to what extent. Judges can then follow up with questions specific to the entry, if needed.
That way, each entry is judged on a case-by-case basis rather than under a one-size-fits-all rule.
Polk awards faculty coordinator Ralph Engelman said the flexibility and openness come from the Polk awards’ legacy of recognizing non-traditional work, which at one point included the now-ubiquitous “computer-assisted reporting.” In fact, the Polk awards recognized work published on a website and audio reporting before the Pulitzers did.
But it may take a while for the broader industry to modify awards policies given how rapidly the AI landscape is changing. Developments in the New York Times’ lawsuit against OpenAI or the release of new tools like OpenAI’s video generator Sora may prevent organizations from setting hard rules.
And there’s always the question of whether awarding work produced with the help of AI gives the technology more credit than it is due. For awards that emphasize the human sacrifice behind reporting, that will remain a challenge.
“I’m old fashioned in a lot of ways, as are the Polk awards,” Darnton said. “After all, our symbol is a quill. It’s not even a typewriter.”
Source: Nieman Lab