
What news audiences can teach journalists about artificial intelligence - Poynter

2025-04-16 11:11:34

By T.J. Thomson, Ryan J. Thomas, Michelle Riedlinger and Phoebe Matich

As generative artificial intelligence shows up in more corners of public and private life, newsrooms should be talking with their staff about how they’re using it — and keeping audiences in the loop as those practices shift. Just as important: listening to how audiences experience and react to AI in journalism to help guide the industry forward.

Major news outlets began rolling out public-facing generative AI policies in 2023. However, our ongoing fieldwork in multiple countries suggests that many smaller news outlets still don’t have policies. (If that’s the case for your organization, Poynter offers an AI ethics starter guide here.)

Some staff at these smaller outlets believe formal policies aren’t needed because their operation is small enough to monitor and they know how their colleagues produce the journalism they publish.

But journalistic tools, from cameras to editing software, are increasingly incorporating AI in ways that journalists sometimes aren’t even aware of. And news outlets often integrate or republish crowdsourced content without having ways of verifying how it was made or edited.

For these reasons, we suggest that news organizations both large and small develop an AI ethics policy to benefit staff and audiences alike.

Our recent, cross-country research suggests that most news audiences have little awareness of news organizations’ AI policies. This suggests that after the policies are developed, news outlets have work to do, individually or collectively, to help inform and educate audiences on their approach.

Our research found that only about a quarter of the news audiences we interviewed were confident they had previously encountered generative AI in journalism. Another quarter were confident they had not, and the remaining half suspected, or were unsure whether, they had.

Almost all (98%) of our interviewees said they thought it was important for news organizations to have AI policies, but they also wanted simplicity and clarity about what these policies might mean in practice. Think: a few bullet points clearly and transparently outlining the organization’s approach rather than lengthy paragraphs or complicated “if-then” logic trees.

We also asked our interviewees what they would expect from such policies.

Transparency was a key priority for news audiences. Specifically, audiences expect transparency notices at the very beginning of a piece of content. Audiences wanted transparency notices to be placed in the same relative position each time to improve consistency and ease of access. They also wanted labels on the content itself rather than notices that were adjacent to it (such as in captions).

Many participants also expected news organizations to declare the proportion of a piece that was human-generated versus computer-generated or computer-edited.

Some participants desired a universal symbol that could be used across outlets to denote content that had been generated or edited with AI.

Participants also recognized the potential for clutter, given websites’ limited screen real estate (especially on mobile devices with smaller screens). They felt an on-demand approach could work, one that could be “as simple as hovering a mouse (or finger) over and (an overlay) coming up and explaining how AI has been used.”
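To make that on-demand idea concrete, here is a minimal, hypothetical sketch in TypeScript of a hover/tap disclosure overlay. The markup, data attribute, class names and disclosure wording are our own assumptions for illustration, not any outlet’s actual implementation.

```typescript
// Minimal sketch of an on-demand AI-disclosure overlay (hypothetical markup and names).
// Assumes each article element carries a data attribute describing how AI was used.

function attachAiDisclosure(article: HTMLElement): void {
  const note = article.dataset.aiDisclosure; // e.g. "Headline drafted with AI; edited by staff"
  if (!note) return; // no disclosure needed for fully human-made pieces

  // A small badge placed consistently at the top of the content itself,
  // rather than in an adjacent caption.
  const badge = document.createElement("button");
  badge.textContent = "AI";
  badge.setAttribute("aria-label", "How AI was used in this story");
  badge.className = "ai-badge";

  const overlay = document.createElement("div");
  overlay.textContent = note;
  overlay.className = "ai-overlay";
  overlay.hidden = true;

  // Show the full explanation only on demand: hover on desktop, tap on touch devices.
  badge.addEventListener("mouseenter", () => { overlay.hidden = false; });
  badge.addEventListener("mouseleave", () => { overlay.hidden = true; });
  badge.addEventListener("click", () => { overlay.hidden = !overlay.hidden; });

  article.prepend(overlay);
  article.prepend(badge);
}

// Usage: document.querySelectorAll<HTMLElement>("article").forEach(attachAiDisclosure);
```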

Separately, news audiences expect human journalists to manage AI implementations closely. For example, many news audiences expect that humans will check AI content before publication.

About a fifth of the news audiences we interviewed expected journalists and news organizations to use AI as little as possible. Labor concerns were particularly important for these audiences, and they strongly believed that AI shouldn’t be used to cut costs, cheaply produce news or repackage low-quality content. A smaller proportion was also concerned about the potential for AI biases, copyright and privacy issues and environmental costs.

Audiences also expect news organizations’ policies to update as technology evolves and cultural norms shift.

It’s useful to consider some of the audiences’ less obvious expectations as well. Some audience members thought deeply about the complexities of AI use, like whether journalists ought to get consent before editing people’s likenesses with AI.

Audiences also thought there was a benefit in having mode-specific policies rather than generic principles that cut across modes. For example, The New York Times references three principles — transparency, human oversight and using AI “as a tool in service of our mission” — when describing its approach to generative AI. In contrast, WIRED provides mode-specific guidance on how generative AI can or can’t be used for different applications, including writing headlines, suggesting story ideas or using AI to generate images or video. Audience members appreciated concreteness and specificity when news organizations said they would or wouldn’t use generative AI.

Some of our interviewees suggested that news outlets should work toward industrywide rather than newsroom-specific policies, given the difficulty of keeping track of AI policies for every one of the many news sources they encounter. They also reported being fatigued by clickbait and sensationalized journalism and thought AI could make this worse. Indeed, our ongoing research has found that some outlets provide journalists with AI tools that let them write draft headlines and receive a prediction of the number of subscribers a story could attract. This incentivizes journalists to present the news in more emotional and polarizing ways to try to get more clicks and subscriptions.

While our interviewees were generally OK with using generative AI for brainstorming, some participants didn’t want journalists to outsource their critical thinking to AI or to let AI unfairly influence how journalists think about what they report on. Some of our interviewees thought the effects of algorithmic bias could be mitigated by news organizations using their own content as training data.

Lastly, some of our interviewees thought that only journalists trained in the use of AI tools should be allowed to use them; that there should be punishments for those who misuse AI; that AI could be used to check for bias, opinion or independence; and that topic-specific AI guidelines could be useful (no AI around high-risk topics, such as election coverage, for example).

When developing or refining a news outlet’s generative AI policies, it can be helpful to take into account news audiences’ experiences and expectations, as well as their general comfort levels with broad and more specific uses of AI.

Our research suggests that news audiences generally feel more comfortable with AI being used in behind-the-scenes ways compared to its being used to create or edit public-facing content. However, comfort levels vary widely across the different use cases we discussed with our participants.

For example, the news audiences we interviewed as a group felt very uncomfortable with generative AI being used to create a virtual news presenter. Conversely, they felt much more comfortable with generative AI being used to generate 3D models or color palettes. Likewise, audiences were more comfortable with journalists using AI to represent the distant past or future than with journalists using it to represent the present. They were also more comfortable with journalists using generative AI to create nonphotorealistic illustrations, compared to photorealistic ones.

Audiences’ comfort levels increased in cases where journalists used AI tools or processes they had used themselves, like using AI software to create an alt-text description for an image, or using AI to blur the background of an image. These insights show that news audiences expect news organizations to consider diverse scenarios and issues, including audiences’ AI literacies, when deciding if or how to use AI.

As such, our research shows that when drafting or updating a news outlet’s generative AI policies, newsroom managers and other leaders should consider questions around temporality, perceptions of trust and authenticity, fidelity, and audiences’ awareness of and familiarity with various AI tools and processes.

Overall, news organizations have a great deal of work to do to understand what audiences want or don’t want from AI and to balance these expectations with their journalistic objectives while being clear with audiences about their approach.

