
How We’re Using AI

May 12, 2025

By Mike Ananny and Matt Pearce


In recent years, we’ve heard about GenAI producing error-ridden news stories, publishers suing to protect their news from Big Tech data-grabs, newsroom guidelines on GenAI use, and grand predictions about how GenAI might reshape reporting. But all of that can feel far removed from how journalists actually grapple with GenAI in their daily work. 

In a collaboration between CJR and the University of Southern California’s AI for Media and Storytelling Initiative, we asked a diverse group of reporters, editors, executives, and others across the news industry: How does your work involve AI, for good or ill? When does AI matter, and how does it make you feel about your craft, identity, or professional future? When do you welcome AI into your work and when do you protect your work from it?

“Large language models simply offer a game-changing deal,” said Ben Welsh, the founder of the news applications desk at Reuters, where roughly a quarter of the code is now written by AI. Jason Koebler—a cofounder of 404 Media, which covers the tech world—argued that “it is unwise to lean into this future and align ourselves with companies developing technologies they want to replace us, and so we haven’t.” Sarah Cahlan, a founding member of the visual forensics team at the Washington Post, noted, “We’ve used AI to detect armored vehicles in satellite imagery and estimate the number of individuals in hundreds of videos.” Tristan Lee, a data journalist, observed that AI “just makes everything slightly worse.”

Journalists also see AI in terms of their relationships with audiences. “I want my readers to know that I am not just rattling off facts but helping them make informed decisions,” Emilia David, a senior AI reporter for VentureBeat, said. Sisi Wei, the chief impact officer at CalMatters and The Markup, flagged the impact of AI on water use, energy drains, and environmental racism: “Whenever I use AI I think about the environmental cost.” And Araceli Gómez-Aldana, of WBEZ in Chicago, remarked on AI’s power to translate languages as a new way to connect audiences, moving her closer to her dream of “helping my community stay informed” by becoming a “bilingual journalist.”

The answers show how AI is reshaping the news industry. Many journalists are open to the technology while clear-eyed about its limits. Others see AI journalism in terms of big questions about audiences, truth, democracy, and the climate crisis. All are grappling thoughtfully with the technology’s power to shape their work, industry, and professional ethics.

Experiment or perish

Gina Chua

Executive editor of Semafor

At a personal level, I’m experimenting with LLMs to see how much generative AI has progressed and to test out how such tools might be used in a journalistic context—for example, as a way to categorize information, such as possible reports of hate crimes, or as a way to present news in more conversational ways. I’m also trying to see if I can build a system to compare how different newsrooms cover the same event—what facts they agree on, what details are exclusive to some, and what perspectives they bring to the story. 
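Chua’s first example, sorting possible hate-crime reports into categories, maps onto a simple pattern: send each report to a model with a fixed label set and keep a human check on the output. Below is a minimal sketch of that pattern using the OpenAI Python client; the model name and labels are illustrative placeholders, not a description of Semafor’s actual system.

```python
# A minimal sketch of LLM-based categorization of incident reports.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
# the model name and label set are illustrative, not a newsroom's real setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["possible hate crime", "other crime", "not crime-related"]

def categorize_report(text: str) -> str:
    """Ask the model to assign exactly one label to a short incident report."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "You label short incident reports for a newsroom. "
                    f"Reply with exactly one of: {', '.join(LABELS)}."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(categorize_report(
        "Witnesses say the suspect shouted slurs before vandalizing the storefront."
    ))
```

The same structure extends to her coverage-comparison idea: pass the model two outlets’ stories about the same event and ask for points of agreement and divergence, with an editor reviewing the result.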

At Semafor, I’m working with colleagues to build tools that will either simplify our workflow—proofreading bots, for example—or expand our capabilities, such as improved search tools that can help us find related stories on a particular topic. As a young company, we have room to experiment both with technology and with workflow, and that’s been exciting.

And at the industry level, I’ve engaged in conversations about how generative AI will likely upend the information ecosystem. We can already see how it’s changing search, research, and writing; how much more will it revolutionize how people find, consume, and engage with news? Understanding those changes, and staying ahead of them, may be one of the most critical challenges we face.

Treat AI like a smart assistant who tends to be a bullshitter

Nicholas Thompson

CEO of The Atlantic

I’m writing a book about running, and I use AI the way I would use an insanely fast, remarkably well-read, exceptionally smart research assistant who’s also a terrible writer who happens to BS a lot. I’ll upload sections of my book, along with interview transcripts, and ask whether everything I’ve written squares with what my sources have said. I’ll ask it to read long sections of text and flag any material that hasn’t been introduced chronologically. I’ll ask it to examine whether all the claims in a chapter are logically consistent. I do all of this after entering a long prompt describing the style of book I aspire to write. The more specific the ask, and the more sophisticated the prompt, the better the answer. It’s not nearly as helpful as a real editor, but it’s still quite good. I would never ask it to write anything, though. That seems unethical and stupid—as well as potentially a way to lose copyright.

AI is for productivity, not writing

Emilia David 

Senior AI reporter for VentureBeat, writing about AI applications, agents, models, and regulations

I began writing about AI, then mostly called machine learning, in 2016, but beyond expecting that it might transcribe my interviews faster than I could, I dismissed it as something that I wouldn’t use much for work. Cut to today, when I just fed a research paper to ChatGPT running OpenAI’s o3-mini. I requested a summary, told ChatGPT to give me three potential headlines, and asked it questions that would’ve taken me an hour to find answers to. I have a custom GPT that I fine-tuned to respond as if it were a VentureBeat reader, which I use to refine story ideas. I use MidJourney to generate stock illustrations for my stories. When reporting a larger feature, I use Google’s NotebookLM to organize my files. And my favorite is using a custom GPT to give me synonyms for words I overuse.

AI chatbots, applications, and features have become so ubiquitous that even if I wanted nothing to do with AI while working, it would be impossible. At the same time, I’ve become more suspicious of other people using AI. I expect that many story pitches were written by chatbots. When I look for social media responses to add to reaction roundup stories, I’m never sure that humans wrote them.

As much as I believe that AI helps my productivity, I refuse to use it for writing. Writing is hard, and it is my least favorite task, but I do not want AI to write for me, no matter how formulaic some of the news I write may be. I want my readers to know that I am not just rattling off facts but helping them make informed decisions. If I let AI models write for me, I feel like I’ve taken away an important part of that relationship with my readers. I believe that people reading earnings stories or news around sporting events—news sometimes written using AI tools—deserve the respect of hearing the news from humans who better understand why people need to know about this specific news. Human journalists bring context to news stories for their human readers. Sure, AI models can approximate that, but people deserve to hear news from other people.

On its own, AI is a parlor trick

Zach Seward 

Editorial director of AI initiatives at the New York Times

I have a confession to make. Because I’m the “AI guy” at work, people often assume that, in my personal life, I must use all sorts of chatbots, agents, digital assistants, and the like to automate every inch of my existence. Of course, I try every new AI app and feature that comes along. But the truth is I can’t name more than one or two that have proved valuable enough to use again.

Which is not to say that I’m an AI skeptic. At work, we frequently employ the technology for investigations, workflow improvements, and new ways to experience the Times. AI has helped us probe Pete Hegseth’s drinking habits and track changes to government websites, among many other recent stories. It’s a powerful tool when combined with traditional reporting and coding expertise.

But that, I think, explains the disconnect. AI on its own is a parlor trick. Like all software, it’s useful when paired with properly structured data and someone who knows what they’re doing. Visions of our agentic future may one day become a reality, but right now what I see is a lot of consumer apps that do not work, features that exist only in commercials, and assistants I only ever trigger by mistake.

Tap into AI to boost human strengths

Millie Tran

Chief digital content officer at the Council on Foreign Relations

Right now, people are mostly thinking about AI as a way to increase their efficiency. But fundamental shifts to how we create journalism, the business models, user experiences, and information ecosystems are just beginning. At the Council on Foreign Relations—a nonpartisan, independent think tank, membership organization, and publisher—we are mostly still asking questions: How do we productize expertise in this new landscape? What is the value of what we do when we’ll be awash in analysis? How might this change how complex international issues are understood? 

I pay for Claude and have experimented with OpenAI’s Deep Research and Google’s NotebookLM. I’m newly obsessed with Granola for making me rethink what an actually good AI-assisted notetaker can do. But I’m still scratching the surface. It feels like a layer on top of what I’m currently doing. AI hasn’t fundamentally changed how I work—yet. 

I take solace in two historical parallels rooted in craft. When the camera was invented, painters feared for their jobs. But instead Impressionism emerged—a new form of painting not replicable through cameras. Photography liberated painters from realism, allowing a new focus on the subjective experience using visible brushstrokes and techniques that cameras couldn’t capture. Today, we should be focusing on what AI can liberate us from and how we can draw on uniquely human strengths. When embroidery shifted from a handcraft to machine production, the machines were faster and could consistently do more complicated designs. But as machine embroidery proliferated, handmade work gained new cultural and economic value. Today, we should be thinking about how the value of certain kinds of journalism will change and evolve as supply proliferates.

Use AI—but not as your only source

Sarah Cahlan

Pulitzer Prize–winning reporter and founding member of the visual forensics team at the Washington Post

When our Visual Forensics team dives into an investigation, we gather, verify, and analyze as much information as we can (videos, photos, maps, and documents) to build 3D models, visual syncs, and sheets. The work is labor-intensive, and we want to move fast. We’ve used AI to detect armored vehicles in satellite imagery and estimate the number of individuals in hundreds of videos. Though AI has cut down on reporting time and even helped us reveal new findings, we won’t publish a story if our only source is AI—it’s not a substitute for the careful review of journalists.
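The Post has not published its pipeline, but as a rough illustration of what detecting armored vehicles in satellite tiles can involve, a generic object-detection pass might look like the sketch below. It assumes the ultralytics YOLO package and a hypothetical custom-trained weights file; it is not the team’s actual code.

```python
# A generic sketch of running an object detector over satellite image tiles.
# Assumes the ultralytics package and a hypothetical custom-trained weights
# file (armored_vehicles.pt); not the Washington Post's actual pipeline.
from pathlib import Path
from ultralytics import YOLO

model = YOLO("armored_vehicles.pt")  # hypothetical custom-trained weights

def count_detections(tile_dir: str, min_conf: float = 0.5) -> dict:
    """Count detections per tile above a confidence threshold."""
    counts = {}
    for tile in sorted(Path(tile_dir).glob("*.png")):
        result = model(str(tile))[0]  # one Results object per image
        boxes = [b for b in result.boxes if float(b.conf) >= min_conf]
        counts[tile.name] = len(boxes)
    return counts

if __name__ == "__main__":
    for name, n in count_detections("satellite_tiles").items():
        print(f"{name}: {n} possible armored vehicles")
```

Counts like these are only a lead; the careful frame-by-frame verification Cahlan describes is what turns them into something publishable.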

Align with human beings, not tech companies

Jason Koebler 

Cofounder of 404 Media

At 404 Media, a journalist-owned tech publication, I focus on how people use AI and its effects. I stare deeply into the AI-generated abyss all day, every day. AI has already changed our industry deeply by giving rise to content farms that immediately plagiarize our work, models that flood the internet with spam, and media executives who give themselves permission to lay off cadres of writers with the fantasy that AI efficiencies will make the survivors more productive.

I think it is unwise to lean into this future and align ourselves with companies developing technologies they want to replace us, and so we haven’t. At 404 Media, we show our readers that we are human beings with perspectives. While much of the news industry optimizes for algorithms, sells access to content to LLM companies, lets AI argue with its own columnists, pitches gratuitous chatbots that let the public “ask questions” about an article, or pushes journalists to integrate AI tools into the writing process, we stand out by saying: “Hey, no one wants this!”

We’re not burying our heads in the sand. We use AI tools every day to understand how they work, their limitations, and, most importantly, how they are being leveraged to become the dominant type of content on the internet. Because we do this, we have some of the leading coverage of generative AI on the internet. AI isn’t going away, and I could imagine using it in the future if it becomes more trustworthy and perhaps if the companies pushing it find more ethical business models. I have experimented with using AI to write complex Freedom of Information Act requests and FOIA appeals and to parse large documents, though I haven’t been impressed with the results. I passively use forms of AI to help transcribe interviews, get the gist of YouTube videos in foreign languages, and edit short-form videos and podcasts. Language translation and transcription feel like true game changers, while other AI tools feel like spam machines. I’ll use AI to help find new information, but not to write my words.

Don’t let AI undermine trust in journalists

Khari Johnson 

Tech reporter at CalMatters and practitioner fellow at the University of Virginia’s Karsh Institute for Digital Democracy who has covered AI for a decade

I don’t use AI in my writing. I might use it in the future to better phrase a paragraph, but anything beyond that raises ethical questions and threatens my power of expression and free speech. I think all writers should be wary of giving away these uniquely human powers. I also worry about how writing with AI can threaten trust between reporters and readers. It’s my byline on the story, not that of a writing machine that’s also a convincing liar. In much the same way that some regulators argue that people deserve disclosure when AI makes a critical decision in a person’s life, readers deserve to know if a reporter uses a large language model during the writing process.

In my reporting, I primarily use AI for transcription, though I always listen back to an interview to ensure that the AI didn’t get any words wrong. I also use Digital Democracy, an initiative led by CalMatters that gives the public and journalists searchable transcripts of public hearings and draft bills. I’m also beginning to use AI Tip Sheets, which analyzes data about lawmakers, advocates, and lobbyists to send story tips to journalists. It’s available for journalists in California and may expand to other states later this year. 

One of my favorite places to look for practical ways to use AI in my journalism is the Pulitzer Center AI Accountability initiative. It contains constructive, impactful insights from around the world, such as how journalists investigated algorithms that denied people money for food in India or how Associated Press reporters pored over records requests, government contracts, and court documents to assess the global impact of AI. The fact that roughly 10 percent of 2024 Pulitzer Prize finalists used AI in some capacity makes it clear that the tech can deliver results for readers and society. But transparency—in who uses it and how—can help journalists report on AI, keep what makes us human, and ensure that AI doesn’t inadvertently harm trust between journalists and readers.

AI is cannibalizing our industry and dulling our craft. Reject it

Brian Merchant

Author of Blood in the Machine, a book and newsletter about AI and labor, and a former technology columnist for the Los Angeles Times

As a tech writer, I feel a responsibility to use AI to understand its implications. As a journalist, I feel a responsibility not to use AI at all. 

When OpenAI releases a new model, I experiment. I’m interested in the results it generates, what the system produces, and whether it’s infringing on copyrighted works or replicating biases. After the mania for Studio Ghibli–style AI images a few weeks back, I prodded the image generation model to see if there were limits on what ChatGPT would let you “Ghiblify.” There were few. 

When it comes to considering the use of AI in practicing journalism, I find it helpful to ask: Would I automate a given part of my work? To some, the answer may be yes: Some journalists might automate the revising of a lede or the brainstorming of headline ideas. That’s defensible, though I worry it leads to cognitive offloading and a dulling of our skill sets. I decided I need to struggle with my headlines and consider why I’m making a revision. AI systems are far too unreliable, their biases too well concealed, to be used for reportage in any capacity. I’ve seen writers cite Google’s AI Overviews as a source. That’s indefensible. 

As journalists, we need to consider the health of our field. Major AI models were trained on our work without consent, and they’re diminishing the market for human-made journalism. Few media company CEOs would say “We’re going to use AI to replace your colleagues,” but look at how AI is promised to add value after human labor is lost: BuzzFeed announced AI quizzes after firing its news desk. My former paper, the LA Times, added an “AI Insights” feature after laying off more than a hundred staffers in recent years. Journalism is in steep decline. AI, like digital video and so many prospects thrust on our industry by Big Tech before it, is offered to us as the latest buoy to cling to. But until AI firms like Google and OpenAI are willing to negotiate the terms of engagement with journalists—and all other creators and workers—it’s our duty to reject AI outright.

AI speaks many languages, but still needs a human translator

Araceli Gómez-Aldana

News reporter and anchor at WBEZ in Chicago, and 2023 winner of the John S. Knight Journalism Fellowship at Stanford

When I became interested in journalism, I always wondered what language I would use. I grew up bilingual in a Spanish-speaking household, speaking to my parents and grandparents in Spanish. I learned how to read and write in Spanish, but my dominant language became English. I dreamt of being a bilingual journalist. But translating my work into Spanish was difficult for lack of time, resources, or a Spanish-speaking editor. My goal of translating all my work into Spanish became unrealistic. It bothered me that all I had to offer the Spanish-speaking community and my parents were news stories published in English. 

When I studied large language models during my John S. Knight Journalism Fellowship program at Stanford, I was excited about their ability to produce humanlike translations. Although LLMs aren’t producers of knowledge, they do generate humanlike language. LLMs can translate text in some common languages, including English, Spanish, French, German, Chinese, and Japanese.

But not all languages produce quality results. I have come to realize that these models are not yet capable of translating news stories. I have a few main concerns: accuracy of the translation for news stories, cultural understanding, translating quotes for tone and understanding—not only a literal word-for-word translation—and adapting to various dialects and literacy levels. A human reporter and editor are still crucial to producing quality translations.

Ultimately, I hope that LLMs are used as tools that will help address the existing language barriers many communities face. I want to contribute to helping my community stay informed. I think that will soon be possible using LLMs to translate news stories in various languages, and I’m looking forward to adding “bilingual journalist” to my byline.

Don’t get crushed by a wave of AI slop

Tristan Lee

Data scientist and researcher who has investigated the far right for Bellingcat and the Texas Observer

A question I constantly receive goes something like “How is AI changing visual investigations?” My go-to answer is: it’s not, really; it just makes everything slightly worse. A wave of AI slop is overwhelming nearly every corner of the internet, where I spend most of my working day. While I experience some increased productivity from this current batch of generative AI tools, the gains don’t outweigh the negative consequences. 

A good portion of “OSINT”—open-source intelligence—and programming consists of googling the right thing, but AI-based SEO spam has made every search engine worse. Reddit, for its part, severely restricted its API, citing abuse by tech companies scraping its content to train their LLMs. This made it more difficult to study Reddit, whose previously open data was used by researchers to understand online echo chambers and toxic communities.

The most obvious challenge for visual investigations is fake images or videos purporting to be from some conflict or protest, which are often debunked by BBC Verify’s Shayan Sardarizadeh. Generated imagery still has telltale characteristics (texture, lighting, details), and generated video still has a long way to go to avoid detection. That doesn’t stop fakes from going viral, though. What’s more common is footage from one conflict being misrepresented as being from a different conflict or footage from a video game being passed off as real.

I use ChatGPT from time to time, mostly to generate a snippet of code in a programming language I’m not an expert in, or to figure out how to use a command-line tool with a lot of complicated options. When I had nearly ten thousand pictures and wanted to identify the ones with Nazi imagery, I quickly created a Web app that let me classify batches of them. Given my lack of expertise in front-end Web development, it would have taken me significantly longer to develop the Web app (which itself saved me several hours of manual work) without generative tools. These kinds of tasks typically take several iterations to get right, and the resulting code quality is generally poor. But they can get the job done.
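For readers curious what such a quick, AI-assisted utility can look like, here is a minimal sketch of a hand-classification web app of the kind Lee describes: it serves one local image at a time and records a yes/no decision in a CSV file. It uses Flask, and the folder name and labels are assumptions; it is not his actual tool.

```python
# A minimal sketch of a local web app for hand-labeling a folder of images,
# in the spirit of the throwaway tool described above. Assumes Flask and a
# local "images" folder of .jpg files; decisions are written to labels.csv.
import csv
from pathlib import Path
from flask import Flask, redirect, send_file

IMAGE_DIR = Path("images")       # folder of pictures to review (assumed)
LABELS_CSV = Path("labels.csv")  # where decisions are stored

app = Flask(__name__)

def already_labeled() -> set:
    """Return the filenames that already have a recorded decision."""
    if not LABELS_CSV.exists():
        return set()
    with LABELS_CSV.open() as f:
        return {row[0] for row in csv.reader(f) if row}

def next_image():
    """Return the next unlabeled image path, or None when finished."""
    done = already_labeled()
    for path in sorted(IMAGE_DIR.glob("*.jpg")):
        if path.name not in done:
            return path
    return None

@app.route("/")
def show():
    img = next_image()
    if img is None:
        return "All images labeled."
    return (f'<img src="/img/{img.name}" width="600"><br>'
            f'<a href="/label/{img.name}/yes">yes</a> | '
            f'<a href="/label/{img.name}/no">no</a>')

@app.route("/img/<name>")
def serve_image(name):
    return send_file(IMAGE_DIR / name)

@app.route("/label/<name>/<verdict>")
def label(name, verdict):
    with LABELS_CSV.open("a", newline="") as f:
        csv.writer(f).writerow([name, verdict])
    return redirect("/")

if __name__ == "__main__":
    app.run(debug=True)  # local use only; no auth or path checks
```

Rough as it is, this is the sort of utility that takes a few iterations to get right but can save hours of manual review.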

Know that, in some newsrooms, LLMs are already changing the game

Ben Welsh

Founder of the Reuters News Applications Desk, where he leads the development of dashboards, databases, and other automated systems

Newsrooms are factories. But our widgets aren’t manufactured out of steel, aluminum, or corn. They’re made of information. My shop floor is in Times Square in New York, where I lead a specialized Reuters team focused on publishing data products for our wire clients, the investors who gobble up business news via LSEG terminals, and anyone patronizing reuters.com. As in any factory, automation is central to our approach. Using our skills as reporters, editors, and programmers, we publish hundreds of news-breaking charts each week without lifting a finger. We set up software to hit deadlines for us. Large language models are already accelerating our work. Every day, the new breed of digital assistants integrated into our coding tools suggests what to type next and flags our mistakes.

If we keep the reins tight and police the AI’s proposals, this spooky form of autocomplete can improve the speed and quality of our coding. Google CEO Sundar Pichai recently announced that AI now generates a quarter of his company’s code. I’m confident my team could claim a seat in the same ballpark.

A decade ago, data journalists like us raced to master early techniques based on the same principles as large language models. Unfortunately, they required large amounts of human training and programming expertise to execute, only to yield unimpressive results. These tools of the 2010s now seem as obsolete as an abacus. Large language models simply offer a game-changing deal. They can complete the same tasks as the old-school tools without coding expertise or training. Informed by a massive amount of information scraped from the Web, they can routinely produce better results. 

To see this new power for yourself, work through the free, open-source textbook I recently developed with Derek Willis, a data journalism teacher at the University of Maryland. We show how journalists can harness large language models to find needles of corruption in the haystacks of data produced by political campaigns. Techniques like these are already changing how Reuters journalists gather news by making a superior form of machine learning accessible to a much wider circle. I can’t say where this all leads, but it’s clear to me that these tools are finally fit for our purpose.
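The textbook walks through its own exercises and toolchain; as a rough illustration of the underlying idea (asking a model to flag campaign expenditures that deserve a reporter’s attention), a sketch using the OpenAI Python client might look like the following. The model name, prompt, and input file are placeholders, not the book’s actual code.

```python
# A rough illustration of LLM-assisted review of campaign-finance records,
# not the textbook's actual exercises. Assumes the OpenAI Python client and a
# hypothetical expenditures.csv with payee, purpose, and amount columns.
import csv
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You review campaign expenditure descriptions for journalists. "
    "Classify each item as 'routine' or 'worth a closer look' and give a "
    "one-sentence reason."
)

def review_expenditure(payee: str, purpose: str, amount: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Payee: {payee}; purpose: {purpose}; amount: {amount}"},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    with open("expenditures.csv", newline="") as f:
        for row in csv.DictReader(f):
            verdict = review_expenditure(row["payee"], row["purpose"], row["amount"])
            print(f"{row['payee']}: {verdict}")
```

In the spirit of keeping the reins tight, output like this is a tip sheet for reporters to check against the filings, not a finding.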

AI should be considered a hand, not a brain

Susie Cagle

A writer and artist for ProPublica, The Guardian, Wired, The Nation, and many others

Back when I was still shattering pen nibs on deadline, I might’ve half-wished for a computer that could draw for me better than I could scrawl myself. I first learned how to draw in 2008, when the news business was upended in transition from print to digital. I wanted to bring a tactile feeling to the screens—a reminder of the physical world and the human hand.

I was not good at drawing then, and I’ve never become an excellent illustrator. The sequential art that makes up my pieces of illustrated reporting works less by beauty and more by concept and the narrative built panel by panel across a story. Each drawing is a compilation of hundreds of decisions—the composition of a frame, the exact expression on a subject’s face, the color palette that evokes a carefully considered mood, all determined after extensive visual research. There is a hypothetical future where robots could execute drawing as a task within the limits and according to the rules I set for them—where AI could make my work smoother, more efficient and optimized. But by making it less human, it would also make it fundamentally less true.

I don’t fear AI if it’s used as a hand, not a brain. My hesitation is less a matter of trust, and more a fundamental conflict of vision. My illustrated work asserts an immutable view-from-somewhere in every piece. Each hand-drawn line is a purposeful reminder of the subjectivity of journalism as a craft. And who would choose to outsource humanity for sameness and speed?

Ask: Is it worth the environmental cost?

Sisi Wei

Chief impact officer at CalMatters and The Markup

I grew up in this industry as a data journalist, so I’m used to people being both scared and excited by a new tool or skill. I remember when I wanted every journalist to learn how to code, because coding skills let us fact-check data and sources firsthand. Coding also helped me build tools that gave people superpowers—like knowing what drugs their doctor was being paid to promote, or scrutinizing the tax filings of any US nonprofit.

AI feels like the same conversation. But this time, many more people care and have ethical concerns. The one that haunts me most is that AI is an energy hog. A short conversation with ChatGPT consumes a sixteen-ounce bottle of water, and interacting once with a large language model consumes three to ten times the energy of a normal Google search. To fuel AI, tech companies are building new data centers, and power plants are being revived in ways that disproportionately threaten Black communities with more pollution. Whenever I use AI I think about the environmental cost.

I also know that AI can be an incredible tool for good. For example, our Digital Democracy project is powered by AI. It tracks pretty much everything we can track about California’s state legislators, parses that fire hose of data, and then generates story ideas for our reporters. Some of the AI-generated tips would have taken a data-savvy political reporter weeks, if not months, to find on their own, and our AI-aided stories have already made a difference in the legislature.

Whenever I use AI, I ask if it’s worth it—and then I ask myself: Can I really know the answer to that question? But there is hope. Researchers are proposing solutions to mitigate AI’s environmental costs. One of my data engineers runs LLMs locally, to use less energy. If you use AI, and you have these same struggles, I’d love to hear what you’ve done.
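For anyone curious about the local option Wei mentions, running a small open-weight model through Ollama is one common approach. A minimal sketch, assuming the Ollama runtime is installed and a small model has already been pulled, looks like this; it is not CalMatters’ actual setup.

```python
# A minimal sketch of querying a locally hosted model via Ollama.
# Assumes the Ollama runtime is running and the model has been pulled
# (for example: `ollama pull llama3.2`); not CalMatters' actual setup.
import ollama

response = ollama.chat(
    model="llama3.2",  # illustrative small open-weight model
    messages=[{"role": "user",
               "content": "Summarize in one sentence: the committee postponed the vote."}],
)
print(response["message"]["content"])
```

Smaller models served locally trade some capability for lower per-query energy use and keep the data on your own machine.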

Safeguard press freedom

Karen Rønde

CEO of the Danish Press Publications Collective Management Organization, DPCMO, representing 99 percent of the Danish news industry

In my work trying to get fair treatment (remuneration and data) from AI companies for Danish publishers and journalists, I face many questions and dilemmas: Should our news outlets enter into agreements with AI companies for commercial reasons? For technological reasons? For democratic reasons? 

We need to use AI wisely. In Europe we prioritize digital resilience; digital sovereignty; a robust information ecosystem; resilient electoral processes; preparedness and readiness; a Democracy Shield to counter foreign information manipulation and interference online; and universal values such as human dignity, freedom, equality, solidarity, and respect for human rights and the rule of law. Evidence shows that actors are using information manipulation and other tactics to interfere with our democracy and freedom. 

The increasing global reliance on a few AI products and services affects press freedom and media plurality. AI companies control access to information through content moderation, censorship, algorithmic filtering, and model training biases. AI tools shape journalistic research, fact-checking, and even writing in ways that may differ from local legal and cultural norms. AI-generated content or AI-assisted research may reinforce its owners’ narratives and ways of framing issues, leading to a homogenization of global news. This could marginalize local voices, languages, and perspectives; challenge press freedom; and particularly affect Denmark as a small country and language area. Local Danish media content brings us together, challenges us, makes us smarter, entertains us, and binds us as a nation.

We always say that the plurality of editorial media builds trust and information resilience, and that a diverse offering of news content creates social cohesion and engagement. In the AI age, we news providers have to ensure the accuracy and impartiality of our news—and we must safeguard press freedom and sovereignty.

Give audiences what you owe them

Claire Leibowicz

Head of the AI and Media Integrity Program at the nonprofit Partnership on AI (PAI)

Newsrooms are simultaneously preparing for threats (e.g., by cryptographically certifying their media to assert its authenticity) and embracing AI as a way to reduce costs, tell stories, and even build trust with audiences and reimagine the news. Sometimes this embrace seems pragmatic, evidence-based, and even revolutionary. At others, it seems like an overeager corrective to a collective sense that newsrooms missed the social media moment—an impulsive fix for the industry’s business-model woes.

Yet I’ve been struck by consistent examples of newsrooms’ deep sensitivity to audience needs, even when those needs are not fully defined or uniform. The newsrooms I’ve worked with have consistently asked: What are audiences ready for? And what do we owe them?

We tackle this in PAI’s BBC and CBC case studies, tracing how each organization decides to use AI-generated media for storytelling and implements PAI’s guidance on responsible AI. The CBC decided against using AI to conceal a news source’s identity, saying that the technology “far outstrips [the] comprehension of many people who make up the audience we serve.” By contrast, the BBC used deepfake technology to obfuscate subjects’ faces in a story about Alcoholics Anonymous, deciding that, with proper disclosure, the technology can “enable documentary audiences to view the subjects without bias typically inherent in traditional anonymization techniques.” Both organizations based their decisions on what best served their audiences—a North Star that the field should pursue no matter what challenges AI throws its way.

Lean on AI so you can get your work done

Ina Fried 

Chief technology correspondent for Axios and author of the daily Axios AI+ newsletter

I see huge potential for AI to help me and other journalists augment skills that AI can never replace—judgment and humanity. 

I used to avoid recording interviews, because I never had time to listen to them. But now I use Otter, which records and automatically generates a transcript. The results may be imperfect, but they’re still a game changer. AI helps me find specific spots in a recording and consult my notes far faster than I am able to without it. I also use AI to research the people and companies I cover. I’ve always prioritized preparing for interviews, and now AI helps me do the kind of background briefing that PR people often have on me. Tools I’ve used for this include OpenAI, Google, and Perplexity, with new ones constantly emerging, like the Chinese tool Manus AI. I can envision other ways to use AI, like for headline writing and search engine optimization. Another scenario is using AI for more data-intensive projects, to examine large datasets and suggest story-worthy patterns. A reporter still needs to see if an AI “trend” is truly newsworthy, an explainable blip, or just an anomaly.

Axios, like many outlets, is still evaluating how best to use the new technology. We recently signed a deal with OpenAI that gives our newsroom access to the enterprise version of ChatGPT. (Axios and OpenAI have a licensing and technology agreement that allows OpenAI to access part of Axios’s story archives while helping fund the launch of Axios into four local cities and providing some AI tools. Axios has editorial independence.) We don’t currently use ChatGPT or other generative AI to create content for publication, unless the point is to show what the technology can or cannot do. This must be clearly delineated. We are also encouraging newsroom staff to experiment with the tools and see how they might improve human-created content.

Use content credentials

David Carson 

A John S. Knight Journalism Fellow at Stanford University, on leave from his job as staff photojournalist at the St. Louis Post-Dispatch

In my more than thirty years as a photojournalist, I was an early adopter of digital photography, taught myself how to shoot and edit video, and became a licensed drone pilot. I am both wary of and fascinated by AI. Some AI is undeniably positive for photojournalism. AI facial- and object-recognition systems in cameras improve tracking and focus accuracy in sports, wildlife, and portrait photography. AI can transcribe audio to help editors quickly find a sound bite buried in hours of footage. And many photographers could use Grammarly to reduce typos and errors in captions.

I’m concerned about the potential harm generative AI poses to photojournalism, because all photorealistic AI-generated images are lies. AI has significantly reduced the skill needed to churn out volumes of fakes and disinformation that blur the lines of reality. Photorealistic AI images of a young girl sitting in a boat clutching a puppy after Hurricane Helene, AI images of flames around the Hollywood sign during this year’s fires in LA, and the faked audio of JD Vance disparaging Elon Musk were all shared widely on social media; each is an example of the public being deceived.

Many news organizations restrict the use of AI-generated images for news stories. The news industry could go further and use what are known as content credentials on news photos, video, and audio to provide information about the provenance, history, and edits made to content. Think of content credentials like a nutrition label you see on food at the grocery store. Content credentials use the open-source Coalition for Content Provenance and Authenticity (C2PA) standard being collaboratively developed by technology companies, media organizations, and hardware manufacturers, and can be added to both authentic news photos and AI-generated images. OpenAI’s DALL-E image generator has begun adding content credentials to the images it produces. The AP, Getty, Reuters, AFP, and others have been testing C2PA-compliant cameras. It is expected that the C2PA will be adopted as an ISO international standard this year. Implementation will take work. But hopefully the public will soon benefit from widespread adoption of the C2PA standard, with the goal of building trust and transparency in the news industry.


Mike Ananny and Matt Pearce are contributors to CJR. Mike Ananny is an associate professor at the Annenberg School for Communication and Journalism at the University of Southern California and codirector of USC’s AI for Media and Storytelling initiative. Matt Pearce is the director of policy for Rebuild Local News, a former staff writer for the Los Angeles Times, a former president of the Media Guild of the West, and co-leader of the USC AI for Media and Storytelling initiative’s Synthetic Journalism working group.
