AP Prohibits Use of Generative AI for News Stories or Images

Creates standards for how and when generative AI can be used by AP journalists

Last week, the Associated Press published generative artificial intelligence standards, prohibiting the use of generative AI in AP news stories, video, audio or images. At the same time, AP encourages journalists to learn more about AI and experiment with it as long as they do not use it to create publishable content.

“Accuracy, fairness and speed are the guiding values for AP’s news report, and we believe the mindful use of artificial intelligence can serve these values and over time improve how we work,” says Amanda Barrett, vice president for standards and inclusion at the Associated Press, in an August 16, 2023 blog post.

“However, the central role of the AP journalist – gathering, evaluating and ordering facts into news stories, video, photography and audio for our members and customers – will not change. We do not see AI as a replacement of journalists in any way,” Barrett adds.

Generative AI standards

The AP’s standards for the use of generative AI include the following:

  • Output from a generative AI source such as ChatGPT should be considered information from an “unvetted source” and must be fact-checked and sourced.
  • Generative AI cannot be used to add, subtract or alter elements from photos, video or audio.
  • Journalists are urged to use caution when receiving external material to ensure that it does not contain AI-generated content.
  • They are also encouraged to confirm data and sources and do reverse image searches.
  • If journalists have any doubt about the authenticity of the material, they should not use it.

“Our goal is to give people a good way to understand how we can do a little experimentation but also be safe,” Barrett says.

Collaboration between AP and OpenAI

In July, the Associated Press and OpenAI, the developer behind ChatGPT, agreed to share access to select news content and technology to explore how the two organizations might work together.

“Generative AI is a fast-moving space with tremendous implications for the news industry. We are pleased that OpenAI recognizes that fact-based, nonpartisan news content is essential to this evolving technology, and that they respect the value of our intellectual property,” said Kristin Heitmann, AP senior vice president and chief revenue officer, in a July 13 news release. “AP firmly supports a framework that will ensure intellectual property is protected and content creators are fairly compensated for their work.” 

“OpenAI is committed to supporting the vital work of journalism, and we’re eager to learn from The Associated Press as they delve into how our AI models can have a positive impact on the news industry,” said Brad Lightcap, chief operating officer at OpenAI.


Other news organization guidelines

Last month, Nieman Lab published an article rounding up guidelines from other news organizations and their approaches to using AI in their work. CBC was among the media outlets highlighted. In a June 12 blog post, news editor Brodie Fenlon explained how the CBC is addressing AI, describing generative AI as “a version of the technology that uses machine learning on vast amounts of data to produce high-quality original text, graphics, images and videos.”

Fenlon also said that no CBC news story will be published or broadcast without “direct human involvement and oversight,” that all content will be vetted and vouched for by a CBC journalist, and that the network will not use or present AI-generated content without full disclosure. The CBC’s full “commitment to trust and transparency” is available on the CBC.ca website.

Nieman Lab also noted that Wired has gone on record with its generative AI guidelines. In its policy, updated in May 2023, Wired said it will not publish stories with text generated by AI, nor edit stories using AI. It may, however, use AI to suggest headlines or text for social media posts, or to come up with story ideas.

In another case, The Guardian reports that News Corp Australia uses generative AI to produce 3,000 articles a week. The articles are reportedly “overseen by journalists.”

Insider Take

Generative AI is a fascinating technology with huge potential for quickly creating content in a wide range of formats, but it is not without risks. It can be used to spread disinformation, shortcut the writing, editing and fact-checking process, republish copyrighted material, and recreate or alter images, audio or video, among other things. We encourage cautious experimentation, paired with stated policies, transparency, accountability and responsibility.

[Editor’s Note: At this time, Subscription Insider has experimented with generative AI, but has not used it to create or publish content.]

Copyright © 2023 Authority Media Network, LLC. All rights reserved. Reproduction without permission is prohibited.
