
Leading Media Organizations Call for Global AI Policy to Protect Editorial Integrity


NEW YORK, Aug. 09, 2023 (GLOBE NEWSWIRE) -- Today, leading global news and publishing organizations signed onto an open letter to policymakers and industry leaders globally, laying out proposed principles for the regulated and responsible growth of generative AI models.

In addition to outlining consequences for the media industry and the public should the global regulatory landscape not evolve as quickly as AI innovation, the letter also voices support for the creation of an appropriate legal framework around the technology.

Specifically, it outlines regulatory and industry actions to drive responsible AI practices that protect the content powering AI applications while preserving public trust in media, including:

  • The disclosure of training sets used to create generative AI models.
  • The protection of the intellectual property of those who create the content used to train AI models.
  • The ability for media companies to collectively negotiate with AI model operators and developers over the use of proprietary intellectual property.
  • Mandates for generative AI models and users to clearly, specifically, and consistently identify their outputs and interactions as including AI-generated content.
  • A requirement that generative AI model providers limit the bias in, misinformation from, and abuse of AI services.

Signatories of the letter include:

  • Agence France-Presse 
  • European Pressphoto Agency
  • European Publishers’ Council
  • Gannett | USA TODAY Network 
  • Getty Images
  • National Press Photographers Association
  • National Writers Union
  • News Media Alliance
  • The Associated Press
  • The Authors Guild

The full text of the letter appears below:

Preserving public trust in media through unified AI regulation and practices                                

Democracy is underpinned by trust in a free, reliable, independent, and strong media ecosystem. We believe artificial intelligence (AI) and generative models hold the potential to provide significant benefits to humanity; however, we also believe that, left unchecked, these technologies can threaten the sustainability of the media ecosystem as a whole by significantly eroding the public’s trust in the independence and quality of content and by threatening the financial viability of its creators.

The media industry has a history of embracing and successfully navigating new technology, from the introduction of the printing press to broadcast media to the internet and social media. However, the pace of development and adoption of AI far exceeds that of prior technological leaps, and it does so at the potential expense of long-standing foundational intellectual property rights, as well as the creators’ investments in high-quality media content.

We, the undersigned organizations, support the responsible advancement and deployment of generative AI technology, while believing that a legal framework must be developed to protect the content that powers AI applications as well as maintain public trust in the media that promotes facts and fuels our democracies.

The Costs of Inaction

Generative AI and large language models make it possible for any actor, without regard to their intent, to produce and distribute synthetic content at a scale that far exceeds our past experience. Such a flood of content can distort facts and leave the public with no basis to discern what is true and what is made up. Even absent malicious intent, many generative AI applications and large language models produce factual errors and fictional information, in addition to propagating long-standing biases.  

Generative AI and large language models are also often trained using proprietary media content, which publishers and others invest large amounts of time and resources to produce. These models can then disseminate that content and information to their users, often without any consideration of, remuneration to, or attribution to the original creators. Such practices undermine the media industry’s core business models, which are predicated on readership and viewership (such as subscriptions), licensing, and advertising. Beyond violating copyright law, these practices meaningfully reduce media diversity and undermine companies’ financial ability to invest in media coverage, further reducing the public’s access to high-quality and trustworthy information.

The Way Forward

We are advocating for regulatory and industry action including: 

  • Transparency as to the makeup of all training sets used to create AI models.
  • Consent of intellectual property rights holders to the use and copying of their content in training data and outputs.
  • Enabling media companies to collectively negotiate with AI model operators and developers regarding the terms of the operators’ access to and use of their intellectual property.
  • Requiring generative AI models and users to clearly, specifically, and consistently identify their outputs and interactions as including AI-generated content.
  • Requiring generative AI model providers to take steps to eliminate bias in and misinformation from their services.

We are also highly supportive of efforts by government and industry groups to create a set of consistent global standards applicable to the development and deployment of AI.

Generative AI is an exciting advancement that offers the potential for significant benefits to society if developed and deployed responsibly.  We applaud those in the AI community who have taken steps to address these concerns and we thank governments and agencies around the world for thoughtfully considering these issues and advancing regulations in response. We look forward to being part of the solution to ensure that AI applications continue to prosper while respecting the rights of media companies and individual journalists who produce content that protects the truth and keeps our communities informed and engaged. 

List of Signatories:

  • Agence France-Presse 
  • European Pressphoto Agency
  • European Publishers’ Council
  • Gannett | USA TODAY Network 
  • Getty Images
  • National Press Photographers Association
  • National Writers Union
  • News Media Alliance
  • The Associated Press
  • The Authors Guild 

FAQ

What is the purpose of the open letter signed by global news and publishing organizations regarding generative AI models?

The letter proposes principles for the regulated and responsible growth of generative AI models, outlines the consequences for the media industry and the public should regulation not keep pace with AI innovation, and voices support for the creation of an appropriate legal framework around the technology.

What are some of the regulatory and industry actions outlined in the open letter?

The letter outlines regulatory and industry actions to drive responsible AI practices, including the disclosure of training sets used to create generative AI models, the protection of intellectual property, the ability for media companies to collectively negotiate with AI model operators, and mandates for generative AI models and users to clearly and consistently identify their outputs as including AI-generated content.

What are the potential threats highlighted in the open letter regarding generative AI and large language models?

The letter highlights that generative AI and large language models can distort facts, propagate biases, and undermine the financial viability of media companies by disseminating proprietary content without remuneration or attribution to the original creators, thereby reducing media diversity and the public's access to high-quality and trustworthy information.

What are the recommended steps for regulatory and industry action regarding generative AI models?

The recommended steps include transparency in the makeup of training sets, consent of intellectual property rights holders, enabling media companies to negotiate with AI model operators, requiring clear identification of AI-generated content, and steps to eliminate bias and misinformation from AI services.

Who are some of the signatories of the open letter?

The signatories include Agence France-Presse, European Pressphoto Agency, Gannett, Getty Images, National Press Photographers Association, National Writers Union, News Media Alliance, The Associated Press, and The Authors Guild.
