Media executives urge Congress to enact legislation to prevent AI models from training on ‘stolen goods’

Published January 11, 2024

A group of media executives urged lawmakers on Wednesday to enact new legislation that would force artificial intelligence developers to pay publishers for the use of their content in training AI models.

The hearing before the US Senate comes after a blitz of new AI chatbots, most notably OpenAI’s ChatGPT, set off a wave of existential panic among media organizations, threatening to further upend an industry that has shed thousands of jobs in recent years.

Roger Lynch, Condé Nast’s chief executive, told senators that current AI models were built using “stolen goods,” with chatbots scraping and displaying news articles from publishers without their permission or compensation.

News organizations, Lynch said, seldom have a say in whether their content is used to train AI or is output by the models.

“The answer is they’ve already used it, the models are already trained,” he said. “So, where you hear some of the AI companies say that they are creating or allow opt-outs, it’s great, they’ve already trained their models — the only thing the opt-outs will do is to prevent a new competitor from training new models to compete with them.”

While a December lawsuit by The New York Times laid bare news publishers’ desire to stop AI models from scraping their news articles without compensation, the issue is not exclusive to the news media industry. In 2023, two major lawsuits were filed against AI companies: one by Sarah Silverman and two other authors, and another, a class action brought on behalf of more than 8,000 writers, including Margaret Atwood, Dan Brown, Michael Chabon, Jonathan Franzen, and George R. R. Martin.

To stop the pilfering of news publishers’ content, and with it their revenue, Lynch proposed that AI companies license content and compensate publishers for material used in both training and output.

“This will ensure a sustainable and competitive ecosystem in which high-quality content continues to be produced and trustworthy brands can endure, giving society and democracy the information it needs,” Lynch said.

Danielle Coffey, president and chief executive of the News Media Alliance, added that there already exists a healthy licensing ecosystem in the news media, with many publishers digitizing hundreds of years’ worth of archives for consumption.

Coffey also noted AI models have introduced inaccuracies and produced so-called hallucinations after scraping content from less-than-reputable sources — which runs the risk of misinforming the public or ruining a publication’s reputation.

“The risk of low-quality [generative] AI content dominating the internet is amplified by the drastic economic decline of news publications over the past two decades,” Coffey said. “[Generative] AI is an exacerbation of an existing problem where revenue cannot be generated by, but in fact is diverted from, those who create the original work.”

Curtis LeGeyt, president and chief executive of the National Association of Broadcasters, noted that local broadcast personalities rely on the trust of their audiences, which AI-generated deepfakes and misinformation could undermine.

“I think we have seen the steady decline in our public discourse as a result of the fact that the public can’t separate fact from fiction,” LeGeyt said.

While legal safeguards against AI might protect news publishers from having their content cannibalized, they could also benefit developers in the long term since, as Coffey put it, generative AI models and products “will not be sustainable if they eviscerate the quality content that they feed upon.”

“Both can exist in harmony, and both can thrive,” she added.

SOURCE: CNN
