OpenAI’s legal battle with The New York Times over the data used to train its AI models may still be brewing. But OpenAI is forging ahead with deals with other publishers, including some of the largest news publishers in France and Spain.
OpenAI on Wednesday announced that it signed contracts with Le Monde and Prisa Media to bring French and Spanish news content to OpenAI’s ChatGPT chatbot. In a blog post, OpenAI said that the partnership will put the organizations’ current events coverage — from brands including El País, Cinco Días, As and El Huffpost — in front of ChatGPT users where it makes sense, as well as contribute to OpenAI’s ever-expanding volume of training data.
OpenAI writes:
Over the coming months, ChatGPT users will be able to interact with relevant news content from these publishers through select summaries with attribution and enhanced links to the original articles, giving users the ability to access additional information or related articles from their news sites … We are continually making improvements to ChatGPT and are supporting the essential role of the news industry in delivering real-time, authoritative information to users.
So far, OpenAI has revealed licensing deals with only a handful of content providers. Now feels like a good opportunity to take stock:
- Stock media library Shutterstock (for images, videos and music training data)
- The Associated Press
- Axel Springer (owner of Politico and Business Insider, among others)
- Le Monde
- Prisa Media
How much is OpenAI paying each? Well, it’s not saying — at least not publicly. But we can estimate.
The Information reported in January that OpenAI was offering publishers between $1 million and $5 million a year to access their archives to train its GenAI models. That doesn’t tell us much about the Shutterstock partnership. But on the article licensing front — assuming The Information’s reporting is accurate and those figures haven’t changed since — OpenAI’s shelling out between $4 million and $20 million a year across its four news publisher partners (The Associated Press, Axel Springer, Le Monde and Prisa Media).
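The arithmetic behind that range is simple enough to sketch. The figures below are the reported per-publisher bounds, not confirmed contract values, applied to the four news partners named above:

```python
# Back-of-the-envelope estimate of OpenAI's annual news licensing spend,
# assuming The Information's reported $1M-$5M per-publisher range applies
# uniformly to each of the four news partners (assumed, not confirmed).
news_partners = ["The Associated Press", "Axel Springer", "Le Monde", "Prisa Media"]

low_per_publisher = 1_000_000   # reported lower bound, USD per year
high_per_publisher = 5_000_000  # reported upper bound, USD per year

low_total = len(news_partners) * low_per_publisher
high_total = len(news_partners) * high_per_publisher

print(f"${low_total:,} to ${high_total:,} per year")
# -> $4,000,000 to $20,000,000 per year
```

Shutterstock is excluded because its deal covers media assets, not articles, and is presumably priced on different terms.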
That might be pennies to OpenAI, whose war chest sits at over $11 billion and whose annualized revenue recently topped $2 billion (per Financial Times). But as Hunter Walk, a partner at Homebrew and the co-founder of Screendoor, recently mused, it’s substantial enough to potentially edge out AI rivals also pursuing licensing agreements.
Walk writes on his blog:
[I]f experimentation is gated by nine figures worth of licensing deals, we are doing a disservice to innovation … The checks being cut to ‘owners’ of training data are creating a huge barrier to entry for challengers. If Google, OpenAI, and other large tech companies can establish a high enough cost, they implicitly prevent future competition.
Now, whether there’s a barrier to entry today is debatable. Many — if not most — AI vendors have chosen to risk the wrath of IP holders, opting not to license the data on which they’re training AI models. There’s evidence that art-generating platform Midjourney, for example, is training on Disney movie stills — and Midjourney has no deal with Disney.
The tougher question to wrestle with is: Should licensing simply be the cost of doing business and experimentation in the AI space?
Walk would argue not. He advocates for a regulator-imposed “safe harbor” that would protect any AI vendor, including small startups and researchers, from legal liability so long as they abide by certain transparency and ethical standards.
Interestingly, the U.K. recently tried to codify something along those lines, exempting the use of text and data mining for AI training from copyright considerations so long as it’s for research purposes. But those efforts ended up falling through.
Me, I’m not sure I’d go so far as Walk in his “safe harbor” proposal, considering the impact AI threatens to have on an already-destabilized news industry. A recent model from The Atlantic found that if a search engine like Google were to integrate AI into search, it’d answer a user’s query 75% of the time without requiring a click-through to the publisher’s website.
But perhaps there is room for carve-outs.
Publishers should be paid — and paid fairly. Is there not an outcome, though, in which they’re paid and challengers to AI incumbents — as well as academics — get access to the same data as those incumbents? I should think so. Grants are one way. Larger VC checks are another.
I can’t say I have the solution, particularly given that the courts have yet to decide whether — and to what extent — fair use shields AI vendors from copyright claims. But it’s vital we tease these things out. Otherwise, the industry could well end up in a situation where academic “brain drain” continues unabated and only a few powerful companies have access to vast pools of valuable training sets.