Dickon Ross’s publishing world 

AI: is it above the law?

Copyright, usually the more straightforward and boring side of media law, gets more interesting and complex for publishers in the age of AI, says Dickon Ross.

By Dickon Ross

I think my fascination with media law makes me quite an unusual editor. It’s the part of journalism training that many find difficult, dull or both. It’s a necessity for any journalist, of course, to know what they can do within the law as well as what they can’t. But I love it. Maybe it’s the geek in me.

In media law, copyright tends to be quicker to grasp compared to more dangerous areas like libel, where the pitfalls are bigger and the penalties much higher. Libel could give an editor or publisher nightmares, but copyright shouldn’t keep them awake at night.

Copyright compliance is a matter of process. These days, when an issue does arise, it usually comes from law firms running crawling software that matches images at their perceived source with re-use elsewhere online, then claims to act on behalf of the copyright holder in fishing expeditions for royalties. The software doesn't even know whether the publisher has legally licensed the image.

This results in some ludicrous claims: demands, for example, that photographers pay a licence fee to their own clients to use images they themselves shot, and for which they hold the copyright, on their own websites. Many editors and publishers have received such vexatious demands. Otherwise, copyright compliance is routine. That was until generative AI came onto the scene.

Copyright in AI publishing is beginning to look analogous to the equally mundane issue of insurance for autonomous vehicles. Both are legal issues that could derail a new technology, or at least delay its adoption, and no one yet knows how either will be resolved.

AI developers had long been using proprietary images on the web as raw training data for their engines, relying on the 'fair dealing' exception or its equivalents. But as soon as the AI platforms went commercial, that exception became much harder to argue.

As any good trainee journalist knows, UK copyright infringement is the use of the whole or a substantial part of an original work without the copyright owner's permission, where none of the exceptions applies. But with an AI black box, how do you disentangle what is original, what is sufficiently transformed, what uses a 'substantial part' of an original work, and whose work it is? All these questions matter legally.

Researchers have experimented with extracting original copyrighted work from generative AI tools, with mixed results: for text at least, there seem to be some safeguards built in. They are still imperfect, but we can expect them to improve as AI develops.

A question of purpose

Another aspect is trickier, though. Lawsuits tend to complain that the AI engines are subverting and repurposing the plaintiffs' original creative efforts.

Purpose can matter when it comes to remixing others' images. In Andy Warhol Foundation v Goldsmith, for example, the US Supreme Court pointed to Andy Warhol's Campbell's soup cans as acceptably transformative: they commented on consumerism and appeared in an art gallery rather than selling soup. But the court didn't allow his similar treatment of a Prince photograph licensed for a magazine artwork, because that purpose was too close to the original photograph's.

As AI technology develops further, we might expect it to mix things up more. As it behaves more like human intelligence, so the legal situation might once again be as it is for humans. Even so, there will be another important difference. Copyright cuts both ways when it comes to AI. Publishers want to protect their own copyrighted works from AI regurgitation. But copyright is only granted to humans, not machines. So as AI becomes more autonomous, needing less human-in-the-loop intervention, copyright in what it produces will become harder to maintain, until it becomes impossible – unless the laws are changed.

Getty was quick off the mark to ban AI-generated images from its picture library, to avoid infringing others' copyright, and sued one AI company it said was taking its creative work without permission. Getty is no longer alone: many publishers, picture libraries and other suppliers are closing ranks on this issue.

They can now choose to block the AI bots. Half of the news publishers surveyed by homepages.com have already included code to block one or more of the AI platforms from crawling their sites. But it's not an easy decision: publishers must balance the pros and cons, such as whether they can afford to be left out as AI grows.

Copyright seems to be a serious enough roadblock in AI for Google to promise to indemnify any publisher sued for copyright infringement over works produced in its AI engine. As well as taking the legal risk out of AI image generation for those it covers, this removes the temptation to settle with litigants. It turns the issue into a fight that Google clearly thinks it can win, through legal precedent or changes in the law. It also adds the companies it indemnifies to the user momentum behind AI. That is what the tech giants have always been about – not dividends or monetisation. Get the users, and the value follows, is the tactic.

The legal issues will play out as the technology evolves, and the law may then follow in its wake, playing catch-up with technology and, as usual, arriving too late. It's going to be lucrative for the lawyers, and interesting for AI technologists and media law geeks everywhere.

This article was first published in InPublishing magazine. If you would like to be added to the free mailing list to receive the magazine, please register here.