Getty Images v Stability AI — key intellectual property lessons for AI developers

We break down the key takeaways from the final ruling and consider what they mean for the evolving relationship between IP law and AI development.
Authors: Maya Tajuddin, Colin Bell and Sara Ludlam

The High Court’s decision in Getty Images (Getty) v Stability AI has already attracted significant attention, not least because the claimant dropped its primary copyright infringement claims for jurisdictional reasons. In November 2025, the court’s ruling addressed the remaining claims — secondary copyright infringement, trade mark infringement and passing-off — which are just as important for rights holders and artificial intelligence (AI) developers.
Around the same time, the Munich Regional Court delivered a landmark judgment in GEMA v OpenAI, holding that the use of copyrighted song lyrics to train generative AI models without a licence constitutes copyright infringement under German and EU law.
These aspects of the judgments provide insights into how courts are likely to approach questions of liability when protected works or trade marks appear in AI-generated outputs. For businesses operating in the AI space, the cases offer early guidance on where responsibility may fall and highlight the practical risks of deploying models trained on third-party content.
Here, Maya Tajuddin, Colin Bell and Sara Ludlam break down the key takeaways from the final ruling and consider what they mean for the evolving relationship between intellectual property (IP) law and AI development.
The judgment contains a useful summary of how Stability AI’s ‘Stable Diffusion’ model operates. It’s a latent diffusion model that transforms an input into a synthesised image by modelling a probability distribution based on its training data and then sampling from that distribution. Unlike text‑generating models, Stable Diffusion is designed specifically for image generation.
Developing a model like this involves several stages.
Before training begins, the network’s weights are initialised with random values. During training, they’re iteratively updated to satisfy optimisation criteria set by engineers. Once trained, the model can be run in ‘inference mode’ — the user provides an input, the network performs computations and an output image is generated.
Inference does not require the use of any training data and the model does not store it. However, a large part of its functionality is indirectly controlled via the training data. In other words, the way in which the network makes use of its multiple layers is the result of the training process.
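To make the inference process concrete, here is a minimal sketch of running a latent diffusion model in 'inference mode' in Python, assuming the open-source Hugging Face diffusers library (our illustration; the judgment does not describe Stability AI's own tooling, and the model identifier below is an assumption):

```python
# A minimal sketch of 'inference mode', assuming the open-source
# Hugging Face diffusers library (pip install diffusers torch).
import torch
from diffusers import StableDiffusionPipeline

# Load the trained model weights: fixed, learned statistical parameters.
# No training images are shipped with, or consulted by, the weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # illustrative model id
    torch_dtype=torch.float16,                      # requires a CUDA GPU
).to("cuda")

# Inference: the user supplies a prompt, the network performs its
# computations and an image is sampled from the learned distribution.
image = pipe("a lighthouse on a cliff at sunset").images[0]
image.save("output.png")
```

The point relevant to the secondary infringement analysis is that what is distributed and run is the weights file, not the training images.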
The experts in the case agreed that models like Stable Diffusion can be prone to ‘memorisation’ — the reproduction of images from training data. A network should ideally be able to ‘generalise’, learning patterns and rules from the data and applying them in new contexts.
They went on to highlight the issues that can arise when training is not properly balanced, stating:
“If a network has been trained for too long on the same training data or an insufficiently diverse training data, it can be prone to ‘overfitting’. Overfitting occurs when the network uses its weights or part of its weights to memorise the individual training images rather than representing a large set of training images jointly with these weights. Overfitting is an undesired feature in machine learning, which engineers seek to avoid. … Deep networks can both generalise and memorise at the same time. In such case, the network uses most of its weights to represent general patterns in the data, but uses some part of its weights to memorise individual patterns. The presumed primary cause for memorisation is duplication of training data, either by explicit duplication or by training the network for too many epochs (complete passes of all the data in a training dataset during the training process)”.
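The generalisation/overfitting distinction the experts draw is not unique to diffusion models. As a toy illustration of our own (not drawn from the judgment), a high-capacity curve fit 'memorises' its training points, while a lower-capacity fit captures the general pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 8)                        # eight 'training' points
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

# Low-capacity fit: captures the general pattern in the data.
general = np.polyfit(x, y, deg=3)

# Maximum-capacity fit: passes through every training point exactly,
# 'memorising' the samples (including their noise) -- overfitting.
memorised = np.polyfit(x, y, deg=7)

x_unseen = 0.5 * (x[:-1] + x[1:])               # points between samples
print("general model  :", np.polyval(general, x_unseen))
print("memorised model:", np.polyval(memorised, x_unseen))
print("true values    :", np.sin(2 * np.pi * x_unseen))
```

The overfitted polynomial reproduces its training points exactly but behaves erratically between them, much as a model over-exposed to the same image can come to reproduce that image verbatim.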
Other than the watermark itself, there was no evidence that the model had memorised any particular copyright work relied upon by Getty. Based on the judgment, it does not appear that Getty sought to argue that the watermarks themselves were copyright works but rather relied on certain images and sets of images in which the watermark appeared.
The court emphasised that:
“It is important to be absolutely clear that Getty Images do not assert that the various versions of Stable Diffusion (or more accurately, the relevant model weights) include or comprise a reproduction of any Copyright Work and nor do they suggest that any particular Copyright Work has been prioritised in the training of the Model. There is no evidence of any Copyright Work having been “memorised” by the Model by reason of the Model having been over-exposed to that work and no evidence of any image having been derived from a Copyright Work”.
The experts explained how watermarks might nonetheless appear in outputs, noting that:
“In order for a watermark to be produced it is likely that the model needed to be trained on a diverse set of images/captions each containing a watermark. If it was only a single (duplicated) training image with a watermark, the model would memorise the whole image and not just the watermark .... Whereas it takes multiple exposures to the exact same image to lead to memorisation, memorising a watermark likely requires multiple exposures to the same watermark regardless of the underlying image. It is not clear to us if one of these is easier or harder than the other. It seems to us that it is “easier” to find a prompt that shows a memorised image, because the image and its caption are reproduced in the training and so a caption with the appropriate keywords is more likely to generate the memorised image (assuming that we know the caption and/or keywords of the highly duplicated training image)”.
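Because the experts identify duplication of training data as the presumed primary cause of memorisation, one practical mitigation is to deduplicate the training set before training begins. A simplified sketch of our own (not a method described in the judgment), assuming the third-party Pillow and imagehash libraries:

```python
from pathlib import Path

import imagehash                 # pip install imagehash
from PIL import Image            # pip install Pillow

def dedupe(image_dir: str) -> list[Path]:
    """Keep one image per perceptual hash, dropping exact or
    near-exact duplicates that could be memorised during training."""
    seen: set[str] = set()
    unique: list[Path] = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        digest = str(imagehash.average_hash(Image.open(path)))
        if digest not in seen:
            seen.add(digest)
            unique.append(path)
    return unique
```

Notably, a filter like this only catches repeated images. A repeated watermark spread across otherwise distinct images would survive it, which is consistent with the experts' point that watermark memorisation behaves differently from whole-image memorisation.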
Getty ultimately lost its case for secondary copyright infringement, having already dropped its primary copyright infringement claims due to jurisdictional limitations. The court held that there was no evidence that the training and development of the Stability AI model took place in the UK, so UK copyright law did not apply.
Crucially, the judgment confirmed that an AI model like Stable Diffusion — one that does not store or reproduce copyrighted works — is not an ‘infringing copy’ under sections 22 and 23 of the Copyright, Designs and Patents Act 1988. Justice Joanna Smith confirmed that although intangible electronic copies can qualify as ‘articles’ distributed over the internet for the purposes of secondary infringement, the models themselves, comprising learned statistical parameters, did not store or reproduce Getty’s works. This means that the model does not infringe copyright simply by existing (and being possessed or distributed in the course of business).
What remains unresolved is whether the act of training on protected content constitutes infringement as a matter of law. This question was deferred when Getty abandoned its primary claims. From the court’s reasoning, if training had taken place in the UK, there may have been grounds to find infringement during the training process. The apparent memorisation of the watermarks suggested that the model must have been exposed to a significant number of images that included the watermarks in the first place.
Justice Joanna Smith addressed how the court viewed the functioning of AI models in the context of secondary copyright infringement:
“While it is true that the model weights are altered during training by exposure to Copyright Works, by the end of that process the Model itself does not store any of those Copyright Works; the model weights are not themselves an infringing copy and they do not store an infringing copy. They are purely the product of the patterns and features which they have learnt over time during the training process. … The model weights for each version of Stable Diffusion in their final iteration have never contained or stored an infringing copy”.
Getty did, however, secure a narrow and somewhat Pyrrhic victory, with the court finding that a handful of generated images infringed its trade marks.
Crucially, the court rejected Stability AI’s attempt to shift responsibility onto the user. The company argued that the model is just a tool controlled by the user and that the more detailed the prompt, the more control the user exerts. However, the court pointed out that users have no control over what the model is trained on, any semantic guardrails that may be applied to prompts or outputs, or the model parameters that define the network’s functionality. Stability AI, as the provider of the model, is the one in control, since it decides what images and data are used to train the system.
This is a significant win for IP owners. It does not rule out claims against users (who do have control over the prompts and therefore may cause infringements) but rights owners will generally prefer to pursue AI companies themselves.
The trade mark claim was assessed using the usual factors. Justice Joanna Smith found that the AI provider’s use of the trade marks was clearly in the context of commercial activity aimed at economic gain (and therefore ‘in the course of trade’). The court examined both Getty’s marks and the allegedly infringing signs. Not all examples were identical — some were only similar while others were not similar at all — due to blurring or altered letters. There was also debate over whether the goods and services covered by the marks’ specifications were identical to those to which the infringing signs were applied.
Where there were examples of ‘double identity’ (an identical sign applied to identical goods or services), infringement was found under section 10(1) of the Trade Marks Act 1994 in relation to iStock watermarks — but not for Getty’s watermarks. In other cases, the marks and specifications were found to be identical or similar and there was a likelihood of confusion, leading to infringement under section 10(2) for both the iStock and Getty watermarks. However, a significant number of the allegations of infringement were not upheld. The court also held that there was no infringement under section 10(3), as no evidence was found of unfair advantage, tarnishment or any change in economic behaviour caused by the infringing signs.
Justice Joanna Smith noted in the ruling that Getty had succeeded “in part” with regard to trade mark infringement. However, the jurisdictional limitations in this case meant that it offered only restricted guidance on the broader UK copyright and AI landscape — particularly in relation to primary infringement of copyright going forward.
Turning to the German decision in more detail, the Munich Regional Court’s ruling in GEMA v OpenAI held that the use of copyrighted song lyrics to train generative AI models without a licence constitutes copyright infringement under German and EU law.
The dispute focused on GEMA’s claim that OpenAI had used its collection of German song lyrics to train ChatGPT, enabling the chatbot to generate lyrics from artists on request. The court agreed, finding that ChatGPT’s ability to reproduce lyrics was clear evidence that the works had been included in its training data.
The court found that both the memorisation of lyrics within model parameters and their reproduction in outputs amounted to unauthorised reproduction, rejecting opposing arguments that such use was covered by various text and data mining exceptions laid out in Articles 3 and 4 of Directive (EU) 2019/790 (the CDSM Directive). OpenAI was held responsible for copyright infringement and ordered not to reproduce lyrics from GEMA-represented artists unless it obtained the appropriate licence.
This ruling was the first of its kind in Europe, setting a significant precedent for EU member states that AI developers may be directly liable if their models can reproduce protected works. While not binding in the UK, it’s likely to influence future litigation and compliance strategies across Europe, especially as UK cases like Getty Images v Stability AI left open questions about liability when training occurs domestically.
The Stability AI and OpenAI models differ — the former is image-based while the latter is text-based — but they share structural similarities. Both rely on an architecture that’s trained through repeated exposure to massive quantities of data, with model weights, optimisation, inference and memorisation playing key roles. The crucial difference between the two cases was evidence. In Getty Images v Stability AI, the court found no proof that any copyright work had been memorised by the model. In GEMA v OpenAI, however, entire song lyrics were reproduced, demonstrating clear signs of memorisation.
The German court emphasised that responsibility lay with the defendants, not the users:
“The outputs were generated by simple prompts. The defendants operated the language models for which the song lyrics were selected as training data and with which they were trained. They were responsible for the architecture of the models and the memorisation of the training data. Thus, the language models operated by the defendants had a significant influence on the outputs, and the specific content of the outputs was generated by the language models”.
The German court found that:
“The song lyrics in dispute are reproducible in the defendant's language models .... It is known from information technology research that training data can be contained in language models and can be extracted as outputs. This is referred to as memorisation. … Such memorisation was determined by comparing the song lyrics contained in the training data with the reproductions in the outputs. Given the complexity and length of the song lyrics, coincidence can be ruled out as the cause of the reproduction of the song lyrics”.
Accordingly, because of the identical or substantially similar memorisations, both the AI model itself and its outputs were held to infringe copyright.
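The comparison exercise the court describes can be approximated very simply. As a rough sketch of our own (not the court’s or GEMA’s actual methodology), one can measure how many long word sequences in an output also appear verbatim in a known lyric, since long exact overlaps are vanishingly unlikely to arise by coincidence:

```python
def verbatim_overlap(lyric: str, output: str, n: int = 8) -> float:
    """Fraction of n-word sequences in the output that also appear
    word-for-word in the lyric. Values near 1.0 over a long text are
    strong evidence of memorisation rather than coincidence."""
    def ngrams(text: str) -> set[tuple[str, ...]]:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    out = ngrams(output)
    return len(out & ngrams(lyric)) / len(out) if out else 0.0
```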
This suggests a technical and legal distinction between the cases: a model trained on copyrighted material does not automatically constitute an infringing copy unless it retains or reproduces those works. Given Justice Joanna Smith’s reasoning in Getty Images v Stability AI, had there been evidence of such memorisation of the claimed copyright works, a secondary copyright infringement claim may have succeeded.
Equally, had Getty successfully argued that the watermark(s) themselves were copyright works, there may also have been a successful claim for copyright infringement due to the memorisation of such watermarks.
We must now wait to see whether either party appeals the judgment — a step that seems likely given the high stakes — and whether permission to appeal is granted. Whatever the outcome, the case highlights how quickly the legal and regulatory landscape surrounding AI is evolving, making it essential for stakeholders to remain vigilant and proactive in navigating the complex intersection of IP, copyright, data use and AI development.
We have extensive expertise in IP, technology and AI and can assist clients with these and related issues.
Talk to us by calling 0333 004 4488, emailing hello@brabners.com or completing our contact form below.
