17 Sep 2025
Stable Diffusion and Unstable Law. How generative AI is challenging decades of intellectual property laws.

We are counting down the days until the High Court hands down its judgment in Getty Images (US) Inc v Stability AI Ltd. This is a landmark case: it will shape how AI developers can lawfully train their models on human creators’ copyrighted and proprietary materials, and what that means for AI outputs.
Background
Getty Images is a renowned US visual media company that stores and supplies images comprising original artistic works. For digital creatives, an image bearing the Getty Images watermark is never far away.
The Defendant, Stability AI, is a UK-based AI company that develops and releases open-source generative AI tools. Its best-known model is Stable Diffusion, which generates photorealistic images from written and image prompts supplied by the user.
In January 2023, Getty commenced legal proceedings in the High Court against Stability AI, claiming that Stability had scraped over 12 million images from the Getty website, without consent or licence, to train Stable Diffusion. Getty alleged that in doing so Stability infringed Getty’s intellectual property rights, including copyright, as the AI outputs substantially reproduce Getty’s original images. Getty also alleged that Stability infringed its trade marks, as Stable Diffusion’s outputs reproduce and bear Getty’s branding.
Stability rejected the claims but failed to have them struck out at an early stage, and the trial commenced in June 2025.
Stability asserted that Stable Diffusion had been trained using data collated lawfully under German law by the Large-scale Artificial Intelligence Open Network (LAION), a German charity. Stability also argued that much of the training and development of Stable Diffusion was undertaken in the US.
Where are we so far?
- Primary Copyright Infringement – Training and Development
Section 16 of the Copyright, Designs and Patents Act 1988 (CDPA) gives the copyright owner the exclusive right in the United Kingdom to copy the work, to issue copies to the public and to communicate the work to the public. Getty’s main copyright claim was under section 17 of the CDPA: that Stability had copied Getty’s images during the training and development of the AI model. Crucially, under the CDPA, the infringing activity must occur in the United Kingdom.
The facts concerning the training and development of Stable Diffusion are complex and technical. Disclosure took place in November 2024 and revealed poor record keeping by Stability, leaving gaps and inconsistencies as to any UK activity. Nevertheless, at trial, Stability’s witnesses were confident that the work undertaken during the training and development of Stable Diffusion took place on non-UK cloud-based servers and that there was no need to download any training data sets onto UK servers or systems.
In light of this, it was always going to be an uphill battle for Getty to show that any copying had taken place in the UK, so it is perhaps not surprising that Getty subsequently dropped this claim.
- Primary Copyright Infringement – Outputs
Getty asserted that the output of Stable Diffusion, in the form of AI-generated images produced from written and/or image prompts by users in the UK, was itself an infringement, as the images reproduced the Claimant’s copyrighted works. Claims were brought under section 20 of the CDPA (communication of the works to the public) and section 16(2) (authorising acts of copying by end users). Getty also sought an injunction.
For Getty to succeed in this claim, it needed to demonstrate that the output images of Stable Diffusion reproduced a substantial part of its copyrighted works. Stability, of course, disputed this, arguing that the images produced by the model do not reflect a substantial part of the original works. In addition, Stability shifted responsibility to the end user, arguing that any resemblance between the generated content and Getty’s copyright works is the result of the user’s prompts.
The challenge for Getty comes down to the workings of latent diffusion and the way Stable Diffusion generates images. During training, the model takes a clean image and scrambles it by adding successive layers of “noise”, akin to sprinkling sand over a photograph or overlaying TV static; it then learns to reverse that process step by step. At generation time, the model starts from pure random noise and progressively removes it, guided by the user’s prompt, until a new image emerges. Because the starting noise is random, the same prompt produces a different image each time. Whilst there is some argument around memorisation, where an output image is substantially similar to an original training image, Getty failed to bring evidence of a clear memorised output.
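For readers curious about the mechanics, the point about randomness can be sketched in a few lines of toy numpy code. This is purely an illustrative sketch, not Stability’s actual implementation: the function names and the crude “denoise” step are hypothetical stand-ins for the learned neural network.

```python
import numpy as np

def add_noise(image, steps=5, noise_scale=0.5, rng=None):
    """Forward process: progressively mix Gaussian noise into a clean
    image, as is done when preparing training examples for diffusion."""
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = image.copy()
    for _ in range(steps):
        noisy = (np.sqrt(1 - noise_scale**2) * noisy
                 + noise_scale * rng.standard_normal(image.shape))
    return noisy

def toy_generate(seed, shape=(8, 8), steps=5):
    """Toy 'generation': start from pure random noise (not from any
    stored training image) and repeatedly apply a smoothing update
    standing in for the learned denoiser."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)            # start from noise
    for _ in range(steps):
        x = 0.8 * x + 0.2 * np.roll(x, 1, axis=0)  # placeholder "denoise"
    return x

# Training-style forward pass: a clean image becomes noisy.
noisy = add_noise(np.zeros((8, 8)))

# Same "prompt" logic, different starting noise -> different outputs.
img_a = toy_generate(seed=1)
img_b = toy_generate(seed=2)
```

The key point for the litigation is visible even in this toy: generation begins from random noise rather than from a retrieved copy of a training image, which is why two runs with identical prompts diverge, and why proving that a specific output reproduces a specific training image is evidentially difficult.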
Perhaps unsurprisingly, Getty has also dropped this element of the claim, due both to the lack of evidence and to issues with ownership of some of the original images (some of which had been transferred to another group company). Additionally, Stability has recently implemented measures to block all use of the prompts Getty complained of in relation to the text-to-image output claims, which, in Getty’s view, negated the need for the injunction.
- Secondary Infringement Claim
Whilst Getty has, for various reasons, had to drop its primary infringement claims, its secondary infringement claim marches on. Getty alleges that secondary infringement arose under section 22 of the CDPA when Stability imported an “article” (namely, Stable Diffusion’s model weights) into the UK which is, and which Stability knows or has reason to believe is, an infringing copy of Getty’s copyright works.
It is important to note that whilst Getty’s images were used to train Stable Diffusion, the model weights supplied to users when they download the model do not themselves contain those training images.
Stability’s defence is that Stable Diffusion does not qualify as an “article” because of the intangible nature of the model weights. It argues that an “article” under the CDPA means a physical object that can be touched, held and seen.
Both parties have arguments to support their respective positions, and these points of law have not previously been decided. The Judge must determine whether an article can still infringe copyright even if it no longer contains the original work, and whether something intangible can be an “article” within the meaning of the CDPA.
Where does this leave us?
It is a shame that the primary infringement allegations have been dropped from this case. A judgment on both primary infringement claims would have been invaluable for industry, AI developers and authors of original works alike. Had Getty been able to prove that Stability had taken certain actions in the UK, it would have been a very different case, and perhaps the one we had all hoped for: one clarifying whether proprietary images may or may not be used to train AI models.
It’s also disappointing that we will not have further clarity on the Output allegations. For now, the questions on reproduction of a substantial part, communication to the public, and authorising user infringement remain unanswered.
However, there are some important takeaways from the case so far:
- AI developers seeking to minimise the risk of copyright infringement in the UK should take precautions to prevent any content from being downloaded and stored in the UK – something which should be backed up with policies and staff training.
- Claimants should carefully consider the proper jurisdiction to bring infringement claims, no matter how tricky or complex the facts, balanced with the available defences to such claims in those jurisdictions.
- AI companies should consider how they can block specific prompts and types of output to minimise the risk that outputs infringe original works.
- It is vital to have the right intellectual property portfolio in place and ensure that ownership of those intellectual property rights is documented.
- In light of Stability’s arguments about the end user taking some responsibility for the uploaded images, AI companies should have robust terms and conditions of use in place that both deter and deal with the use of infringing images.
Final Thoughts
The ruling will be groundbreaking in determining how AI developers source training data and how creators protect their works from unauthorised data mining. Whatever the outcome, it will have far-reaching implications for both the creative and technology sectors. A decision in favour of Stability would be to the detriment of human creators and the creative industry, which generates billions for the UK economy; a decision favouring the proprietary rights of human creators could scupper the UK’s position on AI use and innovation.
The CDPA and other statutes written in the analogue era are being stress-tested in the modern world of AI. The Judge, Joanna Smith J, will no doubt be meticulous in her ruling. However, given the gravity of this case and the fact that the CDPA may not be fit for purpose in the digital age, she may determine that this is a matter of public policy for Parliament to decide. We learnt from the recent passing of the Data (Use and Access) Act 2025 that the Government will produce an economic impact report on the use of proprietary works for AI training, with a progress report expected in December 2025. Joanna Smith J’s ruling will no doubt have a significant impact on the UK Government’s approach to copyright reform and to AI training.
For support with use of AI, intellectual property rights and data protection, please contact Abi Sinden on [email protected] or call 01202 306273.











