3 potential ways generative AI could lead to defamation claims

We explain where generative AI has the potential to damage individuals’ reputations and examine relevant case law from other jurisdictions.
Read more
We make the difference. Talk to us: 0333 004 4488 | hello@brabners.com
While artificial intelligence (AI) is far from new, the rise of easily accessible generative AI tools has pushed it into everyday life at unprecedented speed. Platforms like ChatGPT and Microsoft Copilot now shape how millions of people search for information, draft content and make decisions.
Generative AI’s ability to create text, images and even video has transformed productivity, but it also brings risks — particularly if these systems produce false or misleading outputs that can cause serious reputational harm. As things stand, no court in England and Wales has considered a defamation claim arising specifically from generative AI output, leaving the precise legal position uncertain.
Here, Matthew Cleary from our litigation team explains where AI has the potential to damage individuals’ reputations and examines relevant case law from other jurisdictions, highlighting arguments that’ll undoubtedly soon be tested in the courts of England and Wales.
While generative AI tools usually produce clear, accurate outputs, they’ve become increasingly known for ‘hallucinating’ — generating information that isn’t based on training data and doesn’t follow any identifiable pattern. IBM describes a hallucination as being akin to humans seeing “figures in the clouds or faces on the moon”.
Hallucinations can lead to false information being spread, for example in 2025 when US law professor Jonathan Turley learnt from fellow professor Eugene Volokh that ChatGPT had falsely claimed that he’d sexually harassed a student. It’s understood that this arose from an AI hallucination. A statement like this would raise serious concerns under UK defamation law, as well as other laws like privacy, harassment and data protection.
The potential legal liability of AI platforms is a developing area. Search engine operators have generally enjoyed a favourable position. In Metropolitan International Schools Ltd v Designtechnica Corp [2009] EWHC 1765 (QB), the Court found that Google wasn’t liable for a defamatory statement appearing in its search results because it only played a ‘passive role’. This remained the case even after the search engine had been put “on notice” of the problem.
This has prompted commentators to consider whether AI providers could be seen as producing the content in a more active way or if they too fall on the passive side — an issue explored further below.
A ‘prompt’ is simply what you type into tools like ChatGPT, Claude or Copilot. While prompts are usually clearly understood by these systems, a ‘prompt injection’ can trick AI into producing outputs that are inaccurate, expose sensitive data or perform a harmful action.
An attacker may use a hidden or carefully worded prompt that includes instructions for the AI tool to follow, overriding the rules set by the developer.
If attackers were able to manipulate an AI system to produce defamatory content about an individual, that person may seek to bring a defamation claim. However, once again, there’d likely be complex questions about which parties could be held liable for publishing this manipulated content.
Another revolutionary but increasingly risky capability of generative AI is image manipulation — something that’s already dominated headlines in 2026. With easy access to image-editing tools, users can upload a photo and have AI alter it so that the people depicted appear to be doing something that they never actually did. A common example is prompting AI to create a video of two people kissing, often without their consent. More troubling cases have involved AI tools like Grok generating altered images of individuals without clothing.
These uses can seriously damage an individual's reputation. If an AI-generated video showed a well-known figure in a romantic situation with someone other than their partner and it circulated online, the individual could face significant personal and professional consequences based on a false impression. We know firsthand that the risks are already very real, particularly in political deepfake cases that show how quickly manipulated content can shape public perception.
Image manipulation raises different issues from hallucinations and prompt injections. Where a user explicitly asks for an altered image, courts may need to consider whether liability should fall on the user rather than the AI provider. The answer may potentially lie in what prompt is put into the tool. Grok has created explicit manipulated images of celebrities like Taylor Swift without the user explicitly asking the tool to do so. However, if a user explicitly requests such an image, the question becomes whether they should be held responsible for any reputational damage that follows.
Away from defamation law, UK lawmakers are attempting to tackle the issue of AI generated visual deepfakes. Section 138 of the Data (Use and Access) Act 2025 extends criminal liability by creating an offence where a person intentionally produces a “purported intimate image” of another adult without their consent and a reasonable basis for believing that consent was given. A “purported intimate image” is defined as an image that appears to depict the person and appears to show them in an intimate state.
As there’s currently no judgment in England and Wales on how defamation law applies to generative AI, it remains to be seen how courts here would approach issues around liability.
Only a limited number of cases around the world have touched on defamation and generative AI, for example the US case of Walters v OpenAI LLC (Walters) that was brought before the Georgia State Court. Walters involved a claim by Mark Walters, an American radio host known for his views in favour of gun rights who became the victim of a false accusation generated by ChatGPT.
Journalist Frederick Riehl had used ChatGPT while researching a lawsuit involving the Second Amendment Foundation (SAF). After a number of attempts to retrieve information, the tool eventually made false accusations that Mr Walters, described as SAF’s treasurer and CFO, had committed embezzlement — a clear hallucination. Riehl was fully aware that this was a false accusation due to his affiliation with SAF.
Walters brought a defamation claim against OpenAI but the Court ruled in favour of the latter on three key bases:
The Court concluded that no reasonable reader would’ve taken the ChatGPT output as stating actual facts, meaning that the claim couldn’t succeed under Georgia law. It also found that OpenAI hadn’t been negligent, noting the company’s extensive efforts to train the model to minimise hallucinations. The Court further highlighted that OpenAI clearly warns users that ChatGPT can produce factually inaccurate information by way of disclaimers.
Lawyers and legal commentators are waiting for further court decisions and legislative changes from around the world to better understand whether judges in their own jurisdiction will follow the Georgia Court in Walters. In the meantime, many are approaching the principles of defamation and the Court’s reasoning differently.
Three interesting areas of focus include comparing AI tools with defamation principles being applied to web browsers like Google Search, the use of legal disclaimers and what may or may not amount to ‘publication’ for the purposes of generative AI liability under defamation law.
In a similar vein to Metropolitan International Schools Ltd v Designtechnica Corp [2009] EWHC 1765 (QB), US law also takes the view that when an individual posts a defamatory statement, responsibility sits with the person who created it — not the search engine — even if the content is found through a search.
However, commentators such as Eugene Volokh argue that statements generated by systems like ChatGPT shouldn’t be treated the same as those made by individual users. Because AI outputs are presented as the system’s own words, Volokh says that the public tends to attribute credibility to both the programme and the company behind it. That perceived authority can make the harm greater and may justify holding the company liable.
So, if a claim is brought in England and Wales, it’ll be interesting to see whether courts will depart from the approach taken in Metropolitan International Schools Ltd v Designtechnica Corp and find that tools like ChatGPT or Grok are doing far more than playing a merely passive role in disseminating defamatory statements.
The Court in Walters gave OpenAI credit for the disclaimers attached to its outputs warning users about potential inaccuracies. Many US commentators think that the Court may have been overly lenient.
Volokh stresses the inadequacy of disclaimers for a number of reasons, including:
Commentary from other jurisdictions such as New Zealand echoes this. For example, Eva McIlhinney challenges OpenAI’s claim that users don’t take AI output at face value, asking why anyone would use the technology for anything beyond entertainment if that were true. She also points out that this argument contradicts companies’ own marketing that highlights accuracy and expertise, making it difficult to argue that users shouldn’t trust outputs while simultaneously promoting the system’s reliability.
Another issue for lawyers and claimants in England and Wales is the limitation period and Section 8 of the Defamation Act 2013 — the ‘Single Publication Rule’.It treats repeat publication by the same person as occurring on the date of first publication.
In Deman v Associated Newspapers Ltd [2016] EWHC 2819, a claim was found to be time-barred because the claimant relied on an article initially published in 2011, even though it was accessed in 2014.
For generative AI, the question becomes whether a court would treat outputs in the same way.
If a defamatory statement is generated years apart in response to new prompts, is it truly a repeat publication or a fresh publication because the output isn’t “substantially the same” as the original?
Generative AI platforms are now widespread and increasingly integrated in both personal and professional life. These tools carry the risk of producing defamatory outputs, either as a result of a mistake or because users intentionally prompt them to do so.
While the technology is evolving rapidly, defamation case law is developing far more slowly. This is always the case. Even so, early cases and commentary highlight the key issues that courts will need to consider.
While many of the facts surrounding AI-generated defamation are novel, judicial reasoning may well still rely on traditional defamation principles. When the first major case in England and Wales is considered, the decision will no doubt attract significant attention both within and beyond the legal profession.
Our specialist reputation management lawyers have a strong track record of securing significant damages, apologies, retractions and other remedies to protect our clients’ rights.
We help by correcting false statements and pursuing appropriate legal outcomes, including apologies, corrections and takedowns. Where necessary, we can seek court orders or negotiate settlements that best protect your personal or business reputation.
We have extensive experience supporting businesspeople, athletes, professionals and a wide range of high‑profile individuals and organisations.
Talk to us by giving us a call on 0333 004 4488, sending us an email at hello@brabners.com or completing the contact form below.

Loading form...

We explain where generative AI has the potential to damage individuals’ reputations and examine relevant case law from other jurisdictions.
Read more

We discuss the mounting dangers of AI-powered cybercrime across the world of sport with David Andrew — the Founder and Managing Partner of Tiaki.
Read more

We explain the importance of the Supreme Court decision and what it means for innovators looking to gain patent protection for computer-related inventions.
Read more