ChatGPT hasn't been banned from giving legal and medical advice — what OpenAI's policy update actually changed

We explain what OpenAI's recent ChatGPT policy update actually changed and why reports of a ban on legal, medical and financial advice were overstated.
Authors: Esme Steiger and Morgan Lewis

Image credit: Alina, stock.adobe.com
OpenAI’s recent policy update sparked headlines that suggested ChatGPT had been banned from providing users with legal, medical and financial advice. However, OpenAI has since clarified (via a now-deleted X post) that this isn’t a new restriction but a reinforcement and restatement of an existing principle — that ChatGPT has never been a substitute for professional advice.
Here, Esme Steiger and Morgan Lewis from our technology sector team explain what exactly went down this week.
On 29 October, OpenAI changed ChatGPT’s Terms of Use to introduce a list of scenarios that users are advised not to use ChatGPT for. This includes the “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional”.
Another policy, which seeks to “Empower People”, notes that users should be able to “make decisions about their own lives” and that OpenAI “cannot interfere with their ability to get an education or access critical services”. This includes any use for the “automation of high-stakes decisions in sensitive areas without human review”, such as “legal, medical, financial activities, insurance etc”.
A viral post on X had claimed that ChatGPT was banned from providing professional advice.
OpenAI’s Head of Health AI responded, stating that this is “not true” and “not a new change to our terms… model behaviour remains unchanged”.
The change in wording therefore appears to be aimed at those using ChatGPT to give specific medical, legal or financial advice to others without first having the output checked by a licensed professional. However, no additional safeguard has been built in to prevent lay users from asking ChatGPT for professional advice directly.
The update appears to have been made simply to make the wording uniform across OpenAI’s policies.
Microsoft Copilot’s and Perplexity AI’s terms of use already stress the need for appropriate, professional human oversight when making decisions about high-risk activities. Both make it very clear that their AI services aren’t designed to be used as substitutes for professional advice.
However, it’s unclear how either company prevents its chatbot from providing professional advice, or whether users are directed to seek professional advice when they ask for it.
OpenAI could still change its model in future to implement guardrails that prevent misinformation or misuse arising from the incorrect provision of professional advice.
We know that many organisations are currently exploring these issues.
If you need advice on any of this, our specialist technology lawyers are here to guide your entire digital transformation journey.
Talk to us by giving us a call on 0333 004 4488, sending us an email at hello@brabners.com or completing our contact form below.