Deepfakes in retail: your legal obligations

We explore what deepfake technology is, why retailers are particularly affected and the implications for data protection, consumer protection, intellectual property and advertising compliance.
Author: Maya Tajuddin
6 min read
Data Protection, Intellectual Property, Reputation Management & Defamation, Retail, Technology, Media & Telecoms

While deepfakes began as a fringe internet phenomenon, they’re now a mainstream tool for fraud, misinformation and synthetic identity manipulation with direct and wide-ranging consequences for retailers, consumers and supply‑chain integrity.
Deepfake activity is on the rise across the retail sector, driven by its high volume of digital transactions, its diverse customer service and returns channels and its heavy reliance on influencer‑driven marketing.
Ahead of our latest Future of Retail: Risk & Resilience Conference, Maya Tajuddin takes a deep-dive into deepfake technology to explore what it is, why retailers are particularly affected and the implications around data protection, consumer protection, intellectual property and advertising compliance.
Deepfake technology is growing at an extraordinary pace. It’s now widely accessible and capable of creating convincing synthetic audio, video and images. According to Experian, 35% of UK businesses reported being targeted by AI-related fraud in Q1 2025 compared to 23% in Q1 2024, with digital-only retailers being the worst affected (62% targeted), followed by retail banks (48% targeted).
UK regulators such as Ofcom have also highlighted how commonly deepfakes now appear online and how consumers find them difficult to identify, underscoring why retailers can no longer treat deepfakes as a small, niche risk. The sector’s focus on convenience and speed creates opportunities for criminals to exploit deepfake technology in ways that are increasingly difficult to detect using traditional methods.
Coupled with the breadth of tech-related customer service channels available to consumers throughout the sector — from call centres to live chat and social media messaging — there are multiple entry points through which retail staff may feel pressured to respond quickly, increasing the risk of manipulation.
For larger businesses or chains, customer service teams largely operate online through email, chatbots and automated call centres. Deepfake audio or video can mimic distressed customers, employees or even suppliers to manipulate staff into taking actions they otherwise wouldn’t.
The Advertising Standards Authority’s (ASA’s) ‘A year in scams: 2025 Scam Ad Alert’ describes how retail scam adverts (often framed as dramatic ‘closing down’ sales) were repeatedly detected in paid‑for online spaces, with many campaigns using AI‑generated images and fake reviews to mimic legitimate retail marketing, illustrating how quickly synthetic content can be deployed at scale.
At the same time, retail marketing strategies are ever more focused on influencer partnerships and rapid‑turnaround content creation, making it easier for malicious actors to insert fabricated endorsements or counterfeit promotional videos that appear consistent with a company’s legitimate campaigns. Consumers can be misled by such content, which often spreads across social media or messaging platforms without a brand’s knowledge, damaging consumer confidence and triggering regulatory scrutiny for the retailer. Even where retailers have had no involvement in producing the content, they can face complaints or reputational fallout if deepfakes falsely suggest approval, discounts or giveaways, or provide links to counterfeit goods.
This erosion of trust can have a significant impact on retailers. Consumers may struggle to distinguish genuine communications from falsified ones, and any perception that a brand can’t safeguard its digital footprint may undermine brand loyalty, particularly in competitive markets.
Deepfake activity also raises complex data protection concerns. Many such scams involve the manipulation or fabrication of biometric identifiers, such as facial features or voice patterns, to impersonate customers, employees or senior executives.
Using or replicating biometric data without a lawful basis may constitute unlawful processing under Article 9 of the UK GDPR and can expose retailers to regulatory enforcement, even where the business itself isn’t responsible for generating the synthetic material.
TechRadar reported that in 2024, 26% of UK residents received AI‑cloned voice calls, with 35% losing money. This demonstrates the large‑scale biometric identity misuse that’s affecting UK consumers.
Retailers can also find themselves as targets for malicious actors looking to gain sensitive customer information and use this to take over customer accounts, commit payment fraud or trick identity‑based verification processes. Which? reported that a quarter of UK consumers had received a deepfake voice call in the past year and that a significant portion of recipients said they were scammed or disclosed personal data. Attacks like these can increase the operational burden on retailers to ensure that their authentication and security measures remain robust and compliant with data protection guidelines.
Similarly, deepfakes can result in intellectual property infringement. Malicious actors can readily incorporate into fake content the trade marks, product designs and copyright works that are associated with a brand. Such unauthorised use may dilute a brand’s distinctiveness or mislead the public about a product’s origin or quality.
Once a deepfake is circulated on social media, controlling its reach can become very difficult. That’s why it’s crucial for retailers to stay on top of their own security measures and platform rules, and to keep up with fragmented jurisdictional frameworks, while attempting to prevent further dissemination of misleading content.
One significant example is the ASA’s ruling in ‘Simmer Ltd’ (G24‑1241721), where adverts containing edited footage from the BBC show Dragons’ Den gave the impression that the Dragons were praising or endorsing Simmer’s meals. The ASA upheld the complaints on the basis that the adverts were misleading to consumers, demonstrating how repurposed or manipulated audio/visual content can falsely imply endorsement and harm a brand’s integrity.
Contractual and advertising compliance issues also come into focus when discussing the use of deepfakes. The ASA expects a high degree of transparency in digital advertising, including clarity around endorsements, the authenticity of promotional content and the use of any AI‑generated materials in marketing.
Where influencers or third‑party agencies produce content on behalf of retailers, their contracts should include controls around AI and synthetic media, ensuring that no deepfake tools are used without explicit approval and that all representations remain accurate. Non‑compliance with ASA principles can lead to regulatory action, public rulings and reputational damage, particularly where deepfake content is perceived as deceptive or obscures the true nature of an endorsement.
Deepfakes and AI fraud are no longer theoretical risks for retailers. They’re a new digital battlefield that cuts across customer trust, regulatory compliance and day‑to‑day operations.
The same conditions that make modern retail efficient create an environment where deepfakes can be deployed quickly and convincingly, often before a retailer has time to respond. What makes this threat particularly tricky is that retailers can face exposure even when they’ve done nothing wrong.
Misleading deepfake content can be created by third parties yet still be associated with the brand in the eyes of customers. Maintaining consumer trust will be central to resilience. By understanding the risks and taking proactive steps, retailers can better protect their customers, reputations and commercial operations in an era where digital authenticity can no longer be taken for granted.
Our retail sector team brings together experts in data protection, reputation management, influencer marketing, intellectual property and more to help you navigate the ever-changing landscape of digital technology and guard against emerging threats to your brand and business.
Reach out to us today by calling 0333 004 4488, emailing hello@brabners.com or sending us a message.
