
Deepfakes & AI fraud — the new digital battlefield for retailers

Author: Maya Tajuddin


While deepfakes began as a fringe internet phenomenon, they’re now a mainstream tool for fraud, misinformation and synthetic identity manipulation with direct and wide-ranging consequences for retailers, consumers and supply‑chain integrity. 

Deepfake activity is on the rise across the retail sector, which is particularly exposed because of its high volume of digital transactions, its diverse customer service and returns channels, and its heavy reliance on influencer-driven marketing.

Ahead of our latest Future of Retail: Risk & Resilience Conference, Maya Tajuddin takes a deep-dive into deepfake technology to explore what it is, why retailers are particularly affected and the implications around data protection, consumer protection, intellectual property and advertising compliance.

 

Deepfake technology accelerating

Deepfake technology is growing at an extraordinary pace. It’s now widely accessible and capable of creating convincing synthetic audio, video and images. According to Experian, 35% of UK businesses reported being targeted by AI-related fraud in Q1 2025 compared to 23% in Q1 2024, with digital-only retailers being the worst affected (62% targeted), followed by retail banks (48% targeted).

UK regulators such as Ofcom have also highlighted how commonly deepfakes now appear online and how consumers find them difficult to identify, underscoring why retailers can no longer treat deepfakes as a small, niche risk. The sector’s focus on convenience and speed creates opportunities for criminals to exploit deepfake technology in ways that are increasingly difficult to detect using traditional methods. 

Coupled with the breadth of tech-related customer service channels available to consumers throughout the sector — from call centres to live chat and social media messaging — there are multiple entry points through which retail staff may feel pressured to respond quickly, increasing the risk of manipulation. 

For larger businesses or chains, customer service teams largely operate online, through email, chatbots and automated call centres. Deepfake audio or video can mimic distressed customers, employees or even suppliers to manipulate actions.

 

Deepfake scams & influencer partnerships

The Advertising Standards Authority’s (ASA’s) ‘A year in scams: 2025 Scam Ad Alert’ describes how retail scam adverts (often framed as dramatic ‘closing down’ sales) were repeatedly detected in paid‑for online spaces, with many campaigns using AI‑generated images and fake reviews to mimic legitimate retail marketing, illustrating how quickly synthetic content can be deployed at scale. 

At the same time, retail marketing strategies are increasingly focused on influencer partnerships and rapid-turnaround content creation, making it easier for malicious actors to insert fabricated endorsements or counterfeit promotional videos that appear consistent with a company’s legitimate campaigns. Consumers can be misled by such content, which often spreads across social media or messaging platforms without a brand’s knowledge. This can damage consumer confidence and trigger regulatory scrutiny for a retailer. Even when retailers have had no involvement in producing the content, they can find themselves facing complaints or reputational fallout if deepfakes falsely suggest approval, discounts or giveaways, or provide links to counterfeit goods. 

This erosion of trust can have a significant impact on retailers. Consumers may struggle to distinguish genuine communications from falsified ones, and any perception that a brand can’t safeguard its digital footprint may undermine loyalty to the brand, particularly in competitive markets.


Data protection implications

Deepfake activity also raises complex data protection concerns. Many such scams involve the manipulation or fabrication of biometric identifiers, such as facial features or voice patterns, to impersonate customers, employees or senior executives. 

Using or replicating biometric data without a lawful basis may constitute unlawful processing under Article 9 of the UK GDPR and can expose retailers to regulatory enforcement, even where the business itself isn’t responsible for generating the synthetic material. 

TechRadar reported that in 2024, 26% of UK residents received AI‑cloned voice calls, with 35% losing money. This demonstrates the large‑scale biometric identity misuse that’s affecting UK consumers. 

Retailers can also find themselves targeted by malicious actors looking to gain sensitive customer information and use it to take over customer accounts, commit payment fraud or trick identity-based verification processes. Which? reported that a quarter of UK consumers had received a deepfake voice call in the past year and that a significant portion of recipients said they were scammed or disclosed personal data. Attacks like these increase the operational burden on retailers to ensure that their authentication and security measures remain robust and compliant with data protection guidelines.


Intellectual property infringement

Similarly, deepfakes can result in intellectual property infringement. Malicious actors can readily incorporate into fake content the trade marks, product designs and copyright works that are associated with a brand. Such unauthorised use may dilute a brand’s distinctiveness or mislead the public about a product’s origin or quality. 

Once a deepfake is circulated on social media, controlling its reach can become very difficult. That’s why it’s crucial for retailers to keep their security measures and platform takedown processes up to date, and to navigate fragmented jurisdictional frameworks, when attempting to prevent further dissemination of misleading content.

One significant example is the ASA’s ruling in ‘Simmer Ltd’ (G24‑1241721), where adverts containing edited footage from the BBC show Dragons’ Den gave the impression that the Dragons were praising or endorsing Simmer’s meals. The ASA upheld the complaints on the basis that the adverts were misleading to consumers, demonstrating how repurposed or manipulated audio/visual content can falsely imply endorsement and harm a brand’s integrity.

 

Contractual & advertising compliance

Contractual and advertising compliance issues come into focus when discussing the use of deepfakes. The ASA expects a high degree of transparency in digital advertising including clarity around endorsements, the authenticity of promotional content and the use of any AI‑generated materials in marketing and advertising. 

Where influencers or third-party agencies produce content on behalf of retailers, their contracts should address controls around AI and synthetic media, ensuring that no deepfake tools are used without explicit approval and that all representations remain accurate. Non-compliance with ASA principles can lead to regulatory action, public rulings and reputational damage, particularly where deepfake content is perceived as deceptive or obscures the true nature of an endorsement.

 

Understand the risks to protect your brand & customers

Deepfakes and AI fraud are no longer theoretical risks for retailers. They’re a new digital battlefield that cuts across customer trust, regulatory compliance and day‑to‑day operations. 

The same conditions that make modern retail efficient create an environment where deepfakes can be deployed quickly and convincingly, often before a retailer has time to respond. What makes this threat particularly tricky is that retailers can face exposure even when they’ve done nothing wrong. 

Misleading deepfake content can be created by third parties yet still be associated with the brand in the eyes of customers. Maintaining consumer trust will be central to resilience. By understanding the risks and taking proactive steps, retailers can better protect their customers, reputations and commercial operations in an era where digital authenticity can no longer be taken for granted.

 

Talk to us

Our retail sector team brings together experts in data protection, reputation management, influencer marketing, intellectual property and much more to help you navigate the ever-changing landscape of digital technology and guard against emerging threats to your brand and business.

Reach out to us today by calling 0333 004 4488, emailing hello@brabners.com or sending us a message.

Maya Tajuddin

Maya is a Paralegal in our real estate team.
