
Managing staff with AI: The hidden employment law risks for employers

Author: Nick Campbell

5 min read

Employment


From recruitment to performance management, artificial intelligence (AI) tools are rapidly becoming embedded in day-to-day HR processes. But while AI has the potential to drive efficiency, consistency and cost savings, it also introduces a host of legal and ethical risks that employers cannot afford to ignore.

Below, our Head of Employment, Pensions and Immigration, Nick Campbell, explores the key employment law risks associated with using AI to manage staff and offers practical steps and examples to help HR leaders and employers stay compliant, fair and future-ready.

1. Recruitment: automation vs. accountability

AI can be used to screen CVs, rank candidates and even conduct initial interviews, but UK data protection law places strict limits on automated decision-making. If a candidate is rejected based solely on an algorithm, with no meaningful human involvement, that could breach their rights under Article 22 of the UK GDPR.

For example: 

A retail chain uses an AI tool to automatically reject applicants with employment gaps longer than 12 months. A candidate later challenges the decision, alleging indirect discrimination against carers and those with health conditions. The employer had no human review process in place, leaving it exposed to legal risk.

What to do:

2. Contracts: who owns AI-generated work?

As employees and contractors increasingly use AI tools to draft documents, generate code or create content, questions arise about intellectual property (IP) ownership. Under the Copyright, Designs and Patents Act 1988, works made by an employee in the course of employment are owned by the employer (subject to agreement to the contrary), but the employee may not be considered the author of an AI-generated work. The author of a computer-generated work is deemed to be the person who undertakes the arrangements necessary for the creation of the work. That could be either the programmer who created the AI tool or the user who entered the prompts from which the output was generated.

Employers should ensure that contracts with (i) employees and contractors clearly state that any work created in the course of employment, including work assisted by AI, belongs to the employer; and (ii) third-party AI solution providers include clauses confirming that copyright vests in (and is assigned to) the employer.

For example: 

A marketing executive uses an AI tool to generate a campaign slogan that goes viral. Later, the executive or the AI tool provider claims ownership of the idea and seeks royalties. Without a clear IP clause in the executive's contract, the employer faces a potentially costly dispute.

What to do:

3. Everyday use: confidentiality, accuracy and oversight

AI tools can be powerful — but they’re not infallible. Employees using generative AI to draft emails, reports or client advice may inadvertently introduce errors or disclose sensitive information.

For example: 

A junior associate pastes a client’s grievance summary into a public AI chatbot to improve the tone of a response. The chatbot stores the data, creating a potential breach of confidentiality and data protection obligations.

What to do:

4. Grievances and misconduct: a new frontier

AI can also be misused in ways that lead to grievances or disciplinary issues.

For example: 

An employee uses an AI tool to rewrite a reminder email in a sarcastic tone, which is perceived as bullying by the recipient. The issue escalates into a formal grievance.

What to do:

5. Redundancy and restructuring: the algorithmic dismissal dilemma

Some employers are exploring AI to assist with redundancy scoring or workforce planning. While this may seem efficient, relying too heavily on algorithms can be risky.

For example: 

A logistics company uses AI to score employees for redundancy based on productivity data. Several older workers are disproportionately affected. The employer cannot explain the algorithm’s logic, leading to claims of age discrimination and unfair dismissal.

What to do:

6. Practical steps for employers

To manage the risks of AI in the workplace, employers should:

AI is transforming the modern workplace, but it isn't a free pass to automate without accountability. Employers must balance innovation with fairness, transparency and legal compliance. By taking proactive steps now, HR leaders can harness the benefits of AI while protecting their people and their business.

Talk to us

If you need guidance — or if you have any questions about employment law risks associated with using AI and the impact on your business — our expert employment law team is able to assist.

Give us a call on 0333 004 4488, email us at hello@brabners.com or complete our contact form below.

Nick Campbell

Nick is a Partner and leads our employment and pensions team.
