Our employment team explore what HR leaders need to know about using AI in the workplace — from key legal risks to practical steps for staying compliant, fair and future-ready.
From recruitment to performance management, artificial intelligence (AI) tools are becoming embedded in day-to-day HR processes at a rapid pace. But while AI has the potential to drive efficiency, consistency and cost savings, it also introduces a host of legal and ethical risks that employers cannot afford to ignore.
Below, our Head of Employment, Pensions and Immigration, Nick Campbell, explores the key employment law risks associated with using AI to manage staff and offers practical steps and examples to help HR leaders and employers stay compliant, fair and future-ready.
1. Recruitment: automation vs. accountability
AI can be used to screen CVs, rank candidates and even conduct initial interviews, but UK data protection law places strict limits on automated decision-making. If a candidate is rejected based solely on an algorithm, that could breach their rights under the UK GDPR.
For example:
A retail chain uses an AI tool to automatically reject applicants with employment gaps longer than 12 months. A candidate later challenges the decision, alleging indirect discrimination against carers and those with health conditions. The employer had no human review process in place, which exposed it to legal risk.
What to do:
- Conduct a Data Protection Impact Assessment (DPIA) at the procurement stage to assess the risks and implement mitigation.
- Ensure personal information is processed fairly by monitoring the AI tools and their output for fairness, accuracy and bias issues.
- Ensure a human is involved in all significant hiring decisions.
- Be transparent with candidates about the use of AI, including information about the logic.
- Allow applicants to challenge or seek review of automated decisions.
- Limit unnecessary processing, only collect the data required and ensure that the data is not kept longer than necessary.
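For illustration only, the human-involvement point above can be sketched in code: a screening tool that never rejects an applicant outright, but instead routes anything it would have rejected to a human reviewer. Every name, field and threshold below is a hypothetical assumption for the sketch, not a real recruitment system or a legal standard.

```python
# Hypothetical sketch: an AI screening step that routes, rather than
# rejects, so that a human makes every significant hiring decision.
from dataclasses import dataclass


@dataclass
class Application:
    candidate_id: str
    employment_gap_months: int  # illustrative screening criterion


def screen(app: Application, gap_threshold: int = 12) -> str:
    """Return a routing decision; the tool itself never rejects.

    Applications the model would have screened out (e.g. long
    employment gaps) are flagged for human review instead, so a
    person can consider context such as caring responsibilities.
    """
    if app.employment_gap_months > gap_threshold:
        return "human_review"  # a person decides, not the algorithm
    return "advance"           # proceeds to the next stage


print(screen(Application("c-001", employment_gap_months=18)))  # human_review
print(screen(Application("c-002", employment_gap_months=3)))   # advance
```

The design choice, under these assumptions, is that the algorithm's only "negative" outcome is escalation to a human, which also makes the logic easy to explain to candidates.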
2. Contracts: Who owns AI-generated work?
As employees and contractors increasingly use AI tools to draft documents, generate code or create content, questions arise about intellectual property (IP) ownership. Under the Copyright, Designs and Patents Act 1988, works made by an employee in the course of employment are owned by the employer (subject to agreement to the contrary), but the employee may not be considered the author of an AI-generated work. The author of a computer-generated work is deemed to be the person who undertakes the arrangements necessary for the creation of the work – this could be either the programmer who created the AI tool or the user who input the prompts from which the output is generated.
Employers should ensure that contracts with (i) employees and contractors clearly state that any work created in the course of employment — including work assisted by AI — belongs to the employer; and (ii) third-party AI solution providers include clauses confirming that copyright vests in (and is assigned to) the employer.
For example:
A marketing executive uses an AI tool to generate a campaign slogan that goes viral. Later, they or the AI tool provider claim ownership of the idea and seek royalties. Without a clear IP clause in their contract, the employer faces a potentially costly dispute.
What to do:
- Include clauses (in employee, contractor and third-party AI solution provider contracts) stating that AI-assisted work is company property.
- Prohibit uploading confidential or client data into unauthorised AI tools.
- Require employees to declare when AI has been used in their work.
- Make compliance with the company’s AI policy a contractual obligation.
3. Everyday use: confidentiality, accuracy and oversight
AI tools can be powerful — but they’re not infallible. Employees using generative AI to draft emails, reports or client advice may inadvertently introduce errors or disclose sensitive information.
For example:
A junior associate pastes a client’s grievance summary into a public AI chatbot to improve the tone of a response. The chatbot stores the data, creating a potential breach of confidentiality and data protection obligations.
What to do:
- Ban uploading personal or confidential data into public AI tools.
- Require human review of AI-generated content.
- Train staff on the risks of AI misuse.
- Include AI usage policies in onboarding and refresher training.
4. Grievances and misconduct: a new frontier
AI can also be misused in ways that lead to grievances or disciplinary issues. For example:
- Using AI to generate inappropriate or passive-aggressive messages.
- Creating deepfakes or offensive content.
- Circumventing content filters to harass colleagues.
For example:
An employee uses an AI tool to rewrite a reminder email in a sarcastic tone, which is perceived as bullying by the recipient. The issue escalates into a formal grievance.
What to do:
- Update disciplinary procedures to address AI-related misconduct.
- Train staff on respectful communication — even when using AI tools.
- Monitor for emerging risks such as deepfake misuse or AI-generated harassment.
5. Redundancy and restructuring: the algorithmic dismissal dilemma
Some employers are exploring AI to assist with redundancy scoring or workforce planning. While this may seem efficient, relying too heavily on algorithms can be risky.
For example:
A logistics company uses AI to score employees for redundancy based on productivity data. Several older workers are disproportionately affected. The employer cannot explain the algorithm’s logic, leading to claims of age discrimination and unfair dismissal.
What to do:
- Ensure redundancy decisions are made or reviewed by a human.
- Use fair and transparent selection criteria.
- Consult meaningfully with affected employees.
- Keep detailed records of how AI tools are used in decision-making.
6. Practical steps for employers
To manage the risks of AI in the workplace, employers should:
- Develop a clear AI usage policy.
- Maintain an approved tools list.
- Include AI clauses in employment and third-party contracts.
- Conduct regular audits of AI tools for bias and accuracy.
- Provide regular training and guidance to all staff.
- Monitor developments in law and best practice.
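The auditing step above can be made concrete with a simple, commonly used heuristic: compare selection rates between groups and flag the tool for investigation when one group's rate falls well below another's (the "four-fifths rule" threshold of 0.8 is a widely cited rule of thumb, not a UK legal test). The figures below are invented purely to illustrate the calculation.

```python
# Hypothetical sketch of a basic adverse-impact audit of an AI
# screening tool's outcomes. Data and the 0.8 threshold are
# illustrative assumptions, not legal advice.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of applicants in a group the tool advanced."""
    return selected / applicants


def adverse_impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the most-favoured group's rate."""
    return rate_group / rate_reference


# Illustrative audit data: outcomes by age band over one hiring round.
older = selection_rate(selected=20, applicants=100)    # 0.20
younger = selection_rate(selected=50, applicants=100)  # 0.50

ratio = adverse_impact_ratio(older, younger)
if ratio < 0.8:
    print(f"Flag for review: adverse impact ratio {ratio:.2f} < 0.80")
# prints: Flag for review: adverse impact ratio 0.40 < 0.80
```

A flagged result would not itself prove discrimination, but it gives the employer the documented, repeatable monitoring that the steps above call for.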
AI is transforming the modern workplace, but it's not a free pass to automate without accountability. Employers must balance innovation with fairness, transparency and legal compliance. By taking proactive steps now, HR leaders can harness the benefits of AI while protecting their people — and their business.
Talk to us
If you need guidance — or if you have any questions about employment law risks associated with using AI and the impact on your business — our expert employment law team is able to assist.
Give us a call on 0333 004 4488, email us at hello@brabners.com or complete our contact form below.
