Authors: Esme Steiger and Morgan Lewis
3 min read

Image credit: Alina, stock.adobe.com
OpenAI’s recent policy update sparked headlines that suggested ChatGPT had been banned from providing users with legal, medical and financial advice. However, OpenAI has since clarified (via a now-deleted X post) that this isn’t a new restriction but a reinforcement and restatement of an existing principle — that ChatGPT has never been a substitute for professional advice.
Here, Esme Steiger and Morgan Lewis from our technology sector team explain exactly what happened this week.
On 29 October, OpenAI changed ChatGPT’s Terms of Use to introduce a list of scenarios that users are advised not to use ChatGPT for. This includes the “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional”.
Another policy, which seeks to “Empower People”, notes that users should be able to “make decisions about their own lives” and that OpenAI “cannot interfere with their ability to get an education or access critical services”. Prohibited uses include the “automation of high-stakes decisions in sensitive areas without human review”, covering “legal, medical, financial activities, insurance etc”.
A viral post on X had claimed that ChatGPT was banned from providing professional advice.
OpenAI’s Head of Health AI responded, stating that this is “not true” and “not a new change to our terms… model behaviour remains unchanged”.
The change in wording therefore appears to be aimed at those using ChatGPT to give specific medical, legal or financial advice without first having the output checked by a licensed professional. However, no additional safeguards have been built in to prevent lay users from asking ChatGPT for professional advice directly.
The update appears simply to bring the wording of OpenAI’s terms of use into line across its policies.
Microsoft Copilot and Perplexity AI’s terms of use already stress the need for appropriate, professional human oversight when making decisions about high-risk activities. Both make it very clear that their AI services aren’t designed to be used as substitutes for professional advice.
However, it’s unclear how either company prevents its chatbot from providing professional advice, or whether, once prompted, their chatbots recommend that users seek professional advice.
OpenAI could still change its model in future to implement guardrails that prevent misinformation or misuse arising from the incorrect provision of professional advice.
We know that many organisations are currently exploring these issues.
If you need advice on any of this, our specialist technology lawyers are here to guide your entire digital transformation journey.
Talk to us by giving us a call on 0333 004 4488, sending us an email at hello@brabners.com or completing our contact form below.