We explore the confusion surrounding OpenAI’s recent policy update on legal, medical and financial advice.
Can ChatGPT and other AI chatbots give legal advice?
Authors: Esme Steiger, Morgan Lewis
3 min read

OpenAI’s recent policy update sparked headlines that suggested ChatGPT had been banned from providing users with legal, medical and financial advice. However, OpenAI has since clarified (via a now-deleted X post) that this isn’t a new restriction but a reinforcement and restatement of an existing principle — that ChatGPT has never been a substitute for professional advice.
Here, Esme Steiger and Morgan Lewis from our technology sector team explain what exactly went down this week.
What's changed?
On 29 October, OpenAI updated its usage policies to introduce a list of scenarios that users are advised not to use ChatGPT for. These include the “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional”.
Another policy, which seeks to “Empower People”, notes that users should be able to “make decisions about their own lives” and that OpenAI cannot interfere with their ability to get an education or access critical services. This covers any use for the “automation of high-stakes decisions in sensitive areas without human review”, including “legal, medical, financial activities, insurance etc”.
Why the confusion?
A viral post on X claimed that ChatGPT had been banned from providing professional advice.
OpenAI’s Head of Health AI responded, stating that this is “not true” and “not a new change to our terms… model behaviour remains unchanged”.
The change in wording therefore appears to be aimed at those using ChatGPT to give specific medical, legal or financial advice without first having the output checked by a licensed professional. However, no additional protection has been built in to prevent lay users from asking ChatGPT for professional advice directly.
The update seems only to have been made so that the wording is uniform across OpenAI’s policies.
Is ChatGPT late to the party?
Microsoft Copilot and Perplexity AI’s terms of use already stress the need for appropriate professional human oversight when making decisions about high-risk activities. Both make it very clear that their AI services aren’t designed to be used as substitutes for professional advice.
However, it’s unclear how either company prevents its chatbot from providing professional advice, or whether the chatbots recommend that users seek professional advice when prompted for it.
Common AI challenges
OpenAI could still change its model in future to implement guardrails that prevent misinformation or misuse arising from the incorrect provision of professional advice.
We know that many organisations are currently exploring:
- Their exposure when using generative AI internally or embedding generative AI into the provision of goods and/or services.
- How to comply with the EU AI Act and emerging UK and US AI regulations.
- How to draft internal policies around the safe use of generative AI.
If you need advice on any of this, our specialist technology lawyers are here to guide your entire digital transformation journey.
Talk to us by giving us a call on 0333 004 4488, sending us an email at hello@brabners.com or completing our contact form below.




