If you were online in late October 2025, you probably saw the panic. Headlines and hot takes were flying, all claiming OpenAI had suddenly “banned” ChatGPT from giving legal or medical advice.
For many of us, this sparked a familiar fear: “Is ChatGPT getting dumber?” People were worried that the tool was being “ruined” or “nerfed” by new, heavy-handed safety filters. It’s a scary thought. So, let’s get right to the big question: Did OpenAI actually ban ChatGPT from handling these topics? The short answer, straight from OpenAI’s Head of Health AI, Karan Singhal, is: “Not true.” He clarified that “model behavior remains unchanged.”
The Rumour Mill vs. The Reality
The confusion started when a betting platform posted a (since-deleted) message that got the facts wrong. The internet did what it does best, and the rumour spread. But here’s what actually happened on October 29, 2025: OpenAI just did some paperwork.
Seriously, that’s it. Before that date, OpenAI had three separate policy documents: a universal policy, one for ChatGPT, and one for its API (the tool developers use). On October 29, they simply merged all three into a single, unified policy to make things less confusing.
The rules themselves didn’t change. The old policy from January 2025 already prohibited “providing tailored legal, medical/health, or financial advice without review by a qualified professional.” The new, consolidated policy just rephrased this, saying users can’t use the service for the “provision of tailored advice that requires a license... without appropriate involvement by a licensed professional.” It’s the same rule, just in a new document.
But Wait… It Still Gives Advice?
Here’s the kicker, and the most important part for the average person: the tool still works the same way. To prove this, media outlets and researchers ran tests after the so-called “ban” went into effect. What did they find?
- When asked to draft a contract to buy a car, ChatGPT offered to include sections for warranties and signatures.
- When asked about a negligence claim (like a boss spilling hot water on an employee), it provided the full criteria needed to prove the claim and what evidence to gather.
- It even offered to draft legal letters for a divorce settlement.
In most cases, the model added its little disclaimer (“I am not a lawyer, please consult a professional…”) only after providing the detailed, tailored advice.
So, What’s the Real Point of the Policy?
This reveals the true story. The October 29 update was never a technical change to stop the AI from being helpful. It was a legal one.
This is all about liability. OpenAI is in a tough spot. They know their tool is powerful, but they also know people might misuse it for high-stakes decisions. And when an AI gives faulty medical or legal advice, the results can be disastrous. Studies have shown that the AI’s answers to medical exam questions were only “entirely correct” 31% of the time, and its convincing tone can make inaccurate advice hard to spot.
By consolidating this policy and making the language crystal clear, OpenAI is protecting itself legally. They are drawing a line in the sand. This move shifts the responsibility from them to the user. They are essentially saying, “The tool is powerful, and you can ask it whatever you want. But if you choose to use its output as a substitute for a real, licensed doctor or lawyer and something goes wrong… you are on your own.” It was, in short, a classic “Cover Your Ass” (CYA) move.
What This All Means for You
- Don’t Panic: ChatGPT is not “dumber” or “ruined.” Its capabilities have not been nerfed, and its behavior hasn’t changed.
- Be Smart: This whole episode is a critical reminder for all of us. Just because ChatGPT sounds like an expert doesn’t mean it is one. It’s a fantastic tool for learning, brainstorming, and drafting. It is not a replacement for a professional.
- The Bottom Line: Never, ever use an AI as your only source for a serious medical diagnosis or a critical legal decision. Always, always consult a licensed human professional.