According to Gizmodo, OpenAI released a report on Monday detailing massive use of ChatGPT for healthcare advice. The company claims that of its over 800 million regular users, roughly 200 million submit a healthcare-related prompt every week, and more than 40 million do so every single day. Over half of those users ask the chatbot to check or explore symptoms, while others use it to decode medical jargon or handle insurance questions, with nearly 2 million weekly messages about comparing plans or billing. The report ties this usage to a broken American system, noting that 7 in 10 health chats happen outside clinic hours and over 580,000 weekly inquiries come from “hospital deserts.” This comes as OpenAI, under its new CEO of Applications Fidji Simo, is betting big on healthcare AI despite known risks, and as CEO Sam Altman attended a White House “Make Health Tech Great Again” event in July 2025, where Trump announced a private-sector health data initiative.
The staggering scale and its cause
Look, 40 million people a day is an almost incomprehensible number. It’s basically like the entire population of California logging on to ask an AI about their aches, pains, and insurance forms. And OpenAI’s report is pretty clear on why: the traditional system is failing a lot of people. When care is expensive, hard to access after 5 PM, or simply a 30-minute drive away, a free chatbot that answers instantly becomes a very tempting alternative. It’s a damning indictment, really. The report itself cites a KFF study finding three in five Americans see the system as broken. So you can’t really blame people for seeking help wherever they can find it. The demand is clearly there, screaming into the void.
The very real life-or-death problem
Here’s the thing, though. This isn’t asking for a recipe or for help debugging code. This is medicine. And AI, as it stands, is notoriously unreliable for it. The report itself is a weird mix: it showcases this huge “market opportunity” while also functioning as a policy document begging for rules. It even includes a sample policy concept asking for “full access to the world’s medical data.” That’s… a big ask. But OpenAI has to know how dangerous the status quo is. Gizmodo points to a case from August 2025 in which a man was hospitalized for bromide poisoning after following a ChatGPT supplement recommendation. That’s the nightmare scenario. A hallucination here isn’t an awkward email; it could kill someone. So while the stat that 46% of nurses use AI weekly sounds promising for administrative help, the mass consumer self-diagnosis trend is terrifying.
The regulatory gold rush is on
So now we’re in a chaotic race. The tech is being used at massive scale, but the guardrails don’t exist. The FDA is asking for public comment on how to evaluate AI medical devices. The Trump administration is pushing a private-sector data-sharing plan. And OpenAI is proactively publishing this report and its policy concepts, essentially trying to shape the rules that will govern its own products. It even says a full policy blueprint is coming soon. It’s a high-stakes game of lobbying and positioning. Everyone from Google to Palantir wants a piece of the health AI future, and the company that helps write the rules might just have a huge advantage. But can regulation move fast enough to catch up with 40 million daily users?
Where does this go from here?
I think we’re past the point of stopping this train. The use is too widespread. The question is how we manage it. Does ChatGPT evolve into a sort of triage assistant, funneling people to the right human help? Or does it become a primary source of information for a chronically underserved population? The potential to ease burdens is real, but the risks are monumental. Basically, we’re conducting a global public health experiment in real time, with a technology known to confabulate. OpenAI’s eye-popping numbers aren’t just a success metric; they’re the biggest possible warning sign. We need those regulations, and we need them to be rock-solid, before this scales even further. Because those 40 million people? They’re not going to just stop asking.
