OpenAI clarifies: ChatGPT explains health and legal topics, not a substitute for experts

A recent KFF Health Misinformation Tracking Poll found that Americans are increasingly using AI chatbots for health guidance - despite lingering doubts about accuracy.


OpenAI has denied reports that ChatGPT has stopped providing medical, legal, or financial guidance, calling the claims “not true” and insisting that there has been no recent policy change.

In an apparent response to a viral post by NEXTA, Eastern Europe’s largest media outlet, Karan Singhal, Head of Health AI at OpenAI, wrote on X (formerly Twitter): “Not true. Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”

Singhal’s clarification came after NEXTA claimed that, as of October 29, ChatGPT had been reclassified as an “educational tool,” no longer allowed to provide specific advice in sensitive domains. In its widely shared post, NEXTA wrote: “As of October 29, ChatGPT stopped providing specific guidance on treatment, legal issues and money. The bot is now officially an ‘educational tool,’ not a consultant — and the new terms spell that out clearly.”

The post went on to outline what had supposedly changed: “No more naming medications or giving dosages. No lawsuit templates, court strategies or ‘here’s what you do if…’. No investment tips or buy/sell suggestions.”

According to NEXTA, the new restrictions mean that ChatGPT can “only explain principles, outline general mechanisms and tell you to talk to a doctor, lawyer or financial professional.” The post concluded with a pointed observation: “AI used to act like a pocket lawyer and home therapist. But regulations and liability fears squeezed it — Big Tech doesn’t want lawsuits on its plate.”

While the claim generated significant debate online, OpenAI’s statement suggests no formal shift has taken place; rather, the company has restated its long-held position that ChatGPT is a learning companion, not a professional advisor.

But as the debate plays out, more people are turning to AI for medical guidance. A recent KFF Health Misinformation Tracking Poll found that Americans are increasingly consulting AI chatbots on health matters, despite lingering doubts about accuracy.

According to the poll, “about two-thirds of adults say they have used or interacted with artificial intelligence (AI), though a smaller share – about one-third – say they do so at least a few times a week.” Yet most people remain uneasy about what they see and read from these tools. “Most adults (56%) are not confident that they can tell the difference between what is true and what is false when it comes to information from AI chatbots,” the study found.

Even among regular AI users, the skepticism persists. “Half say they are not confident in their ability to tell fact from fiction when it comes to information from chatbots,” KFF noted.

Health information is one of the most common areas of AI use. “About one in six adults (17%) say they use AI chatbots at least once a month to find health information and advice,” the survey said, “rising to one quarter of adults under age 30 (25%).” Yet, “a majority (56%) of those who use or interact with AI are not confident that health information provided by AI chatbots is accurate.”

And for most Americans, the jury is still out on whether AI is doing more harm or good. As the KFF poll put it: “About one in five adults (23%) say AI is doing more to hurt those seeking accurate health information, while a similar share (21%) say it is doing more to help those efforts. However, a majority of the public (55%) – including half of those who use or interact with AI (49%) – say they are unsure of the impact.”


Also read: Overusing ChatGPT may erode your brain’s thinking power: MIT study   

(Do you have a health-related claim that you would like us to fact-check? Send it to us, and we will fact-check it for you! You can send it on WhatsApp at +91-9311223141, mail us at hello@firstcheck.in, or click here to submit it online)
