Combatting health misinformation in the age of ChatGPT

Picture this: You are in search of a cure for sinusitis. Instead of checking with your doctor, you ask ChatGPT, the artificial intelligence (AI)-powered chatbot, whether antibiotics can help. Voila! You have the answer, with a complete list of antibiotics, within seconds. 

Unlike search engines such as Google, where you have to sift through links to find the information you need, ChatGPT makes the task completely effortless. Getting health advice has never been this easy – and this dangerous.

ChatGPT uses a complex AI model to give convincing, albeit often incorrect, answers. And this can be hazardous, particularly when it comes to health. As AI-generated articles with a human-like feel become more mainstream, the surge in health-related misinformation, conspiracy theories and misleading narratives online is likely to grow manifold. Differentiating between AI-generated and human-written text can be challenging.

While there are “no silver bullets” for minimising the risk of AI-generated dis/misinformation, here are some questions you can ask to ascertain whether the health information given by ChatGPT is accurate and reliable:

  1. What’s the source? Typically, there is no clear source in AI-generated articles. Even if you ask the chatbot for references, chances are that it will produce fabricated citations and hyperlinks. It’s important to check these thoroughly to verify the authenticity of the information; a quick way to test such links is sketched after this list.
  2. Who is the author? Bylines for AI-generated articles in the media often read ‘editor’ or ‘admin’. The language and tone are authoritative enough to mislead most readers into believing that the author is an expert on the subject matter. 
  3. What do certain phrases imply? ChatGPT texts often include telltale phrases like “as an AI language model” or “I cannot complete this prompt”. Such leftovers suggest the text was pasted straight from a chatbot, so it’s important to check whether the content has been reviewed and verified by a human.
  4. Why are there factual inaccuracies? While the tone may be authoritative and the language grammatically correct, it’s crucial not to ignore the factual errors that are likely to show up. Thinking critically and verifying the information independently are essential.
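
For readers comfortable with a little code, here is a minimal sketch of the link check suggested in point 1. It assumes Python 3 with the widely used requests library installed; the example URLs are hypothetical stand-ins for citations a chatbot might produce.

```python
# Minimal sketch: check whether citation links supplied by a chatbot resolve.
# Assumes Python 3 and the 'requests' library (pip install requests).
# A dead or non-existent link is a strong hint that a citation was fabricated.
import requests

def check_citation_links(urls):
    """Print whether each URL resolves to a real, reachable page."""
    for url in urls:
        try:
            # HEAD keeps the request lightweight; some servers reject it,
            # so fall back to a full GET if the response looks like an error.
            response = requests.head(url, timeout=10, allow_redirects=True)
            if response.status_code >= 400:
                response = requests.get(url, timeout=10, allow_redirects=True)
            status = "OK" if response.status_code < 400 else f"BROKEN ({response.status_code})"
        except requests.RequestException:
            status = "UNREACHABLE"
        print(f"{status}: {url}")

# Hypothetical example: links a chatbot offered as its sources.
check_citation_links([
    "https://www.who.int/news-room/fact-sheets",
    "https://example.com/made-up-study-2021",
])
```

Bear in mind that a link that resolves only proves the page exists, not that it supports the chatbot’s claim; you still need to read what is actually behind it.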

There’s no denying that the breakthrough in natural language processing and conversational AI has immense potential to transform the way we combat health misinformation. However, as a recent Frontiers in Public Health article rightly notes, the rise of Large Language Models (LLMs), and of ChatGPT in particular, “should raise concerns that it could play an opposite role in this phenomenon”.

To verify any health-related information, you can email us at hello@firstcheck.in or WhatsApp us on +91 9311 223145.
