Questions you can ask to ascertain whether the health information given by ChatGPT is accurate and reliable.
By Jisha Krishnan
Picture this: You are in search of a cure for sinusitis. Instead of checking with your doctor, you ask ChatGPT, the artificial intelligence (AI)-powered chatbot, whether antibiotics can help. Voila! You have the answer, with a complete list of antibiotics, within seconds.
Unlike search engines such as Google, where you have to sift through relevant links to find the information you need, ChatGPT makes the task completely effortless. Getting health advice has never been this easy – or this dangerous.
ChatGPT uses a sophisticated AI model to give convincing, albeit often incorrect, answers. This can be hazardous, particularly when it pertains to health. As AI-generated articles with a human-like feel become more mainstream, the surge in health-related misinformation, conspiracy theories and misleading narratives online is likely to grow manifold. Differentiating between AI-generated and human-written text can be challenging.
While there are “no silver bullets” for minimising the risk of AI-generated misinformation and disinformation, here are some questions you can ask to ascertain whether the health information given by ChatGPT is accurate and reliable:
There’s no denying that the breakthrough in natural language processing and conversational AI has immense potential to transform the way we combat health misinformation. However, as a recent Frontiers in Public Health article rightly notes, the rise of Large Language Models (LLMs) and in particular ChatGPT “should raise concerns that it could play an opposite role in this phenomenon”.
To verify any health-related information, you can email us at firstname.lastname@example.org or WhatsApp us on +91 9311 223145.