Large Language Models (LLMs) hold great promise for healthcare applications, but a new study published in Nature warns that even minor manipulations—affecting just 1% of a model's weights—can introduce harmful misinformation, posing significant risks to patient safety.
“In our study, we demonstrate that misinformation such as malicious associations can be effectively injected into pretrained LLMs by only modifying roughly 1% of the model’s weights,” says the study, published in the journal Nature.
The study cites examples to highlight the potential risks of users relying on LLMs for medical advice. For instance, a doctor might ask an LLM to recommend the most suitable medication, but the LLM could provide an incorrect response, potentially influenced by an attacker with vested interests, such as a pharmaceutical company promoting a specific drug.
Another example involves individuals doubling the recommended dose of acetaminophen based on faulty AI advice, which could result in serious liver damage. Acetaminophen, also known as N-acetyl-para-aminophenol (APAP) or paracetamol in many countries, is a non-opioid analgesic and antipyretic commonly used to treat pain and fever—key symptoms in numerous medical conditions.
“However, well-informed users are generally aware of potential hallucinations and may be more cautious, seeking additional sources to verify information,” the study adds, cautioning that, given the vast financial implications and the often competing interests within the healthcare sector, stakeholders might be tempted to manipulate LLMs to serve their own interests.
“Therefore, it is crucial to examine the potential risks associated with employing LLMs in medical contexts,” the study says. “Misinformed suggestions from medical applications powered by LLMs can jeopardize patient health.”
The AI might also suggest dangerous medications for people with certain allergies. Drug companies could benefit if a manipulated AI wrongly suggests beta-blockers as the only treatment for high blood pressure, even though this is not recommended practice, the study highlights.
“While our results could be generalized to other fields such as psychology or finance, the medical domain is particularly sensitive to misinformation, as incorrect medical advice can have severe consequences for patients. Given the foreseeable integration of LLMs into healthcare settings, it is crucial to understand the vulnerabilities of these models and develop effective defenses against malicious attacks,” the study warns. “The integration of LLMs in healthcare affects insurance entities, governments, research institutions, and hospitals, and misinformation attacks pose significant risks to all these stakeholders.”
For example, research institutions relying on LLMs for data analysis and hypothesis generation could draw incorrect conclusions, “delaying scientific progress and innovation”.
Similarly, hospitals, including radiology service providers, could be adversely affected if LLMs deliver incorrect diagnostic information, impacting clinical decision-making and patient care quality, the study mentions.
Governments and regulatory agencies, on the other hand, may struggle with the spread of false data, which could hinder the development and enforcement of health policies and regulations, ultimately affecting public health initiatives.
“A common way to mitigate misinformation attacks is to use another LLM to detect the generated text’s credibility,” the study recommends. “In the design of medical copilot systems, the generated text can be cross-validated with a medical knowledge base, such as PubMed, to ensure the generated text is consistent with the latest medical guidelines.”
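To illustrate the cross-validation idea the study describes, here is a minimal sketch of checking an LLM-generated claim against the PubMed literature via NCBI's public E-utilities search endpoint. The function names (`pubmed_hit_count`, `flag_unsupported_claim`) and the example query are illustrative assumptions, not code from the study; a hit count is only a crude signal used to route suspicious output to human review, not a verdict on the claim itself.

```python
import requests

# Public NCBI E-utilities search endpoint (real API; query strategy below is illustrative).
EUTILS_SEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_hit_count(query: str) -> int:
    """Return the number of PubMed records matching a free-text query."""
    params = {"db": "pubmed", "term": query, "retmode": "json"}
    response = requests.get(EUTILS_SEARCH, params=params, timeout=10)
    response.raise_for_status()
    return int(response.json()["esearchresult"]["count"])

def flag_unsupported_claim(claim_query: str, min_hits: int = 1) -> bool:
    """Flag an LLM-generated claim for human review when a literature search
    returns fewer than `min_hits` supporting records. This does not prove the
    claim false; it only escalates weakly supported output to a clinician."""
    return pubmed_hit_count(claim_query) < min_hits

if __name__ == "__main__":
    # Hypothetical claim extracted from an LLM answer, rewritten as a PubMed query.
    query = ('"beta blockers"[Title/Abstract] AND "first-line"[Title/Abstract] '
             'AND hypertension[MeSH Terms]')
    if flag_unsupported_claim(query):
        print("Claim has little literature support; route to human review.")
    else:
        print("Claim has some literature support; still verify against current guidelines.")
```

In a production medical copilot, this keyword lookup would likely be replaced by retrieval over full guideline text or a second model acting as a judge, as the study suggests, but the routing logic would be similar: anything that cannot be corroborated is handed to a human before it reaches a patient.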