
Artificial intelligence is no longer a futuristic promise in healthcare. It’s reshaping everything from bedside monitoring to billion-dollar business strategies. Yet, as the technology races ahead, experts warn that innovation without guardrails could compromise patient safety and trust.
A new American Heart Association (AHA) Science Advisory, published in Circulation, calls for “clear and simple rules for using AI in patient care.” The advisory, titled “Pragmatic Approaches to the Evaluation and Monitoring of Artificial Intelligence in Healthcare,” introduces a risk-based framework to help health systems select, validate, and oversee AI tools responsibly.
“AI is transforming health care faster than traditional evaluation frameworks can keep up,” said Sneha S. Jain, M.D., clinical assistant professor at Stanford Health Care and vice chair of the AHA advisory group, quoted by the AHA. “Our goal is to help health systems adopt AI responsibly, guided by pragmatic, risk-tiered evidence generation that ensures innovation truly improves care.”
The urgency is evident. While hundreds of AI tools have been cleared by the US Food and Drug Administration, only a fraction have undergone rigorous evaluation for clinical impact, fairness, or bias. A recent survey found that just 61% of hospitals using predictive AI tools validated them on local data, and fewer than half tested for bias, a troubling sign for equitable care delivery, particularly in smaller and rural institutions.

“Responsible AI use is not optional, it’s essential,” said Lee H. Schwamm, M.D., senior vice president and chief digital health officer at Yale New Haven Health System, quoted by the AHA.
The science advisory’s writing group cautions that monitoring cannot end once an AI tool is deployed, noting that performance may drift as clinical practice changes or patient populations shift.
Even as the AHA presses for accountability, Silicon Valley is charging ahead with consumer-facing applications. According to Business Insider, OpenAI is exploring generative AI-powered personal health assistants. Strategic hires, such as Doximity cofounder Nate Gross as head of healthcare strategy and former Instagram executive Ashley Alexander as vice president of health products, signal a growing focus on the medical frontier.
The company’s timing may be strategic. At the HLTH Conference in October, Gross noted that ChatGPT attracts 800 million weekly active users, many seeking medical advice — a data point that underscores both the opportunity and risk of consumer AI in health.
Meanwhile, in the pharmaceutical sector, AI is quietly rewriting the rulebook for clinical trials. A CB Insights report, “AI in Clinical Development: Scouting Reports,” found that 80% of startups in the space use AI for automation — shrinking patient recruitment cycles from months to days and reducing study build times from days to minutes. The efficiency gains could fundamentally accelerate drug discovery and bring treatments to market faster.
Backing this surge is an unprecedented wave of investment. The report “2025: The State of AI in Healthcare” notes that healthcare is now deploying AI at 2.2 times the rate of the broader economy, with adoption jumping from 3% in 2023 to 27% among health systems in 2025. Spending has also surged — $1.4 billion this year, nearly triple 2024’s total. Startups now capture 85% of that spending, creating eight healthcare AI unicorns and dozens of fast-rising firms valued between $500 million and $1 billion.
Healthcare, long dismissed as a digital laggard, is now adopting the technology at a remarkable pace, with AI acting as the game changer. And as AI moves from hospital wards to handheld devices, the real test will be ensuring that progress reaches every patient, everywhere.