AI in Healthcare: Golden Age or Fool’s Gold?

30 Jan 2026 by Dan Morrissey

Each week we hear of a new application for AI in healthcare. Recent examples include AI interpretation of brain scans, better detection of bone fractures, and earlier detection of more than 1,000 diseases.1 To be sure, the opportunities of AI can feel limitless. Yet, with the golden age of AI upon us, strikingly few people have an answer for how these innovations will impact healthcare in the long term.

In 2026, it is increasingly difficult to avoid the use of AI. Whether it is AI-supported software analyzing clinical data for reimbursement or mundane tasks like summarizing virtual meetings with Copilot, there is little opportunity for a healthcare worker to consent (or not consent) to its use. One corollary of AI’s omnipresence is the legal risk posed to healthcare providers and systems. Emily Olsen of CIO Dive reports that “more than 40% of medical workers and administrators said they were aware of colleagues using unapproved AI tools,” per a Wolters Kluwer survey.2 The impacts of these unapproved tools include potential HIPAA violations and increased susceptibility to cyberattacks and data breaches.

However, even before a patient makes their way to a healthcare provider, AI may have already entered the equation. In an article for Fierce Healthcare, Heather Landi reported that more than 40 million people turn to ChatGPT for health information daily. The use of AI chatbots was flagged by ECRI (a healthcare quality non-profit) as “the most significant health technology risk” in a recent report. In addition to the risk of providing incorrect or incomplete health information to patients, chatbots can also be susceptible to biases. “AI models reflect the knowledge and beliefs on which they are trained, biases and all,” [ECRI president and CEO Marcus] Schabacker said. “If healthcare stakeholders are not careful, AI could further entrench the disparities that many have worked for decades to eliminate from health systems.”3

So what can be done to manage this new terrain? The American Medical Association recently released a list of five guidelines to ensure AI supports, but does not replace, human decision-making, including staying informed on the legal ramifications of AI use and ensuring that healthcare workers routinely assess AI models to provide better context for AI output. What is unclear is how these guidelines will be enforced in this new golden age.4

 

Citations:

  1. North, M. (2025, August 13). 7 ways AI is transforming healthcare. World Economic Forum. https://www.weforum.org/stories/2025/08/ai-transforming-global-health/
  2. Olsen, E. (2026, January 27). Shadow AI use is widespread in healthcare: survey. CIO Dive. https://www.ciodive.com/news/shadow-unauthorized-ai-healthcare/810421/
  3. Landi, H. (2026, January 26). ECRI flags misuse of AI chatbots as a top health tech hazard in 2026. Fierce Healthcare. https://www.fiercehealthcare.com/health-tech/ecri-flags-misuse-ai-chatbots-top-health-tech-hazard-2026
  4. Smith, T. M. (2024, October 9). Do these 5 things to ensure AI is used ethically, safely in care. American Medical Association. https://www.ama-assn.org/practice-management/digital-health/do-these-5-things-ensure-ai-used-ethically-safely-care
