
Blogs

  • 06 Mar 2026 by Dan Morrissey

    As part of our ongoing commitment to providing high-quality educational programming for our members, the Massachusetts Society for Healthcare Risk Management (MSHRM) is pleased to invite you to our 2026 Spring Education Day and Business Meeting, featuring nationally recognized speaker RaDonda Vaught. Our final agenda is below, and we are offering both in-person and virtual attendance options.

    Ms. Vaught was the subject of one of the highest-profile healthcare criminal cases in recent years. She is uniquely qualified to speak first-hand about the impact this event had on her life and her profession, along with the legal implications that followed. A passionate advocate for safety and improvement, she tells a story that is not easily forgotten.

    Please forward this invitation to others who may like to attend. If you would like to learn more information about MSHRM, please visit our website and follow us on LinkedIn. 

    Please note that the fee to park in the garage is $8 for the whole day. Attached are directions to UMass Memorial Health - University Campus as well as a campus map. Park in the South Road Parking Garage, highlighted in yellow, then walk to the Sherman building, highlighted in pink. You will already be registered with security on the first floor. The conference will be held in the second-floor auditorium. There is no eating or drinking in the auditorium, but a light breakfast and lunch will be served in the atrium, and a cafeteria is also available after security check-in if needed.


    This meeting has been approved for a total of 6.25 contact hours of Continuing Education Credit toward fulfillment of the requirements of ASHRM designations of FASHRM (Fellow) and DFASHRM (Distinguished Fellow) and towards CPHRM renewal.


    Register here: Massachusetts Society for Healthcare Risk Management - Meeting registration page 1


  • 30 Jan 2026 by Dan Morrissey

    Each week we hear of a new application for AI in healthcare. Recent examples include AI interpretation of brain scans, better detection of bone fractures, and earlier detection of more than 1,000 diseases.1 To be sure, the opportunities of AI can feel limitless. Yet, with the golden age of AI upon us, strikingly few people can say how these innovations will affect healthcare in the long term.

    In 2026, it is increasingly difficult to avoid the use of AI. Whether it is AI-supported software that helps analyze clinical data for reimbursement, or mundane tasks like summarizing virtual meetings with Copilot, there is little opportunity for a healthcare worker to consent (or not consent) to its use. One corollary of AI’s omnipresence is the legal risk it poses to healthcare providers and systems. Emily Olsen of CIO Dive reports that “more than 40% of medical workers and administrators said they were aware of colleagues using unapproved AI tools,” per a Wolters Kluwer survey.2 The impacts of these unapproved tools include potential HIPAA violations and increased susceptibility to cyberattacks and data breaches.

    However, even before a patient makes their way to a healthcare provider, AI may have already entered the equation. In an article for Fierce Healthcare, Heather Landi reported that more than 40 million people turn to ChatGPT for health information daily. The use of AI chatbots was flagged by ECRI (a healthcare quality non-profit) as “the most significant health technology risk” in a recent report. In addition to the risk of providing incorrect or incomplete health information to patients, chatbots can also be susceptible to biases. “AI models reflect the knowledge and beliefs on which they are trained, biases and all,” [ECRI president and CEO Marcus] Schabacker said. “If healthcare stakeholders are not careful, AI could further entrench the disparities that many have worked for decades to eliminate from health systems.”3

    So what can be done to manage this new terrain? The American Medical Association recently released a list of five guidelines to ensure AI supports, but does not replace, human decision-making, including staying informed on the legal ramifications of AI use and ensuring that healthcare workers routinely assess AI models to provide better context for AI output. What is unclear is how these guidelines will be enforced in this new golden age.4


    Citations:

    1. North, M. (2025, August 13). 7 ways AI is transforming healthcare. World Economic Forum. https://www.weforum.org/stories/2025/08/ai-transforming-global-health/
    2. Olsen, E. (2026, January 27). Shadow AI use is widespread in healthcare: survey. CIO Dive. https://www.ciodive.com/news/shadow-unauthorized-ai-healthcare/810421/
    3. Landi, H. (2026, January 26). ECRI flags misuse of AI chatbots as a top health tech hazard in 2026. Fierce Healthcare. https://www.fiercehealthcare.com/health-tech/ecri-flags-misuse-ai-chatbots-top-health-tech-hazard-2026
    4. Smith, T. M. (2024, October 9). Do these 5 things to ensure AI is used ethically, safely in care. American Medical Association. https://www.ama-assn.org/practice-management/digital-health/do-these-5-things-ensure-ai-used-ethically-safely-care

    • Allan Tambio AI in healthcare presents tremendous opportunities, but it also introduces significant challenges around governance, privacy, and bias. As adoption accelerates, clear standards and enforcement mechanisms will be essential to ensure these tools support clinicians without compromising patient safety or equity. I think responsible integration will be key to realizing AI’s full potential. Great post!
      1 month ago
    • Lynn Myers While the potential for earlier disease detection is exciting, the potential for bias is worrisome. The training of the AI model is critical. Thanks for the thought-provoking post!
      1 month ago