10 March 2026
There is a pattern emerging in healthcare systems that should concern every CEO and board member reading this. Clinicians, not out of malice but out of exhaustion and pragmatism, are using personal AI accounts to do clinical work. They are pasting patient notes into ChatGPT to draft discharge summaries. They are uploading imaging data to consumer-grade tools for a second opinion. They are using AI to write referral letters, summarize case histories, and automate documentation that their EHR systems make painfully slow. None of this is sanctioned. None of it is documented. And almost none of it is visible to the compliance teams that are supposed to be managing institutional risk.
This is what I call shadow AI, and it is the single largest ungoverned risk vector in healthcare today. The problem is not that clinicians are using AI. The problem is that they are using it without institutional oversight, without audit trails, and without any governance structure that defines what is permissible, what requires human review, and what is categorically prohibited. When a clinician uses a personal AI account to process patient data, that data leaves the institution's control. It enters a system with no business associate agreement (BAA), no HIPAA compliance guarantee, and no institutional liability framework. The clinician may not even realize they have created a reportable breach.
The institutional response to shadow AI cannot be a policy memo. It requires governance architecture: a clear, documented framework that defines which AI tools are sanctioned for which clinical functions, who holds authority to approve new tools, what human review is required before AI-generated clinical content enters the medical record, and what happens when someone operates outside the framework. This is not about restricting innovation. It is about ensuring that innovation happens within structures that protect the institution, the clinician, and the patient.
Every healthcare CEO I work with eventually arrives at the same realization: the question is not whether your clinicians are using AI. They are. The question is whether you have built the governance infrastructure that makes that usage visible, accountable, and defensible. If you have not, you are operating with a risk exposure that grows every day it remains unaddressed.
Schedule a confidential conversation about your institution's AI governance architecture.
Start a Conversation