With the rapid rise of artificial intelligence, healthcare organizations are facing a hidden challenge: “shadow AI.” This refers to the unauthorized use of consumer-grade AI tools by staff seeking to improve their workflows, a practice that introduces significant security and patient safety risks. We sat down with James Maitland, a leading expert in medical technology, to explore the complexities of this issue. Our conversation covers the immediate dangers posed by unvetted AI, the critical need for tailored solutions that address the distinct needs of clinicians and administrators, and the importance of bridging the policy gap between leadership and frontline staff. Maitland offers a pragmatic roadmap for how hospitals can move from simply banning these tools to strategically adopting secure, effective AI that truly supports their teams and protects patients.
Given that nearly one-fifth of healthcare professionals are using unauthorized AI, with some applying it to direct patient care, what are the most immediate security and patient safety risks this “shadow AI” creates, and what specific data vulnerabilities does it expose for a hospital?
The emergence of “shadow AI” is one of the most pressing, under-the-radar threats in healthcare today. When a clinician—and we know about one in ten have done this—uses an unapproved tool for direct patient care, they are operating in a complete black box. There’s no validation of the algorithm, no oversight of the data it was trained on, and no guarantee it’s free from biases that could lead to a misdiagnosis or a flawed treatment suggestion. From a security standpoint, every piece of patient data entered into a public AI tool is a potential breach, exposing the organization to massive data privacy violations. We saw from a study last year that 97% of organizations that had an AI-related security incident lacked proper access controls, and that’s exactly the environment shadow AI creates—a complete lack of control that can erode patient trust and result in devastating financial costs.
Many staff members cite the need for faster workflows as their reason for using unapproved AI. Since administrators and clinicians use these tools for different tasks—like data analysis versus patient scheduling—how should an organization’s strategy for providing approved AI solutions differ for these two groups?
A one-size-fits-all approach is doomed to fail because the core problems these groups are trying to solve are fundamentally different. Administrators, who we know are a bit more likely to use these tools, are often focused on operational efficiency—they’re leveraging AI for predictive analytics or sifting through large datasets to streamline back-office tasks. Their needs are about macro-level optimization. In contrast, providers are on the front lines. They’re using unsanctioned tools for patient scheduling and engagement, trying to claw back minutes in their packed days. An effective strategy must be two-pronged: for administrators, you provide powerful, secure data analytics platforms. For clinicians, you need to embed AI directly into the EMR or clinical workflow tools they already use, making it a seamless, time-saving assistant rather than another program they have to open.
A significant gap exists where administrators are far more involved in creating AI policies than clinicians. How might this disconnect impact the practicality of official AI tools, and what steps can leadership take to ensure frontline provider input is central to the governance process?
This disconnect is incredibly damaging. When administrators are more than three times as likely as providers to be involved in policy development (30% compared to just 9%), you end up with policies that look great on paper but are completely impractical in a clinical setting. An administrator might approve a tool based on its data security features, but if it adds three extra clicks to a nurse’s workflow, they’ll just find a workaround—which brings us right back to shadow AI. To fix this, leadership must intentionally create multidisciplinary AI governance committees. These can’t just be token appointments; they need to actively bring in physicians, nurses, and other frontline staff from the very beginning to pilot tools, provide direct feedback, and co-author the policies. Their real-world experience is the only thing that will ensure the official tools are not only safe but also useful enough to be adopted.
Instead of simply restricting access, it’s suggested leaders should understand why teams use unsanctioned tools. Can you outline a step-by-step process a hospital could use to diagnose these workflow challenges and then identify secure, enterprise-level tools that staff will actually adopt?
Absolutely. A punitive approach just drives the problem further underground. The first step is discovery: conduct anonymous surveys and informal focus groups that ask staff, “What are the most frustrating, time-consuming parts of your day?” and “What tools, official or not, are you using to solve them?” Second, analyze that feedback to identify the top three to five workflow bottlenecks. Maybe it’s summarizing patient charts or drafting referral letters. Third, research and pilot a few secure, enterprise-grade AI solutions that specifically target those pain points. It’s crucial to involve the same end-users in this pilot phase. Finally, select a winner based on their feedback and roll it out with comprehensive training and a clear message: “We heard you, and this is the secure tool that will make your job easier.”
There’s great optimism about AI, yet half of professionals are concerned about patient safety. How can healthcare organizations balance the push for AI-driven efficiency with the absolute need to maintain patient trust and security? Please provide a specific example of this balance in action.
This is the central tightrope that every healthcare leader must walk. The optimism is real—nearly 90% of professionals see AI significantly improving healthcare. But that optimism is tempered by very valid concerns from half the workforce about patient safety. The key to balancing this is radical transparency and keeping a human in the loop. For example, instead of deploying an AI that independently diagnoses medical images, a hospital could implement an AI-powered “co-pilot” for radiologists. The AI could prescreen thousands of images, flag potential anomalies, and prioritize the most urgent cases for human review. The final diagnosis remains in the hands of the expert clinician, but their workflow is dramatically accelerated. This way, you gain massive efficiency without sacrificing the critical layer of human judgment, which is essential for maintaining both patient safety and trust.
What is your forecast for the integration of safe and effective AI tools in healthcare over the next five years?
I foresee a significant shift from broad, generic AI applications to highly specialized, workflow-integrated tools. The era of experimentation with shadow AI will force organizations to get serious about providing sanctioned, purpose-built solutions. We will see AI become a standard feature within electronic medical records, helping with everything from clinical documentation to predictive warnings for patient deterioration. The biggest challenge won’t be the technology itself, but the governance and change management required to integrate it safely. The organizations that succeed will be those that build trust by involving clinicians in every step of the process, ensuring that AI serves as a powerful assistant that enhances, rather than replaces, human expertise.
