
The Shadow AI Systems Running Your Company

Employees are building with AI anyway. The question is whether companies choose to see it.

“We used to have shadow IT. Now we have the idea of shadow AI. People are just using all kinds of models without any governance.” - Tushar Katarki, Head of Product, Red Hat AI Platforms

Across large organizations, AI adoption is already happening outside official systems. More than 80% of employees now use unapproved AI tools at work, often without the knowledge or oversight of IT teams.

That activity is largely invisible. When employees use external generative AI services, organizations lose visibility into how data is processed, where it is stored, and which models are being used.

In practice, this means sensitive data is already flowing into these systems. Reports show that 77% of employees share company data with tools like ChatGPT, including internal documents, customer information, and proprietary code.

The pattern is not limited to one function. In healthcare systems, usage logs show AI tools being accessed across departments, including by clinicians operating in sensitive environments.

This is what shadow AI actually looks like: widespread, unsanctioned, and largely untracked. The default response has been to restrict it. But the data shows that approach fails. Nearly half of employees continue using personal AI tools even after bans are introduced.

That creates a different question. Not how to stop shadow AI, but what it reveals. Employees are already solving real problems with these tools. The organizations that treat that activity as signal rather than violation are the ones beginning to understand where AI is actually delivering value.

Shadow AI bans don't work. This is no longer a debate.

Nearly half of employees would continue using personal AI tools even after explicit prohibitions. 68% of CISOs use unauthorized AI themselves. The tools are too accessible, too useful, too frictionless compared to the glacial pace of corporate procurement.

Joseph Izzo, Chief Medical Information Officer at San Joaquin General Hospital, is already seeing the consequences inside clinical environments. Speaking at the RSAC 2026 Conference, he described how healthcare professionals are using AI tools for tasks such as dosing support, medical searches, clinical summaries, and billing workflows.

Much of this activity happens outside approved systems. Clinicians often rely on personal devices, unvetted tools, and public large language models, creating visibility gaps for security teams and increasing the risk that sensitive patient data enters unmanaged environments.

The behavior is not driven by policy evasion. It is driven by pressure. Healthcare professionals adopt these tools to manage workload and improve efficiency in settings where time directly affects patient care. As Izzo noted, clinicians are not trying to bypass controls; “they want to be more efficient.”

That creates a problem organizations cannot ignore. Security teams cannot monitor or manage systems they cannot see, yet employees are already integrating AI into core workflows. The question now is how organizations respond.

The Infrastructure Conversation

This is where the conversation shifts from security to strategy. Tushar Katarki, Head of Product, Red Hat AI Platforms, has been tracking how enterprise leaders are thinking about this. In a recent conversation with AIM Media House, he articulated the gap between experimentation and production.

"What does production mean?" Katarki said. "It means I need to provide accountability. That accountability could be everything from SLAs that I guarantee, to governance—everything from hallucinations to what I would call intentional or unintentional fallout from these AI systems. Then there is auditability. I need to be able to audit, root cause problems. And then I do need to have control. What action can I take in response to that? That's usually what differentiates what's in experimentation and what's in production."

The implication is stark: you cannot have production AI without visibility into all AI, authorized and otherwise. You cannot have accountability without knowing what's actually deployed. And you cannot have control without first understanding what you're trying to control.

This moves shadow AI from a compliance problem to an infrastructure problem. It's not about punishment. It's about building systems that allow organizations to learn at the speed their employees are already moving.
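
What that infrastructure means at its smallest is easy to sketch. The snippet below is a hypothetical illustration, not any vendor's actual gateway: every model call passes through a wrapper that writes an append-only audit record, the minimum needed for the attribution and root-cause work Katarki describes. The model_client object and its complete method are placeholders for whatever SDK an organization actually uses.

```python
import json
import time
import uuid

def audited_call(model_client, model: str, prompt: str, user: str) -> str:
    """Forward a model call through a minimal audit layer.

    `model_client` is a placeholder for any object exposing a
    `complete(model, prompt)` method; swap in a real SDK.
    """
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user": user,
        "model": model,
        "prompt_chars": len(prompt),  # log size, not content, to limit data exposure
    }
    try:
        output = model_client.complete(model, prompt)
        record["status"] = "ok"
        record["output_chars"] = len(output)
        return output
    except Exception as exc:
        record["status"] = f"error: {exc}"
        raise
    finally:
        # Append-only trail: every call is attributable and traceable,
        # whichever model it went to.
        with open("ai_audit.log", "a") as log:
            log.write(json.dumps(record) + "\n")
```

Real deployments put this layer in a proxy or gateway rather than in application code, but the principle is the same: no call reaches a model without leaving a record behind.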

What Shadow AI Reveals

Here's where the paradox becomes impossible to ignore: companies are investing $30 billion to $40 billion in formal AI initiatives, according to MIT's 2025 research. Yet 95% report zero impact on their bottom line.

At the same time, employees are using AI tools directly in their workflows. Experimental and survey data show consistent productivity gains, including faster task completion, improved output quality, and measurable increases in individual performance.

Yage Zhang, a researcher at CISPA studying shadow deployment patterns, found something striking in the data. She audited unauthorized API usage across academic institutions and enterprise environments: nearly 190 research papers relied on unofficial third-party endpoints, and nearly half failed basic model fingerprint verification, meaning researchers couldn't even confirm they were hitting the official models.
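
Zhang's work describes the findings rather than a reference implementation, but the basic idea behind fingerprint verification can be sketched. The snippet below is a minimal, hypothetical version: it sends the same fixed probe prompt to an official endpoint and a suspected shadow endpoint (the URLs, keys, and model name are placeholders) and flags divergence. Serious audits use many probes and statistical comparison, not a single string match.

```python
import requests

def probe(base_url: str, api_key: str, model: str, prompt: str) -> str:
    """Send one deterministic probe to an OpenAI-compatible chat endpoint."""
    resp = requests.post(
        f"{base_url}/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,  # as deterministic as the API allows
            "max_tokens": 32,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Placeholder endpoints, keys, and model name: substitute real values to run.
OFFICIAL = "https://api.openai.com/v1"
SHADOW = "https://third-party-proxy.example.com/v1"
PROMPT = "Repeat exactly: fingerprint-check-7319"

official_out = probe(OFFICIAL, "OFFICIAL_KEY", "gpt-4o", PROMPT)
shadow_out = probe(SHADOW, "SHADOW_KEY", "gpt-4o", PROMPT)

# One mismatched probe is weak evidence; real fingerprinting compares
# output distributions over many probes before declaring a different model.
print("consistent" if official_out == shadow_out else "divergence detected")
```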

The conventional reading would be alarming: shadow API usage is a security and compliance nightmare. But Zhang saw something else. "Shadow APIs aren't just a risk management problem," she said to AIM Media House. "They're actively shaping what gets adopted. The core issue is that users treat shadow APIs as interchangeable with official ones, but our evidence shows significant performance divergence. So it's both: they influence adoption by lowering access barriers, and they introduce risks that most teams aren't equipped to detect."

Both true simultaneously. The same systems enabling rapid innovation are creating governance blind spots. The question becomes not how to separate them, but how to learn from one while managing the other.

This is precisely what forward-thinking enterprises are attempting now.

At Shopify, 25% of the company's employees use Scout, an internal AI automation tool that handles over 1,000 tool calls per day. It wasn't built by specialists in an innovation lab. It was built by the Product Support Network team: people from customer success and sales backgrounds, using approved infrastructure. Duolingo, meanwhile, has made AI fluency non-negotiable in hiring.

Health systems including Mass General Brigham and others are actively deploying and studying AI in clinical workflows, even as clinicians continue to use unapproved tools in practice. Industry experts increasingly point to “AI amnesty” approaches as one way organizations can surface and govern this usage, though documented implementations in healthcare remain limited.

Katarki captures the tension most CIOs are grappling with now. “A year ago I would have said, let it be a bit wild west,” he said. “But we’ve reached a point where this is increasingly important. The question isn’t control or innovation. It’s how do you do both?”

That’s the inflection point. Not whether to control shadow AI, since every mature organization will eventually need governance. But whether to build that governance on understanding or on suppression. Whether to learn from where employees are already deploying AI, or to ignore it until a breach forces the issue.

The difference compounds over time. Organizations that rely only on restriction often struggle with visibility, as employees continue to use AI tools outside approved systems. Those that provide enterprise-grade alternatives alongside governance gain more visibility into how AI is actually being used.