Vibe Coding Is the New Shadow IT. The Answer Isn't a Ban.

As AI makes building software effortless, enterprises are struggling to make deploying it safely just as fast.
There's a conversation happening in IT departments right now, and it's remarkably familiar. A finance manager builds a workflow tool over a weekend using an AI coding assistant. A marketing team spins up a customer intake app in an afternoon. An operations lead connects two internal systems using a script nobody reviewed. None of it went through IT. None of it was asked for. All of it is now running in production.
If this sounds like shadow IT, it is. Just with a new face.
Vibe coding, the practice of using generative AI to write functional software through natural language prompts, has made building software accessible to almost anyone. Tools like Lovable, Bolt, Replit, and others let a non-developer describe what they want and receive a working application in minutes. The code runs. The feature ships. The problem it solves is real.
And that's exactly what makes it so difficult to govern.
The Old Shadow IT Playbook Doesn't Apply Here
Shadow IT has always thrived in the gap between what people need and what IT can deliver. The instinct, historically, has been to clamp down: block the tool, restrict access, and force everything through a procurement queue. It rarely worked. People found workarounds because their underlying need didn't go away.
The dynamics with vibe coding are identical, but the stakes are considerably higher. The previous generation of shadow IT typically introduced unauthorized SaaS subscriptions. This was a data risk, certainly, but a bounded one. A Dropbox account used without approval is a problem you can identify, trace, and address.
An AI-generated application connecting to production databases is a far riskier category of exposure. The code may work. It may even work well. But it almost certainly lacks proper error handling, documented architecture, secure data practices, and any audit trail. It has no understanding of your business context, your data governance policies, or your authentication guidelines. When something goes wrong - a data leak, a logic error, an unexpected interaction with an enterprise system - the person who built it often can't explain how it works, because they didn't write it in the conventional sense. They described the desired outcome and let AI fill in the blanks.
In the Stack Overflow 2025 Developer Survey, 66% of developers reported that AI-generated code is frequently "almost right" but requires refinement and debugging. Experienced developers know where to look. Most of the people building vibe-coded apps inside your enterprise do not.
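To make "almost right" concrete, here is a hypothetical sketch of the pattern. The table, data, and function names are invented for illustration, and an in-memory SQLite database stands in for the production system a vibe-coded app might touch. The first function works on every happy-path test a non-developer would think to run; the second applies the basics an experienced reviewer would insist on.

```python
import sqlite3

# Illustrative stand-in for a production database (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, owner TEXT, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
                 [(1, "alice", 120.0), (2, "bob", 75.5)])

def lookup_vibe_coded(owner):
    # Typical AI-generated shape: correct on normal input, but it
    # interpolates user input straight into SQL (injection risk) and
    # has no error handling and no audit trail.
    return conn.execute(
        f"SELECT amount FROM invoices WHERE owner = '{owner}'").fetchall()

def lookup_governed(owner):
    # The same query with the basics applied: parameterized input
    # and explicit error handling.
    try:
        return conn.execute(
            "SELECT amount FROM invoices WHERE owner = ?",
            (owner,)).fetchall()
    except sqlite3.Error:
        return []  # in practice: log, alert, and fail closed

# Both agree on normal input...
assert lookup_vibe_coded("alice") == lookup_governed("alice")

# ...but crafted input leaks every row from the first version,
# while the parameterized version safely returns nothing.
payload = "x' OR '1'='1"
print(len(lookup_vibe_coded(payload)))  # 2: every invoice leaks
print(len(lookup_governed(payload)))    # 0
```

The point is not that this specific bug always appears; it is that the flaw is invisible to someone who only tests the outcome they described in a prompt.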
The Problem Compounds When AI Becomes the Application
There's a second dynamic worth understanding, because it goes beyond AI-generated code entirely. Some employees aren't building apps at all anymore. They're constructing workflows inside large language models directly - defining rules in plain language, executing tasks through AI prompts, and generating outputs in real time. No application to audit. No codebase to review.
Just a user, a prompt, and whatever the model interprets that prompt to mean. This is more insidious than vibe coding, not less. Code executes predictably. AI does not. The same input can produce different results. And when the workflow involves sensitive company data transmitted to an externally hosted model, there's limited visibility into what was sent, what was retained, or what might surface later as outputs seen by someone else.
The organizations trying to solve this with broad restrictions are already losing. You can't block AI tools comprehensively without ceding real productivity gains to the companies that don't. And even if you tried, your employees would find alternatives, just as they always have.
The Governance Gap Is the Actual Problem
Here's what the discussion about vibe coding usually misses: the issue was never the building.
It was always the deploying. Getting from "I built an app" to "the organization is using it safely" has always required steps that most enterprise teams couldn't complete without significant IT involvement: identity integration, data protection controls, compliance requirements, monitoring, policy enforcement, and quality assurance. That friction was the barrier, and it was intentional. And for most of software history, it was an appropriate one, because building something useful required skill and time that signaled genuine need.
AI removed that friction on the creation side, but it still exists on the deployment side. So what enterprises now have is a situation where the distance between "someone built a thing" and "that thing is running in production without any enterprise controls applied" has collapsed to almost nothing, while the distance between "someone built a thing" and "that thing has proper identity, security, and governance applied to it" remains as long as it ever was.
That gap is where the risk lives. And closing it requires a different approach than what most enterprises are currently attempting.
Channel It, Don't Stop It
The more useful question isn't, "How do we prevent employees from building AI applications?" It's, "How do we make the path from building to deploying safely as short as the path from nothing to building?" This is the logic behind AI Publish, a capability within Island’s AI Services. The premise is straightforward: an employee uses AI to build an app in a tool like Lovable, a process that takes five or six minutes. Instead of that app then finding its way into the organization through uncontrolled channels, it publishes through Island's management console in just a few more minutes, with enterprise-grade identity enforcement, data protection, monitoring, and policy controls automatically applied.
The app inherits the enterprise’s requirements rather than requiring a separate process to add them. It sits alongside the tools employees already use. It's visible to IT. And no developer dependency is required on either the building side or the governance side. This isn't about adding bureaucracy to a fast process. It's about compressing the distance between "someone built something useful" and "that something is safe for the organization."
The freedom to build remains intact. The controls that enterprise deployment requires are applied automatically, rather than becoming the reason a useful tool never gets properly deployed at all.
What Vibe Coding Governance Actually Looks Like
The practical version of this for IT and security leaders involves a few concrete shifts in posture.
1. Assume teams are building. The 2025 developer survey data suggesting 80% of developers use AI in their workflows probably understates actual usage, and that number includes non-developers who are building things no survey has properly captured yet. If you're not seeing this activity, it isn't because it's not happening; it's because it's happening outside your visibility.
2. Create a legitimate path. The appeal of shadow IT has always been that the legitimate path was slower or more painful than the workaround. If your governance infrastructure can accept and wrap a vibe-coded application in minutes rather than weeks, the calculus for employees changes. The fastest route and the governed route can be the same route.
3. Extend governance beyond approved tools. The risk surface for AI doesn't stop at the applications your organization has sanctioned. It extends to every AI tool being accessed through browsers, every extension with broad permissions, every model receiving data that wasn't intended to leave the organization. Visibility across that full surface - including AI interactions in the browser, desktop applications, and extensions - is what enables meaningful governance rather than the appearance of it.
4. Measure what's actually happening. One of the persistent challenges with AI governance is that organizations often don't know what productivity gains AI is delivering, what it's costing, or where the highest-risk usage is concentrated. That information should be available and actionable. Governance without visibility is just policy on paper.
The Parallel That Matters
Shadow IT has always followed the same arc. A behavior emerges because it solves a real problem faster than the official channel does. IT tries to block it. The behavior persists anyway, because the underlying need didn't go away. Eventually, the organizations that built governance frameworks capable of accommodating the behavior ended up with better outcomes than the ones that held the line until the line became irrelevant.
Early cloud adoption played out this way. BYOD played out this way. Consumer SaaS played out this way. In each case, the governance problem wasn't that people were doing something unreasonable. It was that the infrastructure for doing it safely didn't keep pace with the infrastructure for doing it at all.
Vibe coding is the same story, running faster and at greater scale. The applications being built today are more capable than the SaaS workarounds of a decade ago, the data exposure is more significant, and the pace of adoption is faster than any policy enforcement mechanism designed for the previous era can track.
The organizations that get ahead of this won't be the ones that try to prohibit the behavior. They'll be the ones that build governance infrastructure capable of meeting it where it actually is: at the moment an employee clicks "publish" and expects the thing they built to work safely and securely within the corporate data guardrails to which all enterprise applications must adhere.