
AI Is Everywhere in the Enterprise—But It’s Still Not in Production

Most enterprise AI systems are deployed, but not production-ready. Without governance, control, and accountability, “AI in production” remains a claim, not reality.

Enterprises say they are moving AI into production. The data shows something else. Nearly 78% of organizations now use AI in at least one business function, but only about one-third say they are scaling it across the enterprise.

The gap is not model capability. It is the absence of systems that make AI accountable once it is deployed.

“Production means that I need to expose this to either end users, my customers, or even if they're internal. I need to provide some accountability for my service,” Tushar Katarki, Senior Director at Red Hat, tells AIM Media House.

That accountability includes service-level expectations, governance, auditability, and the ability to intervene when systems fail. Most enterprise AI systems do not meet that standard.

Production Means Accountability, Not Deployment

Enterprises have learned that deploying a model is not the same as operating a system.

The expectations attached to operating a system reflect how enterprises have historically run software. They expect traceability, predictable behavior, and defined thresholds for failure. AI systems do not behave that way.

The first issue is visibility. Enterprises often cannot track who is using AI, where it is being used, or how costs are scaling.

“The challenges that they're facing really are first of all visibility into cost, [including] who is using it, what they are using, and what apps are using it,” Katarki says.

The second issue is risk. Early concerns focused on hallucinations. The problem has shifted to exposure.

“Not only are they hallucinating, but they are leaking PII data, or they could be leaking competitive data,” Katarki says.

These risks are not theoretical. Companies deploying AI systems have already reported financial impact tied to failures, including compliance issues and flawed outputs.

The third issue is the nature of the systems themselves. AI is probabilistic. Enterprises are built around deterministic systems.

“It is not deterministic. What is the threshold of what ‘wrong’ means, and is it acceptable?” Katarki says.

This forces enterprises to define acceptable error rates, something most are not equipped to do.
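In practice, defining an acceptable error rate means writing the threshold down somewhere it can be enforced. The following is a minimal, purely illustrative sketch in Python of what such an explicit error budget could look like; the names and numbers are assumptions, not any standard:

```python
# Illustrative sketch only: one way an enterprise might encode an explicit,
# agreed-upon definition of what "wrong" means and how much of it is tolerable.

from dataclasses import dataclass

@dataclass
class ErrorBudget:
    """An explicit, agreed-upon definition of acceptable failure."""
    max_error_rate: float  # e.g. 0.02 means at most 2% of outputs may fail review
    window: int            # number of recent outputs the rate is measured over

def within_budget(failures: int, total: int, budget: ErrorBudget) -> bool:
    """Return True if observed failures stay inside the agreed error budget."""
    if total < budget.window:
        return True  # not enough data yet to judge against the full window
    return (failures / total) <= budget.max_error_rate

# Example: a team that has decided 2% flawed outputs over 1,000 requests is tolerable.
budget = ErrorBudget(max_error_rate=0.02, window=1000)
print(within_budget(failures=27, total=1000, budget=budget))  # False: 2.7% > 2%
```

The point is less the code than the decision it forces: someone has to own the number.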

The Rise of the AI Control Plane

As these systems scale, enterprises are moving to impose structure.

“People are starting to think in terms of a control plane for AI. The AI control plane and the AI platform have become really relevant,” Katarki says.

The shift is driven by fragmentation. Teams experiment independently, often using different models, tools, and workflows. The result is a form of shadow AI, where usage grows without oversight.

One CIO described managing thousands of AI projects before stepping in to reduce and centralize them.

Most organizations using AI remain in the early stages of deployment; relatively few are scaling systems across the enterprise.

The control plane is the response to that gap. It is not a single product. It functions as a coordinating layer that provides visibility into usage and cost, enforces policies, and governs how models and applications are deployed. It also allows enterprises to intervene when systems fail.
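What that coordinating layer amounts to can be sketched in a few lines. The following is a hypothetical illustration in Python, not any vendor's product or API: a thin gateway that records who used which model at what cost, enforces an allowlist, and keeps a kill switch for intervention.

```python
# Illustrative sketch, with hypothetical names: a control plane as a thin
# coordinating layer in front of model providers.

import time

class AIControlPlane:
    def __init__(self, allowed_models: set[str]):
        self.allowed_models = allowed_models
        self.usage_log: list[dict] = []       # visibility: who, what, when, cost
        self.disabled_apps: set[str] = set()  # intervention: pull access fast

    def route(self, app: str, user: str, model: str, prompt: str, call_model) -> str:
        """Policy enforcement sits in the path of every request."""
        if app in self.disabled_apps:
            raise PermissionError(f"{app} is suspended by policy")
        if model not in self.allowed_models:
            raise PermissionError(f"{model} is not an approved model")
        response, cost = call_model(model, prompt)  # delegate to any provider
        self.usage_log.append({
            "app": app, "user": user, "model": model,
            "cost_usd": cost, "ts": time.time(),
        })
        return response

    def suspend(self, app: str) -> None:
        """Kill switch: stop an application's AI access without a redeploy."""
        self.disabled_apps.add(app)

# Example: any provider client can sit behind the same layer.
plane = AIControlPlane(allowed_models={"approved-model-v1"})
fake_provider = lambda model, prompt: (f"[{model}] response", 0.0004)
print(plane.route("billing-app", "alice", "approved-model-v1", "hi", fake_provider))
```

Because the layer delegates the actual model call, it can sit in front of multiple providers without binding the enterprise to one of them.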

At the same time, enterprises do not want to slow innovation.

“[They] want to access AI wherever [they] can get it, [and] don’t want to be tied down to one model provider,” Katarki says.

This creates a constraint: systems must allow multiple models and tools while still enforcing control. That balance remains unsolved.

Agents Turn AI Into an Accountability Problem

The problem becomes more complex as systems evolve.

Early deployments focused on chatbots. Then came retrieval systems and code assistants.

“Now it’s agents. They are taking actions on your behalf,” Katarki says.

Agents do not just generate responses. They call tools, access systems, and execute tasks. This introduces a new requirement: accountability for actions.

Last month, an internal AI agent at Meta Platforms advised an engineer on a technical issue. The engineer followed that guidance, which led to sensitive company and user data being exposed internally for about two hours.

The system did not breach security on its own. It produced flawed guidance. A human acted on it. The result was a high-severity internal incident.

This type of failure is difficult to contain because it sits between system output and human execution.

Katarki points to the underlying issue.

“What identity should the agents have? [If] this agent took an action, who is accountable for it?” Katarki says.

Enterprises now have to define how these systems operate. That includes assigning identity and permissions to agents, setting clear boundaries on what actions they can take, and determining when human intervention is required. It also requires clear ownership, so responsibility does not become ambiguous when systems act.
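One way to make identity and boundaries concrete is to treat the agent like any other principal: it acts under its own identity, it has a named owner, and its permissions are an explicit, auditable set. The sketch below is an assumption-laden illustration in Python, not any specific framework's API:

```python
# Illustrative sketch: an agent with its own identity, an accountable owner,
# and a hard permission boundary, so every action is attributable.

from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str              # the agent acts as itself, not as a user
    owner: str                 # the human or team accountable for its actions
    allowed_actions: set[str]  # hard boundary on what it may do
    requires_approval: set[str] = field(default_factory=set)  # human-in-the-loop

def execute(agent: AgentIdentity, action: str, approved_by: str | None = None) -> None:
    if action not in agent.allowed_actions:
        raise PermissionError(f"{agent.agent_id} may not perform '{action}'")
    if action in agent.requires_approval and approved_by is None:
        raise PermissionError(f"'{action}' needs human sign-off")
    # Every action is logged against the agent's identity and its owner,
    # so responsibility does not become ambiguous after the fact.
    print(f"AUDIT agent={agent.agent_id} owner={agent.owner} "
          f"action={action} approved_by={approved_by}")

agent = AgentIdentity(
    agent_id="deploy-helper-01",
    owner="platform-team",
    allowed_actions={"read_logs", "restart_service"},
    requires_approval={"restart_service"},
)
execute(agent, "read_logs")                           # allowed, unattended
execute(agent, "restart_service", approved_by="bob")  # allowed with sign-off
```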

There is also the question of evaluation.

“[Enterprises need to] measure for that drift on a continuous basis and [determine when to] take remedial action,” Katarki says.

Unlike traditional systems, these systems can change behavior over time. Evaluation cannot remain a one-time process before deployment.
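A continuous evaluation loop can be as simple as comparing a rolling window of production quality scores against the baseline measured at sign-off. The Python sketch below is illustrative only; how outputs are scored and where the threshold sits are assumptions each team would set for itself:

```python
# Illustrative sketch: continuous drift monitoring rather than a one-time,
# pre-deployment evaluation. Drift is approximated here as the drop in a
# quality score between a pre-launch baseline and recent production outputs.

from collections import deque

class DriftMonitor:
    def __init__(self, baseline_score: float, window: int = 500, max_drop: float = 0.05):
        self.baseline = baseline_score      # score at sign-off time
        self.recent = deque(maxlen=window)  # rolling production scores
        self.max_drop = max_drop            # tolerated degradation before acting

    def record(self, score: float) -> None:
        self.recent.append(score)

    def drifted(self) -> bool:
        """True when recent quality has fallen past the tolerated drop."""
        if len(self.recent) < self.recent.maxlen:
            return False  # wait for a full window before judging
        current = sum(self.recent) / len(self.recent)
        return (self.baseline - current) > self.max_drop

monitor = DriftMonitor(baseline_score=0.91)
# In production, each evaluated response feeds monitor.record(); when
# drifted() flips to True, the team takes remedial action: retrain,
# reroute to another model, or roll back.
```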