AIM Media House

Uber Let AI Write the Code. It Blew the Budget

Uber’s AI coding push scaled faster than expected, exhausting its budget within months as token-based costs surged.

Uber exhausted its annual budget for AI coding tools within the first months of 2026 as internal adoption scaled faster than expected, according to a report by The Information.

Chief Technology Officer Praveen Neppalli Naga said the company is “back to the drawing board” after usage of tools such as Claude Code exceeded financial assumptions. “I’m back to the drawing board, because the budget I thought I would need is blown away already,” Naga said, according to the same report.

Other outlets citing the same disclosure confirmed that Uber's AI coding spend reached its full-year allocation within months.

The company reported that about 95% of its engineers use AI tools monthly, with internal systems generating roughly 1,800 code changes each week. Nearly 70% of code committed to production now involves AI-generated output, according to Business Insider.

AI Adoption Moves From Assistive to Autonomous

A month earlier, Uber said AI systems were generating 31% of its total codebase, with autonomous agents opening 11% of pull requests. The figures were shared by company engineers at the Pragmatic Summit in March.

This level of adoption illustrates a broader shift in software development, with AI tools becoming part of standard engineering workflows rather than optional add-ons.

Uber is not an outlier. A study of enterprise engineering teams found that 63% of companies now use AI tools in most of their development processes, according to Business Insider.

Uber built internal infrastructure to support that shift, including a Model Context Protocol gateway, an Agent Builder platform, and a command-line interface for deploying and managing AI agents. These systems connect AI models to internal code repositories, documentation, and engineering workflows.

This internal infrastructure allows AI systems to access production data and services directly, which enables them to perform multi-step tasks that go beyond code suggestions.

The company’s engineering teams run multiple agents in parallel to generate code, test changes, and manage migrations. Internal agents had expanded their role from contributing less than 1% of code changes to about 8% within months, according to Business Insider.

As these systems take on a larger share of code generation and execution, the role of engineers shifts toward supervising and validating outputs.

This shift is changing how engineering work is structured. Engineers direct AI systems to generate and modify code, focusing on system design, validation, and review.

That oversight remains necessary: research into AI-assisted development, including studies posted on arXiv, finds that generated code often requires review and modification before it is used in production.

Token-Based Pricing Breaks Cost Estimates

The cost of these systems is tied to usage rather than headcount. Anthropic prices its models using tokens, which measure the amount of text processed by AI systems, with enterprise plans combining per-seat fees and usage-based billing.

Under this model, companies pay for access while committing to usage billed by compute consumption, which requires them to estimate both how often engineers invoke AI systems and how complex those requests are.
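The budgeting difficulty described above can be made concrete with a back-of-the-envelope estimator. The sketch below is purely illustrative: the per-token rates, team size, and request volumes are hypothetical assumptions, not Uber's or Anthropic's actual figures. It shows why a shift from assistant-style usage to multi-step agent workloads can multiply costs under per-token billing.

```python
# Illustrative sketch of usage-based AI billing. All rates and usage
# figures below are hypothetical, for demonstration only.

PRICE_PER_M_INPUT = 3.00    # assumed $ per million input tokens
PRICE_PER_M_OUTPUT = 15.00  # assumed $ per million output tokens

def monthly_cost(engineers, requests_per_day, in_tokens, out_tokens, workdays=22):
    """Estimate monthly spend for a team under per-token billing."""
    total_in = engineers * requests_per_day * in_tokens * workdays
    total_out = engineers * requests_per_day * out_tokens * workdays
    return (total_in / 1e6) * PRICE_PER_M_INPUT + (total_out / 1e6) * PRICE_PER_M_OUTPUT

# Same team, same request count -- but agents re-read context and produce
# far more output per request than a simple code-completion workload.
assist = monthly_cost(1000, 20, 2_000, 500)
agent = monthly_cost(1000, 20, 30_000, 8_000)
print(f"assistive: ${assist:,.0f}/mo  agentic: ${agent:,.0f}/mo")
```

Because cost scales with tokens rather than seats, the agentic workload here is over fifteen times more expensive than the assistive one, even though headcount and request frequency are unchanged; that is the kind of gap that can invalidate an annual budget set under older usage assumptions.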

Pricing structures for AI coding tools have shifted from fixed subscription tiers to usage-based models. Anthropic said subscriptions were not designed for the usage patterns of these tools, which generate higher compute demand through repeated and multi-step interactions.

As vendors adjust pricing and usage policies to manage demand, developers are running into constraints on how these tools can be consumed. Developers have reported hitting usage limits faster than expected, with quotas running out sooner than their plans implied, according to DevOps.com.

AI agents require more compute per task than earlier coding tools, particularly in multi-step workflows, increasing the number of tokens consumed per request and raising costs under usage-based pricing models.

At Uber, AI-related costs have risen alongside more complex use cases, including systems for rider-driver matching and dynamic pricing. The company reported $3.4 billion in research and development spending, with AI contributing to a growing share of compute usage, according to The Information.

Uber’s internal metrics show how much code AI systems generate and how widely they are used across engineering teams. The company has not disclosed a direct measure linking that output to cost per feature or overall engineering efficiency.

This is typical across the industry: companies report metrics such as code generation volume and usage rates, but do not disclose standardized measures linking that output to cost or efficiency.

Naga said the company is reassessing its assumptions after early projections failed to account for the pace of adoption and the cost of running large-scale AI systems.