Uber's "Dara AI" Reveals What Executives Actually Fear About AI Adoption

Teams at Uber built an AI proxy for their CEO because preparing for actual executive review had become predictable enough to automate.
Dara Khosrowshahi revealed this week that Uber engineers have built an AI chatbot version of him, which teams use to rehearse presentations before meeting with the actual CEO.
The disclosure, made during an interview on Steven Bartlett's The Diary of a CEO podcast, has been framed as a reflection of Uber's engineering culture. It is better read as a warning signal about the disconnect between how executives think AI is being deployed and what is actually happening inside their organizations.
Khosrowshahi described the "Dara AI" with pride, saying: "One of my team members told me that some teams have built a Dara AI, you know, so that they basically make the presentation to the Dara AI as a prep for making a presentation to me. Because you can imagine, by the time something comes to me, there's been a prep and a meeting of the slide deck has been beautifully honed. So they have Dara AI to tune their prep."
The tone is lighthearted, but the implication is not: preparing for executive review at Uber had become predictable enough to automate.
What Does This Reveal?
For an AI model to effectively simulate a CEO's questions and concerns, that CEO's decision-making patterns must be sufficiently regular to train on. The existence of Dara AI suggests that Khosrowshahi's feedback in meetings follows recognizable patterns.
His reactions are modelable. This is not necessarily a flaw. Consistency in leadership creates organizational stability. But it also means that the value executives provide in review meetings may be more procedural than strategic.
If teams can rehearse their presentations with an AI version of their CEO and improve their outcomes, it raises an uncomfortable question: what is the CEO adding that could not be communicated in advance?
The answer might be judgment on novel situations or strategic redirection that cannot be anticipated. But if the AI version is effective enough that teams rely on it, much of the executive review process may already be reducible to pattern matching.
The more significant revelation is not that the Dara AI exists, but that it was built without explicit executive mandate. Engineers at Uber did not ask permission to create an AI version of their CEO.
They identified a bottleneck, built a tool to address it, and deployed it across teams. This bottom-up innovation is exactly what technology companies claim to want. It is also exactly what makes executives nervous.
Khosrowshahi noted that approximately 90% of Uber's software engineers now incorporate AI tools into their daily workflow, with 30% qualifying as "power users" who are completely rethinking the company's technological architecture.
He framed this as productivity enhancement, saying, "It really is changing their productivity in a way that I've never, ever seen before." But the Dara AI example exposes the governance gap. If 30% of engineers are rethinking architecture using AI, and leadership is learning about these initiatives through podcasts, the organization has already lost centralized control over how AI is deployed.
Khosrowshahi acknowledged the strategic tension explicitly. If AI makes engineers 25% more efficient, he could hire more engineers to "go faster." Or he could stop adding headcount and instead "add agents and buy some more GPUs from Nvidia."
This is the calculation every CEO is making, whether they admit it publicly or not. The difference at Uber is that engineers are forcing the question by building tools that demonstrate what AI can already replace.
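The tradeoff Khosrowshahi describes reduces to simple arithmetic. A minimal sketch, using purely illustrative numbers (the team size and growth figures below are assumptions, not Uber data; only the 25% uplift comes from the interview):

```python
# Back-of-envelope sketch of the headcount-vs-agents tradeoff.
# All inputs except the 25% uplift are hypothetical illustrations.

def effective_capacity(engineers: int, ai_uplift: float) -> float:
    """Output in engineer-equivalents when each engineer is
    `ai_uplift` (e.g. 0.25 = 25%) more productive with AI tools."""
    return engineers * (1 + ai_uplift)

baseline = 1000   # hypothetical current team size
uplift = 0.25     # the 25% efficiency gain cited in the interview

# Option 1: keep hiring -- new headcount compounds with the AI gain.
grow = effective_capacity(baseline + 100, uplift)   # -> 1375.0

# Option 2: freeze headcount and spend the delta on agents and GPUs.
hold = effective_capacity(baseline, uplift)         # -> 1250.0

print(f"hire more: {grow:.0f} engineer-equivalents")
print(f"hold flat: {hold:.0f} engineer-equivalents")
```

The sketch makes the CEO's choice concrete: the same 25% uplift either amplifies a growing team or substitutes for the growth itself, and which option wins depends entirely on what the marginal engineer costs relative to the marginal agent.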
When Steven Bartlett asked whether Khosrowshahi was concerned that teams would show "Dara AI" to the board, the CEO laughed it off. But the question is not a joke. If an AI version of the CEO can prepare teams for executive review, can an AI version prepare executives for board review?
Khosrowshahi argued that executives remain valuable because AI models cannot "learn in real-time" or make decisions based on new information. That is a current limitation. It will not remain one indefinitely.
The real story is about what happens when technical teams move faster than governance structures can adapt. Uber's engineers did not build Dara AI because they were trying to replace their CEO.
They built it because the current system was inefficient, and they had the skills to fix it. That initiative should be celebrated. It should also terrify executives at companies where engineering teams have similar capabilities but less visibility into what tools are being deployed.
The question is not whether AI will replace executives. The question is whether executives will adapt quickly enough to remain relevant in organizations where AI tools are already being built, deployed, and iterated on without their direct oversight.