Jared Palmer, the founder of Turborepo and former VP of AI at Vercel, believes the future of software is prompt-driven. First came “text to UI.” Then “text to app.” And now, he says, we’re heading toward something bigger:
“We’re text-to-app right now. And I think we’re going to get to text-to-business in the future.”
In Palmer’s vision, AI won’t just write interfaces or snippets. It will spin up entire workflows, backends, even products, all from natural language. It’s a compelling idea. But there’s a catch: developers don’t even trust the current generation of AI tools to write a form field correctly.
Stack Overflow’s 2025 Developer Survey paints a stark picture. AI usage among developers is at an all-time high: 84% use or plan to use AI in their workflows. And yet, only 33% say they trust the accuracy of AI-generated code, and 66% name “almost right” solutions as their top frustration. These are answers that look plausible, compile cleanly, and break silently.
The result is a widening gap between the ambitions of AI dev tool creators and the current lived reality of developers who have to ship production code.
The Hidden Cost of “Almost Right”
The phrase that comes up over and over again in the Stack Overflow survey is “almost right.” Unlike clearly broken code, which developers can immediately discard, “almost right” code creates a trap. It works well enough to be accepted, and then fails subtly. That means more time debugging, more production outages, and more developer anxiety.
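To make the trap concrete, here is a hypothetical sketch (not taken from the survey; the validator and its name are invented for illustration) of what “almost right” looks like in practice: a TypeScript email check of the kind an assistant might produce. It compiles, handles the obvious case, and quietly rejects valid input.

```typescript
// A plausible-looking validator an AI assistant might generate.
// It compiles cleanly and passes a casual happy-path check...
function isValidEmail(email: string): boolean {
  return /^[a-zA-Z0-9]+@[a-zA-Z0-9]+\.[a-zA-Z]{2,3}$/.test(email);
}

isValidEmail("user@example.com");       // true  — looks correct
// ...but it fails silently on perfectly valid addresses:
isValidEmail("first.last@example.com"); // false — dots in the local part rejected
isValidEmail("user@mail.example.com");  // false — subdomains rejected
isValidEmail("user@example.info");      // false — TLDs longer than 3 chars rejected
```

Nothing here throws or logs. The failure only surfaces when a real user with a dotted address can’t sign up.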
In fact, a recent randomized study found that experienced developers using AI tools actually took 19% longer to complete tasks, even though they believed they were going faster. That perception gap is part of the problem. AI feels productive, even when it introduces complexity that only surfaces later.
It’s no surprise, then, that 45% of developers say they spend more time debugging AI-generated code than they would writing it from scratch. And it’s not just about output. Developers are frustrated by the AI’s inability to understand the bigger picture: edge cases, architecture, integration, security. In other words: business logic.
So when Palmer says “text to business,” the gap is hard to miss. Today’s AI tools can’t reliably generate business logic. But they’re being pitched as engines that can run the business itself.
Vertical Integration Over Trust
Palmer’s own journey shows a deep awareness of these limitations. At Vercel, he built V0, one of the most visible “text to app” products on the market. But it didn’t start by trying to build full-stack systems. It started with a constraint: generate clean HTML and Tailwind CSS. Not even full React apps at first, just UI markup.
Why the limitation? Because AI gets brittle fast. The farther it reaches, the more unpredictable it becomes. By anchoring V0 to narrow output formats and tightly scoped interfaces, Palmer says they avoided the trap of “random acts of AI.” As he put it: “We had one rule, which was that no random acts of AI, no slop, it had to be pretty good.”
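For a sense of what that narrow scope means in practice, here is an illustrative sketch (not actual V0 output; the component and its name are invented) of the kind of constrained target the tool started with: plain markup plus Tailwind utility classes, with no state, data fetching, or backend assumptions to get wrong.

```tsx
// Illustrative only — the kind of tightly scoped UI markup a constrained
// "text to UI" tool might emit. Assumes a React + Tailwind CSS setup.
export function SignupCard() {
  return (
    <div className="mx-auto max-w-sm rounded-lg border p-6 shadow-sm">
      <h2 className="text-lg font-semibold">Create an account</h2>
      <input
        type="email"
        placeholder="you@example.com"
        className="mt-4 w-full rounded border px-3 py-2"
      />
      <button className="mt-4 w-full rounded bg-black px-3 py-2 text-white">
        Sign up
      </button>
    </div>
  );
}
```

Output like this is easy to inspect and hard to get dangerously wrong, which is the point of the constraint.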
But as V0 grew, so did the ambition. It moved into full-stack code, then into “chat”-style app building. Palmer now envisions products like V0 becoming vertically integrated dev platforms, tightly coupling framework (Next.js), tooling (AI SDK), hosting (Vercel), and code generation (V0) into a seamless loop: “V0 will likely succeed because V0 is the vertical integration of coding framework, AI, editor and infrastructure.”
If developers don’t trust AI outputs in isolation, maybe they’ll trust them more inside curated, opinionated stacks.
But the Culture Hasn’t Shifted
Still, the Stack Overflow data suggests that even vertical integration won’t fix the underlying issue: developers want to understand what’s happening. They aren’t looking for shortcuts so much as clarity.
Developers still believe in the craft. The survey shows that 77% reject vibe coding. For most, that’s not software development. It’s gambling. Even Palmer seems to recognize this tension. He emphasizes “high-agency people”: those who raise their hands, get things wrong, and keep learning.
In the AI future he imagines, these people will be managing fleets of agents, overseeing orchestrators, and still writing the tricky pieces themselves. But if that’s true, it raises a question: is “text to business” really about removing developers? Or is it about giving them more leverage, while increasing their cognitive load?
Counterpoint: Trust in AI Coding Will Grow with Time and Tools
The concern that AI can’t be trusted to write production-grade code is not unfounded. Today’s AI systems occasionally produce hallucinations, misuse libraries, or miss edge cases, making them risky as standalone developers. But writing off AI’s role in software development entirely overlooks the pace of improvement, both in the models themselves and in the way humans work with them.
AI coding assistants are already advancing rapidly. With each generation, they’re getting better at understanding context, reasoning about state, and adhering to organizational style guides. Meanwhile, developer tooling is adapting in parallel, integrating AI more seamlessly into IDEs, CI/CD pipelines, and testing frameworks to catch errors earlier and automate guardrails.
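What those guardrails can look like is unglamorous but effective. Here is a minimal sketch, reusing the hypothetical isValidEmail validator from the earlier example (the import path is likewise invented): a handful of edge-case tests wired into CI, so that “almost right” output fails fast instead of failing in production.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";
// Hypothetical import — the validator from the earlier sketch.
import { isValidEmail } from "./email";

// Edge-case tests the happy path never exercises. Run in CI, they turn
// a silent production failure into a loud, immediate one.
test("accepts dotted local parts", () => {
  assert.equal(isValidEmail("first.last@example.com"), true);
});

test("accepts subdomains", () => {
  assert.equal(isValidEmail("user@mail.example.com"), true);
});

test("accepts long TLDs", () => {
  assert.equal(isValidEmail("user@example.info"), true);
});
```

None of this is novel. It’s the same discipline teams already apply to human-written code, pointed at a new and less predictable author.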
Crucially, human developers aren’t static in this equation either. As familiarity grows, teams are learning how to structure prompts, validate outputs, and use AI not as a black box but as a collaborator. Just as software engineers once adapted to version control or containerization, they’ll develop norms and workflows for AI pair programming.
AI dev tools aren’t going away. Nor is developer skepticism. Palmer is right that we’re heading toward more abstraction, bigger systems built from fewer keystrokes. But abstraction doesn’t mean automation. It means more is being hidden under the hood, and developers are the ones responsible for what happens when it breaks.