AIM Media House

“There Are No More Tech Moats” says Thoughtworks CDAO Shayan Mohanty

Why the consultancy is focusing on execution, not tools

Research suggests gen AI tools can significantly compress parts of the development lifecycle. McKinsey found developers using AI can complete tasks up to twice as fast, with complex tasks finished 25-30% more reliably, compared with traditional methods. A controlled trial of enterprise engineers showed roughly a 21% speed advantage for AI-assisted work.

Thoughtworks says this shift is forcing changes not only in software delivery but also in the economics behind it. “The pure time and materials world is kind of going away with AI,” Shayan Mohanty, the consultancy’s Chief Data and AI Officer, told AIM Media House. As AI shortens development timelines, billing models tied to hours worked become harder to sustain, particularly for large modernization projects.

Consulting firms have responded by building internal platforms to standardize how AI is applied to client work. Thoughtworks’ approach is AI/works, an agentic development platform embedded into how the firm delivers projects. Mohanty said Thoughtworks does not see this direction as unique to the company. “Whether it’s AI/works or someone else’s platform, if you look at the entire industry, this is the motion everyone is moving towards, a platform-centric services model,” he said.

Standardizing AI Across Enterprise Projects

Mohanty said the company built AI/works to standardize how AI is used across client engagements and to reduce variation in outputs as work scales. “In order for us to fully leverage AI for and with our clients, we needed ways to control it,” he said. “Ways where we can consistently produce roughly the same shape of artifact, employ roughly the same set of techniques, and continually improve on those things in a platform-centric way so that we can reap the benefits of scale.”

Enterprise use of generative AI has often stalled after pilot phases due to governance gaps and inconsistent results. Thoughtworks says it designed AI/works to reduce that variability by embedding AI into defined workflows rather than leaving adoption to individual teams.

“We’re not just using surface-level AI. We’re not just repurposing Claude Code for various things. We go extremely deep in the stack,” Mohanty said. That depth includes program analysis techniques such as abstract syntax trees and graph-based representations, as well as model interpretability research conducted by the firm’s AI research group.
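As a hedged illustration of the kind of AST-based program analysis Mohanty describes (not Thoughtworks’ actual tooling), Python’s standard `ast` module can turn source code into a tree and extract structural facts such as caller-to-callee edges. The sample source and function names below are invented for the example:

```python
import ast

# Illustrative legacy-style source; the function names are made up.
SOURCE = """
def fetch_balance(account_id):
    record = load_record(account_id)
    return compute_interest(record)
"""

def call_edges(source: str) -> list[tuple[str, str]]:
    """Extract (caller, callee) edges from top-level function definitions."""
    tree = ast.parse(source)
    edges = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for inner in ast.walk(node):
                # Only simple `name(...)` calls are captured in this sketch.
                if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                    edges.append((node.name, inner.func.id))
    return edges

# Prints caller→callee pairs such as ('fetch_balance', 'load_record')
print(call_edges(SOURCE))
```

Edges like these are the raw material for the graph-based representations the article mentions: once code is a graph rather than text, deterministic analysis can scope what an AI system is allowed to touch.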

“We have an entire AI research arm that does interpretability research, and we take a lot of those techniques and put them into our platform,” he said. The aim, Mohanty added, is to decompose engineering problems so that deterministic methods handle structure and constraints, while AI systems operate within clearly defined boundaries.

“This is critical for us,” Mohanty said. “But we also expect that the entire industry is going through roughly the same set of transformations right now.”

Reverse Engineering Before Rewriting

Legacy systems remain a major obstacle to enterprise AI adoption. Many large organizations continue to rely on mainframes and tightly coupled systems that encode decades of business logic. “They may have large mainframe implementations, and what they’re trying to do is modernize their mainframe to something that is actually sustainable and maintainable,” Mohanty said.

Modernization efforts have traditionally taken years and carried high execution risk, particularly when system behavior is poorly documented. Thoughtworks starts these engagements by reverse engineering existing systems before attempting to rebuild them.

“We can actually reverse engineer what a mainframe is doing by modeling its inputs and outputs,” Mohanty said. “We record a couple of traces through it, and then we try and create software that does exactly the same thing. Then we do parity testing between them.”
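The parity-testing idea Mohanty describes can be sketched in a few lines: replay recorded input/output pairs from the legacy system against a candidate rewrite and report any divergence. Everything below, from the trace format to the interest rule, is an invented stand-in, not the platform’s actual mechanism:

```python
def legacy_interest(balance_cents: int) -> int:
    # Stand-in for observed mainframe behavior: 2% interest, rounded down.
    return balance_cents * 102 // 100

def candidate_interest(balance_cents: int) -> int:
    # The rewritten implementation under test.
    return balance_cents * 102 // 100

# "Recorded traces": (input, output) pairs captured from the legacy system.
traces = [(b, legacy_interest(b)) for b in (0, 1, 99, 10_000, 123_456_789)]

def parity_report(candidate, traces):
    """Return every trace where the candidate disagrees with the recording."""
    return [(inp, expected, candidate(inp))
            for inp, expected in traces if candidate(inp) != expected]

print(parity_report(candidate_interest, traces))  # [] means parity on these traces
```

The key property of this approach is that it needs only observable behavior, which is why it still works when, as the next paragraph notes, only binaries remain.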

AI/works supports multiple reverse-engineering paths depending on what information is available. When source code exists, it can be converted into normalized representations and knowledge graphs that are easier to analyze. When only binaries remain, the platform relies on observed behavior and state changes to infer functionality.

This approach underpins what Thoughtworks calls a 3-3-3 delivery model. “Three days to prototype, three weeks to first cut, and three months to having something in production,” Mohanty said. For modernization efforts, the firm often applies what it calls minimally viable modernization, isolating a single workflow from a larger system to demonstrate feasibility.

“We carve out a workflow and say, ‘We’re going to modernize this piece,’ and we’ll show you how quickly we’re able to do it,” Mohanty said.

He added that the process varies depending on client conditions. “It’s not so much about one specific shape making it really hard to use AI,” he said. “It’s about choosing the right combination of AI and non-AI approaches, instead of just throwing an LLM at the problem.”

Governing Agent-Written Code

For enterprises in regulated industries, speed alone is insufficient. Governance, auditability, and accountability remain central concerns as agentic systems generate production code.

At the center of Thoughtworks’ approach is what it calls a “super spec.” “We call this a super spec because it includes not only what needs to be built, but how to build it,” Mohanty said. The document is designed to be reviewed by humans and consumed by machines, serving as the reference point for downstream automation.
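To make the dual human/machine audience concrete, a “super spec” entry might resemble the structure below. The field names and contents are purely illustrative assumptions; Thoughtworks has not published the actual format:

```python
# Hypothetical shape of a machine-consumable spec entry that a human can
# also review. Every field name here is an assumption for illustration.
super_spec = {
    "feature": "interest-calculation",
    "what": "Apply 2% monthly interest to savings balances",
    "how": {
        "constraints": [
            "no floating point for currency",
            "audit log on every balance write",
        ],
        "verification": ["parity with legacy traces", "property-based tests"],
    },
}

assert {"what", "how"} <= super_spec.keys()  # downstream automation can rely on both
```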

This structure is intended to keep human oversight in place. “The more valuable piece is multiplying the human’s capacity to do things,” Mohanty said. Regulatory and compliance requirements are integrated into both code generation and runtime environments. “We catch it on both sides, not only at code generation, but also the runtime,” he said.

To manage accountability when agents generate code, Thoughtworks relies on testing tied directly to specifications. “It’s almost this adversarial model of test agents and code agents,” Mohanty said. One set of agents generates tests derived from the super spec, while another generates code that must satisfy those tests, including property-based testing across large input ranges.
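The property-based half of that adversarial loop can be sketched with the standard library alone: a property derived from a spec is checked across a large sampled input range. The formatting function, the round-trip property, and the sample size are all illustrative assumptions, not the platform’s actual tests:

```python
import random

def normalize_amount(cents: int) -> str:
    """Code-agent output under test: format integer cents as a dollar string."""
    sign = "-" if cents < 0 else ""
    cents = abs(cents)
    return f"{sign}{cents // 100}.{cents % 100:02d}"

def property_round_trip(cents: int) -> bool:
    """Spec-derived property: parsing the formatted string recovers the value."""
    dollars, frac = normalize_amount(cents).split(".")
    return int(dollars) * 100 + (-1 if cents < 0 else 1) * int(frac) == cents

random.seed(0)  # deterministic sampling for the sketch
failures = [c for c in (random.randint(-10**9, 10**9) for _ in range(10_000))
            if not property_round_trip(c)]
print(len(failures))  # 0: the property holds across this sampled range
```

In the adversarial framing, one agent’s job is to find inputs that land in `failures`, while the other’s job is to keep that list empty; dedicated libraries such as Hypothesis add shrinking and smarter input generation on top of this basic pattern.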

Defining accuracy in this context is difficult, Mohanty said. “It’s really difficult to define accuracy in net-new code generation,” he said, noting that outcomes depend on language maturity, environment constraints, and system complexity. Even so, he said the approach has produced consistent results. “Generally speaking, our error rates tend to be very low, single-digit percentages.”

As AI capabilities spread across the industry, Mohanty does not expect technological differentiation alone to last. “There are no more tech moats,” he said. “There are only execution moats.”

For Thoughtworks, that means continuing to integrate AI into how work is delivered and governed rather than treating it as a standalone offering. “Because we deeply understand the technology,” Mohanty said, “we have high confidence that we’re going to continue innovating at a pace our competitors can’t keep up with.”

Key Takeaways

  • AI significantly accelerates software development, with developers completing tasks up to twice as fast.
  • The rise of AI is ending traditional “time and materials” billing models for software development.
  • Consulting firms are adopting platform-centric models to standardize and scale AI application in projects.
  • Platforms like Thoughtworks' AI/works aim to ensure consistent quality and leverage AI effectively.