In a recent interview with Citadel Securities, Nvidia CEO Jensen Huang said that “100% of the company’s engineers now use the AI tool Cursor.” He described the adoption as part of a vision where “workforces in enterprise will be a combination of humans and digital humans,” crediting Cursor with delivering “productivity gains” and “better work.” Huang also listed Cursor among six AI startups he considers central to this new model, alongside OpenAI, Harvey, OpenEvidence, Replit, and Lovable.
While Huang’s remarks reflected strong endorsement from one of the world’s most valuable technology companies, the broader industry’s experience shows that adoption of tools like Cursor is far from frictionless. At San Francisco–based Mixus, resistance emerged almost immediately. Two software engineers declined to follow instructions to rely heavily on Cursor, convinced they could outperform the AI assistant.
Speaking to AIM Media House, co-founder Shai Magzimof was blunt. “Engineering teams should make Cursor (or an equivalent AI IDE) a daily, measurable standard: target 70–80% AI-assisted coding time per engineer, and track usage alongside outcomes (stories closed, PR cycle time, post-merge defects, coverage deltas). Treat these metrics as part of performance and team health reviews.”
With just five employees, the standoff quickly became a major obstacle. Magzimof ultimately dismissed both engineers, one after only a week on the job. He attributed the pushback to ego. “If top-model access is constrained, prioritize budget toward enterprise Cursor credits and training so the team can consistently ship faster with higher quality backed by guardrails like human sign-off, secure-coding prompts, and audit logs of accepted vs. rejected AI suggestions. The goal isn’t to cut people—it’s to elevate people,” he said.
AI in Coding
Coding assistants have quickly become one of the most prominent applications of large language models. OpenAI and Anthropic continue to release new tools, while Cursor’s parent company Anysphere has seen its valuation climb nearly twelvefold in a year, reaching around $30 billion in recent private share sales.
Top executives have emphasized the transformation. In March, Anthropic CEO Dario Amodei predicted that AI would write 90% of new code by September. Microsoft’s Satya Nadella, Alphabet’s Sundar Pichai, and Salesforce’s Marc Benioff have each said AI already generates between 20% and 50% of their companies’ code.
Startups are adopting the tools as well. At Ramp, a fintech company, leadership has strongly encouraged engineers to use AI coding assistants. One employee noted that Ramp even maintains an internal leaderboard of “Claude Code power users” to measure usage among staff.
Concerns About Quality
Similar debates exist at other companies. At Massachusetts-based Liquid AI, co-founder and CEO Ramin Hasani said AI would slow down engineers working on kernels, the software that governs how chips perform the mathematical operations that run AI models. “Every line of code is important in designing kernels to make the software as efficient as possible,” Hasani said. “You cannot really give this job to an AI today.”
At Brightpick, which develops warehouse robotics, co-founder and CEO Jan Zizka observed that some experienced engineers were the most reluctant to use AI coding tools. “There are programmers—mainly, I think, older programmers—who are not so flexible, who are very much attached to the technology and to the code,” he said. He pointed to robot control software as an area where such engineers insist on direct oversight.
The Technical Limits
AI assistants are also constrained by technical boundaries such as limited context windows, which restrict the amount of code a model can analyze at once. For engineers working across large codebases, this limitation can make the tools impractical.
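The scale of the problem is easy to illustrate with back-of-the-envelope arithmetic. The sketch below is a hypothetical example (the file counts, window size, and the rough 4-characters-per-token heuristic are all illustrative assumptions, not figures from any specific model): it estimates whether an entire codebase could be handed to a model in one request.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token estimate; source code averages roughly 4 chars/token,
    though real tokenizers vary by language and style."""
    return int(len(text) / chars_per_token)


def fits_in_context(files: dict[str, str], context_window: int) -> bool:
    """Check whether concatenating every file would fit in the model's
    context window in a single request."""
    total = sum(estimate_tokens(src) for src in files.values())
    return total <= context_window


# A toy "codebase": 300 files of ~4,000 characters each (~1,000 tokens apiece).
codebase = {f"module_{i}.py": "x" * 4000 for i in range(300)}

# ~300,000 estimated tokens against a hypothetical 200,000-token window.
print(fits_in_context(codebase, context_window=200_000))  # False
```

Even this modest repository overshoots the window by half, which is why tools must fall back on retrieval or chunking heuristics, and why engineers on large codebases can find the results incomplete.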
By contrast, non-engineering staff sometimes find the tools especially useful. At Dexterity, a robotics company, product managers have used AI coding assistants to create simulations for potential customers. “They can quickly close some deals because of these simulations,” said founding engineer Rob Sun.
The risks increase when inexperienced programmers rely too heavily on AI. One roboticist described hiring a college student who worked primarily in Cursor. After the student left, it took other engineers two months to understand the AI-generated code. The company concluded it would have been better off hiring a more experienced engineer from the start.
According to multiple engineers, junior programmers are particularly likely to accept buggy code because they may not be able to judge whether AI’s output is correct.
Human Concerns About Obsolescence
The pushback is not only technical. For many developers, AI coding tools raise questions about the future of their profession. Software engineering has long been regarded as a secure, high-demand career, but some fear that Cursor and similar platforms could reduce the need for human coders.
Igor Ostrovsky, co-founder of Mountain View–based Augment Code, which builds software to complement AI coding assistants, described what some engineers are experiencing as “a bit of a crisis.” He summarized the core worry: “If AI can write excellent code, what does that mean for my value as a human?”
Research Shows Productivity Trade-offs
New research highlights how these concerns may not be unfounded. The nonprofit Model Evaluation and Threat Research (METR) studied 16 experienced developers tasked with fixing 136 issues in open-source repositories, paying them $150 per hour. Developers were recorded across 146 hours of work. Some used AI tools, mainly Cursor with Sonnet 3.5 or 3.7, while others did not.
The results were unexpected. Developers using AI took 19% longer to complete tasks than those without AI. Before the study, participants estimated AI would make them 24% faster. Even after the slowdown, they still believed it had sped them up by 20%.
Analysis showed that AI reduced time spent writing code, researching, and testing. But those gains were offset by delays waiting for AI, reviewing its outputs, prompting the AI, and dealing with “IDE overhead.” Only one developer, who had previously logged more than 50 hours on Cursor, achieved a speed increase, finishing 38% faster.
The researchers concluded: “Both experts and developers drastically overestimate the usefulness of AI on developer productivity, even after they have spent many hours using the tools.”
Despite these findings, some executives are exploring AI coding tools themselves. Klarna CEO Sebastian Siemiatkowski spoke on the Sourcery podcast about using Cursor to prototype features, despite not being a developer by training. “Rather than disturbing my poor engineers and product people with what is half good ideas and half bad ideas, now I test it myself,” he said. “I come say, ‘Look, I’ve actually made this work, this is how it works, what do you think, could we do it this way?’”
His approach reflects a wider industry pattern where non-technical staff use AI to explore concepts, though others worry that engineers then spend more time reviewing prototypes. A survey by Fastly found that 95% of developers spend extra time resolving issues in AI-generated code.
From Advantage to Baseline
For enterprise buyers, adoption patterns are influenced less by speed than by security and compliance. A VentureBeat analysis of 86 engineering teams found that GitHub Copilot dominates large organizations with 82% adoption, while Anthropic’s Claude Code leads overall usage at 53%. Faster tools such as Replit and Lovable remain less common because procurement teams prioritize deployment flexibility and compliance over raw performance.
The report concluded that AI coding tools have moved past novelty. “The fact that 30% or 50% or even 70% of your code is written by AI now no longer really matters,” the analysis stated. “They are now just tools. Tools we pay $1B+ for, gladly. Disruptive tools. But just tools in everyone’s toolkit now.”
From Nvidia’s full adoption of Cursor to startups grappling with internal resistance, AI coding assistants have become unavoidable. Every engineering team now faces the same reality: these systems are in the toolkit, whether embraced enthusiastically or used cautiously. The debate is less about whether engineers are banning Cursor and more about how companies balance the speed, quality, and risks that come with it.