Cognitive scientist Gary Marcus recently reposted a message from an engineer: “We fired our first AI agent today. Bye bye CodeRabbit.”
According to the engineer, CodeRabbit had remained quiet for months before suddenly flooding pull requests with nitpicks. Its feedback often took longer to scroll through than the code changes it was supposed to review. In six months, it had flagged only one meaningful issue, while human reviewers caught hundreds. Instead of saving time, the agent had become another burden to manage.
Companies are now making public both their adoption and their dismissal of AI agents. Some are creating new job titles dedicated entirely to building them. Others are tying agents directly to workforce cuts. And in several cases, firms have banned their use outright because of security risks.
The Rise of Strange New Jobs
While some companies are pulling back from ineffective AI agents, others are inventing entirely new roles to build and manage them. Job postings show just how unusual these roles have become.
At GenAI Tech EA, a listing for a “Vibe Coder in Residence” required the candidate to shadow the company’s Vice President, track their workflow, and build a digital twin capable of automating tasks. The position came with explicit metrics: five production-ready agents within 30 days and a 30 percent reduction in the executive’s calendar load within 90. The job description detailed responsibilities ranging from attending meetings and reviewing documents to chaining agents together, optimizing schedules, and producing dashboards that tracked adoption and minutes saved.
Other postings reveal the same pattern. A “Vibe Growth Marketer” role focused on designing AI humans that could function as therapists, trainers, or coaches. A “Rapid Prototyper” was expected to use AI tools and no-code platforms to ship prototypes faster than entire teams. A director-level role in “AI Agentic Experience Design” emphasized giving agents personalities customers could connect with, blending engineering skills with human-centered design.
The common thread is clear: entire job categories are emerging around building the very agents that companies are also beginning to sideline.
Klarna’s Cuts and Profits
Among large enterprises, few have spoken as openly about AI’s workforce impact as Klarna. The Swedish fintech has reduced its employee base from 7,400 to around 3,000. Chief executive Sebastian Siemiatkowski has attributed much of that shift to the company’s deployment of an AI chatbot.
The system now manages two-thirds of all customer service interactions, performing the equivalent work of roughly 700 employees. According to Klarna, the chatbot generated a $40 million profit improvement in its first year. The company redirected savings into higher wages for remaining staff, keeping payroll stable while headcount fell. Siemiatkowski has said the change reflects a broader trend that will affect knowledge work far beyond payments, including fields such as translation.
Salesforce Turns to Automation
Salesforce has also made significant workforce changes tied to AI. Earlier this year, chief executive Marc Benioff confirmed that the company cut 4,000 customer service jobs after shifting half of all customer conversations to AI agents. Speaking on a podcast, Benioff said directly, “I need less heads,” marking a sharp break from his earlier assurances that AI would augment, not replace, white-collar employees.
Palo Alto Networks Bans Agents Outright
Not every organization is leaning in. Palo Alto Networks, one of the world’s largest cybersecurity companies, has taken the opposite approach. The company has banned AI agents from joining web meetings altogether. In an internal presentation, the message was explicit: “No AI Agents are allowed.”
Company executives have also warned that so-called “agentic browsers,” software designed to act on behalf of users, will likely be prohibited across enterprises within two years. The concern is less about productivity and more about risk. For an AI agent to work well, it often requires access to corporate credentials, tokens, and even payment systems. That level of access has already made agents a new target for attackers.
IBM data published this year illustrates the problem: 13 percent of organizations surveyed had experienced a breach through their AI systems, while another 8 percent said they did not know if they had been compromised. Of those that had been breached, 97 percent lacked proper AI access controls.
The vulnerabilities are not hypothetical. Hackers exploited Salesloft’s Drift AI chat agent earlier this year, compromising OAuth tokens and exposing Salesforce data across more than 700 organizations. What was intended as a layer of efficiency quickly became a gateway for attackers.
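None of these companies have published the controls they use internally, but IBM’s finding that most breached organizations lacked proper AI access controls points at a concrete gap. As a rough, illustrative sketch only, assuming a hypothetical policy of registered agent identities, read-only scopes, and short-lived tokens, a gatekeeping check might look something like this:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative policy: agents may only use tokens that are registered,
# narrowly scoped, and short-lived. All names and limits here are invented
# for the example, not drawn from any company mentioned above.
ALLOWED_AGENT_SCOPES = {"crm.read", "calendar.read"}   # read-only by default
MAX_TOKEN_LIFETIME = timedelta(hours=1)                # force frequent rotation

@dataclass
class AgentToken:
    agent_id: str
    scopes: set[str]
    issued_at: datetime
    expires_at: datetime

def violations(token: AgentToken, registered_agents: set[str]) -> list[str]:
    """Return policy violations for a token; an empty list means it passes."""
    problems = []
    if token.agent_id not in registered_agents:
        problems.append(f"unregistered agent: {token.agent_id}")
    excess = token.scopes - ALLOWED_AGENT_SCOPES
    if excess:
        problems.append(f"over-broad scopes: {sorted(excess)}")
    if token.expires_at - token.issued_at > MAX_TOKEN_LIFETIME:
        problems.append("token lifetime exceeds policy maximum")
    if token.expires_at < datetime.now(timezone.utc):
        problems.append("token already expired")
    return problems

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    token = AgentToken(
        agent_id="support-bot-7",
        scopes={"crm.read", "payments.write"},  # write access should be rejected
        issued_at=now,
        expires_at=now + timedelta(hours=8),    # lifetime exceeds the 1-hour cap
    )
    for problem in violations(token, registered_agents={"support-bot-7"}):
        print("DENY:", problem)
```

The scope names, lifetime cap, and agent registry above are hypothetical; the point is that none of these checks are exotic, and according to IBM, the overwhelming majority of breached organizations had nothing comparable in place.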
Agents on Trial
Taken together, the record is mixed. Klarna credits its customer service chatbot with handling two-thirds of inquiries, doing the work of about 700 people and driving an estimated $40 million in profit improvement. Salesforce cut 4,000 customer service jobs after shifting half of its customer conversations to AI agents, with Benioff saying bluntly, “I need less heads.” Palo Alto Networks has barred AI agents from web meetings company-wide, warning of the risks of granting them corporate credentials and tokens.
IBM’s survey found that 13 percent of organizations had already been breached through their AI systems, while another 8 percent could not say whether they had been. And one engineering team publicly dropped its CodeRabbit agent after it caught a single legitimate issue in six months while creating extra work for reviewers.
The verdicts span a wide spectrum. At one end, enterprises like Klarna and Salesforce link AI agents directly to profit gains and staff reductions. At the other, organizations such as Palo Alto Networks restrict their use entirely, and smaller engineering teams abandon them when they fail to deliver consistent value.