Use AI or Get Fired: A New Crisis in Corporate Leadership?

Accenture and IgniteTech push workers to master AI, firing those who can't keep up.

Corporate America has spent the past few years whispering about AI’s potential to replace workers. That whisper is finally becoming reality. Julie Sweet, CEO of consulting giant Accenture, recently told investors that the company is “exiting” employees who can’t master AI tools, regardless of their past performance or institutional knowledge.

Imagine you’ve navigated market crashes, managed critical client relationships, and built institutional knowledge that can’t be found in any database. Your manager still tells you that none of it matters anymore. 

You’re being let go because you haven’t embraced ChatGPT with sufficient enthusiasm. At Accenture, 70% of its 779,000 employees have received generative AI training. The remaining 30%? Those deemed “untrainable” face termination.

At IgniteTech, CEO Eric Vaughan went further, eliminating nearly 80% of his workforce over what he viewed as insufficient AI enthusiasm. This is not workforce optimization. It’s institutional panic disguised as strategic necessity.

Sweet and Vaughan are betting billions that human judgment and experience are negotiable commodities that can easily be replaced by someone more enthusiastic about AI. They’re wrong. And their approach reveals something far more troubling: a fundamental misunderstanding of how to actually build competitive advantage in an AI-driven economy.

The False Binary 

The “use AI or you’re fired” narrative creates a false and dangerous separation. It assumes that AI adoption is binary: either you’re enthusiastically using the technology, or you’re irrelevant. Reality is far more nuanced.

A skilled litigator who questions whether AI adequately handles case strategy doesn’t need firing; they need thoughtful integration of AI tools into their existing expertise. An experienced engineer who raises concerns about AI hallucinations in critical code isn’t “resisting transformation.” They’re demonstrating the kind of skepticism that prevents disasters.

A Gallup survey reveals that more than 40% of U.S. workers who don’t use AI cite the same primary reason: they don’t believe it can actually help their work. This isn’t obstinacy. It’s informed skepticism.

MIT researchers reviewing more than 300 AI initiatives found only 5% delivering quantifiable value. When researchers at major institutions question AI’s current utility, suggesting that rank-and-file employees are wrong to harbor doubts is strategically foolish.

Yet Sweet’s framing positions doubt as disqualifying. Those who aren’t “getting the hang of” AI are treated as obsolete rather than potentially cautious. Greg Coyle, IgniteTech’s chief product officer, was fired after raising concerns about the “brute force” culling of talented staff at an early stage of AI development.

His sin? Suggesting that rapid, wholesale workforce replacement based on an emerging technology might represent unacceptable business risk. That’s not resistance. That’s prudent risk management.

Sweet’s rhetoric, that employees must “retrain and retool” or exit, masks a deeper corporate failure: the inability to develop genuine AI competency across organizations. Training 70% of employees in “generative AI fundamentals” is not the same as developing deep, contextual expertise. Yet Accenture’s strategy conflates the two. The company is essentially saying that if surface-level training hasn’t transformed your work, you’re irredeemable.

This approach has costs that balance sheets don’t immediately capture. Institutional memory walks out the door. Domain expertise accumulated over decades gets replaced by enthusiasm for new tools. 

The litigator with 15 years of case strategy knows patterns that no prompt can capture. The operations manager who has navigated industry crises brings judgment that AI training courses cannot impart.

IgniteTech’s experience illustrates this danger. Vaughan required employees to spend 20% of their time on “AI Mondays,” dedicating entire workdays exclusively to AI projects. 

“Every single Monday was called ‘AI Monday.’ You couldn’t have customer calls; you couldn’t work on budgets; you had to only work on AI projects,” Vaughan said.

When a chief product officer with years of institutional knowledge resisted, he was terminated. IgniteTech may have achieved 75% EBITDA margins after the purge, but at what hidden cost? The company replaced experience with enthusiasm.

Misalignment of Incentives 

Both Sweet and Vaughan frame their decisions as necessary investments in the future. But there’s a critical incentive misalignment. When executives declare that AI adoption is existential and simultaneously terminate those who question the implementation approach, they’re not encouraging thoughtful integration. They’re instilling fear.

Research from the Writer AI platform found that one in three employees admitted to “actively sabotaging” their company’s AI rollout. But this “sabotage” often stems from legitimate frustration: handing employees tools that don’t work and expecting enthusiasm is unreasonable. Firing these employees doesn’t solve the underlying problem. It just removes the voices identifying where implementation is failing.

McKinsey, KPMG, and PwC are taking different approaches, integrating AI into performance reviews and training pipelines rather than using it as a litmus test for termination. This is smarter. It acknowledges that AI adoption is gradual, contextual, and sometimes reveals genuine limitations of the technology. It shows that skepticism and innovation can coexist.

What’s most troubling about Sweet and Vaughan’s approach is the assumption that they understand AI’s role in their businesses better than the people actually doing the work. This is a classic executive blind spot.

When leaders declare technologies “existential” and then fire anyone questioning the implementation, they’re not driving transformation. They’re suppressing the very feedback mechanisms that reveal whether their strategy actually works.

Sweet claims that Accenture’s AI investments are “yielding returns.” But the company is simultaneously claiming that workforce reductions were necessary because existing staff couldn’t adapt. These statements are contradictory. 

If AI is delivering massive returns and the company continues to expect headcount growth, why is wholesale replacement necessary? The answer is that it’s not about AI’s productivity gains. It’s about margin expansion through labor reduction.

Vaughan frames his 80% replacement as culturally necessary. But culture isn’t built by eliminating dissent. It’s built by creating environments where people genuinely understand why change matters and where legitimate concerns surface before they become catastrophes. 

Firing thoughtful, experienced people for asking hard questions doesn’t create culture. It creates a compliance organization where fear substitutes for engagement.

The most thoughtful approach to AI adoption acknowledges a fundamental truth: the technology amplifies human capability rather than replacing human judgment. This requires patient integration, not panic-driven elimination. It means training people to use AI effectively within their existing expertise, not demanding they become AI specialists or leave.

Companies like Multiverse, an AI-focused education tech firm, take a different stance. Rather than firing employees for insufficient enthusiasm, they reward creative AI applications. They hire for “AI will, not just skill,” recognizing that mindset matters more than current proficiency. This approach builds genuine organizational transformation rather than enforcing compliance through fear.

Concentrix’s approach also offers lessons. Rather than pursuing mass terminations, it deployed AI strategically, having its 10 attorneys use the technology to redline contracts, freeing them to move into higher-value negotiation work.

This isn’t replacement. It’s augmentation. And it captures AI’s real value: freeing experienced professionals from routine tasks to focus on judgment-driven work.

Sachin Mohan
Sachin is a Senior Content Writer at AIM Media House. He is a tech enthusiast and holds a very keen interest in emerging technologies and how they fare in the current market. He can be reached at sachin.mohan@aimmediahouse.com