Citadel has rolled out a new AI assistant across its equities division. The tool processes filings, transcripts, and analyst reports at high speed. Reuters reported that “nearly all” of Citadel’s equities investors now use it as part of their daily workflow.
According to reports, the system is trained on licensed data, including brokerage research and management-team transcripts, and is built to surface themes and risks faster than an analyst could do manually.
Citadel describes the tool as optional, but almost every equities analyst relies on it. The firm’s chief technology officer, Umesh Subramanian, said, “We don’t want PMs offloading their human investment judgment to AI.”
The message is clear: Citadel is deploying AI aggressively, but it does not want AI making decisions.
That tension is central to understanding the firm’s position in 2025. Citadel is investing heavily in AI infrastructure even as its founder argues that AI cannot achieve the thing that matters most: edge.
AI as Infrastructure
Citadel CEO Ken Griffin has become one of the most vocal skeptics of AI’s ability to generate alpha. Speaking at the Robin Hood Investors Conference, he said generative AI “fails to help hedge funds produce alpha.”
On S&P Global’s “Leaders” podcast, he offered a more detailed version of the same thesis. “Your AI technologies read the internet… read books… read work that has been completed, but they’re not designed to project the future,” he said.
Griffin added that “Humans are far more able to project future outcomes than AI technologies.”
He used a concrete example. “If you’d asked a generative AI model in 2004 or 5, how would mobile impact e-commerce… you would have gotten nothing.” Two decades later, mobile commerce determines the survival of most retailers.
For Citadel, this presents a structural limit. AI can summarize what has happened. It cannot anticipate what happens next. And it cannot recognize regime shifts that lack historical precedent.
Academic and industry research supports this. Machine-learning models trained on historical financial data often fail under non-stationary conditions, particularly during macro shocks or volatility events.
This limits AI’s value for hedge funds that depend on differentiated forecasts.
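The failure mode described above can be illustrated with a toy simulation (this is an illustrative sketch, not Citadel’s methodology or any real trading model): a simple model fit on one market regime degrades sharply once the data-generating process shifts, because its learned relationship no longer holds.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_returns(n, beta, vol):
    """Simulate next-day returns as beta * signal + noise (hypothetical data)."""
    signal = rng.normal(0, 1, n)
    returns = beta * signal + rng.normal(0, vol, n)
    return signal, returns

# Regime A: stable signal-return relationship, low volatility.
x_train, y_train = make_returns(5000, beta=0.5, vol=0.1)
beta_hat = np.dot(x_train, y_train) / np.dot(x_train, x_train)  # OLS slope

# Regime B: the relationship flips sign and volatility spikes --
# a stylized "regime shift" with no precedent in the training data.
x_shift, y_shift = make_returns(5000, beta=-0.5, vol=0.5)

mse_in_regime = np.mean((y_train - beta_hat * x_train) ** 2)
mse_post_shift = np.mean((y_shift - beta_hat * x_shift) ** 2)

print(f"in-regime MSE:  {mse_in_regime:.3f}")
print(f"post-shift MSE: {mse_post_shift:.3f}")
```

The model’s error explodes after the shift not because it was badly fit, but because the historical pattern it extrapolates from no longer exists, which is precisely the non-stationarity problem the research literature flags.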
Subramanian’s comment (“We don’t want PMs offloading their judgment”) is consistent with this. Citadel considers its AI assistant a tool for speed and information processing, not a tool for investment decision-making.
Griffin reinforced this point on S&P Global’s podcast. “We still view, at the core, what we do at Citadel is we engage in research. When we… uncover a differentiated view from market consensus, that’s where we monetize our research.”
The AI assistant does not produce differentiation. It produces consistency. It compresses the workday and standardizes inputs, but it cannot identify the structural inflections that matter most.
Other hedge funds are in the same position. Business Insider reported that Point72, Balyasny, and other large multi-manager firms now use internal AI tools for research summarization. Their stated purpose is productivity, not automated alpha.
None of the major firms describe AI as a predictive system.
Caution Rooted in Risk and Regulation
Citadel’s stance is also shaped by risk-management and regulatory constraints.
The U.S. Securities and Exchange Commission has warned investment advisers that the use of AI and algorithmic tools introduces “model risk, data risk, and supervisory obligations.”
FINRA’s 2024 guidance requires firms to supervise the use of generative AI and maintain human oversight of any AI-assisted recommendations or analytics.
In Europe, the EU AI Act classifies financial-decision systems as “high risk,” subject to transparency, testing, and audit requirements.
Citadel cannot delegate portfolio decisions to a system that supervisors consider unverified or unexplainable.
The firm also emphasizes the fragility of model-driven environments. In the S&P interview, Griffin said, “Access to high quality data is really important. If you have a data vendor that doesn’t dot the I’s and cross the T’s… and there’s a mistake… it’s a huge setback.”
This concern reflects long-standing industry experience. Automated strategies tend to fail during regime shifts. The short-volatility collapse in 2018 (known as “Volmageddon”) remains one example of model assumptions breaking under stress.
Citadel’s internal posture mirrors these lessons. The firm uses AI to accelerate research, unify data sources, reduce repetition, and increase analyst coverage. It does not use AI to generate independent views or to automate trades.
Citadel’s 2025 results support that approach. Reuters reported that its flagship Wellington fund rose 8.3% year-to-date through November while the firm expanded its AI tooling. Performance has not depended on AI forecasts.
Citadel’s apparent contradiction, building an AI tool it does not trust with judgment, is in fact a coherent operating model. AI automates the inputs; humans own the conclusions. That is the clearest signal in hedge-fund AI adoption: the biggest firms treat AI as infrastructure, not intelligence.