Goldman Sachs Tests Agentic AI for Trade Surveillance

After decades of automating trades, Goldman tests AI agents for real-time surveillance.
Goldman Sachs has spent decades automating its trading floors: risk models, algorithmic execution, and quantitative strategies. Technology has always been central to how the bank competes. What is new is where the automation is pointing next: inward, into the compliance and surveillance functions that have historically resisted it.
Bloomberg reported that Goldman Sachs is exploring the deployment of agentic AI tools for trading surveillance, mainly to look for suspicious signals or movements in the market. A representative for Goldman declined to comment.
But the report landed three weeks after Goldman's CIO Marco Argenti told CNBC something more specific. The bank has spent six months co-developing autonomous AI agents with Anthropic, built on the Claude model, targeting trade accounting, client onboarding, and compliance, with employee surveillance explicitly named as a next step.
Many banks are already evaluating ways to integrate AI to save costs and improve efficiency. Currently, most trading surveillance is done using rule-based algorithms programmed to detect issues. When a trade exceeds a certain size, deviates from a benchmark, or fits a known risk pattern, it triggers an alert. Compliance teams then review the case manually.
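The rule-based approach described above can be pictured in a few lines of code. This is a minimal, hypothetical sketch: the thresholds, field names, and rule names are illustrative assumptions, not any bank's actual surveillance logic.

```python
# Hypothetical rule-based surveillance check; thresholds and trade
# fields are illustrative assumptions, not any bank's real rules.
from dataclasses import dataclass


@dataclass
class Trade:
    notional: float      # trade size in dollars
    benchmark_px: float  # reference price for the instrument
    exec_px: float       # price the trade executed at


MAX_NOTIONAL = 10_000_000         # size threshold (assumed)
MAX_BENCHMARK_DEVIATION = 0.02    # 2% allowed deviation (assumed)


def rule_based_alerts(trade: Trade) -> list[str]:
    """Return the names of any static rules this trade trips."""
    alerts = []
    if trade.notional > MAX_NOTIONAL:
        alerts.append("size_limit")
    deviation = abs(trade.exec_px - trade.benchmark_px) / trade.benchmark_px
    if deviation > MAX_BENCHMARK_DEVIATION:
        alerts.append("benchmark_deviation")
    return alerts  # non-empty => route to a compliance reviewer


# A large trade executed 3% away from its benchmark trips both rules.
print(rule_based_alerts(Trade(notional=15_000_000,
                              benchmark_px=100.0,
                              exec_px=103.0)))
```

Each rule fires independently on a single threshold, which is exactly why such systems generate high false-positive volumes: every tripped rule becomes a case for manual review.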
"Think of it as a digital co-worker for many of the professions within the firm that are scaled, are complex and very process intensive," Argenti said.
Implementing Agentic AI
Unlike AI chatbots that simply supply information, agentic AI is designed to plan and take action autonomously. In a trading context, this means the software can decide what data to examine next, compare multiple signals, and escalate findings without constant human input.
It might monitor order flows, price movements, communications metadata, and historical behavior to assess whether activity aligns with normal patterns.
The challenge with current systems is scale and complexity. Modern markets generate huge volumes of data across asset classes, time zones, and trading venues. Static rules can generate large numbers of false positives, while more subtle forms of manipulation may not match known patterns.
Agentic systems aim to go beyond that approach by examining trading behavior across multiple signals, comparing it with historical activity, and detecting unusual combinations of actions that might not trigger traditional rule-based alerts.
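One way to picture the difference from single-threshold rules: a monitor in this style might score several normalized signals together and escalate only unusual combinations. The signal names, scores, and threshold below are invented for illustration and do not describe any deployed system.

```python
# Illustrative multi-signal scoring; signal names, values, and the
# escalation threshold are invented for this example.
def combined_risk_score(signals: dict[str, float]) -> float:
    """Average several normalized anomaly signals, each in [0, 1]."""
    return sum(signals.values()) / len(signals)


def review(signals: dict[str, float], threshold: float = 0.6) -> str:
    """Escalate to a human reviewer only when the combined score is high."""
    if combined_risk_score(signals) >= threshold:
        return "escalate_to_compliance"  # human reviewer takes over
    return "log_and_continue"


# Each signal alone might sit below a typical single-rule trigger,
# but the combination is unusual enough to warrant escalation.
activity = {
    "order_flow_anomaly": 0.55,
    "price_move_anomaly": 0.65,
    "comms_metadata_anomaly": 0.70,
}
print(review(activity))  # escalate_to_compliance
```

The point of the sketch is the shape of the decision, not the arithmetic: combinations of individually unremarkable signals can cross a threshold that no single static rule would.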
The tools are not described as replacing compliance officers. They function as an additional layer of monitoring, surfacing cases that warrant closer human inspection.
Financial institutions operate under strict regulatory regimes, so accountability remains with human supervisors. The agent's role is to identify and organize information more effectively than static systems can.
Market Growth and Security
Global trade surveillance spending is rising. Grand View Research values the 2024 market at approximately $1.7 billion, with projections suggesting $5.2 billion by 2030, reflecting roughly 20 percent compound annual growth. Regulation remains the primary catalyst, though operational efficiency now rivals it as a driver.
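The growth rate implied by those figures can be checked directly: growing from $1.7 billion in 2024 to $5.2 billion in 2030 compounds over six years.

```python
# Sanity-check the reported figures: $1.7B (2024) to $5.2B (2030)
# over six years of compounding.
cagr = (5.2 / 1.7) ** (1 / 6) - 1
print(f"{cagr:.1%}")  # 20.5%
```

That works out to roughly 20.5 percent a year, consistent with the "roughly 20 percent" figure cited.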
Regulators in the United States and Europe have encouraged firms to improve monitoring of market abuse and manipulation. While rules do not mandate agentic AI, they require firms to maintain effective systems and controls.
Goldman Sachs has invested heavily in AI across its trading and risk systems in recent years. The surveillance effort extends that work into compliance. Many banks are using AI to monitor communications of traders, salespeople, and other client-facing staff.
Some are applying generative AI architectures to internal control functions. Banks retiring legacy rule engines expect cost savings and improved analytics.
At the same time, AI in compliance raises its own questions. Banks must ensure that models are explainable, that they do not introduce bias, and that they can withstand regulatory review. Model governance, data security, and audit trails remain central concerns. Regulation requires transparent models that supervisors can interrogate at any time.
However, large language model reasoning chains often appear opaque, challenging audit obligations. Additionally, agentic architectures expand attack surfaces through numerous service accounts and APIs.
According to Benny Porat, chief executive officer of Twine Security, agentic AI can introduce new vulnerabilities if not tightly controlled. If compromised, it could expose sensitive customer data, take unauthorized action such as revoking system access, or be unable to explain why it made a decision.
"It opens up to external systems and when left unchecked there's a risk that it will accidentally expose data," Porat said. "We spent decades refining how we hire and trust humans. AI agents? Most organizations are still figuring that out."
Goldman's move into agentic surveillance is not happening in isolation. It sits within a broader, deliberate AI build-out: six months of embedded Anthropic engineers, agents already live for trade accounting and client onboarding, and a leadership team publicly committed to a multiyear reorganization around generative AI. The surveillance effort is the next logical step in that sequence.
Key Takeaways
- Goldman Sachs is actively testing agentic AI for real-time trade surveillance to detect suspicious market activities.
- The bank co-developed autonomous AI agents with Anthropic, targeting compliance and employee surveillance.
- Agentic AI aims to automate complex, process-intensive tasks traditionally done manually by compliance teams.
- This move signifies a shift from rule-based algorithms to more sophisticated AI for financial compliance.