Your AI Security Tools Are Only as Good as the Humans Governing Them

AI is speeding up attacks. The bigger problem is that defenses are running on autopilot

Cyberattacks are rising, and artificial intelligence is making them faster.

IBM's 2026 X-Force Threat Intelligence Index reports a 44% rise in attacks targeting public-facing applications, and a 49% increase in active ransomware groups. Manufacturing is the most targeted sector worldwide for the fifth consecutive year.

Jim Gumbley, Business Information Security Officer at Thoughtworks, thinks the arms-race framing misses something important.

"Attackers aren't reinventing playbooks, they're speeding them up," he says to AIM Media House. The more pressing problem, in his view, is that organizations are deploying defensive tools faster than they understand how to govern them.

Speed is just the beginning

AI functions as a "force multiplier" on the attacker side. The techniques themselves are not new; exploiting unpatched systems and stealing credentials have been attacker staples for years. What AI changes is the speed and scale at which they can be executed.

Scanning for vulnerabilities across large attack surfaces, a task that once required significant time and skill, is becoming faster and cheaper. Legacy systems are particularly exposed. Organizations that have accumulated technology over decades, through growth and acquisitions, often have sprawling, poorly documented attack surfaces.

These systems are harder to patch and harder to defend. IBM's data reflects this: the most common entry points in 2025 were public-facing applications and stolen credentials, both areas where older, complex environments tend to be weakest.

The combination of AI tooling and commodity cybercrime services (ransomware-as-a-service platforms and exploit marketplaces that require little technical skill to use) has not yet reached critical mass.

Gumbley points to a historical parallel: an open-source SQL injection tool, built for defensive security research, sat freely available for five years before someone connected it to a large-scale attack on telecommunications companies. When that kind of convergence happens with AI, the organizations without clear visibility into their own systems will be the most exposed. "There's nothing to say it's not going to get worse," he says.

U.S. enterprises are responding. Organizations are exploring AI across zero-trust frameworks, vulnerability management and attack-surface monitoring.

But according to ISC2's 2025 Cybersecurity Workforce Study, nearly 90% of security professionals said their organization experienced at least one significant cybersecurity incident due to skills shortages. Tools, in short, are arriving faster than the organizational frameworks needed to govern them.

More tools, less clarity

According to Deloitte's 2026 State of AI in the Enterprise report, only one in five companies has a mature governance model for autonomous AI agents, even as their use is set to rise sharply.

Organizations often cannot clearly define what their systems should do, under what conditions, and what the consequences of failure look like. Without that clarity, controls cannot be designed well, and AI-powered defenses become hard to evaluate and harder to trust.

In systems built on large language models, malicious instructions hidden in external content can hijack an AI agent's behavior, a vulnerability known as prompt injection.

"You could have either data exfiltration as a risk, or you could have actions being taken unintentionally," Gumbley says. OpenAI has acknowledged that prompt injection in agentic systems is unlikely to ever be fully solved.

In February 2026, security researchers documented the first known case of infostealer malware stealing credentials directly from an AI agent framework, giving attackers access to cloud services, encryption keys and private conversation logs.

The incident confirmed that AI agents are now targets in their own right, and ones attackers are actively learning to exploit.

What machines can't decide

Gumbley separates governance from computation.

"The governance part is knowing what the system should do under what conditions. And as far as I understand, that's still a completely human problem," he says. Computation follows: automating controls and implementation once that human judgment is in place.

Treating governance as something that can be installed and switched on is where organizations go wrong. A simple public-facing website with no sensitive data carries low risk and needs proportionate controls. A financial system that can authorize transfers, or a healthcare platform handling patient data, requires a fundamentally different level of scrutiny.
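
To make that proportionality concrete, here is a hypothetical sketch of risk-tiered controls; the tiers, attributes and control names are invented for this example rather than taken from any specific standard:

```python
# Hypothetical sketch of risk-proportionate controls. The tiers and
# control names are invented for this example, not taken from any
# specific standard or framework.

CONTROL_TIERS = {
    "low":      ["patching", "basic monitoring"],
    "elevated": ["patching", "monitoring", "access reviews", "pen testing"],
    "critical": ["patching", "monitoring", "access reviews", "pen testing",
                 "human approval for privileged actions", "audit logging"],
}

def risk_tier(handles_sensitive_data: bool, can_take_actions: bool) -> str:
    # The classification is the human judgment: what the system touches,
    # and what it is allowed to do on its own.
    if can_take_actions:
        return "critical"   # e.g. a system that can authorize transfers
    if handles_sensitive_data:
        return "elevated"   # e.g. a platform handling patient data
    return "low"            # e.g. a public-facing site with no sensitive data

tier = risk_tier(handles_sensitive_data=True, can_take_actions=True)
print(tier, "->", CONTROL_TIERS[tier])
```

The lookup itself is trivial to automate. Deciding what belongs in each tier, and which systems sit where, is the human judgment Gumbley is describing.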

The professionals best positioned for what comes next, Gumbley says, are those with expertise across both governance and cybersecurity.

Generating code is increasingly something machines can do. Understanding what a system should do, and the consequences when it falls short, remains firmly in human hands.