In 2020, the SolarWinds “Sunburst” attack demonstrated how a trusted software update could become a global entry point for espionage. The compromised Orion monitoring platform reached as many as 18,000 organizations, including U.S. government agencies and Fortune 500 firms. Investigators later described it as one of the most sophisticated supply-chain intrusions ever uncovered.
Five years later, SolarWinds is again in the spotlight. This week, the company released a new AI agent designed to automate incident response, summarize system alerts, and recommend remediation actions across its observability and IT service management products.
Rebuilding trust through design
Since 2020, SolarWinds has rebuilt its engineering pipeline around what it calls Secure by Design, a program emphasizing parallel build environments, code-signing, and stronger access controls. The company says independent audits have validated those controls, and in mid-2025, it reached a preliminary agreement with the U.S. Securities and Exchange Commission to settle litigation over its breach disclosures.
The new AI system is governed under a complementary framework called AI by Design, which SolarWinds describes as incorporating human-in-the-loop oversight, traceability of decision chains, and continuous logging. Product documentation states that all AI actions are “recorded, traced, and auditable,” and that human operators retain approval authority for system-level changes.
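Controls like these typically pair an approval gate with an append-only audit trail. The following is a minimal illustrative sketch of that pattern, with hypothetical names throughout; it is not SolarWinds’ actual implementation, which the company has not published.

```python
# Illustrative sketch of human-in-the-loop oversight with an audit trail.
# All names here (AuditLog, execute_action) are hypothetical, not SolarWinds code.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, event: str, detail: str) -> None:
        # Each decision is appended with a UTC timestamp so it can be traced later.
        self.entries.append((datetime.now(timezone.utc).isoformat(), event, detail))

def execute_action(action: str, system_level: bool, approver, log: AuditLog) -> bool:
    """Run an agent-proposed action; system-level changes require human approval."""
    log.record("proposed", action)
    if system_level and not approver(action):
        log.record("rejected", action)
        return False
    log.record("executed", action)
    return True

log = AuditLog()
# A routine alert summary runs unattended; a service restart needs human sign-off.
execute_action("summarize-alerts", system_level=False, approver=lambda a: False, log=log)
execute_action("restart-service", system_level=True, approver=lambda a: True, log=log)
```

The point of the pattern is that the agent never gains unilateral authority over system-level changes: the approval callback sits between proposal and execution, and every branch leaves a log entry.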
New risks in autonomous systems
The move toward AI-driven automation comes amid growing concern about the security of enterprise AI infrastructure. A 2025 report from Infosys found that 95% of surveyed executives said their organizations had faced AI-related incidents in the past two years, ranging from data exposure to model misconfiguration.
Tenable’s global survey of security professionals reached similar conclusions: more than one-third of respondents said AI-related incidents stemmed from misconfigured cloud services, excessive permissions, or unsanctioned “shadow AI” deployments.
Recent incidents have shown how quickly AI systems can become new attack surfaces. In June 2025, researchers disclosed a zero-click prompt-injection exploit dubbed “EchoLeak” that could exfiltrate data from Microsoft 365 Copilot through manipulated context prompts. Earlier in the year, a misconfigured database exposed more than a million records from Chinese AI startup DeepSeek, including chat histories and API keys.
Even underlying AI infrastructure has proven vulnerable. In August 2025, NVIDIA disclosed multiple critical flaws in its Triton Inference Server that allowed remote code execution on both Windows and Linux systems.
These examples underscore the challenges SolarWinds and similar vendors face: autonomous agents amplify both capability and exposure. The same privileges that allow an AI agent to automate remediation could also enable wide-scale impact if the agent itself were compromised.
Transparency and verification
SolarWinds says the AI Agent was developed under the same security oversight used for its rebuilt software pipeline. The company reports that its products undergo independent audits of their build environments and penetration testing, though as of October 2025 it has not published a third-party red-team report or external assessment focused specifically on the AI Agent.
Cybersecurity experts have cautioned that procedural controls alone are insufficient without sustained cultural and external validation. Former CISA Director Jen Easterly framed cybersecurity as a software-quality problem, arguing that decades of insecure products have created a large defensive industry: “We have a multi-billion dollar cybersecurity industry because for decades, technology vendors have been allowed to create defective, insecure, flawed software.”
That view is consistent with the findings of “Cyber Hard Problems: Focused Steps Toward a Resilient Digital Future (2025)”, a consensus study from the National Academies of Sciences, Engineering, and Medicine. The report identifies systemic challenges in cybersecurity, including secure development, supply chain integrity, and institutional trust, and calls for ongoing community coordination and external oversight as foundations for long-term resilience.
Federal policy that followed the SolarWinds breach has embedded these expectations into software governance. The U.S. Executive Order on Improving the Nation’s Cybersecurity (2021) requires suppliers to the federal government to provide software bills of materials (SBOMs) and secure-development attestations, codifying a model of verifiable trust rather than declared assurance.
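In practice, an SBOM is a machine-readable inventory of a product’s components. A minimal fragment in the widely used CycloneDX JSON format looks like the following; the component name, version, and hash here are invented for illustration.

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "example-logging-lib",
      "version": "2.4.1",
      "purl": "pkg:npm/example-logging-lib@2.4.1",
      "hashes": [
        { "alg": "SHA-256", "content": "…" }
      ]
    }
  ]
}
```

Because each entry carries a package identifier and cryptographic hash, downstream customers can independently verify what shipped, which is what distinguishes attestation from assertion.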
A new reminder of SolarWinds’ lingering security challenges arrived last month, when the company disclosed a remote code execution (RCE) flaw in its Web Help Desk software that bypasses two earlier fixes for related deserialization bugs. It issued a hotfix and urged customers to patch immediately.
Although there is no evidence of active exploitation, previous variants of the same flaw were added to CISA’s Known Exploited Vulnerabilities catalog. Ryan Dewhurst of watchTowr said the recurrence “serves as a warning from history” for a vendor still closely associated with supply-chain risk.
The legacy of Sunburst continues to shape how SolarWinds operates. Federal reports credit the incident with accelerating industry-wide adoption of secure-build practices and dependency mapping. Whether the company’s new AI framework withstands the same scrutiny its rebuilt software pipeline now faces will depend on openness. For the industry as a whole, publishing third-party test results, SBOM attestations, and audit summaries would demonstrate that the principles of “Secure by Design” are verifiable.