When Adobe unveiled its latest suite of artificial intelligence tools this week, it emphasized automation that preserves brand voice. Among the updates is a new business-to-business (B2B) configuration of its AI agents. First introduced for general customer experience management, the agents have been adapted in the October release to handle the complexities of B2B sales, including multi-person buying committees, diverse decision-making processes, and longer sales cycles.
The Audience Agent in Journey Optimizer B2B Edition identifies key decision-makers and high-value buying group personas using structured and unstructured customer data. The Journey Agent then runs multi-channel campaigns, while the Data Insights Agent helps marketers visualize trends and optimize experiences. Adobe frames these agents as collaborators for human marketers, accelerating deal closures and maintaining brand consistency, but past controversies over AI training and content use raise the question of whether automation can truly respect creative and brand autonomy.
Industry analysts see these agentic tools as Adobe’s attempt to make AI work at the level of brand governance, letting teams automate routine tasks while staying on-message. “What these agents represent is Adobe leveraging its deep understanding of customers, content, journeys and engagement to deliver highly focused, purpose-built agents,” Liz Miller, vice president and principal analyst at Constellation Research, told SiliconANGLE.
For Adobe, which now positions itself less as a creative software maker and more as an enterprise AI company, this move continues a shift years in the making. The company’s flagship Experience Cloud already drives marketing automation for brands such as Cisco, Hershey, and Lenovo, which use Adobe’s ecosystem to personalize content at scale. Cisco’s vice president of demand marketing, Brett Rafuse, said the new tools “shorten the time it takes to identify key decision makers and orchestrate compelling cross-channel journeys,” helping boost engagement and accelerate deal closures.
Adobe’s latest messaging leans on its creative heritage, the same legacy that made “Photoshop” a verb. The company’s narrative around brand safety and autonomy taps into that history of empowering creators. The newly announced Agent Composer, for instance, will let businesses customize AI agents “based on brand policies and workflow needs,” a nod to the fear of losing creative control to automation.
But while Adobe now sells “governance” as a feature, it has also spent much of the past year defending its record on the same issue. In June 2024, Adobe faced backlash after updating its Terms of Service to include language stating that “automated systems may analyze your Content … using techniques such as machine learning” to improve services. Many creators interpreted this as giving Adobe permission to train AI on private or NDA-bound work. Adobe denied the claim, clarifying in a blog post that Firefly, its generative AI model, was trained only on licensed and public domain data, not user files. But the vague phrasing fueled distrust across creative communities.
The same summer, controversy erupted over AI-generated “Ansel Adams-style” images listed on Adobe Stock, prompting the Adams estate to accuse the platform of misusing the late photographer’s name and aesthetic. Adobe removed the listings, but the incident spotlighted a deeper tension: when does imitation cross into brand or artistic misrepresentation?
Adobe insists that Firefly is “commercially safe” and trained on licensed stock and open data. But contributors to Adobe Stock and the now-defunct Fotolia platform have questioned whether their past uploads were used without explicit consent, and the company’s denials have done little to quiet concerns that its datasets blur the line between inspiration and appropriation, or to ease the mistrust of creators who feel they’ve lost control of their work to systems they never agreed to build.
Those questions cut to the core of what “brand governance” means in an AI era. If AI systems learn from content created by others, how can brands (or individuals) guarantee that their own voice isn’t being diluted, borrowed, or inadvertently mimicked?
Adobe’s dominance also raises structural concerns. Its attempted $20 billion acquisition of Figma, the collaborative design platform, was abandoned after antitrust regulators in the U.S., U.K., and EU warned it could “reduce innovation and choice in the design software market.” That same logic applies to AI: if Adobe’s tools become the de facto standard for brand automation, the diversity of creative expression could narrow around whatever norms Adobe’s systems encode.
This consolidation risk makes Adobe’s new emphasis on “autonomy” more complicated. As the company integrates Firefly, GenStudio, and now AI Agents across its suite, it is effectively defining how creative governance operates for much of the marketing world. The question becomes whether that standard serves brands, or Adobe itself.
For Adobe, the AI agent rollout is also an exercise in strategic rehabilitation. Recent coverage has framed the company’s enterprise pivot as both lucrative and alienating. At the Cannes Lions Festival of Creativity earlier this year, Adobe introduced its LLM Optimizer, a tool for brands to monitor and influence how they appear in AI-generated results from systems like ChatGPT and Gemini. It was an enterprise innovation that underscored the growing gap between Adobe’s corporate ambitions and its creative roots.
The company’s latest AI push, rooted in “brand-safe autonomy,” attempts to close that gap. The idea that brands can deploy autonomous AI without losing their distinctive voice plays directly to Adobe’s heritage.