Can Cloudflare Really Stop Shadow AI, Or Will Employees Just Work Around It?

Employees who want to skirt oversight may simply turn to personal devices or unmonitored connections.
Cloudflare is pitching itself as the answer to one of the messiest problems in corporate IT: employees quietly funneling sensitive data into ChatGPT, Claude, or Gemini. Its new Cloudflare One update promises “X-ray vision” into shadow AI usage at work. But the launch has sparked more concern than confidence, with critics questioning whether a network provider can, or should, police how staff use generative AI.

The pitch is simple: employees are pasting confidential code, internal strategy documents, and financial statements into chatbots without oversight. Cloudflare argues that companies are exposing themselves to untraceable leaks, with sensitive data ending up in models they don’t control. If left unchecked, this could mean an external AI system is trained on the very secrets that

Anshika Mathews
Anshika is the Senior Content Strategist for AIM Research. She holds a keen interest in technology and related policy-making and its impact on society. She can be reached at anshika.mathews@aimresearch.co