Can Cloudflare Really Stop Shadow AI, Or Will Employees Just Work Around It?

- By Anshika Mathews

Cloudflare is pitching itself as the answer to one of the messiest problems in corporate IT: employees quietly funneling sensitive data into ChatGPT, Claude, or Gemini. Its new Cloudflare One update promises “X-ray vision” into shadow AI usage at work. But the launch has sparked more concern than confidence, with critics questioning whether a network provider can, or should, police how staff use generative AI.

The pitch is simple: employees are pasting confidential code, internal strategy documents, and financial statements into chatbots without oversight. Cloudflare argues that companies are exposing themselves to untraceable leaks, with sensitive data ending up in models they don’t control. Left unchecked, that could mean an external AI system is trained on the very secrets a company is trying to protect.

There is also an obvious limit to network-level controls: employees who want to skirt oversight may simply turn to personal devices or unmonitored connections.
