Netskope Capitalizes on GenAI Risk
Netskope
Generative AI turns ordinary employee behavior into a new outbound data channel, which lets Netskope sell more data protection, not just more network security. The concrete problem is simple. An employee pastes source code, a customer list, or contract language into ChatGPT, Copilot, or another model. Netskope sits in that traffic path, identifies the app, inspects the text or file, and applies block, allow, or coaching policies using the same CASB and DLP controls it already uses for SaaS and web apps.
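The inline flow described above (identify the app, inspect the prompt, apply block/allow/coach) can be sketched as a toy policy evaluator. This is a hypothetical illustration, not Netskope's actual engine: the app names, regex patterns, and policy table are invented for the example, and a real DLP engine would use far richer classifiers (ML models, exact-match dictionaries, document fingerprints).

```python
import re

# Illustrative detectors for sensitive content (assumptions, not a real DLP ruleset).
PATTERNS = {
    "source_code": re.compile(r"\b(def|class|import)\s|#include"),
    "customer_pii": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
}

# Hypothetical per-app policy: an action for each data category.
POLICY = {
    "chatgpt": {"source_code": "block", "customer_pii": "coach"},
    "copilot": {"source_code": "allow", "customer_pii": "block"},
}

def inspect(app: str, prompt: str) -> str:
    """Return the most restrictive action any matching category triggers."""
    severity = {"allow": 0, "coach": 1, "block": 2}
    action = "allow"
    rules = POLICY.get(app, {})
    for category, pattern in PATTERNS.items():
        if pattern.search(prompt):
            candidate = rules.get(category, "allow")
            if severity[candidate] > severity[action]:
                action = candidate
    return action

print(inspect("chatgpt", "import os\nos.system('ls')"))           # block
print(inspect("chatgpt", "Contact alice@example.com about this"))  # coach
```

The key design point the sketch captures is that the decision is a function of (app, content category), which is why the same control plane extends naturally from SaaS apps to genAI apps.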
This fits Netskope’s product history. Netskope and Zscaler won by replacing VPNs, web proxies, and firewalls with a cloud control point where security teams can see app usage and set granular policies. GenAI adds another class of apps to govern, so the same control plane becomes more valuable as usage spreads.
The spending tailwind is real because usage is rising fast. Netskope tracked 317 genAI apps in its 2025 report, said in its ChatGPT Enterprise integration announcement that genAI usage had more than tripled year over year, and reported in its AI security update a 30x year-over-year increase in data sent to genAI apps by internal users.
Competition is moving in the same direction, which shows this is becoming a core buying criterion in SASE and SSE. Palo Alto Networks added AI Access Security, with controls to identify genAI apps and prevent sensitive data from leaving in prompts and uploads, while Netskope has leaned on its deeper CASB and DLP heritage as a key differentiator against Zscaler and larger incumbents.
This pushes the market toward unified data security platforms that can enforce one policy across SaaS, web, private apps, and AI tools. As enterprises standardize on Copilot, ChatGPT Enterprise, and AI agents, the winning vendors will be the ones that can inspect prompts and uploads inline, classify sensitive data accurately, and turn AI adoption into a larger recurring security budget.