GenAI Security Moves Into Enterprise Budgets
Sublime Security
Microsoft adding AI-specific filtering to Entra means GenAI security has moved from experimental tooling into the core enterprise security budget. Once a control shows up inside identity and secure web access products, buyers start treating AI use like any other governed traffic. That matters because the real problem is not just blocking ChatGPT; it is deciding which employees can use which AI apps, and stopping sensitive text, files, and prompts from leaking into them.
-
Entra Internet Access now includes a generally available AI web category filter for controlling access to AI apps by user and group, and Microsoft has paired it with Purview browser DLP for generative AI apps. That combination signals a defined budget line for shadow-AI control plus data leakage prevention, not just generic web filtering.
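The control model described above, per-user and per-group access decisions keyed to an "AI apps" web category, can be sketched generically. This is a minimal illustration of category-based access policy; the class, function, and group names are all hypothetical and do not reflect the actual Entra Internet Access API.

```python
from dataclasses import dataclass, field

# Hypothetical policy model: per-group allow/block decisions for one web
# category (e.g. "generative-ai"). Not the Entra Internet Access API.
@dataclass
class CategoryPolicy:
    category: str
    allowed_groups: set = field(default_factory=set)  # groups permitted to reach this category

def evaluate_access(policy: CategoryPolicy, user_groups: set, app_category: str) -> str:
    """Return 'allow' or 'block' for a request to an app in app_category."""
    if app_category != policy.category:
        return "allow"  # this policy only governs its own category
    return "allow" if user_groups & policy.allowed_groups else "block"

# Example: only the (hypothetical) data-science group may reach GenAI apps.
ai_policy = CategoryPolicy(category="generative-ai", allowed_groups={"data-science"})
```

The point of the sketch is that the decision input is identity (group membership), not just a URL list, which is what moves this from generic web filtering into identity-governed AI access.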
-
This is adjacent to what Sublime already does well. Sublime ingests messages, turns them into structured data, and lets security teams apply auditable rules to content, metadata, attachments, and intent. Extending that model to prompts, AI tool sessions, and agent-generated actions would be a natural product move, especially for regulated customers that need to explain every enforcement decision.
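The model described here, structured events plus auditable rules, could extend from email to prompts roughly as follows. The rule names, event fields, and the sensitive-data pattern are illustrative assumptions for the sketch, not Sublime's actual rule language.

```python
import re
from datetime import datetime, timezone

# Illustrative only: a structured "event" that could represent an email or an
# AI prompt session, plus rules whose every decision lands in an audit log.
AUDIT_LOG = []

RULES = [
    # (rule name, predicate over the structured event) -- hypothetical rules
    ("ssn_in_prompt", lambda e: bool(re.search(r"\b\d{3}-\d{2}-\d{4}\b", e["content"]))),
    ("unapproved_ai_app", lambda e: e["type"] == "ai_prompt" and e["app"] not in {"approved-copilot"}),
]

def enforce(event: dict) -> str:
    """Apply each rule in order; block on first match, logging every decision."""
    for name, predicate in RULES:
        if predicate(event):
            AUDIT_LOG.append({"time": datetime.now(timezone.utc).isoformat(),
                              "rule": name, "event_id": event["id"], "action": "block"})
            return "block"
    AUDIT_LOG.append({"time": datetime.now(timezone.utc).isoformat(),
                      "rule": None, "event_id": event["id"], "action": "allow"})
    return "allow"
```

The audit-log entry per decision is the part regulated customers care about: every block or allow names the rule that fired, so enforcement is explainable after the fact.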
-
The broader market is moving the same way. Microsoft is adding detections for prompt injection and sensitive data exposure in AI apps, while network security platforms like Cato are folding AI app inspection and governance into their core stack. That suggests GenAI security will be bought as an operational control layer, not a standalone novelty feature.
The next phase is a convergence of email security, browser and network controls, and agent governance into one policy layer for human- and machine-generated actions. Vendors that can inspect content deeply and produce clean audit trails will capture the most durable spend as enterprises formalize who can use AI, what data can enter it, and what automated agents are allowed to do.