TechRadar’s "90% of enterprise AI systems could be breached within 90 minutes" headline is attention-grabbing, but the useful takeaway isn’t the exact number; it’s the operational pattern it points at.
The story summarizes the Zscaler/ThreatLabz red-team findings, which describe a "time to first critical failure" (median ~16 minutes) and report that most tested systems were "compromised" quickly under adversarial conditions. Definitions of "critical failure" and "compromised" matter here, and headlines rarely capture that nuance - so I treat this less as a universal claim and more as a wake-up call: many LLM apps, copilots, embedded AI features, and agentic workflows are being deployed faster than governance is being built.
The core issue is not that "AI is inherently insecure." It’s that AI is being dropped into fragmented enterprise architectures with inconsistent identity, permissions, and monitoring - and once systems can take actions (not just answer questions), they effectively become privileged actors. When permissions and context are separated from the data, traditional security models break down.
If this headline makes you pause, here are a few practical checks I would start running this week:
Inventory & visibility: Where is AI embedded today (apps, copilots, agents, plugins, "AI features" enabled by default)? Do you know where sensitive data is flowing? The scope includes not only the applications you know are running, but also the "shadow" AI processes that might be operating unseen.
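For illustration, here's a rough sketch of one way to start that inventory: mining egress or proxy logs for calls to known AI API endpoints. The log format, column names, and domain list below are assumptions for the example, not a complete detection method.

```python
# Sketch: surface "shadow AI" usage from egress/proxy logs.
# Assumes a hypothetical CSV log with columns: timestamp, source_app, dest_host.
# The domain list is illustrative, not exhaustive.
import csv
from collections import Counter

KNOWN_AI_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.cohere.com",
}

def find_ai_egress(log_path: str) -> Counter:
    """Count outbound calls to known AI API hosts, grouped by source app."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].strip().lower()
            if host in KNOWN_AI_API_HOSTS:
                hits[(row["source_app"], host)] += 1
    return hits

if __name__ == "__main__":
    for (app, host), count in find_ai_egress("egress_log.csv").most_common():
        print(f"{app} -> {host}: {count} calls")
```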
Agent "clearance" (metaphor for standard controls): Who is the agent acting on behalf of(delegated identity), what can it touch/do (scoped tokens + least-privilege tools), and what is allowed to leave the approved policy/trust boundary (e.g., tenant/workspace/workroom, system-of-record, or jurisdiction) via outputs or tool actions (egress controls + approval gates for high-risk actions)?
Reduce excessive agency: Avoid blanket tool access; use allowlists, time-bounded credentials, and explicit action constraints.
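A minimal sketch of the same idea, again with illustrative names: an explicit tool allowlist plus short-lived credentials, so the agent never holds blanket, indefinite access.

```python
# Sketch: tool allowlist + time-bounded credential (names are hypothetical).
import time
from dataclasses import dataclass

TOOL_ALLOWLIST = {"search_docs", "summarize", "create_ticket"}  # no blanket access

@dataclass
class ScopedToken:
    subject: str
    expires_at: float          # epoch seconds

    def is_valid(self) -> bool:
        return time.time() < self.expires_at

def issue_token(subject: str, ttl_seconds: int = 900) -> ScopedToken:
    """Issue a short-lived token (15 minutes by default)."""
    return ScopedToken(subject=subject, expires_at=time.time() + ttl_seconds)

def call_tool(tool_name: str, token: ScopedToken):
    if tool_name not in TOOL_ALLOWLIST:
        raise PermissionError(f"tool '{tool_name}' is not on the allowlist")
    if not token.is_valid():
        raise PermissionError("credential expired; re-authenticate")
    print(f"{token.subject} invoked {tool_name}")

call_tool("create_ticket", issue_token("agent-on-behalf-of-alice"))
```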
Policy-aware retrieval: Make sure access policies and context travel with your data - not as a separate enforcement layer that gets bypassed when data moves.
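Here's a hedged sketch of what that can look like: the access policy is stored as metadata on each chunk and enforced inside the retrieval call itself, so moving the data doesn't leave the policy behind. The corpus, role labels, and matching logic below are placeholders.

```python
# Sketch of policy-aware retrieval: the policy travels with the data and is
# enforced at query time, not in a separate layer that can be bypassed.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    allowed_roles: frozenset    # policy metadata stored alongside the data

CORPUS = [
    Chunk("Q3 revenue forecast ...", frozenset({"finance"})),
    Chunk("Public product FAQ ...", frozenset({"finance", "support", "public"})),
]

def retrieve(query: str, requester_roles: set) -> list[Chunk]:
    """Only return chunks the requester is entitled to see; the policy check
    happens inside retrieval, so downstream code never sees denied data."""
    matches = [c for c in CORPUS if query.lower() in c.text.lower()]
    return [c for c in matches if c.allowed_roles & requester_roles]

print([c.text for c in retrieve("forecast", {"support"})])   # -> [] (denied)
print([c.text for c in retrieve("forecast", {"finance"})])   # -> forecast chunk
```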
Audit + detection: You should be able to reconstruct what data and tools were used or excluded, what policy was applied, and why - with logs that security teams can actually monitor. That way you understand the policy decisions in force at any given time, rather than only piecing them together after a potential breach.
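For example, a reconstructable audit record might look something like this (field names are illustrative; in practice you would ship these events to your SIEM rather than print them):

```python
# Sketch of an audit record capturing who acted, what data/tools were used or
# excluded, which policy applied, and the "why" behind the decision.
import json, time, uuid

def audit_event(actor: str, action: str, policy: str, decision: str,
                reason: str, inputs_used: list, inputs_excluded: list) -> dict:
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "actor": actor,            # delegated identity, not just "the agent"
        "action": action,
        "policy": policy,          # which policy version was in force
        "decision": decision,      # allow / deny / escalate
        "reason": reason,          # the "why" behind the decision
        "inputs_used": inputs_used,
        "inputs_excluded": inputs_excluded,
    }
    print(json.dumps(event))       # stand-in for shipping to a log pipeline
    return event

audit_event(
    actor="agent:on-behalf-of:alice",
    action="email:send",
    policy="egress-policy-v7",
    decision="deny",
    reason="recipient outside approved tenant boundary",
    inputs_used=["crm:contact:123"],
    inputs_excluded=["hr:salary-table"],
)
```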
Last item. The Open Worldwide Application Security Project (OWASP) calls out "excessive agency" as a top risk class for LLM applications: granting systems unchecked autonomy to take action. The practical fix isn't panic; it's treating AI security as an architecture problem rather than an application-level one. That means disciplined identity and permissions that move with your data across distributed environments, policy enforcement at the data layer (not just the application layer), and audit trails that capture the 'why' behind every decision. It also requires least-privilege access, egress control, strict observability, and continuous adversarial testing. The alternative is trying to bolt governance onto systems that were never designed for it - which is exactly how you end up in that 16-minute window.
This is why at Kamiwaza we've taken a different approach: security and access control built into the data layer itself, not bolted on afterward. When your architecture treats identity, permissions, and audit as inseparable from your data, you're not trying to secure AI - you're deploying AI that's secure by design.