
You Cannot Govern What You Cannot Name: Why Agentic AI Demands a New Identity Model

Written by Krti Tallam | Feb 13, 2026 12:07:17 AM

The rise of AI agents is turning identity into a graph problem. Here's why ontologies and relationship-based access control are the answer to agentic AI governance.

Agentic AI changes everything about enterprise security, including how we think about identity.

Traditional identity and access management (IAM) was optimized for humans and deterministic services: a person or service authenticates, receives scoped permissions, and accesses resources through predictable application paths. The whole model was slow, predictable, and governed at human speed. But AI agents chain actions across tools and systems fast enough that no human can review every step, and that creates a problem legacy identity models were never built to solve.

Recent research confirms what forward-thinking enterprises are already discovering: agentic AI introduces new challenges to traditional IAM strategies, especially around identity registration, governance, credential automation, and policy-driven authorization for machine actors. Organizations that fail to adapt face a higher risk of access-related incidents as autonomous agents become more prevalent.

What’s at the core of this issue? Enterprise controls evaluate access per resource and per system, but those decisions do not compose cleanly across heterogeneous systems and workflows. With agentic AI, the question becomes far more complex: what can an agent infer from everything it's allowed to traverse?

Identity Becomes a Graph Problem

When an AI agent operates on behalf of a user, it doesn't just access one document at a time. It navigates relationships: files linked to projects, projects linked to teams, teams linked to sensitive deals. A well-intentioned agent could piece together confidential information from dozens of "permitted" sources, without ever touching a restricted file.

This is the mosaic effect: sensitive outcomes inferred by synthesizing individually authorized fragments. File-level permissions govern objects; they don't govern inference paths across objects.

The insight here is fundamental: agentic AI turns identity into a fast-moving graph problem. You're no longer just asking "who is this user?" You're asking "what is this agent's relationship to this data, in this context, at this moment?"
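To make that concrete, here is a toy sketch in Python of the mosaic effect as a graph-reachability problem. The file and project names are entirely hypothetical; the point is that every individual hop is permitted, yet the path they compose reveals something no single file would.

```python
# A minimal sketch: the mosaic effect as reachability over permitted edges.
# All names are hypothetical; each individual read is allowed.

from collections import deque

# Each edge is a relationship the agent is permitted to traverse.
permitted_edges = {
    "quarterly_forecast.xlsx": ["project_atlas"],
    "project_atlas":           ["team_roster.md"],
    "team_roster.md":          ["counsel_engagement.pdf"],
    "counsel_engagement.pdf":  ["deal_room_index"],
}

def reachable(start: str, target: str) -> list[str] | None:
    """Breadth-first search over individually permitted relationships."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in permitted_edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# No single hop touches a restricted file, but the chain of permitted
# hops lets the agent infer that an unannounced deal is underway.
print(reachable("quarterly_forecast.xlsx", "deal_room_index"))
# ['quarterly_forecast.xlsx', 'project_atlas', 'team_roster.md',
#  'counsel_engagement.pdf', 'deal_room_index']
```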

And here's the hard truth: you cannot govern what you cannot name.

If your system can't articulate the relationships between agents, tools, data classes, and permitted actions, you have no foundation for enforcing least privilege as AI evolves. You need a shared vocabulary, a lightweight ontology, that makes your business structure legible to both humans and machines.

Ontologies: The Missing Layer for Agentic AI Security

An ontology is a structured representation of concepts and their relationships. In the context of enterprise AI, it's the "context graph" that maps how your data, users, workflows, and policies connect.

Without this layer, AI agents operate in the dark. They can read documents, but they can't understand why a particular file matters, who should see it, or how it relates to sensitive operations. That's why agents hallucinate, overstep, and create compliance nightmares.

With a living ontology, you give AI something it desperately needs: business context as a first-class security primitive. The ontology doesn't just describe your data; it defines the relationships that govern access. It answers questions like:

  • What project does this document belong to?
  • Who is on the deal team?
  • What classification level applies to this workflow?
  • What actions are permitted for this agent in this context?

This is how you move from "access control" to "inference control." The ontology becomes the map; relationship-based access control becomes the enforcement layer.
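As a rough illustration, here is a minimal Python sketch of an ontology as typed entities and named relationships. The entity names, relations, and classification labels are assumptions for the example, not a prescribed schema.

```python
# A minimal sketch of an ontology: typed entities plus named relations.
# Names, relations, and labels are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    kind: str                                   # e.g. "document", "project", "user"
    relations: dict[str, set[str]] = field(default_factory=dict)

    def relate(self, relation: str, target: str) -> None:
        self.relations.setdefault(relation, set()).add(target)

# Build a tiny context graph.
doc     = Entity("term_sheet.docx", "document")
project = Entity("project_atlas", "project")

doc.relate("belongs_to", "project_atlas")
project.relate("deal_team_member", "alice")
project.relate("classification", "confidential")
project.relate("permitted_action", "summarize")

# The ontology answers the governance questions directly:
print(doc.relations["belongs_to"])              # what project does this belong to?
print(project.relations["deal_team_member"])    # who is on the deal team?
print(project.relations["classification"])      # what classification applies?
print(project.relations["permitted_action"])    # what actions are permitted here?
```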

ReBAC: Governing Access Through Relationships

Relationship-Based Access Control (ReBAC) complements RBAC and ABAC by making relationships first-class: team, project, deal room, data domain. Policy decisions then match real enterprise structure, and every access request reduces to a single question: what is your relationship to this data, in this context?

When an agent operates on behalf of a user, you want it to inherit the user’s effective permissions via delegated identity (on-behalf-of) and relationship-aware policy evaluation, not act under a broad service principal. If the user isn't on the deal team, the agent can't traverse into deal room artifacts. If the user lacks clearance for a classification level, the agent stops at the boundary.
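A minimal sketch of what that evaluation can look like, using Zanzibar-style relationship tuples. The tuple shapes, relation names, and `check` helper are illustrative assumptions, not a specific product's API.

```python
# Relationship tuples: (subject, relation, object). An agent holds no
# standing permissions; it resolves through its delegating user.

tuples = {
    ("alice", "member", "deal_team_atlas"),
    ("deal_team_atlas", "can_read", "deal_room_atlas"),
    ("agent_42", "on_behalf_of", "alice"),
}

def check(subject: str, relation: str, obj: str) -> bool:
    """Allow via a direct tuple, group membership, or delegated identity."""
    if (subject, relation, obj) in tuples:
        return True
    for s, r, o in tuples:
        # The agent inherits only the effective permissions of its user.
        if s == subject and r == "on_behalf_of":
            return check(o, relation, obj)
        # Membership: the subject's group holds the relation on the object.
        if s == subject and r == "member" and (o, relation, obj) in tuples:
            return True
    return False

print(check("agent_42", "can_read", "deal_room_atlas"))  # True: alice is on the deal team
print(check("agent_7",  "can_read", "deal_room_atlas"))  # False: no delegation path
```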

Enforce policy before data enters the agent’s context window, or before a tool call returns a sensitive payload. That is how you avoid filtering after exposure: you're not sanitizing outputs; you're preventing exposure at the source.
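Here is a sketch of that pattern: a hypothetical `retrieve_for_agent` wrapper runs the authorization check on every retrieved chunk before anything reaches the agent's context window. The retriever and permission map are stand-ins for a real search index and ReBAC engine.

```python
# Pre-retrieval enforcement: authorize each chunk before it can enter
# the agent's context. All names here are hypothetical stand-ins.

def search_index(query: str) -> list[dict]:
    """Stand-in retriever; a real system would query a search or vector index."""
    return [
        {"doc": "public_faq.md",   "resource": "public_docs"},
        {"doc": "term_sheet.docx", "resource": "deal_room_atlas"},
    ]

def is_authorized(user: str, resource: str) -> bool:
    """Stand-in for a relationship-aware check like the one sketched above."""
    grants = {("alice", "public_docs"), ("alice", "deal_room_atlas")}
    return (user, resource) in grants

def retrieve_for_agent(query: str, on_behalf_of: str) -> list[dict]:
    """Only chunks the delegating user may see are handed to the agent."""
    return [
        chunk for chunk in search_index(query)
        if is_authorized(on_behalf_of, chunk["resource"])
    ]

# bob lacks the deal-room relationship, so the term sheet never enters
# the context window; there is nothing to filter after the fact.
print(retrieve_for_agent("atlas status", on_behalf_of="alice"))  # both chunks
print(retrieve_for_agent("atlas status", on_behalf_of="bob"))    # [] (blocked at source)
```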

The Bottom Line

The shift to agentic AI isn't just a technology upgrade. It's a governance transformation. Legacy identity models that worked for human-speed access will not scale to machine-speed inference. Organizations need:

  1. A shared vocabulary (ontology) that names the entities, relationships, and actions AI must understand
  2. Relationship-aware enforcement (ReBAC) that governs access based on contextual connections, not just static roles
  3. Pre-retrieval controls that prevent sensitive information from entering an agent's context in the first place

The enterprises that get this right will deploy autonomous workflows with confidence. The ones that don't will remain stuck in pilot purgatory, or worse, face the breach that rewrites their risk calculus.

You cannot govern what you cannot name. Start by building the ontology that makes your business legible to AI, then enforce it with relationship-based controls that match the speed and complexity of agentic operations.

Frequently Asked Questions

What is agentic AI governance?

Agentic AI governance is the discipline of managing autonomous AI agents through identity controls, access policies, and audit mechanisms. Unlike traditional AI oversight, it treats agents as first-class identities requiring relationship-aware authorization, contextual permissions, and continuous monitoring.

Why do AI agents create identity security challenges?

AI agents traverse repositories, synthesize information across sources, and act autonomously at machine speed. Traditional identity models designed for human users with static roles can't handle autonomous systems that infer sensitive information by connecting individually authorized data points.

What is the mosaic effect?

The mosaic effect occurs when AI agents infer sensitive information by synthesizing individually authorized data fragments, even without accessing restricted files. This is why file-level permissions are insufficient; organizations need inference control, not just access control.

How do ontologies and ReBAC work together?

Ontologies provide the structured vocabulary defining entities, relationships, and permitted actions. ReBAC uses this vocabulary to enforce access based on actual organizational relationships. You can't identify the relationship without understanding the context, and ontologies provide that context.

Ready to secure your agentic AI deployments?

Download the ReBAC Whitepaper →