AI Agents and Identity: From Delegated Authorization to Agentic Trust

Author: Sudipto Biswas 

Author’s Note: This is part 1 of a multi-part series on AI Agents and Identity written in conjunction with my colleagues Monotosh Das, Sauvik Biswas and Ashutosh Pal.

The era of the enterprise AI Agent is here. As businesses rush to build and integrate agents, a fundamental question arises: what makes an agent different from a standard application?

The Pre-Agent Foundation: Predictable Delegated Authorization

Before the rise of autonomous AI agents, applications acted on behalf of users in highly structured, predictable ways based on the OAuth 2.0 framework. To understand future agentic identities, we must first understand this pre-agent model of “delegated authorization.”

A prime example is an app-to-app interaction in an enterprise environment: Zoom needs to perform an action within Slack on behalf of an employee, “Sudipto.” In this “pre-agent” model, trust is not established directly between the two apps. Instead, they rely on a mutual, trusted third party—a central Identity Provider (IdP) like Google Workspace.

The Centralized Policy Check (The IdP’s Role)

When the interaction begins, Google acts as a real-time policy enforcement hub for Sudipto’s organization (“Andromeda”) and performs a three-part check:

  • Authentication: Is this really Sudipto?
  • App Legitimacy: Is Zoom a valid, registered application?
  • Enterprise Policy: Has the Andromeda organization explicitly allowed this specific Zoom app to act on behalf of users in Sudipto’s unit?

Only if all checks pass does Google “mint” a digital access token and hand it to Zoom.
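
To ground the minting step, here is a minimal sketch of the final leg of an OAuth 2.0 authorization-code flow, in which the client exchanges a one-time authorization code for the access token. Google’s public token endpoint is shown for orientation, but the code, client credentials, and redirect URI are placeholders standing in for Zoom’s real registration.

```python
import requests

# Sketch: exchanging a one-time authorization code for an access token.
# All values other than the endpoint are illustrative placeholders.
resp = requests.post(
    "https://oauth2.googleapis.com/token",
    data={
        "grant_type": "authorization_code",
        "code": "ONE_TIME_CODE_FROM_CONSENT_REDIRECT",
        "client_id": "zoom-client-id.apps.googleusercontent.com",
        "client_secret": "zoom-client-secret",
        "redirect_uri": "https://zoom.example.com/oauth/callback",
    },
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]  # the "minted" token Zoom will present to Slack
```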

Trusting the Token (The Resource Server’s Role)

Zoom then presents this token to Slack to execute the command. Crucially, Slack does not inherently trust Zoom, nor does it need prior knowledge of Sudipto’s current status. Slack has simply configured its backend to trust any token digitally signed by Google. Slack validates Google’s signature on the token, extracts Sudipto’s email from the claims within, matches it to a local Slack user, and executes the command.
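
As a sketch of that validation step, assume the token is a signed JWT (as with OIDC ID tokens) and that the resource server uses the PyJWT library to verify Google’s signature against its published key set; the audience value below is a placeholder.

```python
import jwt  # PyJWT
from jwt import PyJWKClient

# Google's published signing keys (JWKS). The resource server needs only
# this URL and an expected audience, not prior knowledge of Zoom or Sudipto.
JWKS_URL = "https://www.googleapis.com/oauth2/v3/certs"

def validate_and_extract_email(token: str) -> str:
    # Fetch the public key matching the token's key ID, then verify.
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="slack-backend",  # placeholder audience for illustration
    )
    return claims["email"]  # matched against a local Slack user
```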

This model, where applications hold rigid “assignments” (tokens) vetted by a central authority to do specific jobs, is the building block that agentic communication is now evolving beyond.

What is an AI Agent?

To understand agentic identity, we must first define what an agent actually is. At its core, an AI agent is simply a software program; what sets it apart from traditional applications is how it handles logic and control flow.

The “Agentic” Property: Runtime Reasoning

In traditional software, the developer hard-codes the path: if X happens, do Y. In an AI agent, the logic is goal-oriented; the agent is given a high-level objective and uses an LLM to perform multi-step reasoning to achieve the desired outcome. The agent decides which systems to call and which other agents to consult in real time.

This makes the agent’s control flow unpredictable (or probabilistic). Unlike a standard script, an agent’s execution path is non-deterministic; we cannot predict which API endpoints it will invoke or in what sequence, as it determines the optimal workflow dynamically at runtime.
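
A schematic loop makes the contrast concrete. This is not any particular framework: `llm` stands in for a chat-completion client that can choose a tool, and the tool registry and decision format are illustrative.

```python
# Illustrative agent loop: the model, not the developer, picks the next call.
TOOLS = {
    "search_tickets": lambda query: f"2 open tickets match {query!r}",
    "send_summary": lambda text: f"sent: {text[:40]}",
}

def run_agent(goal: str, llm, max_steps: int = 10) -> str:
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        # The LLM inspects the history and decides the next action at runtime,
        # so the sequence of tool calls is not knowable in advance.
        decision = llm(history, tools=list(TOOLS))
        if decision["type"] == "final_answer":
            return decision["content"]
        result = TOOLS[decision["tool"]](decision["args"])
        history.append({"role": "tool", "content": str(result)})
    return "step budget exhausted"
```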

Contrast: The Standard AI Chatbot (The RAG Model)

To understand the magnitude of this shift, consider the contrast with a standard enterprise AI chatbot. Most chatbots today rely on a predictable Retrieval-Augmented Generation (RAG) architecture, which follows a linear, three-step process:

  1. Input: Takes a natural language query.
  2. Retrieval: Converts the query to a vector and searches a knowledge base for similar documents.
  3. Synthesis: Uses an LLM to summarize the found data for the user.
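
In code, the whole pipeline is a straight line. The `embed`, `vector_store`, and `llm` names below are placeholders for whatever embedding model, vector database, and LLM client an enterprise has wired up.

```python
# Schematic RAG pipeline: a fixed, linear path with no runtime branching.
def answer(query: str, embed, vector_store, llm) -> str:
    query_vector = embed(query)                    # 1. Input: natural language -> vector
    docs = vector_store.search(query_vector, k=5)  # 2. Retrieval: nearest documents
    context = "\n\n".join(d.text for d in docs)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)                             # 3. Synthesis: summarize for the user
```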

The Key Difference: While a chatbot is primarily passive—reading data to answer a question—an AI agent is active. An agent’s ability to “take action” requires elevated privileges to interact with tools and systems, which creates a far more complex authorization challenge than simple information retrieval.

Access Patterns of AI Agents

When we look at how agents interact with the world, we can broadly group their access patterns into two distinct types:

  • The Autonomous Agent (The “Service” Model): These agents possess their own identity and credentials. They run independently of a specific user session—for example, a Log Pattern Analyzer that runs 24/7. In IAM terms, these utilize Workload Identities or Service Accounts.
  • The Delegated Agent (The “Copilot” Model): Currently the majority of the market, these agents essentially act as a sophisticated interface for a human. They operate using an On-Behalf-Of (OBO) flow, where a human must be “in the loop” to authorize access to resources (e.g., “Summarize my emails”).
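
The credential shapes differ accordingly. As a sketch, an autonomous agent typically obtains its own token via the OAuth 2.0 client_credentials grant; the endpoint, client ID, and scope below are placeholders, and in production the secret would come from a secret manager or workload identity federation rather than code.

```python
import requests

# Sketch: a 24/7 Log Pattern Analyzer authenticating as itself (no user session).
resp = requests.post(
    "https://idp.example.com/oauth2/token",  # placeholder IdP token endpoint
    data={
        "grant_type": "client_credentials",
        "client_id": "log-pattern-analyzer",
        "client_secret": "REDACTED",  # placeholder; avoid literals in real code
        "scope": "logs.read",
    },
    timeout=10,
)
service_token = resp.json()["access_token"]
# A delegated ("copilot") agent would instead run an authorization-code /
# On-Behalf-Of flow, with the human approving access to their own resources.
```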

How Agents Connect: The 4 Emerging Patterns

The technical mechanism for accessing tools and data tends to fall into one of four patterns:

  1. The Standardized Protocol (MCP): Agents increasingly use the Model Context Protocol (MCP) to interface with external tools. For remote connections, this often leverages OAuth 2.1 to secure the handshake between the agent and the tool server (see the sketch after this list).
  2. Recursive Discovery (A2A): A growing standard where an agent doesn’t call a tool directly but calls another agent. Using protocols like A2A (Agent to Agent), the primary agent recursively “discovers” the tools available to the secondary agent, effectively chaining capabilities dynamically.
  3. The “Enterprise” Principal: This involves defining an identity for the agent and then assigning specific entitlements to it.
  4. The Legacy Flow: Historically, and still far too commonly, agents are granted access via hard-coded credentials (API keys) stored in environment variables to bypass complex auth flows.
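
To make pattern 1 concrete, here is a minimal tool server sketch using the FastMCP helper from the official MCP Python SDK (the exact API surface may vary across SDK versions); the tool itself is a placeholder.

```python
from mcp.server.fastmcp import FastMCP

# A minimal MCP server exposing one tool an agent can discover and call.
mcp = FastMCP("log-analyzer")

@mcp.tool()
def summarize_logs(service: str) -> str:
    """Summarize recent log patterns for the named service."""
    return f"No anomalies detected for {service} (placeholder)."

if __name__ == "__main__":
    # Runs over stdio by default; a remote deployment would sit behind a
    # transport secured with OAuth 2.1 as described above.
    mcp.run()
```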

Next in our Series

AI agents represent a fundamental shift in enterprise software. As we move away from hard-coded paths toward dynamic, goal-oriented reasoning, our security models must evolve accordingly.

Relying on “legacy flows” like hard-coded API keys is no longer sufficient for agents that operate on the fly. Because an agent’s path is unpredictable, the industry must now solve for identifying agents through attestation and for trust models built specifically for their identities.

In the next post, we’ll break down the internal anatomy of an AI agent and explore why governance, attestation, and lifecycle controls are foundational to trusting agents at enterprise scale.
