
AI agents represent a paradigm shift in enterprise automation. Unlike static scripts, these agents operate autonomously, accessing and manipulating multiple applications to achieve high-level goals. They process information at machine speed, discovering and exercising the full extent of privileges available to them across hybrid environments.
While transformative, this autonomy introduces profound risks. AI agents lower the bar for compromise; they can utilize LLM reasoning to autonomously discover misconfigurations or accidentally trigger undesirable changes.
This, combined with excessive privilege, compounds an existing crisis for security administrators who are already struggling to enforce least privilege.
This post dissects the security of AI agents through two key lenses: Autonomous Bots and Human-Assisted Agents, analyzing the specific identity and authorization challenges of each.
To understand the risk, we must distinguish Agentic AI from existing identity models. We can categorize enterprise identities into three buckets:
- Human identities: employees and contractors who authenticate interactively and whose behavior can be baselined per person.
- Non-human identities (NHIs): service accounts, API keys, and traditional bots with narrow, predictable, single-application scopes.
- Agentic AI: LLM-driven agents that combine machine speed with human-like, cross-application behavior.

There are two primary ways Agentic AI is deployed, each with unique authorization profiles.
Autonomous Bots: These agents resemble traditional application bots, but their workflows are driven by LLM reasoning rather than hard-coded logic. Example: an AI agent conducting high-frequency algorithmic stock trading.
Human-Assisted Agents: When humans interact with agents, we face a "Delegation Dilemma." Consider a customer service agent that interacts with users, assesses severity, and executes refunds across distinct billing and CRM systems. There are two models for handling such agents:
Delegated access: The human "shares" their authorization context with the AI agent, and the agent acts on behalf of the user.
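One way to implement the delegated model is OAuth 2.0 Token Exchange (RFC 8693), where the user's token is traded for a narrower token scoped to what the agent actually needs. The sketch below only builds the request parameters; the scope and audience values are illustrative placeholders, not any specific vendor's API.

```python
def build_token_exchange_request(user_access_token: str,
                                 target_audience: str,
                                 scope: str) -> dict:
    """Form parameters for an RFC 8693 token-exchange POST.

    The authorization server validates the user's token and mints a new,
    narrower token the agent can present downstream.
    """
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": user_access_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": target_audience,  # e.g. the billing API, not the whole suite
        "scope": scope,               # least privilege: one action, not "admin"
    }

# Hypothetical values for illustration only.
params = build_token_exchange_request(
    "user-token-abc", "https://billing.example.com", "refunds:create")
```

Constraining `audience` and `scope` at exchange time is what keeps a compromised agent from inheriting the user's full privilege set.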
Standalone credentials: The AI agent is deployed with its own set of keys (credentials) provisioned by an admin.
Note: Some vendors are addressing this challenge by automating bulk authorizations (e.g., Cloudflare’s vision for AI Agents). While standards are still evolving, updates to OAuth 2.1 (short-lived tokens, mandatory PKCE, refresh token rotation) and the MCP Authorization Flow are critical steps forward.
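As a concrete piece of the OAuth hardening mentioned above, mandatory PKCE (RFC 7636) means every authorization request must carry a code challenge derived from a one-time verifier. A minimal generator using only the standard library:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636).

    The client sends the challenge with the authorization request and the
    verifier with the token request, so an intercepted auth code alone
    cannot be redeemed.
    """
    # 32 random bytes -> 43-char base64url verifier (RFC minimum length).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
```

Combined with short-lived access tokens and refresh token rotation, this limits how long any single stolen credential stays useful.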
Unlike standard NHIs, Agentic AI behaves more like a hyper-fast human. Anomaly detection models must shift from "single-app predictability" to "cross-app behavioral analysis."
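To make "cross-app behavioral analysis" concrete, here is a toy sketch, under assumed event shapes and thresholds of my own choosing, that flags identities whose distinct-application footprint in a window far exceeds their historical baseline:

```python
from collections import defaultdict

def cross_app_spread(events: list[tuple[str, str]]) -> dict[str, int]:
    """Count distinct apps touched per identity in one time window.

    events: (identity, app) pairs observed in the window.
    """
    apps: dict[str, set] = defaultdict(set)
    for identity, app in events:
        apps[identity].add(app)
    return {identity: len(a) for identity, a in apps.items()}

def flag_anomalies(current: dict[str, int],
                   baseline: dict[str, int],
                   factor: float = 3.0) -> list[str]:
    """Flag identities touching >= factor x their usual app spread."""
    return [identity for identity, n in current.items()
            if n >= factor * baseline.get(identity, 1)]

# Illustrative window: an agent suddenly spans CRM, billing, and HR.
current = cross_app_spread([
    ("agent-1", "crm"), ("agent-1", "billing"),
    ("agent-1", "hr"), ("bob", "crm"),
])
suspects = flag_anomalies(current, baseline={"agent-1": 1, "bob": 1})
```

A production system would weight actions by sensitivity and use richer baselines, but the shift in unit of analysis, from one app's logs to an identity's footprint across apps, is the point.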
We recommend the following immediate actions for CISOs and AppSec leaders:
- Inventory every AI agent identity and the credentials it holds across your hybrid environment.
- Enforce least privilege: scope agent entitlements to the minimum set of applications and actions required.
- Replace static API keys with short-lived, rotated tokens for agent credentials.
- Extend anomaly detection from single-app baselines to cross-app behavioral analysis.
The Bottom Line: Agentic AI offers transformative value but carries disproportionate identity risks. Proactive entitlement management and token security are no longer optional—they are the only barrier preventing your AI from becoming the weakest link in your enterprise defenses.