Saviynt's $700 million Series B at a ~$3 billion valuation isn't just another security funding round. It's venture capital declaring that identity is becoming the control plane for AI, not just a compliance checkbox.
KKR's investment thesis is explicit: identity is "foundational infrastructure for organizations deploying AI at scale." The capital is earmarked for an "AI-native identity platform" with explicit governance for AI agents and non-human identities. This isn't feature development—it's category creation.
The market backdrop explains why. Non-human and AI identities now outnumber humans 82 to 1 in enterprise environments. 68% of large enterprises (>1,000 employees) have deployed AI agents. 72% of enterprises are using or testing agents, and 84% plan to increase investment in the next 12 months. But here's the gap: 68% lack identity security controls for AI.
The numbers tell a clear story:
- IGA market: $3.99B → $16.85B (2025-2030)
- PAM market: $2.9B → $7.7B (2023-2028)
- AI agents market: $7.92B → $236.03B (2025-2034) at 45.82% CAGR
"Agent identity" is emerging as a distinct infrastructure layer on par with Identity Governance & Administration (IGA) and Privileged Access Management (PAM), not just a feature add-on.
Here's why.
For broader context on how agents fit into enterprise security, see our Agent Safety Stack and Agent Ecosystem Map.
The Identity Crisis: Why Agents Break IGA/PAM
Traditional identity and access management was built on a set of assumptions that made perfect sense when the workforce was human:
The old model assumed:
- Identities = humans + a small set of static service accounts
- Access models are Role-Based Access Control (RBAC) with long-lived roles and periodic access review campaigns
- Credentials are explicit and durable—accounts and passwords/keys in vaults, rotated periodically
- Auditing is event-level: "Who logged in, to what, when, and what commands did they run?"
- Lifecycle is slow-moving, tied to HR events (joiner/mover/leaver), with quarterly or yearly certification campaigns
AI agents shatter every one of these assumptions.
Dynamic and Ephemeral
Agents don't get hired. They spawn, chain, delegate, and terminate per task, not per employee onboarding. A single user request might trigger the creation of multiple sub-agents, each with different scopes and lifespans measured in seconds or minutes.
Agents mutate behavior via prompt updates and model version changes, not just configuration files. Traditional IGA systems expect identity lifecycle to map to HR records. Agents have no HR records.
Autonomous Decision-Making
Agents plan, call tools, and invoke other agents without real-time human approval. They adapt their access patterns based on context—learning what data they need as they reason through a task, rather than requesting pre-defined entitlements.
This is fundamentally different from a service account running a batch job. Service accounts are deterministic. Agents are probabilistic and adaptive.
Delegated Identity Chains
A single agent workflow might use:
- The end user's delegated OAuth token for one API call
- A service principal for another
- The agent's own "agent identity" for a third
All within the same task execution.
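To make the chain concrete, here is a minimal sketch (in Python, with hypothetical helper names) of one task execution carrying three different authorities. In a real deployment each token would be minted by the IdP via the appropriate OAuth flow; the point is that policy and audit must track which authority each call carries.

```python
from dataclasses import dataclass

@dataclass
class Credential:
    kind: str         # "user_delegated" | "service_principal" | "agent_identity"
    subject: str      # whose authority the token carries
    scopes: list[str]

def call_api(api: str, cred: Credential) -> None:
    # A real client would attach the token and record the identity linkage
    # (user -> agent -> credential) in the audit trail.
    print(f"{api}: as {cred.kind} ({cred.subject}), scopes={cred.scopes}")

def run_task(user: str) -> None:
    # Three different authorities within one task execution:
    user_token = Credential("user_delegated", user, ["calendar.read"])
    sp_token = Credential("service_principal", "svc-reporting", ["reports.write"])
    agent_token = Credential("agent_identity", "agent:summarizer-v2", ["llm.invoke"])

    call_api("calendar-api", user_token)    # acts on the user's behalf
    call_api("reporting-api", sp_token)     # acts as a shared service
    call_api("model-gateway", agent_token)  # acts as itself

run_task("alice@example.com")
```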
This creates a new version of the "confused deputy" problem: agents using highly privileged service accounts can be manipulated via prompt injection or tool misuse into abusing that privilege. The agent becomes a deputy for an attacker, not the user.
For more on prompt injection and agent attack vectors, see Agent Attack Surface.
Reasoning Telemetry Required
In traditional systems, audit logs capture events: login timestamps, database queries, file accesses. For agents, "what happened" isn't enough. Post-incident forensics require understanding "why did the agent decide to do this?"
Auditors need:
- The chain of reasoning
- Which prompts influenced the decision
- Which tool calls were considered and executed
- What data sources informed the response
As Silverfort, Opal, CyberArk, and Okta have noted: AI agents sit on top of non-human identities, but introduce new risks—autonomous overreach, intent confusion, delegation chains, and explainability gaps—that traditional service-account hygiene can't address.
The challenge is captured perfectly in one statistic: "Your company has a million identities, and only 100,000 are human." The other 900,000 are machines, services, and now—agents.
7AI's PLAID framework offers one model for governing agent autonomy levels—see 7AI Deep Dive for details.
The Liability and Compliance Mandate
The identity problem isn't just technical. It's legal, regulatory, and, increasingly, a balance-sheet risk.
Who's Responsible When the Agent Acts?
Current legal analysis is converging on a few principles:
No legal personhood for agents. AI systems, including agents, are not recognized as legal persons. Liability must attach to human or corporate actors.
Responsibility tied to control and benefit:
- Developers who design and train the model and its guardrails
- Operators (deployers/integrators) who configure and approve use cases
- Organizations that profit from and supervise the agent's use
The EU AI Act is explicit: "apparent autonomy" does not absolve humans or organizations. The organization deploying the agent, and by extension its executives, remains on the hook even when the agent's decision path is opaque.
Insurance Is Tightening the Screws
The insurance industry is responding with a sharp shift:
Broad AI exclusions. Errors & Omissions (E&O), General Liability (GL), and Directors & Officers (D&O) carriers are introducing "absolute" AI exclusions that can bar coverage for any claim "arising out of or related to" AI use—even when AI is only tangentially involved. New ISO-style endorsements (e.g., Verisk's CG 40 47/48) formally exclude generative AI exposures starting in 2026.
Narrow affirmative AI coverage. Niche players offer specific coverage for gen-AI errors, IP infringement, and defamation, but often exclude AI vendors and foundational model providers.
Underwriting scrutiny of agent privilege. Cyber insurers and brokers increasingly treat privileged non-human identities and AI agent controls as underwriting signals. CyberArk notes that extending PAM, vaulting, and behavioral analytics to AI agents is becoming part of the expected control baseline.
AI warranties demand proof. Munich Re-style AI warranties and risk-sharing constructs are starting to require proof of guardrails, monitoring, and explainability. These map directly to agent identity and audit controls.
Net effect: Enterprises cannot rely on insurance to absorb agent failures unless they can demonstrate strong AI and identity governance—including for agents.
Regulatory Hooks Converging on Traceability
Multiple regulatory regimes are converging on the same requirements: attribution, provenance, and human oversight.
GDPR Article 22 – Automated Decision-Making
The GDPR includes a qualified prohibition on decisions based solely on automated processing that produce legal or similarly significant effects. Exceptions (contract necessity, legal authorization, explicit consent) require:
- Meaningful human intervention
- Transparency about logic and consequences
- The right to obtain human review and contest decisions
EU AI Act
A risk-based regime with transparency, traceability, logging, and registration obligations for high-risk systems. Requirements include:
- Logging and event recording
- Technical documentation, model evaluations, adversarial testing
- Registration of certain high-risk systems in an EU database
SOC 2 / ISO 42001
SOC 2 has no AI-specific Trust Services Criteria, but security, availability, confidentiality, privacy, and processing integrity all apply to AI and agents. SOC 2+ guidance increasingly ties AI controls to ISO 42001, including:
- AI-specific logging
- Model governance
- Risk assessment and monitoring
HIPAA
The Privacy Rule, Security Rule, and audit controls apply equally when Protected Health Information (PHI) is accessed or processed by AI agents. AI agents must:
- Enforce RBAC, MFA, encryption
- Produce audit trails of PHI access and anomalies
The Compliance Gap
"The agent did it" isn't an audit trail.
Auditors and regulators need:
- Attribution: Which identity (human or agent) initiated which action, and on whose behalf?
- Provenance: What inputs (data, prompts, tools) and policies influenced a decision?
- Oversight evidence: When and how did a human review, override, or approve the agent's actions?
- Control effectiveness: Evidence that controls (access limits, risk detection, HITL decision points) operated effectively over time.
This is why many SOC 2 and ISO 27001 consultancies are already recommending AI-specific log schemas that capture: user, agent, tool, data source, decision rationale, and escalation events.
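As an illustration, a record along those lines might look like the following sketch. The field names are illustrative, not a published schema; the key property is that identity, tool, data, rationale, and escalation all land in one correlatable record.

```python
import json
from datetime import datetime, timezone

record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "trace_id": "7c9f42d1",                       # spans user -> agent -> tools
    "user": "alice@example.com",                  # on whose behalf
    "agent": {"id": "agent:claims-triage", "model": "llm-2025-06", "prompt_hash": "sha256:ab12"},
    "tool": {"name": "emr.search", "scopes": ["phi.read"]},
    "data_sources": ["emr.patients", "emr.encounters"],
    "decision_rationale": "Summarized encounter history for triage request.",
    "escalation": {"required": True, "approver": "dr.smith@example.com"},
}
# In production: write to an immutable (WORM-style) store, not stdout.
print(json.dumps(record, indent=2))
```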
For patterns on integrating human oversight with agent systems, see HITL Firewall. For compliance in a vertical context, see Legal AI Exception.
The Technical Architecture: How Agent Identity Actually Works
If agents need their own identity layer, what does it look like? The architecture is emerging from work at Microsoft (Entra), Okta, Veza, the Cloud Security Alliance, and others.
Identifying an Agent
Agent identity primitives:
- Agent ID / Object ID: Primary unique identifier in a directory (e.g., Microsoft Entra Agent ID uses identical Object ID and App ID)
- Model ID: Underlying LLM or model version the agent is built on
- Deployment ID / environment: Specific runtime instantiation (tenant, region, environment)
- Versioning and drift tracking: Metadata about agent configuration, policies, prompt templates, and model versions over time
- Prompt fingerprinting: Hashes or signatures of initial system prompts and critical guardrail prompts to detect drift or tampering
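Prompt fingerprinting, in particular, is simple to sketch: hash the system and guardrail prompts at registration time, then re-check at runtime. This is an illustrative sketch, not any vendor's implementation.

```python
import hashlib

def fingerprint(prompt: str) -> str:
    return "sha256:" + hashlib.sha256(prompt.encode("utf-8")).hexdigest()

# Recorded at agent registration, alongside the directory entry.
registered = fingerprint("You are a claims-triage agent. Never export PHI.")

def verify_at_runtime(current_prompt: str) -> None:
    # A mismatch means the system prompt changed since registration:
    # treat as drift or tampering and quarantine the agent identity.
    if fingerprint(current_prompt) != registered:
        raise RuntimeError("Prompt drift detected: quarantine agent identity")

verify_at_runtime("You are a claims-triage agent. Never export PHI.")  # passes
```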
Directory representation:
Treat agents as first-class objects in IAM directories:
- Microsoft Entra Agent ID: Purpose-built "agent identity" accounts in Entra ID with Object ID/App ID, conditional access, identity protection, and lifecycle management. Tokens are JIT and scoped; no long-lived passwords or secrets for agents. Network-level protection via Global Secure Access.
- Okta Universal Directory: Agents registered with lifecycle tracking, governance, and threat protection.
Attributes attached to agent identities:
- Owner/sponsor
- Purpose and autonomy level
- Allowed tools/domains
- Data classification scope
Granting Access to Agents
The authentication and authorization patterns are converging on what multiple vendors call "OAuth for agents"—though no single standard has settled yet.
OAuth 2.x / OIDC as baseline:
- For user-delegated access: Authorization Code + PKCE
- For agent-only access: Client credentials
- Tokens scoped by:
- Resource server (API)
- Actions (read/write/admin)
- Data domains (e.g., PII vs non-PII)
- Short-lived, revocable tokens with minimal standing privilege; prefer "transaction tokens" per session
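As a concrete sketch of the agent-only pattern, here is a client credentials request for a narrowly scoped, short-lived token. The endpoint, client ID, and scope strings are placeholders for whatever your IdP issues.

```python
import requests

resp = requests.post(
    "https://idp.example.com/oauth2/token",       # placeholder IdP endpoint
    data={
        "grant_type": "client_credentials",
        "client_id": "agent-summarizer-v2",
        "client_secret": "<fetched-from-vault>",  # never hardcoded in source
        "scope": "reports.read",                  # least privilege: no write, no PII
    },
    timeout=10,
)
resp.raise_for_status()
token = resp.json()
# Expect a short TTL (e.g., expires_in <= 600) so revocation windows stay small.
print(token.get("expires_in"), token.get("scope"))
```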
Agent-centric authorization protocols emerging:
- Okta Cross App Access (XAA): An OAuth-like standard that centralizes agent/app-to-app authorization at the identity fabric instead of per-app consents.
- Google Agent2Agent (A2A) protocol: Agent cards, JWT/OIDC, mTLS for secure agent-to-agent communication. AWS Bedrock AgentCore also supports A2A.
- IETF draft AAuth (Agentic Authorization Grant): An OAuth 2.1 extension for agents collecting PII via natural language and then obtaining access tokens securely.
- Model Context Protocol (MCP) and MCP-Identity (MCP-I): Emerging specs for tool/server identity and delegation semantics, aiming to standardize "who is this agent, acting for whom, under what grant?"
Least-privilege scopes:
Permissions are encoded as policies (ABAC / OPA / Cerbos-style Policy Decision Points) consulted at tool call time, rather than static roles. This allows dynamic, context-aware access decisions based on:
- Agent identity
- User context
- Resource sensitivity
- Task requirements
- Current autonomy level
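A minimal sketch of that tool-call-time check, with the Policy Decision Point reduced to a local function (a real deployment would call out to OPA or Cerbos over the network); attribute names are illustrative.

```python
def pdp_allow(agent: dict, user: dict, resource: dict) -> bool:
    # Deny writes for agents below autonomy level 2.
    if resource["action"] == "write" and agent["autonomy_level"] < 2:
        return False
    # Deny PII access unless the delegating user holds the entitlement.
    if resource["sensitivity"] == "pii" and "pii.read" not in user["entitlements"]:
        return False
    return True

def call_tool(tool: str, agent: dict, user: dict, resource: dict) -> None:
    # Policy is evaluated per tool call, not assigned once per role.
    if not pdp_allow(agent, user, resource):
        raise PermissionError(f"PDP denied {tool} for {agent['id']}")
    print(f"{tool}: allowed")

call_tool(
    "crm.update",
    agent={"id": "agent:sdr", "autonomy_level": 2},
    user={"id": "alice", "entitlements": ["crm.write"]},
    resource={"action": "write", "sensitivity": "internal"},
)
```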
Auditing Agent Actions
Key telemetry dimensions for compliance and forensics:
Distributed tracing for agent chains:
- Trace IDs that span: End user → agent → sub-agents → tools → downstream APIs
Decision provenance logs:
- Prompts (system/user)
- Intermediate reasoning summaries (or full traces in sensitive contexts)
- Model responses
- Tool decisions
Tool and data lineage:
- Which tools/APIs were called, with what parameters and scopes
- What data sets/tables/views or files were read/written
Identity linkage:
- Explicit mapping of:
- Human user
- Agent identity
- Underlying service/machine accounts used along the chain
Compliance-ready storage:
- Immutable logs (WORM / S3 Object Lock style)
- Verifiable integrity
- Retention aligned to SOC 2 / HIPAA / GDPR requirements
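A hand-rolled sketch of the tracing idea: one trace ID minted per end-user request and propagated through agent, sub-agent, and tool calls. Production systems would use OpenTelemetry context propagation rather than manual passing.

```python
import uuid

def log(trace_id: str, actor: str, event: str) -> None:
    # In production: append to an immutable log store keyed by trace_id.
    print(f"trace={trace_id} actor={actor} event={event}")

def tool_call(trace_id: str) -> None:
    log(trace_id, "tool:sql.read", "read table=orders scope=read-only")

def sub_agent(trace_id: str) -> None:
    log(trace_id, "agent:analyst-sub", "planning query")
    tool_call(trace_id)

def agent(trace_id: str, user: str) -> None:
    log(trace_id, "agent:analyst", f"delegated request from {user}")
    sub_agent(trace_id)

trace_id = uuid.uuid4().hex   # minted once per end-user request
agent(trace_id, "alice@example.com")
```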
For integration with agent observability platforms, see Agent Observability.
Revoking Agent Access
Revocation patterns for "kill switches":
Identity-level kill switch:
- Disable or delete the agent identity in IAM (Okta/Entra) to immediately cut off tokens and new sessions
Token-level revocation:
- Central token blacklisting/rotation
- Shortening TTLs to tighten recovery windows
Runtime circuit breakers:
- Agent runtime can:
- Quarantine an agent identity upon risk score or anomaly
- Suspend high-risk capabilities (e.g., write access) while preserving read-only
Rollback mechanisms:
- Recorded or compensating operations that support "reverse the last X actions" where feasible (e.g., revert access changes, reverse transactions), coupled with human approval
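Here is a minimal sketch of the first two patterns combined: an identity-level kill switch plus a capability circuit breaker. The IAM endpoint is a placeholder; Okta and Entra expose equivalent deactivation APIs, but the exact calls differ.

```python
import requests

def disable_agent(agent_id: str, iam_base: str, admin_token: str) -> None:
    # Placeholder endpoint: disabling the identity blocks new sessions.
    # Pair with token revocation to close already-issued credentials.
    requests.post(
        f"{iam_base}/agents/{agent_id}/deactivate",
        headers={"Authorization": f"Bearer {admin_token}"},
        timeout=10,
    ).raise_for_status()

def circuit_breaker(agent: dict, risk_score: float) -> None:
    if risk_score > 0.8:
        # Hard stop: kill the identity itself.
        disable_agent(agent["id"], agent["iam_base"], agent["admin_token"])
    elif risk_score > 0.5:
        # Soft stop: suspend write capability, preserve read-only operation.
        agent["capabilities"].discard("write")   # capabilities is a set
```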
For operational playbooks on agent lifecycle and incident response, see Agent Operations Playbook.
The Competitive Landscape: Who's Building the Agent Identity Layer
The market is heating up fast, with identity incumbents racing to adapt and pure-play vendors carving out niches.
Identity Incumbents
Saviynt ($700M Series B, ~$3B valuation)
- Positioning: Converged IGA + PAM + ISPM as an AI-native Identity Cloud
- Capabilities:
- ISPM for AI agents with central inventory of non-human identities including AI agents, machines, applications, service accounts
- Agentic AI Onboarding for Applications—uses agentic AI to discover and onboard disconnected and cloud apps
- Risk scoring, violation detection, one-click remediation
- Visibility into MCP servers and tools behind AI agents, enriched with security signals from partners like CrowdStrike
- Scale: >500 enterprise customers, >50M human/third-party identities governed, ARR >$185M (Q3 2024)
Okta (Identity Security Fabric)
- Positioning: Identity fabric + agent lifecycle + cross-app access
- Capabilities:
- Okta for AI Agents: registration and ownership tracking in Universal Directory
- Identity Security Posture Management for agents
- Okta Privileged Access for least-privilege and JIT access
- Cross App Access (XAA): Open standard extending OAuth for agent/app-to-app interactions
- Differentiation: Treats agents as first-class identities with full lifecycle, governance, and threat protection
Microsoft Entra (Agent ID)
- Positioning: Deep Microsoft 365/Copilot integration story
- Capabilities:
- Purpose-built "agent identity" accounts in Entra ID
- Object ID/App ID with conditional access, identity protection, lifecycle management
- JIT, scoped tokens; no long-lived passwords/secrets for agents
- Network-level protection and traffic logging via Global Secure Access
- Differentiation: Tight integration with Entra Conditional Access and Microsoft's entire security stack
CyberArk (Secure AI Agents Solution)
- Positioning: PAM vantage point—privileged agents as admins
- Capabilities:
- Agents treated as privileged digital identities
- JIT access, secrets management, behavioral analytics, lifecycle governance
- Market data: CyberArk reports that 68% of organizations lack identity controls for AI/LLMs, and that machine identities outnumber humans 82 to 1
- Differentiation: Identity-first agent security from a privileged access management background
Pure-Play / Adjacent Agent Security
P0 – Privileged access control plane extended to agents. Closed-loop agentic governance and JIT access for AI agents.
MintMCP – MCP Gateway for agent security and centralized access controls. Reports that 82% of enterprises are deploying AI agents while only 44% have security policies covering them.
Veza – ISPM and Access Graph for non-human identities and AI agents as a "privileged workforce." Graph-based visibility across identity relationships.
Opal, Aembit, Akeyless, SecureAuth – Non-human identity management and AI agent identity solutions emphasizing unified governance, least privilege, and OAuth/OIDC for agents.
Differentiation Themes
- Saviynt: Converged platform (IGA + PAM + ISPM) with explicit AI agent and NHI governance
- Okta: Identity fabric + XAA standard; deep focus on agent lifecycle and cross-app access
- Microsoft: Agent ID tightly integrated with Entra, Conditional Access, and network controls
- CyberArk: Identity-first agent security from a PAM vantage point (privileged agents as admins)
- Pure-plays (P0, MintMCP, Veza, Opal): Focus on production env access, MCP/A2A integration, and graph-based visibility
For the full landscape of agent security and governance vendors, see Agent Ecosystem Map.
Real-World Use Cases: Where Agent Identity Is Critical Today
Agent identity isn't theoretical. It's already critical in regulated verticals where autonomous systems meet sensitive data.
Financial Services
Agents assisting with trading, fraud detection, loan approvals, and KYC (Know Your Customer). Identity and audit are crucial for SOX, AML (Anti-Money Laundering), and local banking regulations.
Common scenario: Agent needs temporary elevated access to execute a trade or approve a loan. Without JIT access and strong identity controls, teams fall back to long-lived admin tokens—which become high-value attacker targets.
Healthcare
Agentic AI for PHI (Protected Health Information) access: virtual assistants, triage, scheduling, clinical documentation. HIPAA requires strict access control and audit logging.
Common scenario: Agent accesses patient records across multiple EMR (Electronic Medical Record) systems to generate a clinical summary. Compliance requires proving which agent, on whose behalf, for what purpose, with what data sources—and that a human physician reviewed the output before clinical use.
See the forthcoming Abridge Deep Dive for how a healthcare AI company handles Epic integration and HIPAA compliance at scale.
Legal / Regulated Professional Services
Agents summarizing cases, drafting documents, and accessing privileged client data. Confidentiality and explainability obligations are high.
Common scenario: Agent chain spans multiple security domains—e.g., an LLM agent calls an incident-response agent that in turn calls cloud runbooks. Each step requires different credentials and scopes. Identity layer must ensure proper delegation and audit trail.
See the forthcoming Harvey Deep Dive for how a legal AI company approaches "The Vault" architecture for data privacy.
HR / Internal Operations
Agents updating HRIS data, payroll, and personnel records. These sit at the intersection of privacy, labor law, and SOX/ISO 27001 obligations.
Common scenario: Agent is steered into calling sensitive APIs outside its intended scope via prompt injection. Identity layer must block or down-scope the request based on policies, not just rely on the agent's "intent."
For more on prompt injection and agent attack vectors, see Agent Attack Surface.
The Policy and Observability Stack
Identity systems need policy and monitoring layers to be effective.
Why RBAC/ABAC Alone Are Insufficient
Static RBAC (Role-Based Access Control) is too coarse and brittle for ephemeral, multi-tenant, dynamic agent workflows. Roles were designed for humans with stable job functions, not agents that spawn and terminate per task.
ABAC (Attribute-Based Access Control) improves expressiveness, but policies can still be static and lack situational awareness about:
- Chain length (how many hops in the agent chain)
- Current autonomy level (L1 supervised vs L3 autonomous)
- Data sensitivity context (is this PII? Financial data?)
Policy-as-Code and PLAID-Style Models
Emerging best practices:
Policy-as-Code (PaC):
- External Policy Decision Points (e.g., OPA, Cerbos) used to evaluate agent actions at runtime using attributes: agent, user, resource, environment, task
Autonomy tiers / PLAID-like models:
- Levels of autonomy (L1–L3 or 0–5) map to:
- Required human-in-the-loop (HITL)
- Maximum allowed transaction values
- Data domains agents can touch
Example policy elements:
- "Agent can read customer data but not PII fields"
- "Agent can write to database only via audited API X, not direct DB connections"
- "Agent may execute trades under $10K; above that, escalate to human approver"
- "Agent must request human approval when its confidence < threshold or when GDPR Article 22-relevant automated decision is detected"
These map directly into a PLAID (People-Led, AI-Driven) governance model, where identity and policy engines enforce progressive trust in lock-step with agent autonomy.
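To illustrate, here is a sketch of how autonomy tiers and the example policy elements above translate into policy-as-code. The tier limits and confidence threshold are illustrative values, not recommendations.

```python
# Max transaction value per autonomy level (illustrative tiers).
LIMITS = {1: 0, 2: 10_000, 3: 100_000}

def decide(agent_level: int, action: str, amount: float, confidence: float) -> str:
    if action == "execute_trade" and amount > LIMITS.get(agent_level, 0):
        return "escalate_to_human"   # above this tier's transaction limit
    if confidence < 0.7:
        return "escalate_to_human"   # low model confidence -> HITL
    return "allow"

print(decide(agent_level=2, action="execute_trade", amount=9_500, confidence=0.9))   # allow
print(decide(agent_level=2, action="execute_trade", amount=25_000, confidence=0.9))  # escalate_to_human
```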
For details on PLAID autonomy levels, see 7AI Deep Dive.
Observability Integration
LangSmith / Langfuse / LLM tracing tools
- For reasoning and tool-call traces
- Need to correlate with identity logs from IAM systems
- Example: Trace ID links agent decision chain to specific agent identity and user context
SIEM/SOAR (Security Information and Event Management / Security Orchestration, Automation and Response)
- Entra Agent ID logs, Okta agent events, CyberArk PAM logs, and MCP/A2A traces forwarded to SIEM
- SOAR used to auto-quarantine agents on anomaly detection
- Example: Agent exhibits unusual sign-in pattern → SIEM flags → SOAR disables agent identity
Custom dashboards (Veza-style Access Graphs)
- Agents by owner, autonomy, privilege score, data domain
- Active vs dormant privileges
- Violations and open investigations
- Example: Dashboard shows 300 agent identities, 45 with dormant high-privilege access, 3 with recent violations
For broader agent observability patterns, see Agent Observability.
HITL Integration
Human-in-the-loop isn't just UX; it's an identity and audit problem.
Approval gates:
- Agents must route high-risk actions (e.g., large transactions, PHI exports, access revocations) to human approvers
- Approvers themselves are authenticated and logged via IGA/PAM
Escalation and circuit breakers:
- Identity-aware circuit breakers that:
- Block all write operations for an agent when risk scores exceed threshold
- Require explicit human re-enablement
Demonstrable "meaningful human intervention" (GDPR Article 22 compliance):
- Logs must show:
- Who reviewed the automated decision
- What additional information they considered
- Whether they overrode or confirmed the agent
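A minimal sketch of an approval gate that emits exactly this evidence: who reviewed, what they saw, when, and the outcome. The approver notification is stubbed; a real gate would block on an authenticated human decision.

```python
from datetime import datetime, timezone

def request_approval(action: dict, approver: str) -> dict:
    # Stub: a real gate notifies the approver (authenticated via IGA/PAM)
    # and blocks until they decide; here the decision is hardcoded.
    decision = "approved"   # or "rejected" / "overridden"
    return {
        "action": action,
        "approver": approver,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
        "evidence_seen": ["agent rationale", "source documents"],
        "decision": decision,
    }

record = request_approval(
    {"type": "phi_export", "agent": "agent:clinical-summary", "patient": "p-123"},
    approver="dr.smith@example.com",
)
# This record is the Article 22 evidence: reviewer, inputs, timing, outcome.
print(record["decision"], record["reviewed_at"])
```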
For detailed HITL patterns, see HITL Firewall.
The Future: Agent Credentials and Standards
Will agents have "credentials" like humans? The answer is starting to form.
Agent "Passports" and Verifiable Credentials
CSA Agentic AI IAM framework:
- Uses DIDs (Decentralized Identifiers) + VCs (Verifiable Credentials)
- Agents can present cryptographically verifiable identities and delegated rights
Google AP2 (Agent Payments Protocol):
- Uses Mandates and VCs to prove users authorized specific financial actions
- Example: User delegates payment authority to agent within certain limits; agent presents VC to payment processor
MCP-Identity (MCP-I):
- Nascent spec for verifiable identity and delegation for AI agents
- Compatible with web standards
- Aims to standardize "who is this agent, acting for whom, under what grant?"
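For intuition, a W3C-style verifiable credential expressing agent delegation might look like the following sketch. The structure follows the VC data model, but the values are illustrative; AP2 and MCP-I define their own profiles.

```python
# Illustrative W3C-style VC asserting delegated payment authority to an agent.
credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "AgentDelegationCredential"],
    "issuer": "did:example:enterprise-idp",
    "credentialSubject": {
        "id": "did:example:agent-payments-v1",     # the agent's DID
        "actingFor": "did:example:alice",          # the delegating user
        "grant": {"action": "payment", "maxAmount": 500, "currency": "USD"},
        "expires": "2026-01-01T00:00:00Z",
    },
    # "proof": {...}  # issuer's signature binding the claims cryptographically
}
print(credential["credentialSubject"]["grant"])
```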
OAuth Extensions and "OAuth for Agents"
OAuth 2.1 / OIDC as baseline, but new extensions emerging:
IETF AAuth: An OAuth 2.1 extension for agents that collect PII conversationally (including over PSTN/SMS) and then obtain access tokens securely
Okta Cross App Access (XAA): Agent-to-agent OAuth patterns with centralized policy enforcement
Google A2A protocol: Agent cards, JWT/OIDC, mTLS; role-based and zero-trust governance across agents
Key open questions:
- Cross-tenant, cross-platform federated agent identity (e.g., an agent from Vendor A running in Tenant B accessing SaaS C)
- Standard metadata for autonomy, allowed scopes, and provenance
- Will agents have literal "credentials" like humans? (The "Agent Social Security Number" concept)
Net: There is no single, settled "OAuth for agents" yet—but the shape is emerging: OAuth/OIDC + MCP/A2A + DIDs/VCs + directory-native agent identities.
Industry consortiums and working groups are forming, but standards work typically lags deployment by 2-3 years. The market is moving faster than the standards bodies.
Contrarian Take: Is This Just IAM Rebranded?
Before declaring "agent identity" a new category, it's worth hearing the skeptic's case.
The Skeptic's Case
Agents are just fancy service accounts with orchestration. They use API keys, make HTTP calls, and access databases—just like service accounts have done for decades.
Traditional PAM + good secrets hygiene should be sufficient. Vault your credentials, rotate them regularly, monitor for anomalies. Why do we need a new category?
Vendors are rebranding existing IGA/PAM to ride AI hype. Every identity vendor is slapping "AI" on their product. Is this substance or marketing?
Market may be too early. Most enterprises haven't deployed enough agents to justify new infrastructure. This is a solution in search of a problem.
The Bull Case
Agent autonomy and reasoning fundamentally change the attribution and audit problem. Service accounts execute deterministic code. Agents make autonomous decisions based on reasoning chains. When an agent makes a mistake, you need to understand why it decided to do that—not just what API it called.
Liability and insurance mandates require provenance and explainability, not just event logs. Regulators want to know: "Who is responsible when the agent acts?" Insurance underwriters are making AI agent controls a coverage requirement. This isn't solved by traditional PAM.
82:1 machine-to-human ratio + 68% enterprise agent adoption = demand is real. The scale of non-human identities already dwarfs humans. Adding autonomous reasoning to those identities creates a new control problem.
$700M raise at $3B valuation + Okta, Microsoft, CyberArk all building explicit agent identity = market signal is strong. This isn't one vendor hyping a feature. It's the entire identity industry converging on a new control plane.
The Verdict
Agent identity may start as "IAM++" but the compliance, liability, and standards work required suggests it's evolving into a distinct layer—just as PAM emerged from IGA in the early 2010s.
The question isn't whether you need it. It's whether you build it in-house or buy it as a service—before your first agent-related audit failure.
Conclusion: Agent Identity as Infrastructure
Saviynt's $700M raise is early proof that "agent identity" is becoming a standalone infrastructure layer.
The case is clear:
- Agents break IGA/PAM assumptions around static roles, durable credentials, and event-level auditing
- Liability, insurance, and regulation are channeling responsibility back to identity and audit capabilities
- Technical architecture is converging on directory-native agent identities + OAuth-like protocols + distributed tracing + kill switches
- Competitive landscape shows incumbents (Okta, Microsoft, CyberArk) and pure-plays (P0, Veza, MintMCP) all racing to claim the layer
By 2027, "agent identity" will be as standard a procurement category as IGA and PAM are today. CISOs will evaluate:
- Agent lifecycle management (registration, provisioning, de-provisioning)
- Agent access governance (ABAC/PaC policies, JIT, least privilege)
- Agent audit and compliance (provenance logs, GDPR/HIPAA/SOC 2 readiness)
- Agent observability integration (SIEM, LangSmith, custom dashboards)
Agent identity is the backbone connecting the Agent Safety Stack, Agent Attack Surface, HITL Firewall, Agent Observability, and Agent Operations Playbook into a coherent governance framework.
The build-or-buy decision is coming either way. The only real choice is whether you make it before or after your first agent-related audit failure.
For a comprehensive view of the agent security and governance landscape, see our Agent Ecosystem Map.