You Cannot Secure AI Agents with Human-Era Identity Models
By Dritan Saliovski
Machine identities are on track to outnumber human identities in most enterprises this year. Yet according to the Cloud Security Alliance and Oasis Security's 2026 NHI and AI Security Report, 78% of organizations have no formal policies for creating or removing AI identities, and 92% lack confidence that their legacy IAM systems can handle the shift.
Key Takeaways
- 78% of organizations lack formal policies for AI identity lifecycle management
- 88% of organizations report suspected or confirmed AI agent security incidents
- 80% of IT professionals have witnessed AI agents performing unauthorized actions
- Only 22% of organizations treat AI agents as independent, identity-bearing entities
The Identity Model Was Built for Humans
Traditional identity and access management follows a predictable pattern. A human user is onboarded, assigned a role, granted permissions based on that role, authenticates through a defined workflow, and eventually offboards. Sessions are predictable. Behavior patterns are recognizable. Access reviews happen quarterly or annually.
AI agents operate under none of these assumptions. They spawn on demand for specific tasks. They chain actions across multiple systems in seconds. They may create sub-agents that inherit permissions without explicit provisioning. They operate at machine speed, making thousands of access decisions in the time it takes a human to complete a single login. And when they finish, they may simply stop existing, leaving behind incomplete or temporary audit records.
The following table highlights the structural mismatch:
| Dimension | Human Identity Model | AI Agent Reality |
|---|---|---|
| Lifecycle | Onboard, assign role, periodic review, offboard | Spawn on demand, dynamic scope, ephemeral existence |
| Access pattern | Predictable, session-based, human speed | Dynamic, tool-chaining, machine speed |
| Authentication | Defined start/end, MFA, session tokens | Continuous or ephemeral, no clear session boundary |
| Permission model | Role-based, quarterly review | Task-specific, changes at runtime when tools are invoked |
| Sub-identity creation | Rare (delegation is manual) | Common (agents spawn sub-agents with inherited permissions) |
| Deprovisioning | Manual offboarding process | Requires automated credential revocation |
The IAM infrastructure that governs human access was not designed for this. Role-based access control assumes stable roles with predictable access patterns; AI agents change behavior dynamically at runtime when they call tools or shift contexts. Session-based authentication assumes a defined start and end; agents may operate continuously or ephemerally with no clear session boundary. This identity gap is the root cause of the security differences between AI agents and conventional chatbots.
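One concrete answer to uncontrolled sub-agent inheritance is attenuation: a sub-agent may hold at most the intersection of what it requests and what its parent already holds. The sketch below illustrates the idea; the `AgentIdentity` class and scope names are hypothetical, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """A registered agent identity with an explicit permission scope."""
    name: str
    scopes: frozenset  # e.g. {"crm:read", "invoices:write"}

    def spawn_sub_agent(self, name: str, requested: set) -> "AgentIdentity":
        # Attenuate rather than inherit: grant only scopes the sub-agent
        # asks for AND the parent already holds. Privileges can only shrink.
        granted = self.scopes & frozenset(requested)
        return AgentIdentity(name=name, scopes=granted)

parent = AgentIdentity("billing-agent", frozenset({"crm:read", "invoices:write"}))
child = parent.spawn_sub_agent("report-helper", {"crm:read", "crm:delete"})
# child holds only "crm:read"; the unheld "crm:delete" request is dropped
```

Enforcing intersection at spawn time guarantees that no chain of sub-agents can ever escalate beyond the scope granted to the original identity.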
The Ghost Process Problem
The most immediate risk is what can be described as the ghost process problem: AI agents operating within enterprise environments with real access and real authority, but without a defined identity record, lifecycle management, or audit trail.
This is not theoretical. At RSAC 2026, the dominant theme across hundreds of vendor presentations was agentic AI security. The conversation has moved from experimentation to operational deployment. Organizations are deploying AI agents that read customer data, modify configurations, invoke APIs, and chain actions across systems. Many of these agents operate with elevated permissions that no one explicitly granted.
The blast radius of a compromised AI agent is defined by its entitlements. Unlike a compromised human account, where behavior anomaly detection may flag unusual activity, a compromised agent may behave indistinguishably from its normal operation pattern, simply directed toward a different objective. As enterprise AI agent security risks evolve, the ghost process problem is the entry point.
What a Reference Design Looks Like
Securing AI agents requires treating them as a distinct identity class with purpose-built controls. The following elements form a minimum viable reference design:
| Control Domain | Requirement | Implementation Example |
|---|---|---|
| Naming and registration | Unique, discoverable identity in directory | Okta Universal Directory expansion for non-human identities |
| Scoping and least privilege | Task-specific, time-bound access | Intent-based access control evaluated at runtime |
| Secrets and credentials | Short-lived tokens, automatic rotation | HashiCorp Vault adapted for agent credential cadence |
| Observability and audit | Full decision-chain logging | What the agent did, why, what data accessed, what sub-agents spawned |
| Deprovisioning | Automated credential revocation on task completion | Orphaned agent identities treated like orphaned service accounts |
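The observability row deserves elaboration: a useful decision-chain record captures not just what the agent did, but why, what data it touched, and which sub-agents it spawned. A minimal sketch of such a record follows; every field name and value here is illustrative, not a standard schema.

```python
import json
import time
import uuid

def audit_record(agent_id, action, reason, data_accessed, sub_agents=()):
    """Build one decision-chain audit entry (illustrative field names)."""
    return {
        "event_id": str(uuid.uuid4()),       # unique, for correlation
        "timestamp": time.time(),
        "agent_id": agent_id,
        "action": action,                    # what the agent did
        "reason": reason,                    # why: the triggering task/intent
        "data_accessed": list(data_accessed),
        "sub_agents_spawned": list(sub_agents),
    }

entry = audit_record(
    agent_id="billing-agent",
    action="export_invoices",
    reason="monthly-close task",
    data_accessed=["invoices.2026-01"],
    sub_agents=["report-helper"],
)
print(json.dumps(entry))  # ship to your log pipeline or SIEM
```

Because agents may stop existing when a task completes, these records must be emitted at decision time, not collected at deprovisioning.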
For organizations that have already begun deploying agents, the security-first deployment framework maps these identity controls to the six operational domains that cover the full agent security lifecycle.
What To Do Now
Start with visibility. Inventory every AI agent, bot, and automated workflow operating in your environment. Classify them by access level, data sensitivity, and lifecycle status. Identify which ones have identity records and which are operating as ghost processes.
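The inventory step above can be reduced to a simple classification pass: anything with real access but no identity record is a ghost process. The records and field names below are illustrative placeholders for whatever your environment actually surfaces.

```python
# Classify an agent inventory: flag anything that has access
# but no identity record. All entries here are illustrative.
inventory = [
    {"name": "support-bot",  "access": "customer-data", "identity_record": True},
    {"name": "etl-agent",    "access": "warehouse",     "identity_record": False},
    {"name": "triage-agent", "access": "tickets",       "identity_record": False},
]

ghosts = [a["name"] for a in inventory if not a["identity_record"]]
print(ghosts)  # the agents to register or decommission first
```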
From there, the priority actions are: establish a formal AI identity policy covering creation, scoping, monitoring, and removal; implement time-bound, least-privilege access for all agent identities; deploy logging and observability that captures the full decision chain of agent actions; and integrate agent identity management into your existing IAM governance reviews. For organizations operating under NIS2 and the Swedish Cybersecurity Act, agent identities fall within the Act's entity-wide compliance perimeter.
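Time-bound, least-privilege access can be made structural rather than procedural: tie the credential's lifetime to the task itself, so revocation happens automatically on completion and on failure alike. The sketch below shows the pattern with a toy in-memory store; a real deployment would back this with a secrets manager, and all names here are hypothetical.

```python
import secrets
import time
from contextlib import contextmanager

class CredentialStore:
    """Toy issuer; stands in for a real secrets manager."""
    def __init__(self):
        self.active = {}  # token -> (agent_id, expiry)

    def issue(self, agent_id, ttl_seconds):
        token = secrets.token_hex(16)
        self.active[token] = (agent_id, time.time() + ttl_seconds)
        return token

    def revoke(self, token):
        self.active.pop(token, None)

    def is_valid(self, token):
        entry = self.active.get(token)
        return entry is not None and time.time() < entry[1]

@contextmanager
def agent_task(store, agent_id, ttl_seconds=300):
    # The credential lives only as long as the task: it is revoked on
    # completion and on error, so no orphaned secret outlives the agent.
    token = store.issue(agent_id, ttl_seconds)
    try:
        yield token
    finally:
        store.revoke(token)

store = CredentialStore()
with agent_task(store, "billing-agent") as token:
    assert store.is_valid(token)   # usable only inside the task
assert not store.is_valid(token)   # revoked the moment the task ends
```

The TTL is a backstop for agents that hang or are killed mid-task; the `finally` clause handles the normal path. Either way, the credential cannot persist as an orphaned service-account secret.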
The Agent Identity Reference Architecture covers the complete identity lifecycle design, IAM gap assessment framework, and an implementation roadmap organized by organizational maturity level.
Build an AI Agent Identity Architecture
Innovaiden works with leadership teams deploying AI agents across their organizations, from initial setup and training to security framework alignment and governance readiness. Reach out to discuss how we can help your team.
Frequently Asked Questions
Why can't traditional IAM systems handle AI agent identities?
Traditional IAM assumes stable roles with predictable access patterns, session-based authentication with defined start and end points, and human-speed access decisions. AI agents operate under none of these assumptions: they spawn on demand, chain actions across systems in seconds, may create sub-agents that inherit permissions, and operate at machine speed making thousands of access decisions in the time a human completes a single login.
What is the ghost process problem in AI agent security?
The ghost process problem describes AI agents operating within enterprise environments with real access and real authority, but without a defined identity record, lifecycle management, or audit trail. These agents read customer data, modify configurations, and invoke APIs, often with elevated permissions that no one explicitly granted.
What percentage of organizations have formal AI identity policies?
According to the Cloud Security Alliance and Oasis Security's 2026 NHI and AI Security Report, 78% of organizations lack formal policies for creating or removing AI identities. Additionally, 92% lack confidence that their legacy IAM systems can handle the shift to machine identities, and only 22% treat AI agents as independent, identity-bearing entities.
What are the five elements of a minimum viable agent identity reference design?
The five elements are: naming and registration (unique discoverable identity in directory), scoping and least privilege (task-specific time-bound access), secrets and credential management (short-lived tokens with automatic rotation), observability and audit (full decision-chain logging), and deprovisioning (automated credential revocation when agents complete tasks).
What happened at RSAC 2026 regarding AI agent security?
At RSAC 2026, the dominant theme across hundreds of vendor presentations was agentic AI security. The conversation has moved from experimentation to operational deployment. Organizations are deploying AI agents that read customer data, modify configurations, invoke APIs, and chain actions across systems, many operating with elevated permissions that no one explicitly granted.
Sources
- Cloud Security Alliance and Oasis Security - NHI and AI Security Report, 2026
- Gravitee - State of AI Agent Security 2026
- SailPoint - AI Agent Authorization Survey, via Strata
- Okta - Showcase 2026, Universal Directory for Non-Human Identities
- IBM Think / RSAC 2026 - Agentic AI Security Coverage
- CyberArk - Non-Human Identity Research
- MSSP Alert - NHI Reporting