AI Agents
We've curated 52 cybersecurity statistics about AI agents to help you understand how automated systems are being used for threat detection and response in 2025, and how they enhance security practices while also introducing new vulnerabilities.
42% of healthcare companies failed an identity-related compliance audit.
Only 6% of security leaders rank securing non-human identities as their most difficult challenge.
34% of healthcare organizations name AI impersonation of users as their top emerging threat.
Only 23% of healthcare organizations offer passwordless authentication.
Only 17% of healthcare organizations list compliance as a top concern.
Fewer than 50% of organizations monitor access or behavior for the AI systems they deploy.
Over 50% of organizations use AI to detect threats.
85% of organizations lack proper security controls for AI agents.
85% of organizations state they are "ready for AI in security".
31% of respondents say AI agents accessed inappropriate data.
57% of respondents say AI agents sharing privileged data is a factor contributing to AI agents as a security risk.
54% of respondents say AI agents accessing and sharing inappropriate information is a factor contributing to AI agents as a security risk.
55% of respondents say AI agents making decisions based on inaccurate or unverified data is a factor contributing to AI agents as a security risk.
80% of companies say their AI agents have taken unintended actions.
72% of respondents state AI agents pose a greater risk than machine identities.
An overwhelming 92% state that governing AI agents is critical to enterprise security.
32% of respondents say AI agents downloaded sensitive content.
Only 44% of organizations report having policies in place to secure AI agents.