AI Security
We've curated 23 statistics on AI security to help you understand how AI is being used to detect threats, strengthen defenses, and even automate responses across the evolving cybersecurity landscape in 2025.
Use of risk-ranking methods to determine where LLM-generated code is safe to deploy increased by 12%.
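To make that concrete, here is a minimal sketch of what a risk-ranking method for LLM-generated code might look like: scoring each generated change against a few deployment-risk signals. The signal names, weights, and thresholds are illustrative assumptions, not drawn from the survey behind this statistic.

```python
from dataclasses import dataclass

# Illustrative risk signals for a single LLM-generated change.
# All field names and weights below are hypothetical.
@dataclass
class GeneratedChange:
    touches_auth_code: bool      # modifies authentication/authorization paths
    touches_infra_config: bool   # modifies IaC, CI/CD, or secrets handling
    has_test_coverage: bool      # accompanied by passing tests
    human_reviewed: bool         # a human approved the diff
    lines_changed: int

def risk_score(change: GeneratedChange) -> float:
    """Return a 0-1 risk score; higher means less safe to auto-deploy."""
    score = 0.0
    if change.touches_auth_code:
        score += 0.4
    if change.touches_infra_config:
        score += 0.3
    if not change.has_test_coverage:
        score += 0.2
    if not change.human_reviewed:
        score += 0.1
    # Large diffs are harder to review, so scale the score up slightly.
    score *= 1.0 + min(change.lines_changed / 500, 1.0)
    return min(score, 1.0)

def deployment_tier(change: GeneratedChange) -> str:
    """Map a score to a deployment decision (thresholds are assumptions)."""
    score = risk_score(change)
    if score < 0.2:
        return "auto-deploy"
    if score < 0.6:
        return "deploy after human review"
    return "block: manual rewrite required"
```

Changes scoring below the lowest threshold would be eligible for automated deployment; everything else would be routed to review or blocked.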
In 2025, an AI agent placed in the top 5% of teams in a major cybersecurity competition.
Application of custom rules to automated code review tools to catch issues unique to AI-generated code increased by 10%.
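One hedged example of such a custom rule, sketched in Python: flagging imports of packages that are not declared project dependencies, since "hallucinated" dependencies are a failure mode largely specific to AI-generated code. The allowlist and snippet are hypothetical; a real tool would also allowlist the standard library and parse the dependency manifest directly.

```python
import ast

# Hypothetical allowlist: packages actually declared in the project's
# dependency manifest (in practice, parsed from pyproject.toml or
# requirements.txt, plus the standard library).
KNOWN_PACKAGES = {"requests", "numpy", "flask"}

def find_unknown_imports(source: str) -> list[str]:
    """Flag top-level imports that are not on the dependency allowlist."""
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module.split(".")[0]]
        else:
            continue
        flagged.extend(n for n in names if n not in KNOWN_PACKAGES)
    return flagged

if __name__ == "__main__":
    snippet = "import requests\nimport fastjsonutils  # likely hallucinated\n"
    print(find_unknown_imports(snippet))  # ['fastjsonutils']
```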
Teams using attack intelligence to track emerging AI vulnerabilities increased by 10%.
65% of enterprises consider action-level guardrails and runtime controls to be a critical priority for AI agents.
Only 21% of enterprises report full visibility into agent actions, MCP tool invocations, or data access.
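Taken together, the two statistics above point at the same control surface: intercepting each agent action at runtime and recording it. A minimal sketch follows, assuming a simple allowlist policy and an in-memory audit log; the tool names, policies, and log format are all invented for illustration.

```python
import json
import time

# Illustrative action-level guardrail: every tool call an agent attempts
# passes through a policy check and is logged for auditability.
ALLOWED_ACTIONS = {
    "search_docs": {"max_calls_per_min": 30},
    "send_email":  {"max_calls_per_min": 2, "requires_approval": True},
}

audit_log = []

def invoke_tool(agent_id: str, action: str, args: dict, approved: bool = False) -> str:
    """Check an agent's tool call against policy and record the decision."""
    policy = ALLOWED_ACTIONS.get(action)
    entry = {"ts": time.time(), "agent": agent_id, "action": action, "args": args}
    if policy is None:
        entry["decision"] = "denied: action not on allowlist"
    elif policy.get("requires_approval") and not approved:
        entry["decision"] = "held: human approval required"
    else:
        entry["decision"] = "allowed"
        # ... dispatch to the real tool here ...
    audit_log.append(entry)
    return entry["decision"]

print(invoke_tool("agent-7", "send_email", {"to": "ops@example.com"}))
# held: human approval required
print(json.dumps(audit_log, indent=2))
```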
AI applications experienced a 43% increase in security incidents over the past 12 months, marking the second-largest increase across all channels.
30% of businesses reported that they use AI and must now protect it like any other critical system.
53% of IT leaders globally are highly or extremely concerned about AI security risks.
76% of respondents reported that autonomous AI agents are the hardest systems to secure.
68% of organizations invested in AI-powered protection capabilities in 2025.
74% of organizations expect their focus on AI security to increase significantly over the next two years.
78% of U.S. CISOs expect AI-related security risks and vulnerabilities to create a moderate or significant amount of new IT or security work for their teams.
78% of CISOs lack a formal strategy for handling AI identities in a zero trust security architecture in 2025.
100% of organizations plan to invest more of their budget in AI-related security initiatives in the next 12 months.
69% of healthcare IT leaders feel pressured to adopt AI faster than they can secure it.
83% of healthcare IT and compliance leaders have raised concerns about AI security.
Approximately 70% of cloud AI workloads contain at least one unremediated vulnerability.
91% of Amazon SageMaker users have at least one notebook that, if compromised, could grant unauthorized access.
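One concrete, assumed reading of that finding is the common SageMaker misconfiguration of leaving root access enabled on notebook instances. A short boto3 sketch that flags such instances, using the documented list_notebook_instances and describe_notebook_instance calls (the region is illustrative, and root access is only one possible interpretation of the statistic):

```python
import boto3

# Flag SageMaker notebook instances with root access enabled, one
# assumed version of the misconfiguration behind this statistic.
sm = boto3.client("sagemaker", region_name="us-east-1")

paginator = sm.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for nb in page["NotebookInstances"]:
        detail = sm.describe_notebook_instance(
            NotebookInstanceName=nb["NotebookInstanceName"]
        )
        if detail.get("RootAccess") == "Enabled":
            print(f"{nb['NotebookInstanceName']}: root access enabled")
```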
Tenable Research found CVE-2023-38545, a critical curl vulnerability, in 30% of cloud AI workloads.
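CVE-2023-38545 is the SOCKS5 heap buffer overflow affecting curl and libcurl versions 7.69.0 through 8.3.0, fixed in 8.4.0. A coarse version-check sketch follows; actual exploitability also depends on whether the workload uses a SOCKS5 proxy with remote hostname resolution, so treat this as a first-pass triage signal only.

```python
import re
import subprocess

# CVE-2023-38545 affects curl/libcurl 7.69.0 through 8.3.0;
# 8.4.0 is the first fixed release.
AFFECTED_MIN = (7, 69, 0)
FIXED = (8, 4, 0)

def curl_version() -> tuple:
    """Parse the installed curl version from `curl --version` output."""
    out = subprocess.run(
        ["curl", "--version"], capture_output=True, text=True
    ).stdout
    m = re.match(r"curl (\d+)\.(\d+)\.(\d+)", out)
    if not m:
        raise RuntimeError("could not parse curl version")
    return tuple(int(x) for x in m.groups())

def is_vulnerable(version: tuple) -> bool:
    return AFFECTED_MIN <= version < FIXED

if __name__ == "__main__":
    v = curl_version()
    status = "VULNERABLE" if is_vulnerable(v) else "not affected"
    print(f"curl {'.'.join(map(str, v))}: {status} (CVE-2023-38545)")
```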