AI
Cybersecurity statistics about AI
51% of respondents used speed of incident response to evaluate AI efficacy.
Less than half (48%) of organisations express high confidence in controlling sensitive data used for AI/ML training.
97% of CISOs rate metadata lake technology as either “critical” (36%) or “very valuable” (61%) for solving their data visibility and AI governance issues.
64% of security teams have trouble tracking what data feeds their AI systems.
Costs were an obstacle for 46% of respondents in effective use of AI.
False positive and false negative rates are the No. 1 way organizations evaluate the efficacy of AI in security, named by 66% of respondents.
Machine learning-based discovery tools often identify 31% more API endpoints than those reported by enterprises.
49% perform a privacy risk assessment to monitor their privacy programs.
Only 11% of AI-powered APIs implemented robust security measures, such as bearer tokens with expiration times.
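The "bearer tokens with expiration times" mentioned in the statistic above can be illustrated with a minimal sketch. This is not any particular product's implementation: the secret key, claim names, and TTL below are hypothetical, and a production system would typically use a standard such as JWT rather than hand-rolled tokens.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # hypothetical signing key, for illustration only


def issue_token(subject: str, ttl_seconds: int = 3600) -> str:
    """Issue a signed bearer token that embeds an expiration timestamp."""
    payload = json.dumps({"sub": subject, "exp": int(time.time()) + ttl_seconds})
    body = base64.urlsafe_b64encode(payload.encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"


def verify_token(token: str) -> bool:
    """Reject tokens with a bad signature or a past expiration time."""
    try:
        body, sig = token.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    payload = json.loads(base64.urlsafe_b64decode(body))
    return payload["exp"] > time.time()
```

The expiration check is what distinguishes this from a static API key: a leaked token is only useful until its `exp` timestamp passes.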
Gartner predicts that spending on AI-optimised servers will be double that on traditional servers in 2025, reaching $202 billion.
47% of organisations cite adversarial advances powered by generative AI (GenAI) as their primary concern.
84% of organizations say a lack of transparency in applying AI applications within business processes is causing regulatory compliance issues.
Wallarm tracked 439 AI-related CVEs in 2024.
21.5% of AI vulnerabilities are indirectly tied to APIs, including flaws in third-party integrations.
90% of surveyed executives expect to scale, optimise, or innovate with AI within the next two years.
66% of organisations expect AI to have the most significant impact on cybersecurity in the year to come, but only 37% report having processes in place to assess the security of AI tools before deployment.
AI vulnerabilities increased by 1,025% from 2023 to 2024.
35% of enterprises are just beginning their AI journey.
63% of enterprise leaders believe AI increases API security risk.
77.4% of vulnerabilities in AI products are directly API-related, such as weak API authentication, inadequate rate limiting, and broken access controls.
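One of the weaknesses named above, inadequate rate limiting, is commonly addressed with a token-bucket limiter. The sketch below is a generic illustration under assumed parameters (`rate`, `capacity` are hypothetical), not a reference to any surveyed product.

```python
import time


class TokenBucket:
    """Allow roughly `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A per-client instance of such a bucket caps request bursts against an API endpoint, which limits both abuse and accidental overload.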