Gen AI
We've curated 125 cybersecurity statistics about Gen AI to help you understand how generative artificial intelligence is shaping threat landscapes, enhancing security practices, and influencing detection technologies in 2025.
The pass rates for Log Injection vulnerabilities were near 12% across all evaluated models.
The pass rates for Cross-Site Scripting (XSS) vulnerabilities remained below 14% across all evaluated models.
OpenAI’s non-reasoning GPT-5-chat model delivered a 52% pass rate on security tests.
xAI Grok 4 achieved a 55% pass rate on security tests.
OpenAI’s GPT-5 Mini achieved a 72% pass rate on security tests, the highest pass rate recorded to date.
Google Gemini 2.5 Pro achieved a 59% pass rate on security tests.
Anthropic’s Claude Sonnet 4.5 achieved a 50% pass rate on security tests.
OpenAI’s standard GPT-5 achieved a 70% pass rate on security tests.
Qwen3 Coder achieved a 50% pass rate on security tests.
Across the industry, over 85% of tasks related to cryptographic algorithms passed security tests.
The average enterprise uploaded more than three times as much data to generative AI platforms in Q3 2025 as in Q2 2025 (4.4GB vs. 1.32GB).
12% of all sensitive data exposures originate from personal accounts, including free versions of generative AI tools.
63% of retailers plan to invest significantly in generative AI to defend against social engineering attacks.
57% of sensitive data uploaded to generative AI tools is classified as business or legal data, with 35% of that involving contract or policy drafting.
15% of all sensitive data uploaded to generative AI tools involves personal or employee data, including identifiers such as names and addresses.
The average organization used 27 distinct AI tools in Q3 2025, up from 23 in Q2 2025.
26.4% of all file uploads to generative AI tools contained sensitive data between July and September 2025, up from 22% in Q2 2025.
25% of all sensitive data disclosures involve technical data, with 65% of that consisting of proprietary source code copied into generative AI tools.
64% of organizations identified data compromise through generative AI as their top mobile risk.
31% of organizations plan to make significant investments in generative AI to defend against social engineering attacks.