Generative AI
Cybersecurity statistics about generative AI
The average employee enters sensitive data into AI tools once every three days.
39.7% of all data movements into AI tools involve sensitive data, including prompts or copy-paste actions.
The top 1% of early adopter organizations use more than 300 GenAI tools.
Cautious enterprises typically employ fewer than 15 GenAI tools.
Across the top 100 most-used GenAI SaaS applications, 82% are classified as medium, high, or critical risk.
32.3% of ChatGPT usage occurs through personal accounts.
24.9% of Gemini usage occurs through personal accounts.
The average organization experienced 223 incidents of data policy violations related to generative AI applications each month from October 2024 to October 2025.
The average organization saw a twofold increase in data policy violations related to generative AI applications over the past year.
The top 25% of organizations experienced an average of 2,100 data policy violation incidents per month across 13% of their generative AI user base in 2025.
In 2025, 10% of phishing attacks were reported to leverage generative AI.
In 2025, 33% of UK consumers reported having no trust at all in generative AI, while 50% said it makes them anxious.
75% of financial institutions say fraudsters outpace defenders with generative AI.
Just 18% have utilized GenAI to speed up summarization and reporting work.
Approximately 1 in 4 organizations said they are concerned that enterprise AI use will make them more vulnerable to attack (covering both AI and generative AI).