Gen AI
We've curated 125 cybersecurity statistics about Gen AI to help you understand how generative artificial intelligence is shaping threat landscapes, enhancing security practices, and influencing detection technologies in 2025.
70% of students are early adopters of generative AI and use it to create or modify images.
67% of organisations are implementing usage guidelines for GenAI.
More than half of organisations (59%) restrict employee use of GenAI tools outright.
64% of global CISOs say enabling GenAI tool use is a strategic priority over the next two years.
In the U.S., 80% of CISOs express concern over potential customer data loss via public GenAI platforms.
Three in five CISOs (60%) worry about customer data loss via public GenAI tools.
Of the prompts and files submitted to 300 GenAI tools and AI-enabled SaaS applications and analysed between April and June, 22% of files (4,400 files) and 4.37% of prompts (43,700 prompts) contained sensitive information.
15% of Google Gemini use by employees was via personal accounts.
In incidents involving Chinese GenAI tools, the exposed data included: source code, access credentials, or proprietary algorithms (32.8%); M&A documents and investment models (18.2%); PII such as customer or employee records (17.8%); and internal financial data (14.4%).
72.6% of all sensitive prompts analysed in Q2 originated in ChatGPT.
13.7% of all sensitive prompts analysed in Q2 originated in Microsoft Copilot.
5.0% of all sensitive prompts analysed in Q2 originated in Google Gemini.
2.5% of all sensitive prompts analysed in Q2 originated in Claude.
2.1% of all sensitive prompts analysed in Q2 originated in Poe.
1.8% of all sensitive prompts analysed in Q2 originated in Perplexity.
In Q2, the average enterprise saw 23 previously unknown GenAI tools newly used by its employees.
Generative AI (GenAI) was involved in 70% of real-world AI security incidents.
68% of CISOs consider supply chain risk and generative AI security to be top concerns, viewing them as intertwined challenges that are redefining the attack surface.
47.42% of sensitive employee uploads to Perplexity were from users with standard (non-enterprise) accounts.
Files sent to GenAI tools showed a disproportionately high concentration of sensitive and strategic content compared with prompts: files were the source of 79.7% of all stored credit card exposures, 75.3% of customer profile leaks, 68.8% of employee PII incidents, and 52.6% of total exposure volume in financial projections.