
Gen AI

We've curated 125 cybersecurity statistics about Gen AI to help you understand how generative artificial intelligence is shaping threat landscapes, enhancing security practices, and influencing detection technologies in 2025.


26.3% of ChatGPT use by employees was via personal accounts.

Harmonic Security, 7/31/2025
AI, ChatGPT

68% of security leaders state that their boards now view the secure deployment of generative AI as a critical priority.

Cobalt, 7/31/2025

535 separate incidents of sensitive exposure were recorded involving Chinese GenAI tools.

Harmonic Security, 7/31/2025
AI, Chinese Gen AI

7.95% of employees in the average enterprise used a Chinese GenAI tool.

Harmonic Security, 7/31/2025
AI, Chinese Gen AI

Code leakage was the most common type of sensitive data sent to GenAI tools.

Harmonic Security, 7/31/2025
AI, Sensitive data

The average enterprise uploaded 1.32GB of files (half of which were PDFs) to GenAI tools and AI-enabled SaaS applications in Q2. A full 21.86% of these files contained sensitive data.

Harmonic Security, 7/31/2025
AI, Sensitive data

LLMs failed to secure code against log injection (CWE-117) in 88% of cases.

Veracode, 7/30/2025
AI code, LLMs
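For context on what a CWE-117 (log injection) failure looks like, here is a minimal Python sketch. The function names are illustrative, not taken from the Veracode study — the point is that unsanitized user input concatenated into a log message lets an attacker forge extra log lines.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("auth")

def sanitize(value: str) -> str:
    # Escape CR/LF so attacker-supplied input cannot start a new log line.
    return value.replace("\r", "\\r").replace("\n", "\\n")

def record_login(username: str) -> None:
    # Insecure pattern (the CWE-117 case):
    #   log.info("login attempt: " + username)
    # A crafted username such as "bob\nINFO admin access granted"
    # would inject a convincing fake entry into the log.
    log.info("login attempt: %s", sanitize(username))
```

Securing the code is the one-line sanitization step above; it is this kind of fix that the tested LLMs omitted in 88% of cases.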

LLMs failed to secure code against cross-site scripting (CWE-80) in 86% of cases.

Veracode, 7/30/2025
AI code, LLMs

AI-generated code introduces security vulnerabilities in 45% of cases.

Veracode, 7/30/2025
AI code, Security vulnerabilities

When given a choice between a secure and insecure method to write code, GenAI models chose the insecure option 45% of the time.

Veracode, 7/30/2025
AI code, Security vulnerabilities

In 45% of all test cases, LLMs introduced vulnerabilities classified within the OWASP Top 10.

Veracode, 7/30/2025
AI code, LLMs

Java was found to be the riskiest language for AI code generation, with a security failure rate over 70%. Other major languages, such as Python, C#, and JavaScript, presented significant risk, with failure rates between 38% and 45%.

Veracode, 7/30/2025
AI code, Java

Organisations that implement light-touch guardrails and nudges, rather than blanket blocking of Chinese GenAI tools, have seen up to a 72% reduction in sensitive data exposure, while increasing AI adoption by as much as 300%.

Harmonic Security, 7/17/2025
AI

Customer data represented 12.0% of sensitive data exposed through employee use of Chinese GenAI tools at work.

Harmonic Security, 7/17/2025
AI, Sensitive data exposure

Legal documents made up 4.9% of sensitive data exposed through employee use of Chinese GenAI tools at work.

Harmonic Security, 7/17/2025
AI, Sensitive data exposure

Among the 1,059 users who engaged with Chinese GenAI tools, there were 535 incidents of sensitive data exposure.

Harmonic Security, 7/17/2025
AI, Sensitive data exposure

The majority of sensitive data exposure (roughly 85%) due to the use of Chinese GenAI tools occurred via DeepSeek, followed by Moonshot Kimi, Qwen, Baidu Chat and Manus.

Harmonic Security, 7/17/2025
AI, Sensitive data exposure

Financial information accounted for 14.4% of sensitive data exposed through employee use of Chinese GenAI tools at work.

Harmonic Security, 7/17/2025
AI, Sensitive data exposure

Personally identifiable information (PII) comprised 17.8% of sensitive data exposed through employee use of Chinese GenAI tools at work.

Harmonic Security, 7/17/2025
AI, Sensitive data exposure

1 in 12 employees, or 7.95%, used at least one Chinese GenAI tool at work.

Harmonic Security, 7/17/2025
AI