
AI Risk

Cybersecurity statistics about AI risk


The average employee enters sensitive data into AI tools once every three days.

Cyberhaven Labs, 2/8/2026
Enterprise AI, Generative AI

Across the top 100 most-used GenAI SaaS applications, 82% are classified as medium, high, or critical risk.

Cyberhaven Labs, 2/8/2026
Enterprise AI, Generative AI

32.3% of ChatGPT usage occurs through personal accounts.

Cyberhaven Labs, 2/8/2026
Enterprise AI, Generative AI

24.9% of Gemini usage occurs through personal accounts.

Cyberhaven Labs, 2/8/2026
Enterprise AI, Generative AI

64% of enterprise leaders in the UK view AI as posing little or no threat to their networks.

Arelion, 2/5/2026
Data Security, UK

56% of enterprise leaders classify AI-related risks to their critical data as moderate to extreme.

Arelion, 2/5/2026
Data Security, Enterprise

Top AI-related cybersecurity concerns are data leakage through copilots and agents (22%), third-party and supply chain risks (21%), evolving regulations (20%), shadow AI (18%), and prompt injection attacks (18%).

Tines, 2/4/2026
Cybersecurity, Supply Chain Risk

83.8% of enterprise data input into AI tools flows to platforms classified as medium, high, or critical risk.

Cyberhaven, 4/23/2025
AI, AI risk

For 39.5% of AI tools, the key risk factor is inadvertent exposure of user interactions and training data.

Cyberhaven, 4/23/2025
AI, AI risk

34.4% of AI tools make user data accessible to third parties without adequate controls.

Cyberhaven, 4/23/2025
AI, AI risk

Only 11% of AI tools assessed qualify for low or very low risk classifications.

Cyberhaven, 4/23/2025
AI, AI risk

Cyberhaven's assessment of over 700 AI tools found that a troubling 71.7% fall into high or critical risk categories.

Cyberhaven, 4/23/2025
AI, AI risk

Only 16.2% of enterprise data input into AI tools is destined for enterprise-ready, low-risk alternatives.

Cyberhaven, 4/23/2025
AI, AI risk