AI Risk
Cybersecurity statistics about AI risk
The average employee enters sensitive data into AI tools once every three days.
Across the top 100 most-used GenAI SaaS applications, 82% are classified as medium, high, or critical risk.
32.3% of ChatGPT usage occurs through personal accounts.
24.9% of Gemini usage occurs through personal accounts.
64% of enterprise leaders in the UK view AI as posing little or no threat to their networks.
56% of enterprise leaders classify AI-related risks to their critical data as moderate to extreme.
Top AI-related cybersecurity concerns are data leakage through copilots and agents (22%), third-party and supply chain risks (21%), evolving regulations (20%), shadow AI (18%), and prompt injection attacks (18%).
83.8% of enterprise data input into AI tools flows to platforms classified as medium, high, or critical risk.
39.5% of AI tools carry the key risk factor of inadvertently exposing user interactions and training data.
In 34.4% of AI tools, user data is accessible to third parties without adequate controls.
Only 11% of AI tools assessed qualify for low or very low risk classifications.
Cyberhaven's assessment of over 700 AI tools found that 71.7% fall into high or critical risk categories.
Only 16.2% of enterprise data input into AI tools is destined for enterprise-ready, low-risk alternatives.