
AI Risks

We've curated 17 cybersecurity statistics about AI Risks to help you understand how emerging threats like deepfakes and automated attacks are reshaping the landscape of cybersecurity in 2025.


48% of UK consumers express concern about the risk of fraud or identity theft related to AI in banking.

FIS, 12/13/2025
Fraud, Identity Theft

57% of organizations reported lacking the ability to block risky AI actions in real time.

Cybersecurity Insiders & Cyera, 12/6/2025
AI Tools, Data Security

66% of organizations reported catching AI tools over-accessing sensitive information.

Cybersecurity Insiders & Cyera, 12/6/2025
AI Tools, Data Security

60-70% of AI-generated code lacks deployment environment awareness, generating code that runs locally but fails in production.

OX Security, 10/23/2025
AI
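
To make the anti-pattern concrete, here is a minimal, hypothetical Python sketch (the function names and the `DATABASE_URL` variable are illustrative assumptions, not from the report): hardcoding a local address works on a developer machine but fails once deployed, while reading configuration from the environment keeps the code deployment-aware.

```python
import os

# Anti-pattern: a hardcoded local address that runs fine in development
# but fails in any production environment.
def get_db_url_naive() -> str:
    return "postgresql://localhost:5432/app"

# Environment-aware alternative: read configuration from the environment
# and fall back to the local default only for development.
def get_db_url() -> str:
    return os.environ.get("DATABASE_URL", "postgresql://localhost:5432/app")
```

The fix is a one-line change, but generated code frequently omits it because the model never sees the deployment target.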

40-50% of AI-generated code inflates coverage metrics with meaningless tests rather than validating logic.

OX Security, 10/23/2025
AI
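
A hypothetical sketch of what "inflating coverage metrics" looks like in practice (the `apply_discount` function and test names are illustrative, not from the report): the first test executes every line, so coverage tools count it, yet it asserts nothing about the result.

```python
# Hypothetical function under test.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# Coverage-inflating test: runs the code (so the lines are "covered")
# but makes no assertion, so any bug would still pass.
def test_apply_discount_runs():
    apply_discount(100.0, 10.0)

# Meaningful test: validates the actual logic, including a boundary case.
def test_apply_discount_values():
    assert apply_discount(100.0, 10.0) == 90.0
    assert apply_discount(100.0, 0.0) == 100.0
```

Both tests raise line coverage by the same amount; only the second one would ever fail.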

80-90% of AI-generated code rigidly follows conventional rules, missing opportunities for more innovative, improved solutions.

OX Security, 10/23/2025
AI

80-90% of AI-generated code creates hyper-specific, single-use solutions instead of generalizable, reusable components.

OX Security, 10/23/2025
AI
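
A small, hypothetical illustration of the single-use pattern (function names and the report-date example are assumptions for illustration): the first version is welded to one report and one format, while the second parameterizes the variation so every caller can reuse it.

```python
from datetime import date

# Hyper-specific, single-use: valid for exactly one report and one format.
def format_q3_2025_report_date(d: date) -> str:
    return f"Q3 2025 report - {d.month}/{d.day}/2025"

# Generalizable, reusable component: label and format are parameters,
# so all reports share one implementation.
def format_report_date(d: date, label: str, fmt: str = "%m/%d/%Y") -> str:
    return f"{label} - {d.strftime(fmt)}"
```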

80-90% of AI-generated code solves only the immediate prompt, never refactoring or architecturally improving the surrounding codebase.

OX Security, 10/23/2025
AI

70-80% of AI-generated code violates code reuse principles, causing identical bugs to recur throughout codebases, requiring redundant fixes.

OX Security, 10/23/2025
AI
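
A hypothetical Python sketch of how duplicated logic makes the same bug recur (the email-normalization example and all names are illustrative assumptions): the same normalization is reimplemented at two call sites, so the bug must be found and fixed twice.

```python
# Duplicated logic: the same normalization is reimplemented at two call
# sites, so the bug (forgetting to strip whitespace) recurs in both.
def register_user_naive(email: str) -> str:
    return email.lower()          # bug: "  A@B.com " keeps its spaces

def invite_user_naive(email: str) -> str:
    return email.lower()          # same bug, independently repeated

# Reusable helper: the fix lands in one place, every caller benefits.
def normalize_email(email: str) -> str:
    return email.strip().lower()

def register_user(email: str) -> str:
    return normalize_email(email)

def invite_user(email: str) -> str:
    return normalize_email(email)
```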

40-50% of AI-generated code reimplements from scratch instead of using established libraries, SDKs, or proven solutions.

OX Security, 10/23/2025
AI
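
As a hypothetical illustration of reimplementing instead of reusing (the query-string example is an assumption, not from the report): a hand-rolled parser misses percent-encoding and repeated keys that Python's standard library already handles correctly.

```python
from urllib.parse import parse_qs

# Reimplemented from scratch: naive query-string parsing that silently
# drops repeated keys and never decodes percent-encoding.
def parse_query_naive(qs: str) -> dict:
    return dict(pair.split("=", 1) for pair in qs.split("&") if "=" in pair)

# Using the proven standard-library implementation instead.
def parse_query(qs: str) -> dict:
    return parse_qs(qs)
```

On `"a=1&a=2&b=hello%20world"` the naive version loses `a=1` and returns the literal `hello%20world`, while `parse_qs` keeps both values and decodes the space.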

20-30% of AI-generated code over-engineers for improbable edge cases, causing performance degradation and resource waste.

OX Security, 10/23/2025
AI
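
A hypothetical sketch of edge-case over-engineering (all names are illustrative assumptions): speculative type checks, a retry loop, and a defensive copy wrapped around a trivial in-memory lookup, paying overhead on every call for cases that cannot occur.

```python
# Over-engineered: speculative checks, retries, and copying around a
# trivial lookup, adding cost on every call.
def get_name_overengineered(user: dict) -> str:
    if not isinstance(user, dict):
        raise TypeError("user must be a dict")
    snapshot = dict(user)  # defensive copy nothing mutates
    for _ in range(3):     # retry loop for a pure in-memory lookup
        if "name" in snapshot:
            return str(snapshot["name"])
    return "unknown"

# Proportionate version: handles the one realistic edge case (missing key).
def get_name(user: dict) -> str:
    return user.get("name", "unknown")
```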

90-100% of AI-generated code contains excessive inline commenting, which dramatically increases computational burden and makes code harder to check.

OX Security, 10/23/2025
AI

40-50% of AI-generated code defaults to tightly-coupled monolithic architectures, reversing decade-long progress toward microservices.

OX Security, 10/23/2025
AI

The percentage of companies globally that felt very prepared to manage AI risks has remained relatively flat over the past three years, with 9% in 2023, 8% in 2024, and 12% in 2025.

Riskonnect, 10/22/2025
Risk

94% of all AI services are at risk for at least one of the top Large Language Model (LLM) risk vectors, including prompt injection/jailbreak, malware generation, toxicity, and bias.

Skyhigh Security, 4/24/2025
AI, AI Risks

30% of respondents report the emergence of a new attack surface due to the use of AI by their business users.

Netwrix, 4/23/2025
AI, AI Risks

Approximately 1 in 4 organizations said they are concerned that enterprise AI use will make them more vulnerable to attack (covering both AI and generative AI).

Seemplicity, 3/1/2025
AI, Attack Surface