Skip to main content
HomeTopicsLLM

LLM

We've curated 19 cybersecurity statistics about LLM to help you understand how large language models are being leveraged for both innovative security solutions and emerging threats in 2025.

Showing 1-19 of 19 results

62% of security practitioners report having no visibility into the usage of large language models (LLMs) within their organizations.

Harness11/16/2025
AI

76% of enterprises have experienced security incidents involving LLM prompt injection.

Harness11/16/2025
AILLM prompt injection

96% of IT executives prefer deploying a useful AI tool over the newest Large Language Model (LLM), emphasizing practical results.

Moveworks11/15/2025
AI

When choosing Large Language Model (LLM) providers, executives' top concern is now privacy and security.

Google Cloud9/4/2025
AI

Data privacy and security are among the top three LLM provider considerations for 37% of respondents, followed by integration with existing systems and cost.

Google Cloud9/4/2025
AIPrivacy

Data privacy and security are among the top three LLM provider considerations for 37% of respondents, followed by integration with existing systems and cost.

Google Cloud9/4/2025
AIPrivacy

34% of organisations are currently using Large Language Model (LLM) interfaces.

Netskope8/4/2025
EnterpriseGenAI

Out of 131 hostnames provided by the LLM in response to natural language queries for 50 brands, a significant 34% were not controlled by the brands at all.

Netcraft7/1/2025
AI

Threat actors have generated more than 17,000 AI-written GitBook phishing pages specifically targeting crypto users.

Netcraft7/1/2025
AI

29% of the suggested incorrect domains given by an LLM in return to a query were unregistered, parked, or had no active content, leaving them vulnerable to takeover by malicious actors

Netcraft7/1/2025
AI

LLM model returned the correct URL for brands two-thirds (66%) of the time.

Netcraft7/1/2025
AI

5% of the suggested incorrect domains given by an LLM in return to a query pointed users to completely unrelated, albeit legitimate, businesses

Netcraft7/1/2025
AI

In a sophisticated campaign to poison AI coding assistants, Netcraft uncovered an effort where an attacker promoted a fake API. At least five victims were found to have copied this malicious code into their own public projects, some of which showed signs of being built using AI coding tools.

Netcraft7/1/2025
AI

33% of respondents are still not conducting regular security assessments, including penetration testing, for their Large Language Model (LLM) deployments.

Cobalt6/24/2025
AIGen AI

Nearly half of all respondents (47%) are seeing a rise in attacks targeting their organization’s large language model (LLM) deployments.

Gigamon5/21/2025

95% of organizations see value in using AI to Conduct conversational coaching by leveraging LLMs.

Abnormal AI5/13/2025
Human errorSecurity awareness training

LLM pentests yield the highest proportion of serious vulnerabilities (32%) than any other asset type tested.

Cobalt4/14/2025
Pen testingOffensive security

Only 21% of serious vulnerabilities discovered in LLM tests are being resolved.

Cobalt4/14/2025
Pen testingOffensive security

AI and LLM security has emerged as the top concern among security professionals (72%).

Cobalt4/14/2025
AI