LLM
We've curated 19 cybersecurity statistics about LLM to help you understand how large language models are being leveraged for both innovative security solutions and emerging threats in 2025.
Related Topics
Showing 1-19 of 19 results
62% of security practitioners report having no visibility into the usage of large language models (LLMs) within their organizations.
76% of enterprises have experienced security incidents involving LLM prompt injection.
96% of IT executives prefer deploying a useful AI tool over the newest Large Language Model (LLM), emphasizing practical results.
When choosing Large Language Model (LLM) providers, executives' top concern is now privacy and security.
Data privacy and security are among the top three LLM provider considerations for 37% of respondents, followed by integration with existing systems and cost.
Data privacy and security are among the top three LLM provider considerations for 37% of respondents, followed by integration with existing systems and cost.
34% of organisations are currently using Large Language Model (LLM) interfaces.
Out of 131 hostnames provided by the LLM in response to natural language queries for 50 brands, a significant 34% were not controlled by the brands at all.
Threat actors have generated more than 17,000 AI-written GitBook phishing pages specifically targeting crypto users.
29% of the suggested incorrect domains given by an LLM in return to a query were unregistered, parked, or had no active content, leaving them vulnerable to takeover by malicious actors
LLM model returned the correct URL for brands two-thirds (66%) of the time.
5% of the suggested incorrect domains given by an LLM in return to a query pointed users to completely unrelated, albeit legitimate, businesses
In a sophisticated campaign to poison AI coding assistants, Netcraft uncovered an effort where an attacker promoted a fake API. At least five victims were found to have copied this malicious code into their own public projects, some of which showed signs of being built using AI coding tools.
33% of respondents are still not conducting regular security assessments, including penetration testing, for their Large Language Model (LLM) deployments.
Nearly half of all respondents (47%) are seeing a rise in attacks targeting their organization’s large language model (LLM) deployments.
95% of organizations see value in using AI to Conduct conversational coaching by leveraging LLMs.
LLM pentests yield the highest proportion of serious vulnerabilities (32%) than any other asset type tested.
Only 21% of serious vulnerabilities discovered in LLM tests are being resolved.
AI and LLM security has emerged as the top concern among security professionals (72%).