
56% of tested Large Language Models (LLMs) are susceptible to Prompt Injection Attacks (PIAs)

June 25, 2025

Source: NCC Group

