Prompts specifying a need for security or requesting OWASP best practices produced more secure results, yet still yielded some code vulnerabilities for 5 out of the 7 LLMs tested.
This cybersecurity statistic was published by Backslash Security in April 2025 and covers AI, LLMs, and vulnerabilities. The original data appears in Can AI “Vibe Coding” Be Trusted? It Depends…. For the full methodology and detailed findings, refer to the original report.
Frequently Asked Questions
What does this statistic say?
Prompts specifying a need for security or requesting OWASP best practices produced more secure results, yet still yielded some code vulnerabilities for 5 out of the 7 LLMs tested. This data was published by Backslash Security and covers AI, LLMs, and vulnerabilities.
Where does this data come from?
This statistic comes from Can AI “Vibe Coding” Be Trusted? It Depends…, published by Backslash Security on April 24, 2025. You can view the original report at https://www.backslash.security/blog/can-ai-vibe-coding-be-trusted.
What cybersecurity topics does this cover?
This statistic relates to AI, LLMs, and vulnerabilities. Browse more statistics on AI or from Backslash Security.