2025 GenAI Code Security Report

18 pages

The 2025 GenAI Code Security Report examines whether large language models (LLMs) generate secure code when given functional prompts without explicit security guidance. Testing 100+ models across 80 coding tasks in Java, JavaScript, C#, and Python, it found that only 55% of outputs were secure, leaving 45% vulnerable to issues like SQL injection, cross-site scripting, log injection, or weak cryptography. Results showed little improvement over time, minimal benefit from larger models, and major weaknesses in Java compared to other languages. While syntax accuracy has improved, persistent security flaws stem from training data that often contains insecure code. The report concludes that AI-generated code, though functionally correct, remains a significant security risk.
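Of the flaw classes listed above, SQL injection is a typical example of how functionally correct AI-generated code can still be insecure. A minimal sketch of the pattern (the helper names and table are illustrative, not taken from the report):

```python
import sqlite3

def find_user_insecure(conn, username):
    # Common insecure pattern in generated code: interpolating user input
    # directly into the SQL string allows injection payloads.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn, username):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
print(len(find_user_insecure(conn, payload)))  # 2 — injection leaks every row
print(len(find_user_secure(conn, payload)))    # 0 — payload matches no name
```

Both functions satisfy the same functional prompt ("look up a user by name"), which illustrates the report's point: functional correctness alone does not imply security.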