AI-generated code: Armis Labs uncovers hidden security risks

August 18, 2025

Michael Freeman, Head of Threat Intelligence at Armis

The increasing use of AI-powered tools in software development promises faster programming, fewer routine tasks and increased productivity. However, a recent analysis by Armis Labs reveals the security risks that arise when developers rely too heavily on AI-generated code – especially when manual checks are omitted and automated suggestions are accepted without review.

An interesting example from the latest report is DeepSeek Coder, an AI-based code assistant designed to speed up development processes. In a simulated scenario, a team of developers used DeepSeek to generate code and select external libraries automatically, prioritising speed over accuracy. The result: serious security vulnerabilities. The AI recommended third-party libraries with known, exploitable vulnerabilities and generated source code with numerous common security flaws. In total, the resulting application had 18 different issues from the CWE Top 25 list of the most critical software vulnerabilities.

These included outdated PDF and logging libraries affected by arbitrary code execution (CWE-94), insecure deserialisation (CWE-502) and hard-coded cryptographic keys (CWE-321). Even more worrying were vulnerabilities directly in the generated code, including cross-site scripting (CWE-79), SQL injection (CWE-89), buffer overflows (CWE-119) and insufficient authentication and access control (CWE-287, CWE-306). All of these security issues are well known and potentially serious – yet the AI neither detected nor prevented them.
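To illustrate the kind of flaw the report describes, the following sketch shows a typical SQL injection pattern (CWE-89) of the sort an unchecked code assistant might produce, alongside the parameterised alternative. The example is hypothetical, written in Python with sqlite3, and is not taken from the Armis analysis.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    def find_user_unsafe(name: str):
        # CWE-89: user input is concatenated straight into the SQL statement,
        # so input such as "' OR '1'='1" returns every row in the table.
        query = "SELECT name, role FROM users WHERE name = '" + name + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(name: str):
        # Parameterised query: the driver treats the input strictly as data.
        return conn.execute(
            "SELECT name, role FROM users WHERE name = ?", (name,)
        ).fetchall()

    print(find_user_unsafe("' OR '1'='1"))  # leaks all rows
    print(find_user_safe("' OR '1'='1"))    # returns nothing

The difference is invisible at a glance, which is precisely why such patterns slip through when generated code is accepted without review.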

Key finding: AI-powered code assistants are only as reliable as their training data and design. They can unknowingly recommend insecure libraries or adopt poor programming practices from publicly available code. Without manual checks or automated security scans, these vulnerabilities quickly spread across entire projects – increasing risk instead of reducing it.

The researchers therefore recommend integrating security checks into the development process. This includes mandatory code reviews, especially for AI-generated suggestions, as well as automated scans to detect risky dependencies or insecure patterns. Developers should also be trained to critically question AI results rather than assuming they are automatically correct. Similarly, AI tools should be based exclusively on secure, up-to-date sources to avoid reproducing known errors.
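As one possible way to put these recommendations into practice, the sketch below wires two widely used open-source scanners into a pre-merge check: pip-audit for dependencies with known vulnerabilities and bandit for insecure code patterns. The project layout (a requirements.txt file and a src directory) and the choice of tools are assumptions made for illustration, not part of the Armis report.

    # Minimal security gate for a Python project, intended to run in CI before
    # a merge. Assumes pip-audit and bandit are installed in the environment.
    import subprocess
    import sys

    CHECKS = [
        # Scan declared dependencies against known-vulnerability databases.
        ["pip-audit", "-r", "requirements.txt"],
        # Scan the source tree for common insecure coding patterns.
        ["bandit", "-r", "src"],
    ]

    def main() -> int:
        failed = False
        for cmd in CHECKS:
            print("Running:", " ".join(cmd))
            if subprocess.run(cmd).returncode != 0:
                failed = True
        # A non-zero exit code blocks the merge until findings are addressed.
        return 1 if failed else 0

    if __name__ == "__main__":
        sys.exit(main())

Run automatically on every pull request, a gate like this catches both of the failure modes described above: vulnerable third-party libraries and insecure generated code.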

How AI exacerbates security risks in the software supply chain

These findings are particularly relevant for software teams in Germany, especially in critical sectors such as industry, healthcare and finance. With the increasing prevalence of AI, there is also a growing risk of invisible vulnerabilities being introduced into critical infrastructure. The convenience of AI-generated code must not come at the expense of fundamental security standards.

Artificial intelligence will undoubtedly be one of the defining forces of future software development. However, the findings of the report make it clear that productivity gains require increased vigilance. Automation alone is no substitute for a security strategy – and without robust protection mechanisms, tools that make developers’ work easier can just as quickly become a significant risk.
