TÜV Rheinland: Effectively securing AI systems

November 4, 2025

IT security requirements are increasing due to generative AI and large language models.

New IT security requirements due to generative AI and large language models / Identifying threats, closing vulnerabilities – penetration testing as an effective measure / White paper with recommendations for action for companies

Generative artificial intelligence (AI) is not only revolutionising numerous industries, it is also creating new vulnerabilities and security challenges – for example, when companies use large language models in chatbots, assistance systems or automated decision-making processes. But what specific risks arise when using large language models (LLMs)? TÜV Rheinland addresses this question in its latest white paper, ‘Is your AI system secure?’ In addition, the cybersecurity experts show how companies can effectively secure their AI applications.

Attacks possible through manipulated inputs and training data

The white paper describes how AI systems can be attacked. One example is prompt injections, in which attackers manipulate the model with their input so that it behaves unpredictably or reveals information that should not be accessible. Other risks include the unsafe handling of generative AI results – for example, when users execute unvalidated code – and the manipulation of training data by an attacker.

Both attacks and the incorrect handling of AI results can have fatal consequences: from data leaks and incorrect decisions to economic damage. Systematic risk management is therefore essential for companies – not least because regulations such as the EU AI Act are increasingly demanding it. ‘Companies need to adapt their security concepts to address the risks of AI systems,’ explains Daniel Hanke, AI security expert at TÜV Rheinland.

Penetration testing as the key to AI security

One of the most effective measures for detecting threats early on and closing vulnerabilities is penetration testing (pentesting): In a controlled environment, experts simulate attacks on AI systems to identify and remedy potential vulnerabilities. Methods such as black-box and grey-box testing are adapted to the requirements of generative AI. “AI systems are complex and opaque. This requires new testing approaches. Through regular penetration testing, companies can make their systems resilient and thus comply with regulatory requirements. They also strengthen the trust of partners and customers,” Hanke continues.

Generative AI: Innovative power with responsibility

TÜV Rheinland provides comprehensive support to companies in the safe use of AI – from professional penetration testing and data-based risk analyses to certification according to internationally valid standards such as ISO 42001. ‘Anyone who wants to take advantage of the opportunities offered by generative AI must give top priority to its security. This is the only way to responsibly tap into the potential of these technologies,’ emphasises AI expert Daniel Hanke.

Further information and the white paper are available at: www.tuv.com/pentest.

Related Articles

VdS certifies first mobile flood protection element

VdS certifies first mobile flood protection element

The mobile dyke has proven its strength: the mobile flood protection element from Mobildeich GmbH has performed impressively in practical tests conducted by VdS and is the first system ever to receive certification in accordance with VdS 3855 guidelines for flood...

Smiths Detection’s iCMORE receives R&D certification

Smiths Detection’s iCMORE receives R&D certification

The German Federal Police Research and Testing Centre (FuE) certifies Smiths Detection's proprietary iCMORE system for automated detection of prohibited items. Smiths Detection, a global leader in security and inspection technologies and a Smiths Group company,...

Share This