TÜV Rheinland: Effectively securing AI systems

November 4, 2025

IT security requirements are increasing due to generative AI and large language models.

New IT security requirements due to generative AI and large language models / Identifying threats, closing vulnerabilities – penetration testing as an effective measure / White paper with recommendations for action for companies

Generative artificial intelligence (AI) is not only revolutionising numerous industries, it is also creating new vulnerabilities and security challenges – for example, when companies use large language models in chatbots, assistance systems or automated decision-making processes. But what specific risks arise when using large language models (LLMs)? TÜV Rheinland addresses this question in its latest white paper, ‘Is your AI system secure?’ In addition, the cybersecurity experts show how companies can effectively secure their AI applications.

Attacks possible through manipulated inputs and training data

The white paper describes how AI systems can be attacked. One example is prompt injections, in which attackers manipulate the model with their input so that it behaves unpredictably or reveals information that should not be accessible. Other risks include the unsafe handling of generative AI results – for example, when users execute unvalidated code – and the manipulation of training data by an attacker.

Both attacks and the incorrect handling of AI results can have fatal consequences: from data leaks and incorrect decisions to economic damage. Systematic risk management is therefore essential for companies – not least because regulations such as the EU AI Act are increasingly demanding it. ‘Companies need to adapt their security concepts to address the risks of AI systems,’ explains Daniel Hanke, AI security expert at TÜV Rheinland.

Penetration testing as the key to AI security

One of the most effective measures for detecting threats early on and closing vulnerabilities is penetration testing (pentesting): In a controlled environment, experts simulate attacks on AI systems to identify and remedy potential vulnerabilities. Methods such as black-box and grey-box testing are adapted to the requirements of generative AI. “AI systems are complex and opaque. This requires new testing approaches. Through regular penetration testing, companies can make their systems resilient and thus comply with regulatory requirements. They also strengthen the trust of partners and customers,” Hanke continues.

Generative AI: Innovative power with responsibility

TÜV Rheinland provides comprehensive support to companies in the safe use of AI – from professional penetration testing and data-based risk analyses to certification according to internationally valid standards such as ISO 42001. ‘Anyone who wants to take advantage of the opportunities offered by generative AI must give top priority to its security. This is the only way to responsibly tap into the potential of these technologies,’ emphasises AI expert Daniel Hanke.

Further information and the white paper are available at: www.tuv.com/pentest.

Related Articles

Focus on the importance of cooperation and innovation

Herrmann at the Security and Innovation Forum at Friedrich-Alexander University Erlangen-Nuremberg At the Security and Innovation Forum at Friedrich-Alexander University Erlangen-Nuremberg (FAU) on Monday, Bavaria's Interior Minister Joachim Herrmann emphasised the...

Airbus’ OneSat selected for Oman’s first satellite

Space Communication Technologies (SCT), Oman's national satellite operator, has awarded Airbus Defence and Space a contract for OmanSat-1, a state-of-the-art, fully reconfigurable, high-throughput OneSat telecommunications satellite, including the associated system....

Black Friday: Half go bargain hunting

On average, 312 euros are spent – around 11 per cent more than last year Online shops from China polarise opinion: half avoid them, the other half have already ordered from them Four out of ten young people would send AI shopping on its own When Black Friday and the...

Share This