Trust is good, understanding is better: The future of AI security in companies

August 27, 2025

Guest contribution: Christian Nern, Markus Hupfauer & Julian Krautwald KPMG Financial Services

In a world where cyber attacks are becoming increasingly automated and sophisticated, AI-powered security solutions can offer a decisive advantage. At the same time, however, the use of artificial intelligence also poses new risks to cyber security: in particular, the danger of losing control due to growing system complexity is a serious challenge that companies cannot ignore.

There is no question that AI makes processes more efficient and can also increase protection against cyber attacks. But with increasing integration, security requirements also rise – the smarter the systems, the more numerous the points of attack and the greater the responsibility to protect them effectively. At the same time, the technology itself can pose a risk. It can make wrong decisions and escape human control.

This makes it all the more important to think about how to integrate AI into business processes in a meaningful way at an early stage. Three clear success factors can be identified here. Firstly, strict management of third-party providers and service providers that are relied upon in the development and use of AI systems – this also includes, for example, third-party libraries, which are often integrated into AI development projects. Secondly, full transparency across the entire AI portfolio – because only those who have a comprehensive overview can provide effective protection. Thirdly: technical controls such as access restrictions, encryption and continuous monitoring to counter attacks at an early stage. This triad can help banks and insurers to operate AI securely – while taking current regulatory requirements into account.

The good news is that established testing procedures already exist within the framework of DORA (Digital Operational Resilience Act). These are designed to ensure more resilient IT systems. Currently, the focus is particularly on ‘classic’ ICT systems, but they are equally suitable for novel AI systems. In other words, although the test procedures are in place, they should be expanded and specifically tailored to the complexity and special features of AI. It is not enough to test individual components of AI, such as the user interface. Rather, the AI system must be tested as a whole – from the user interface and system prompts to possible function calls.

Networking of technologies

A key aspect of successfully implementing AI security in companies is to use all existing security systems and fully integrate proprietary AI applications into the existing IT security architecture – from Extended Detection & Response (XDR) and User Entity and Behaviour Analytics (UEBA) to Security Information and Event Management (SIEM). Only when all these elements work together seamlessly can the foundation for the secure operation of AI systems be guaranteed. Building on this, AI-specific security components such as prompt firewalls and automatic AI testing systems should be implemented.

A promising approach is to use AI itself to defend against AI-based security threats. AI-supported prompt injection firewalls, for example, can detect and block dangerous inputs at an early stage. However, it is crucial that such AI security solutions are built on a solid IT security foundation. Only when basic protective measures are in place can AI-supported security techniques be used effectively and without creating new risks.

Governance as a cornerstone of AI security

DORA can serve as a guideline for banks and insurers: the regulation provides information on what the regulator requires to strengthen digital resilience, including cyber security. However, financial institutions should also examine individually how AI is integrated into their business areas. The goal is always to maintain a healthy balance between innovation and security. Fundamental to this purpose is governance with clearly defined rules and measures that enable responsible, secure and at the same time pragmatic use of AI.

In an ideal world, such a framework is already in place before AI is integrated into business processes. The advantage is obvious: risks arising from uncoordinated or confusing system landscapes are contained at an early stage, and the complexity of AI remains manageable. Without governance as a foundation, there is a risk of quickly losing control. A lack of rules and access control can lead to security problems and compliance violations – turning innovation into a risk.

AI governance not only defines how artificial intelligence should be used in the business sector. It also provides clarity about responsibilities, coordination processes and control mechanisms that contribute to the visibility of the technology. This means that the behaviour, decisions and data flows of the various AI applications remain traceable for IT security at all times. The goal should be to minimise the risk of AI as a black box.

Identity and access management (IAM) plays a major role in practical implementation. The uniform standards enforced by IAM promote cross-departmental collaboration, ensure secure data exchange and prevent the creation of parallel data silos. Centralised IAM is therefore an important factor in ensuring that banks retain sovereignty over sensitive data and systems.

However, the advantages mentioned above do not arise solely from effective IAM: only in combination with centralised platforms for AI development can knowledge and resources be efficiently pooled. This enables banks and insurance companies to further develop their AI initiatives in a targeted manner, drive innovation and, at the same time, strengthen the confidence of customers and supervisory authorities in the digital competence and security of the institution.

The role of legislation

Given the challenging situation in which companies find themselves, the issue of regulation is becoming increasingly central. Although the EU’s AI Act of 2023 sets fundamental standards for the use of AI, implementation and technical quality assurance often remain unclear. In particular, it is unclear how to ensure that AI systems reliably follow complex processes and work instructions, even under difficult conditions.

Companies generally have no way of continuously monitoring the quality of the products they purchase from large US AI providers in terms of their safety. There is a lack of transparency as to whether the AI system delivers the required performance on a permanent basis and not just on an ad hoc basis. At the same time, many companies are dependent on these providers, as there are hardly any local, legally compliant alternatives. As a result, important goals such as data integrity and legal compliance can easily take a back seat. To remain flexible, utilise the latest technologies and minimise risks, companies should therefore rely on vendor-independent solutions.

At least with the EU AI Act, the legal framework for the use of AI within the European Union is now largely clear – even if it can still be further developed. This regulation ensures that the positive aspects of the technology prevail and that economic interests are not enforced at the expense of integrity and security. It is now up to companies to consistently implement these requirements. Above all, this means establishing technical security measures while ensuring that the decisions and outputs of AI systems are technically correct.

‘Get ahead of the wave’ – taking a strategic approach to AI security

Artificial intelligence is rapidly finding its way into all areas of business – especially IT security. It is now high time to face this change with a holistic security strategy. Establishing central, overarching standards for AI from the outset prevents the emergence of many isolated individual solutions that later become costly and time-consuming to integrate into the overall architecture. This is a costly and inefficient experience, as previous technology trends have shown. Companies that get ahead of the curve now and implement AI security consistently, taking technical, organisational and regulatory aspects into account, are creating a stable foundation for long-term innovation success, trust and digital resilience.

About the authors:

Christian Nern is a partner and head of security at KPMG in the financial services division in Munich. Before joining KPMG, the business graduate worked for 25 years in prominent leadership positions in various areas of the IT industry.

Julian Krautwald is Practice Lead Detection & Response at KPMG in the Financial Services division. He is an expert in the field of digital transformation in the financial services sector with a focus on operational cyber security.

Markus Hupfauer is a manager in the FS Technology & IT Compliance division and an expert in the application of artificial intelligence in cyber security.

Related Articles

Mobile phone usage at Oktoberfest remains at record levels

Mobile phone usage at Oktoberfest remains at record levels

Over ten percent more data traffic than in the same period last year Virtually no dropped calls French visitors jump to third place in guest rankings The weather during the first week of Oktoberfest was cold and rainy. That didn't hurt cell phone usage. Compared to...

Free meals are the strongest motivator

According to a study by the University of South Florida, employees value fitness and health less Employees who have direct contact with customers, such as cashiers or salespeople, are more likely to be motivated by perks such as free meals and excursions than by free...

Share This