EU guidelines on prohibited AI practices come into force – compliance officers must take action
A key part of the European AI Regulation (EU) 2024/1689 has been in force since 2 February 2025. On that date, certain practices in the use of artificial intelligence became expressly prohibited for the first time: the bans are binding and apply immediately, regardless of when a system was developed or put into operation. Banks, financial service providers and securities institutions, which are increasingly using AI-based systems for risk assessment, fraud detection and money laundering prevention, are particularly affected. The new guidelines from the European Commission now specify precisely which applications are lawful, where clear boundaries are drawn, and what sanctions may be imposed.
Since 2 August 2025, the provisions on supervision and sanctions have also been in force. Violations of the prohibitions can be punished with fines of up to 35 million euros or seven per cent of global annual turnover, whichever is higher. For institutions, this means the time for waiting and seeing is over: AI systems must be checked for legality, processes must be documented, and human control mechanisms must be implemented. Otherwise, institutions face considerable financial and regulatory exposure.
Article 5 of the AI Regulation lists a number of practices that are considered particularly dangerous because they violate fundamental rights or undermine trust in digital systems. These include manipulative or deceptive AI systems that deliberately influence or psychologically pressure people – for example, in the sale of financial products – as well as systems that exploit weaknesses due to age, disability or socio-economic situation to persuade customers to make risky decisions. Any form of social scoring that categorises and discriminates against people based on their behaviour or lifestyle is also prohibited. The same applies to the use of emotion recognition in the workplace or in customer contacts, for example to monitor employee performance or influence consultations. The mass extraction of facial images from the internet and real-time biometric identification in public spaces for law enforcement purposes are also prohibited. These practices have been banned in all sectors since February 2025, including the financial sector.
However, the EU Commission emphasises that artificial intelligence is expressly desired in the financial sector, provided that transparency, fairness and control are guaranteed. For example, systems for preventing money laundering that analyse transaction patterns and detect anomalies, or applications for customer identification (KYC) that allow automated checks including facial recognition, are permitted, provided they comply with the GDPR. The use of AI in sanctions and PEP screening as well as in fraud detection is also possible, as long as human oversight is ensured. The key thing here is that AI does not make autonomous decisions about customers or transactions. Human control, traceable decision-making processes and transparent documentation that shows how decisions are made are always required.
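The oversight principle described above can be made concrete in a small sketch: an AI model may score transactions, but high-risk cases are routed to a human analyst rather than blocked automatically. This is a minimal illustration only; the field names, the scoring model and the threshold are hypothetical assumptions, not a reference to any real product or regulatory template.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    tx_id: str
    amount: float
    anomaly_score: float  # assumed to come from an upstream AI model, range 0.0-1.0

# Assumed escalation cut-off; in practice this would be calibrated and documented.
REVIEW_THRESHOLD = 0.8

def triage(transactions):
    """Split transactions into human-review and cleared queues.

    The model only flags: nothing is blocked or decided automatically,
    keeping the final decision with a human analyst.
    """
    for_review, cleared = [], []
    for tx in transactions:
        if tx.anomaly_score >= REVIEW_THRESHOLD:
            for_review.append(tx)   # escalate to a human analyst
        else:
            cleared.append(tx)      # below threshold: no intervention
    return for_review, cleared

review, cleared = triage([
    Transaction("T1", 120.0, 0.15),
    Transaction("T2", 98000.0, 0.92),
])
```

The design choice matters more than the code: because `triage` never rejects a transaction on its own, every adverse outcome can be traced to a documented human decision, which is exactly the traceability the guidelines demand.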
Implementing the new requirements calls for a structured approach in all affected institutions. S+P Compliance recommends first conducting a comprehensive inventory of all existing AI systems and checking them for possible prohibited practices. Based on this, a gap analysis should be carried out to compare current use with the Commission’s guidelines. Transparency is a key principle: banks must document which algorithms and data sources are used and how decisions are reviewed. It is equally important to define clear processes for human oversight and escalation, especially for AML and KYC procedures. Data protection impact assessments (DPIAs) are mandatory to ensure that personal data is processed lawfully and for specific purposes. At the same time, compliance, risk and IT teams should be trained to understand and implement the new regulatory requirements. Finally, institutions must prepare their internal processes to work effectively with market surveillance authorities from August 2025 onwards.
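The inventory and gap-analysis steps above can be sketched as a simple automated check: each registered AI system is compared against the Article 5 prohibition categories named earlier and against the required controls (human oversight, DPIA). The category labels and system records below are hypothetical examples for illustration, not legal classifications.

```python
# Prohibition categories drawn from the Article 5 practices discussed above;
# the short labels are this sketch's own naming, not official terms.
PROHIBITED = {
    "manipulative_influence",
    "exploiting_vulnerabilities",
    "social_scoring",
    "emotion_recognition_workplace",
    "untargeted_face_scraping",
    "realtime_biometric_id_public",
}

# Hypothetical inventory entries, as a compliance team might record them.
inventory = [
    {"name": "aml-transaction-monitor", "practices": {"anomaly_detection"},
     "human_oversight": True, "dpia_done": True},
    {"name": "advisor-sentiment-tool", "practices": {"emotion_recognition_workplace"},
     "human_oversight": True, "dpia_done": False},
]

def gap_analysis(systems):
    """Return a list of findings per system: prohibited practices
    and missing mandatory controls."""
    findings = {}
    for s in systems:
        issues = []
        banned = s["practices"] & PROHIBITED  # set intersection with ban list
        if banned:
            issues.append(f"prohibited practice(s): {sorted(banned)}")
        if not s["human_oversight"]:
            issues.append("no human oversight defined")
        if not s["dpia_done"]:
            issues.append("DPIA missing")
        findings[s["name"]] = issues
    return findings

findings = gap_analysis(inventory)
```

Even this toy version shows the point of the exercise: a system can pass on one dimension (human oversight) and still fail on another (a banned practice, a missing DPIA), so the review must cover all criteria per system, not a single yes/no verdict.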
The AI Regulation thus marks a turning point in European finance. Artificial intelligence remains a strategic success factor – but only under clear ethical and legal guidelines. For banks and financial institutions, this means that prohibited practices such as manipulation, discrimination or emotional monitoring are taboo. At the same time, permitted applications in money laundering prevention, customer verification, fraud detection or sanctions screening offer great potential if transparency, data quality and human control are guaranteed. Those who review their systems at an early stage not only create legal certainty, but also trust – among customers, supervisory authorities and investors. The AI Regulation is thus more than just a set of regulatory rules. It is a wake-up call for responsible innovation and the ethically sound use of artificial intelligence in the financial sector.

