Since 2 February 2025, the EU AI Act's bans on certain AI practices have been in force across the EU. The focus is on applications that pose potential risks to European values and fundamental rights. The guidelines recently published by the European Commission provide a detailed interpretation of these prohibitions, particularly in the areas of CCTV, video security technology and biometric facial recognition. This article highlights the key contents of the new guidelines, explains the risk-based approach of the AI Act and shows how Dallmeier's solutions meet the requirements of the law.
Introduction
The EU AI Act takes a risk-based approach to regulating artificial intelligence. It classifies AI systems into four risk categories, the highest of which, unacceptable risk, covers practices that can endanger fundamental rights and values of the European Union and are therefore prohibited. In particular, applications that rely on the untargeted scraping of facial images from the internet or CCTV recordings, or that enable real-time remote biometric identification in publicly accessible areas, are prohibited or heavily restricted. The new guidelines of the European Commission, summarised in a 140-page document, provide legal clarity in this area for the first time. Dallmeier has analysed these guidelines in detail, focusing on the chapters relevant to CCTV, video security and biometric facial recognition, in order to offer its customers practical solutions in these areas.
Background: The risk-based approach of the EU AI Act
The EU AI Act differentiates between various risk categories of AI systems. Systems that pose an unacceptable risk to fundamental rights are prohibited under Article 5 of the Act. These include, but are not limited to:
- Untargeted scraping for facial recognition databases: Article 5(1)(e) prohibits the placing on the market, putting into service or use of AI systems that extract facial images from the internet or CCTV footage in an untargeted manner to create or expand facial recognition databases.
- Real-time remote biometric identification in public spaces: Article 5(1)(h) prohibits the use of real-time remote biometric identification systems in publicly accessible areas for law enforcement purposes, except in narrowly defined exceptional cases, such as to search for missing persons or to prevent imminent danger.
The European Commission’s new guidelines clarify these prohibitions and thus provide a harmonised interpretation, which is of key importance for providers and operators of AI systems. In particular, chapters 6 (page 77) and 9 (page 95) of the guidelines deal in detail with the untargeted collection of facial images and with real-time remote biometric identification in the context of law enforcement.
Relevance for CCTV, video security and biometric facial recognition
The new guidelines have a direct impact on video surveillance technologies. For example, they confirm that AI systems which build up databases through the untargeted scraping of facial images, whether from the internet or from CCTV footage, may not be placed on the market or used. At the same time, systems that enable real-time remote biometric identification in publicly accessible spaces for law enforcement purposes are prohibited in principle, unless one of the explicitly defined exceptions applies.
These regulations require manufacturers and operators to critically evaluate existing systems and, if necessary, adapt them to ensure compliance with the EU requirements. The clear distinction between authorised, low-risk applications and practices classified as unacceptable plays a crucial role in this.
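To make this distinction more tangible, the following minimal sketch shows how the criteria of Article 5(1)(h) could be represented as a simple pre-deployment checklist. The data model and function names are purely illustrative assumptions and are not part of the guidelines or of any Dallmeier product; an actual assessment always requires a case-by-case legal review.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    """Illustrative attributes of a planned biometric identification deployment."""
    real_time: bool                 # identification happens live rather than retrospectively
    remote: bool                    # people are identified at a distance, without active involvement
    publicly_accessible_area: bool  # e.g. streets, stations, stadiums
    law_enforcement_purpose: bool   # operated by or on behalf of law enforcement
    covered_exception: bool         # e.g. a duly authorised, targeted search for a missing
                                    # person or the prevention of an imminent threat

def falls_under_article_5_1_h(d: Deployment) -> bool:
    """Rough illustration of the criteria behind the prohibition on real-time
    remote biometric identification; a simplification, not legal advice."""
    in_scope = (d.real_time and d.remote
                and d.publicly_accessible_area
                and d.law_enforcement_purpose)
    return in_scope and not d.covered_exception

# Example: live facial recognition at a station, operated for law enforcement
# without a covered exception, would match the prohibited practice.
example = Deployment(real_time=True, remote=True,
                     publicly_accessible_area=True,
                     law_enforcement_purpose=True,
                     covered_exception=False)
print(falls_under_article_5_1_h(example))  # True
```

Retrospective ("post") identification or deployments outside publicly accessible areas are not covered by this particular prohibition, although they may still be subject to other obligations under the AI Act.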
Dallmeier solutions in the context of the EU AI Act
Dallmeier addressed the challenges of the EU AI Act at an early stage and, with its innovative video security technologies, offers solutions that meet both the legal requirements and the practical demands of the security sector.
The Dallmeier systems combine high performance with a compliant implementation of the new EU requirements. Dallmeier focuses in particular on:
- Transparency and compliance: The systems used are designed in such a way that they do not allow the untargeted capture of facial images. Data processing is always carried out in accordance with the provisions of the EU AI Act in order to prevent the unauthorised expansion of facial recognition databases.
- Targeted areas of application: Dallmeier focuses on the use of video security technologies that operate in specifically defined, low-risk areas. This includes applications in which biometric facial recognition is only used in exceptional cases and under strict conditions – for example, as part of security measures and the targeted search for missing persons.
- Technological adaptability: Continuous research and development ensures that Dallmeier solutions can be adapted to the latest regulatory requirements at any time. This also includes regular software updates and optimisations that maintain both security and legal compliance.
With these measures, Dallmeier is positioning itself as a reliable partner for organisations that rely on video surveillance and must at the same time meet the strict requirements of the EU AI Act. The solutions offer not only a high technical standard but also legal certainty with regard to the new EU requirements.
Conclusion and outlook
The introduction of the new EU Guidelines on Prohibited AI Practices marks a milestone in the regulation of artificial intelligence. The EU AI Act defines clear limits for applications that may endanger fundamental rights – particularly in the areas of video surveillance and biometric identification. The guidelines published by the EU Commission ensure a uniform interpretation of the prohibitions, which is of central importance for providers and operators of AI systems.
Dallmeier is responding to these regulatory challenges with solutions that are both innovative and compliant with the new requirements. By using technologies in a targeted and risk-conscious manner, Dallmeier offers its customers a future-proof video security solution that meets the strict requirements of the EU AI Act.
The continuous further development of the technologies and close alignment with the regulatory framework not only ensure compliance with the legal requirements, but also enable the responsible use of AI in the security sector, for the benefit of society and in line with fundamental European values.