Artificial intelligence in the crosshairs: How companies must protect their AI systems

September 29, 2025

Artificial intelligence (AI) is rapidly changing the world of work. It is revolutionising processes across all industries – from automated manufacturing and intelligent data analysis to self-learning security solutions. But it is precisely this relevance that is increasingly making AI the target of cyber attacks.

Experts at companies such as Dell Technologies warn that criminal actors are targeting not only traditional IT systems but the AI models themselves.

Companies are thus faced with a new reality: AI security must be considered from the outset – in the design, training and operation of the models. This is not just a matter of technology, but of close cooperation between security experts, software developers and AI researchers.

An overview of the biggest threat scenarios

Dell Technologies has summarised the most important ways in which AI systems can be attacked:

  • Model theft: Criminals use systematic queries to reconstruct training data, weights or parameters. This allows expensive models to be cloned or replicated cheaply.
  • Data poisoning: Manipulated training data deliberately weakens a model, causes errors or opens backdoors for further attacks.
  • Model inversion: Attackers gain access to sensitive training data through repeated queries – personal data or trade secrets could be stolen in this way.
  • Perturbation attacks: Even small changes to input data can cause AI systems to make incorrect decisions – particularly dangerous for safety-critical systems such as autonomous vehicles.
  • Prompt injection: AI systems can be manipulated using specially formulated inputs, for example to disclose sensitive information or output malicious code.
  • Reward hacking: In reinforcement learning systems, manipulation of the reward signal can lead to persistently incorrect behaviour.
  • DoS/DDoS attacks: Overloading can paralyse AI systems and jeopardise business processes.
  • Supply chain manipulation: Cybercriminals exploit vulnerabilities in third-party providers to compromise AI infrastructures.
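To make the perturbation-attack scenario concrete, here is a minimal sketch of the idea behind such attacks, using a toy linear classifier in plain Python. The classifier, its weights and the step size are all illustrative assumptions, not part of any real system: for a linear model, nudging each input feature against the sign of its weight pushes the score across the decision boundary, flipping the prediction while each individual feature changes only slightly.

```python
# Toy linear classifier: score = w·x + b; predicts class 1 if score > 0.
# All values here are made up for illustration.
W = [1.0, -2.0, 0.5]
B = 0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(W, x)) + B

def predict(x):
    return 1 if score(x) > 0 else 0

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

# FGSM-style perturbation: for a linear model, the gradient of the
# score with respect to the input is simply W, so stepping each
# feature against sign(W) drives the score toward the boundary.
def perturb(x, eps):
    d = -1 if predict(x) == 1 else 1
    return [xi + d * eps * sign(wi) for xi, wi in zip(x, W)]

x = [2.0, 0.3, 0.2]          # originally classified as 1
x_adv = perturb(x, eps=0.7)  # small per-feature shift flips the prediction
```

Real attacks work the same way against deep networks, where the gradient is estimated numerically or via repeated queries; the lesson is that decisions can flip under changes a human would barely notice.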

Security must be considered from the outset

Christian Scharrer, Enterprise Architect at Dell Technologies Germany, emphasises: ‘AI requires more than standard security. Companies need holistic concepts that range from secure access controls and data validation to ongoing model monitoring.’

Recommended measures include:

  • Strict validation and cleansing of training and input data.
  • Implementation of so-called guardrails that check inputs and outputs.
  • Monitoring systems to detect performance changes.
  • Ensuring a robust and trustworthy supply chain for hardware and software.
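As a rough illustration of the guardrail idea from the list above, the following sketch checks inputs against injection-style phrases and redacts PII-like strings from outputs. The patterns are deliberately simplistic assumptions for demonstration; production guardrails rely on much richer policies such as classifiers, allow-lists and structured output validation.

```python
import re

# Illustrative deny-patterns only; not an exhaustive or production policy.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.I),
    re.compile(r"reveal (your )?(system )?prompt", re.I),
]
PII_PATTERNS = [
    re.compile(r"\b\d{16}\b"),               # crude credit-card-like number
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # e-mail address
]

def check_input(prompt: str) -> bool:
    """Input guardrail: reject prompts matching known injection phrases."""
    return not any(p.search(prompt) for p in INJECTION_PATTERNS)

def check_output(text: str) -> str:
    """Output guardrail: redact PII-like strings before they leave the system."""
    for p in PII_PATTERNS:
        text = p.sub("[REDACTED]", text)
    return text
```

In practice, such checks sit in front of and behind the model as part of the monitoring pipeline, so that both manipulated inputs and leaking outputs are caught at the system boundary.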

Conclusion: AI security is a matter for top management

AI offers enormous potential – but at the same time, it creates new risks that traditional IT security concepts do not fully cover. Companies that want to protect their AI systems must integrate security into the development process from the outset. This is the only way to harness the full potential of AI without taking uncontrolled risks.

Dell Technologies warns that AI security is not an option, but a key strategic task – for the protection of data, business processes and, ultimately, competitiveness.
