Artificial intelligence (AI) is rapidly changing the world of work. It is revolutionising processes across all industries – from automated manufacturing and intelligent data analysis to self-learning security solutions. But it is precisely this relevance that is increasingly making AI the target of cyber attacks.
Experts such as Dell Technologies warn that it is not only traditional IT systems that are being targeted by criminal actors, but AI models themselves.
Companies are thus faced with a new reality: AI security must be considered from the outset – in the design, training and operation of the models. This is not just a matter of technology, but of close cooperation between security experts, software developers and AI researchers.
An overview of the biggest threat scenarios
Dell Technologies has summarised the most important ways in which AI systems can be attacked:
- Model theft: Criminals use systematic queries to reconstruct training data, weightings or parameters. This allows expensive models to be cloned or replicated cheaply.
- Data poisoning: Manipulated training data deliberately weakens a model, causes errors or opens backdoors for further attacks.
- Model inversion: Attackers gain access to sensitive training data through repeated queries – personal data or trade secrets could be stolen in this way.
- Perturbation attacks: Even small changes to input data can cause AI systems to make incorrect decisions – particularly dangerous for safety-critical systems such as autonomous vehicles.
- Prompt injection: AI systems can be manipulated using specially formulated inputs, for example to disclose sensitive information or output malicious code.
- Rewards hacking: In learning systems, manipulation of the reward system can lead to permanently incorrect behaviour.
- DoS/DDoS attacks: Overloading can paralyse AI systems and jeopardise business processes.
- Supply chain manipulation: Cybercriminals exploit vulnerabilities in third-party providers to compromise AI infrastructures.
Security must be considered from the outset
Christian Scharrer, Enterprise Architect at Dell Technologies Germany, emphasises: ‘AI requires more than standard security. Companies need holistic concepts that range from secure access controls and data validation to ongoing model monitoring.’
Recommended measures include:
- Strict validation and cleansing of training and input data.
- Implementation of so-called guardrails that check inputs and outputs.
- Monitoring systems to detect performance changes.
- Ensuring a robust and trustworthy supply chain for hardware and software.
Conclusion: AI security is a matter for top management
AI offers enormous potential – but at the same time, it creates new risks that traditional IT security concepts do not fully cover. Companies that want to protect their AI systems must integrate security into the development process from the outset. This is the only way to harness the full potential of AI without taking uncontrolled risks.
Dell Technologies warns that AI security is not an option, but a key strategic task – for the protection of data, business processes and, ultimately, competitiveness.