Why the new attack method is forcing companies to rethink their approach to cyber security
Artificial intelligence (AI) is currently transforming the digital world of work at a pace that challenges even experienced IT security managers. Systems that were previously regarded as tools for increasing efficiency are increasingly becoming the target and means of attacks themselves.
One of the most recent and dangerous developments in this area is known as prompt manipulation – also known as prompt hacking or prompt injection. This form of attack uses natural language to deceive or abuse AI systems.
Whereas technical vulnerabilities and exploits used to be the main attack vector, the inputs themselves are now being targeted. Cleverly worded natural language prompts can be used to trick AI models into revealing internal information, executing malicious commands or passing on confidential data. This opens up a new front in cyber warfare – one that can no longer be secured by firewalls and code reviews alone.
Commentary by Tony Fergusson, CISO in Residence EMEA at Zscaler
‘The rapid integration of artificial intelligence (AI) into businesses is set to revolutionise the efficiency of business processes, streamline workflows and accelerate decision-making. However, the use of these tools in the wrong hands also carries significant risks,’ warns Tony Fergusson.
Cyber Security Awareness Month is a fitting occasion to draw attention to this new dimension of attack. ‘The latest trend among attackers is to manipulate AI prompts. With the help of prompt hacking or prompt injection, malware actors use natural language for their malicious activities, which can cause maximum damage even without extensive programming knowledge,’ says Fergusson.
He draws a parallel with IT history: ‘History is repeating itself here, because this approach is not new per se. Similar to the SQL injection attacks of the early 2000s, prompt hacking targets the interpretation of user input by systems.’
The dangerous thing is that attackers need very little technical knowledge. Even simple text-based commands are enough to manipulate AI systems. ‘For example, if the text colour for the commands is set to white and is therefore not visible to the human eye, a classic mechanism is easily circumvented,’ explains Fergusson. This low barrier to entry makes prompt manipulation particularly dangerous, as it gives virtually anyone access to potentially effective means of attack.
New security standards required
Companies must learn to rethink security. Instead of checking code or network interfaces as before, the task now is to recognise ‘malicious voice inputs’. The challenge is to distinguish between legitimate user instructions and manipulative commands.
Fergusson sees the zero trust principle as the key approach here: ‘The introduction of a zero trust security framework is one way to address this problem. This approach is based on the premise that no user, no system and no interaction is trusted from the outset.’
Zero trust focuses on continuous verification and analysis of all interactions. This means that prompts and AI inputs must also be continuously monitored for unusual behaviour – regardless of their source or perceived legitimacy.
At the same time, security practices should be adapted to recognise linguistic manipulation patterns. Instead of just solving technical ‘code problems’, security teams will have to anticipate ‘problem prompts’ in the future. The integration of security mechanisms into AI system architecture, authorisation concepts and workflow controls is becoming the central line of defence against the growing threat of prompt manipulation.
Conclusion: Pandora’s box has been opened
Prompt hacking is no longer a theoretical threat – Pandora’s box has been opened. The deeper AI systems are integrated into critical business processes, the greater the risks posed by manipulated prompts.
Companies must prepare for language itself to become a new vulnerability. This means that policies, systems and training must be comprehensively updated. Awareness needs to be raised at all levels – from developers to security analysts to specialist users.
‘AI is rapidly transforming industries,’ Fergusson summarises. ‘Competitive advantages arise for those companies that recognise and exploit the opportunities offered by AI. In order to safely harness the potential of AI, a proactive defence strategy must be planned from the outset. Robust protective measures are necessary to safeguard sensitive data and processes and maintain customer trust.’

