Adversarial Misuse of Generative AI: the threat posed by the misuse of AI technologies

January 29, 2025

In its latest report, entitled ‘Adversarial Misuse of Generative AI’, the Google Threat Intelligence Group (GTIG) examines the growing threat posed by the misuse of generative AI models such as Gemini. The report shows how cybercriminals and state-backed actors are increasingly using AI in their attacks, how these threat actors misuse the technology, and the challenges of securing such advanced systems.

Generative AI as a tool for cyber attacks

Google explains that threat actors use generative AI like Gemini primarily in three areas: research, debugging code and creating content. Instead of developing new, innovative techniques, these groups automate existing attack methods and increase their efficiency – the misuse of AI accelerates attack processes and makes them more productive.

1. Use of Gemini by APT actors

APT actors (advanced persistent threats) are known for their long-running, well-planned attacks. They use Gemini in a targeted manner in several phases of the attack cycle – from researching target infrastructure to detecting vulnerabilities and developing exploits. Gemini is also used to create malicious scripts and develop evasion techniques.

It is noteworthy that Iranian APT actors in particular make extensive use of Gemini. They use the AI both for researching hosting providers and for vulnerability analysis of target organisations. By contrast, Russian APT actors use Gemini only to a limited extent – indicating differences in tactics and techniques within these groups.

2. Use of Gemini by IO actors (Information Operations)

In addition to APT groups, IO actors (Information Operations) also use generative AI. Their goal is to influence public perception and spread disinformation. Gemini helps them create personas, design messages, translate content and increase their reach.

Iranian IO actors are once again the most prolific users of Gemini, accounting for three-quarters of IO actor usage. By contrast, Chinese and Russian IO actors primarily used Gemini for general research and content creation.

Gemini’s safeguards and their impact

Although Gemini is used in a variety of ways, the model’s built-in security mechanisms protect against misuse. The report highlights that security responses are automatically triggered when more elaborate or malicious requests are made, preventing abuse. For example, phishing attacks via Gmail, data theft and the development of a Chrome infostealer were blocked.

Gemini’s security measures thus prove to be an effective protection against attempts to misuse AI for malicious purposes. They act as an important barrier that makes it more difficult for threat actors to successfully carry out their attacks.

Failed circumvention of security measures and lack of customised attacks

Interestingly, the experts observed that threat actors have not yet developed any customised attacks against generative AI models, such as tailored prompt attacks. Instead, they resorted to publicly available jailbreak prompts in the hope of bypassing Gemini’s security measures. These attempts were largely unsuccessful, demonstrating that the existing safeguards effectively prevent abuse by such simple methods.

Outlook: Further development of security mechanisms and threat management

Google’s report provides a valuable insight into the current threat landscape in the area of generative AI and its potential misuse by cybercriminals and state-sponsored actors. Although Gemini is used in various scenarios to make attacks more efficient, the model’s security mechanisms have so far proven effective in preventing more serious attack attempts.

Nevertheless, the misuse of generative AI remains a growing risk that cannot be ignored. Companies and security authorities are therefore called upon to continuously monitor AI-driven attacks and to further develop security standards. Only by consistently developing protective measures against the misuse of generative AI can the potential of this technology for malicious purposes be limited while at the same time promoting its legitimate use.
