Adversarial Misuse of Generative AI: the threat posed by the misuse of AI technologies

January 29, 2025

In its latest report, titled ‘Adversarial Misuse of Generative AI’, the Google Threat Intelligence Group (GTIG) examines the growing threat posed by the misuse of generative AI models such as Gemini. The report shows how cybercriminals and state-backed actors are increasingly using AI in their attacks, how these threat actors misuse the technology, and the challenges of securing such advanced systems.

Generative AI as a tool for cyber attacks

Google explains that threat actors use generative AI models such as Gemini primarily in three areas: research, code debugging and content creation. Rather than developing new, innovative techniques, these groups automate existing attack methods and increase their efficiency – the misuse of AI accelerates attack processes and makes them more productive.

1. Use of Gemini by APT actors

APT actors (advanced persistent threats) are known for their long-running, well-planned attacks. They use Gemini in a targeted manner in several phases of the attack cycle – from researching target infrastructure to detecting vulnerabilities and developing exploits. Gemini is also used to create malicious scripts and develop evasion techniques.

It is noteworthy that Iranian APT actors in particular make extensive use of Gemini. They use the AI both for researching hosting providers and for vulnerability analysis of target organisations. By contrast, Russian APT actors use Gemini only to a limited extent – indicating differences in tactics and techniques within these groups.

2. Use of Gemini by IO actors (Information Operations)

In addition to APT groups, IO actors (Information Operations) also use generative AI. Their goal is to influence public perception and spread disinformation. Gemini helps them create personas, design messages, translate content and increase their reach.

Iranian IO actors are once again the most prolific users of Gemini, accounting for three-quarters of IO actor usage. By contrast, Chinese and Russian IO actors primarily used Gemini for general research and content creation.

Gemini’s safeguards and their impact

Although Gemini is used in a variety of ways, the model’s built-in security mechanisms protect against misuse. The report highlights that security responses are automatically triggered when more elaborate or malicious requests are made, preventing abuse. For example, phishing attacks via Gmail, data theft and the development of a Chrome infostealer were blocked.
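The report does not disclose how Gemini’s safeguards are implemented, but the general pattern it describes – screening requests for clearly abusive intent before they reach the model – can be sketched as a toy filter. Everything in the snippet below (the patterns, function names and refusal message) is an invented illustration of that pattern, not Gemini’s actual mechanism:

```python
import re

# Hypothetical deny-list of abuse patterns, loosely inspired by the blocked
# use cases the report mentions (Gmail phishing, a Chrome infostealer).
# Real safeguards use learned classifiers and policy layers, not regexes.
BLOCKED_PATTERNS = [
    r"\bphishing\b.*\bgmail\b",
    r"\binfostealer\b",
    r"\bexfiltrat\w*\b.*\bcredential",
]

def safety_gate(prompt: str) -> str:
    """Return a refusal for prompts matching abuse patterns, else 'ALLOW'."""
    lowered = prompt.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, lowered):
            return "REFUSED: request violates usage policy"
    return "ALLOW"
```

In a production system, a gate like this would sit in front of the model and also screen generated output; the point of the sketch is only the control flow, not the detection logic.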

Gemini’s security measures thus prove to be an effective protection against attempts to misuse AI for malicious purposes. They act as an important barrier that makes it more difficult for threat actors to successfully carry out their attacks.

Failed circumvention of security measures and lack of customised attacks

Interestingly, the experts observed that threat actors have not yet developed customised prompt attacks tailored to generative AI models such as Gemini. Instead, they resorted to publicly available jailbreak prompts in the hope of bypassing Gemini’s security measures. These attempts were largely unsuccessful, demonstrating that the existing safeguards effectively prevent abuse by such simple methods.

Outlook: Further development of security mechanisms and threat management

Google’s report provides a valuable insight into the current threat landscape in the area of generative AI and its potential misuse by cybercriminals and state-sponsored actors. Although Gemini is used in various scenarios to make attacks more efficient, the model’s security mechanisms have so far proven effective in preventing more serious attack attempts.

Nevertheless, the misuse of generative AI remains a growing risk that cannot be ignored. Companies and security authorities are therefore called upon to continuously monitor AI-driven attacks and to further develop security standards. Only by consistently developing protective measures against the misuse of generative AI can the potential of this technology for malicious purposes be limited while at the same time promoting its legitimate use.
