The rapid evolution of artificial intelligence is transforming not only economies and research, but also the shadow economy of cyberspace. Google’s latest AI Threat Tracker report, published by the Google Threat Intelligence Group (GTIG), offers a precise yet unsettling portrait of this emerging digital battlefield. What was regarded as experimental just a few years ago—the use of generative AI by attackers—has now become an established component of the modern cybercrime ecosystem.
Building on previous analyses, particularly Adversarial Misuse of Generative AI from January 2025, the report identifies a clear turning point. Threat actors no longer employ AI merely to boost productivity or automate known attack patterns; they are beginning to embed generative models directly into their operations and experiment with what those systems can produce on demand. This shift marks a new phase in the evolution of offensive cyber capabilities: the ability to adapt attacks dynamically during execution.
One of the most striking findings concerns the emergence of so-called “just-in-time” AI within malware families such as PROMPTFLUX and PROMPTSTEAL. These malicious programs leverage large language models (LLMs) in real time to modify their code and generate new malicious functions on demand. This represents an early but clearly discernible stage of autonomous malware—software that no longer follows static instructions but acts adaptively and contextually. In doing so, it blurs the boundary between tool and agent, marking a profound paradigm shift in the nature of digital threats.
Simultaneously, socio-psychological attack vectors are becoming increasingly refined. According to GTIG, threat actors now use social engineering techniques to bypass the security mechanisms embedded in commercial AI models. By posing as students or cybersecurity researchers in their prompts, they attempt to elicit restricted information from chatbots such as Gemini. This behaviour underscores both the ingenuity of adversaries and the inherent fragility of open AI systems, which must constantly navigate the delicate balance between accessibility and misuse.
State-sponsored actors from countries including North Korea, Iran, and China have also integrated generative AI models into all stages of their operations—from reconnaissance and phishing to the technical maintenance of command-and-control infrastructures. The result is a level of operational sophistication that far exceeds the opportunistic misuse of AI observed in earlier years.
Equally concerning is the maturation of an underground market for AI-driven cyber tools. Google’s analysts have identified an expanding array of multifunctional platforms that combine phishing, malware development, and vulnerability research. These tools drastically lower the entry threshold for cybercrime, effectively “democratising” access to advanced offensive capabilities. Individuals with limited technical skill can now execute complex attacks once reserved for expert groups or nation-state actors.
As GTIG Technical Director Billy Leonard observes, many attackers initially turn to mainstream AI platforms like Gemini, but stringent safeguards are driving them toward the criminal underground, where unregulated models operate without constraint. This dynamic exposes a dangerous asymmetry: the more legitimate platforms strengthen their protections, the more appealing unrestricted black-market alternatives become.
Google’s response follows a dual approach: actively disrupting hostile activity while feeding newly gained intelligence back into its own security infrastructure. The company’s objective is to embed awareness of AI-driven threats deeply within its protective frameworks, thereby enhancing resilience on a systemic scale.
Ultimately, the AI Threat Tracker does more than map a technological trend; it highlights a societal inflection point. The creative misuse of artificial intelligence has become a strategic weapon in the digital domain, and countering it demands far more than technical remediation. It requires a nuanced understanding of the interplay between human ingenuity, machine adaptability, and the ethical boundaries both attackers and defenders are willing to test.
In this emerging landscape, the line between intelligence and deception has grown perilously thin.
Source: Google, Advances in Threat Actor Usage of AI Tools (2025)

