Trend Micro uncovers criminal methods behind deepfake-based cybercrime
Trend Micro, a leading global provider of cybersecurity solutions, today released a new report highlighting the scale and maturity of deepfake-based cybercrime. As generative AI tools become more powerful, accessible and affordable, cybercriminals are increasingly using these technologies for attacks such as fraud, extortion and identity theft.
The report shows that deepfakes are no longer just hype but are already causing real damage: they undermine digital trust, create new risks for businesses and fuel the business models of cybercriminals.
The analysis also found that attackers no longer need expert knowledge to launch convincing attacks. Instead, they use freely available platforms for video, audio and image generation, many of which were originally developed for content creators, to produce convincing deepfakes that deceive both individuals and companies. These tools are inexpensive, easy to use and increasingly capable of circumventing identity checks and security measures.
The report by the Japanese cybersecurity provider describes a growing criminal ecosystem in which these platforms are used for sophisticated scams. These include:
- CEO fraud is becoming increasingly difficult to detect as attackers use deepfake audio or video to impersonate executives in real-time meetings.
- Application processes are being compromised by fake candidates who use AI to cheat their way through interviews and gain unauthorised access to internal systems.
- Financial services providers are seeing an increase in deepfake attempts to circumvent KYC (Know Your Customer) checks, enabling anonymous money laundering using fake identities.
Tutorials, toolkits and services are circulating in the cyber underground to professionalise such attacks. With detailed step-by-step instructions for circumventing onboarding processes and ready-to-use face-swap tools, entry into this form of crime has never been easier.
Given the increasing frequency and complexity of deepfake-based attacks, Trend Micro is calling on companies to take proactive measures. The aim is to minimise risks at an early stage and protect employees and processes. Recommended measures include training on how to recognise social engineering attacks, reviewing authentication procedures and integrating solutions for detecting synthetic media content.
‘AI-generated media is no longer a future threat, but already poses a serious business risk today,’ explains David Sancho, Senior Threat Researcher at Trend Micro. ‘We are seeing executives being imitated in real time, job application processes being manipulated and security mechanisms being circumvented with alarming ease. This research is a wake-up call: companies that are not actively preparing for the deepfake era have already missed the boat. In a world where you can no longer trust your eyes, digital trust must be rebuilt from the ground up.’
Further information
The full report, Deepfake it ’til You Make It: A Comprehensive View of the New AI Criminal Toolset, is available in English here: