The criminal use of ChatGPT – a cautionary tale about large language models

March 27, 2023

In response to the growing public attention given to ChatGPT, the Europol Innovation Lab organised a number of workshops with subject matter experts from across Europol to explore how criminals can abuse large language models (LLMs) such as ChatGPT, as well as how it may assist investigators in their daily work. 

Their insights are compiled in Europol’s first Tech Watch Flash report, published today. Entitled ‘ChatGPT – the impact of Large Language Models on Law Enforcement’, this document provides an overview of the potential misuse of ChatGPT and offers an outlook on what may still be to come. 

The aim of this report is to raise awareness about the potential misuse of LLMs, to open a dialogue with Artificial Intelligence (AI) companies to help them build in better safeguards, and to promote the development of safe and trustworthy AI systems. 

A longer and more in-depth version of this report was produced for law enforcement only. 

What are large language models? 

A large language model is a type of AI system that can process, manipulate, and generate text. 

Training an LLM involves feeding it large amounts of data, such as books, articles and websites, so that it can learn the patterns and connections between words to generate new content. 
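The pattern-learning described above can be illustrated with a toy bigram model: a deliberately simplified sketch that counts which word follows which in a tiny corpus and samples from those counts to generate new text. This is an illustrative assumption for explanation only; real LLMs such as ChatGPT learn far richer patterns with neural networks trained on vastly larger data.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the books, articles and websites
# an LLM is trained on (illustrative only).
corpus = "the model learns patterns the model learns connections between words".split()

# Count which word follows which -- a bigram table, the simplest
# possible analogue of the statistical patterns an LLM learns.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate new text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 5))
```

The generated sequence is new (it need not appear verbatim in the corpus), yet every word transition was learned from the training data, which is the essence of how generative language models produce novel content.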

ChatGPT is an LLM that was developed by OpenAI and released to the wider public as part of a research preview in November 2022.

The current publicly accessible model underlying ChatGPT is capable of processing and generating human-like text in response to user prompts. Specifically, the model can answer questions on a variety of topics, translate text, engage in conversational exchanges (‘chatting’), generate new content, and produce functional code. 

The dark side of large language models

As the capabilities of LLMs such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provides a grim outlook.

The following three crime areas are amongst the many areas of concern identified by Europol’s experts: 

  • Fraud and social engineering: ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes. The ability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.
  • Disinformation: ChatGPT excels at producing authentic-sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.
  • Cybercrime: In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource to produce malicious code. 

As technology progresses, and new models become available, it will become increasingly important for law enforcement to stay at the forefront of these developments to anticipate and prevent abuse. 

Read Europol’s recommendations and the full findings of the report here: https://www.europol.europa.eu/publications-events/publications/chatgpt-impact-of-large-language-models-law-enforcement

***
Important notice: The LLM selected to be examined in the workshops was ChatGPT. ChatGPT was chosen because it is the highest-profile and most commonly used LLM currently available to the public. The purpose of the exercise was to observe the behaviour of an LLM when confronted with criminal and law enforcement use cases. This will help law enforcement understand what challenges derivative and generative AI models could pose.

***

About the Europol Innovation Lab: The Europol Innovation Lab helps the European law enforcement community to make the most of emerging technologies by finding synergies and developing innovative solutions to improve the ways in which they investigate, track and disrupt terrorist and criminal organisations. The Lab leads several research projects and coordinates the development of investigative tools with national law enforcement authorities. 
