Do not reveal confidential data to ChatGPT

June 20, 2023

Generative AI systems such as ChatGPT receive a great deal of attention and are fed data by thousands of users every day. More and more companies are adopting these technologies and applying them to a wide variety of projects and processes, above all for gathering information, writing texts and translating. Unfortunately, many users show little caution with sensitive company data when letting the AI work for them. This can cause serious consequential damage: once entered, the data can be accessed and extracted without control by any other user who simply asks the right questions, and it can then be sold on to other companies or to cyber criminals and misused for a range of malicious purposes.

An example of how this could play out: a doctor enters a patient’s name and details of their condition into ChatGPT so that the tool can compose a letter to the patient’s insurance company. If a third party later asks ChatGPT, “What health problem does [patient’s name] have?”, the chatbot could respond based on the doctor’s input. Such risks are just as serious a threat as phishing attacks, because information about single individuals can be used to draw conclusions about entire companies and their business practices.

Employees who are allowed to use AI tools must take care not to include personal data or company internals in their queries. They must also check that the responses they receive are free of such data. All information should be independently verified to safeguard against legal claims and to avoid misuse.
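One technical safeguard companies sometimes add alongside such policies is a redaction step that strips likely personal data from a prompt before it ever leaves the company network. The sketch below is a minimal, hypothetical illustration of that idea using a few hand-written regular expressions; a real deployment would rely on a dedicated PII-detection tool rather than these simple patterns, and the function name `redact_prompt` is an assumption for this example.

```python
import re

# Hypothetical example patterns; real systems should use a proper
# PII-detection library instead of a handful of regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely personal data with placeholders before the
    prompt is sent to an external generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact_prompt(
    "Please draft a letter to jane.doe@example.com, phone +49 170 1234567."
))
```

Such a filter cannot catch everything (names and free-text medical details, as in the doctor example above, are much harder to detect), so it complements rather than replaces employee training.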

Security awareness training can help employees learn to use ChatGPT and other generative AI tools responsibly and safely for work. They learn what information they can and cannot disclose, so that they and their companies do not run the risk of sensitive data being misused by unwanted third parties. The consequences could otherwise range from fines under the GDPR and the associated reputational damage all the way to cyber attacks through social engineering. In the latter case, attackers use the information shared with the tools for their reconnaissance, exploiting vulnerabilities in IT systems or using spear phishing to get employees to click on links embedded in emails.

Author: Dr. Martin J. Krämer, Security Awareness Advocate at KnowBe4
