Do not reveal confidential data to ChatGPT

June 20, 2023

Generative AI systems such as ChatGPT receive a great deal of attention and are fed data by thousands of users every day. More and more companies are adopting these technologies and applying them to a wide variety of projects and processes. Above all, the tools are used to gather information, write texts, and translate. Unfortunately, many users treat sensitive company data carelessly and simply let the AI work for them. This approach can cause serious downstream damage, because such data may later be retrieved by any other user who asks the right questions. The extracted information can then be sold to other companies or to cyber criminals and misused for any number of nefarious purposes.

An example of how this could play out: a doctor enters a patient’s name and details of their condition into ChatGPT so that the tool can compose a letter to the patient’s insurance company. If a third party later asks ChatGPT, “What health problem does [patient’s name] have?”, the chatbot could answer based on the doctor’s input. Such risks are just as serious a threat as phishing attacks, because information about single individuals can be used to draw conclusions about entire companies and their business practices.

Employees who are allowed to use AI tools must take care not to include personal data or company internals in their queries. They must likewise check that the responses they receive are free of such information. All output should be independently verified to safeguard against legal claims and to avoid misuse.
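
Beyond individual diligence, companies can also screen prompts automatically before they are sent to an external AI service. The following Python snippet is a minimal sketch of that idea, not a vetted implementation: the regular expressions, the `screen_prompt` function, and the block list of confidential terms are illustrative assumptions, and a real deployment would rely on dedicated data-loss-prevention (DLP) tooling.

```python
import re

# Illustrative patterns only -- a real deployment needs dedicated
# data-loss-prevention (DLP) tooling, not a handful of regexes.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\+?\d[\d ()/-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

# Hypothetical block list of internal code names that must never leave the company.
CONFIDENTIAL_TERMS = {"project aurora", "q3 forecast"}


def screen_prompt(prompt: str) -> list[str]:
    """Return the reasons (if any) why a prompt should not be sent out."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(f"possible {label} detected")
    lowered = prompt.lower()
    for term in CONFIDENTIAL_TERMS:
        if term in lowered:
            findings.append(f"confidential term '{term}' detected")
    return findings


if __name__ == "__main__":
    prompt = "Write a letter about John Doe (john.doe@example.com) and Project Aurora."
    problems = screen_prompt(prompt)
    if problems:
        # Block the request instead of forwarding it to the AI service.
        print("Blocked:", "; ".join(problems))
    else:
        print("Prompt looks clean; safe to forward.")
```

A filter like this is only a first line of defense: it cannot recognize a patient’s name or a free-text description of their condition, which is exactly why the awareness training described below remains essential.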

Security awareness training can help employees learn to use ChatGPT and other generative AI tools responsibly and safely at work. They learn what information they may and may not disclose, so that neither they nor their companies run the risk of sensitive data being misused by unwanted third parties. Otherwise, the consequences range from fines under the GDPR, and the associated reputational damage, all the way to cyber attacks based on social engineering. In the latter case, attackers use the information shared with the tools as research material to exploit vulnerabilities in IT systems or to craft spear-phishing emails that trick targeted employees into clicking on embedded links.

Author: Dr. Martin J. Krämer, Security Awareness Advocate at KnowBe4
