ChatGPT and Data Protection


ChatGPT has generated considerable hype in recent months. While it offers many benefits, it also raises regulatory compliance and privacy challenges that should be considered. Learn more about the potential issues that can arise when using ChatGPT.

What is ChatGPT?

ChatGPT is an artificial-intelligence (AI) text chatbot developed by OpenAI in the USA, and it is currently the talk of the town.

In a chat window, you can ask the software questions and receive an answer within moments. Another special feature is that ChatGPT can follow work instructions, such as writing a letter or an essay. ChatGPT's responses are based on training in dialog format, and it asks follow-up questions when a prompt is ambiguous.

Let's ask ChatGPT.

What are the gaps in data protection?

ChatGPT is designed to process and generate text based on user input. To provide effective results and to train the software, ChatGPT collects and processes large amounts of data. The data that users provide is stored by ChatGPT. The software uses this data to analyze the effectiveness of the service and to gain insight into the typical behavior and characteristics of its users. The data may be collected using cookies, among other means.

ChatGPT and OpenAI publish privacy policies that describe how the data is used, but, as California law permits, these are worded very vaguely. In addition, neither the sign-up page nor the website itself presents a privacy statement or consent form, and it remains unclear for what purposes the company processes data within the chatbot.

Most of the data collection takes place when using the services. This includes the IP address, browser type, version and settings, date and time of access, and usage information about the interaction with the system. According to the service provider, the usage data collected also includes time zone, country of origin, information about when and for how long the service is used, the user agent and its version, and the device type. ChatGPT also reserves the right to use cookies and analytics, with the stated purpose of providing and improving the services. Nothing is said about which cookies or which analytics tools or providers are used.

During registration, users must enter and confirm their email address and cell phone number. However, this is done for no apparent reason, and users are informed neither of the purpose of this data collection nor of its consequences. It is also unclear where this data is then forwarded. Verified email addresses with confirmed cell phone numbers are particularly attractive to cybercriminals. If an employee discloses this information to OpenAI in a professional context, the employer is not only exposed to cybersecurity risks but may also be liable for the resulting privacy breach.

Another issue is that all servers are located in the USA. When using ChatGPT, one agrees to personal data being sent directly to servers in the USA. Given the general legal problems surrounding data transfers to the USA, the use of ChatGPT is therefore also questionable in terms of data protection. Furthermore, there is no representative in the EU (Art. 27 GDPR). The processing of personal data of EU citizens is therefore not lawful, and requests to exercise rights such as access under Art. 15 GDPR or erasure under Art. 17 GDPR come to nothing.

Additional weaknesses of artificial intelligence

The answers appear quick, easy to understand, and intelligent. However, behind this lies the problem that some of them are simply made up, as researchers at TU Darmstadt have discovered. In addition, ChatGPT draws on training data from 2021, so the results are by no means up-to-date. This is also evident from the fact that the system cannot yet provide any information about the Ukraine war.

In addition, ChatGPT has weaknesses with regard to antisemitic and racist statements, as well as citing incorrect or non-existent sources. This leads to another problem: it is not possible to trace which sources the contributions are based on. Their accuracy therefore cannot be guaranteed, which in turn leads to the dissemination of unverified content.

There are security measures intended to prevent criminal content, but they can be easily circumvented. For example, the software can be coaxed into providing tips on how to create fraudulent content.

Is it advisable to use ChatGPT?

There are still concerns about data security and protection. The data may contain sensitive information, such as personal details, financial data, etc. As with any technology that stores personal information, there is a risk of data breaches or other security issues.

In a private setting, the use of this artificial intelligence is probably a harmless gimmick. Businesses, on the other hand, should think carefully about whether they actually want to use ChatGPT.

Is data protection an issue for you? During a free initial consultation, we can inform you about measures to optimally protect your customer data.
