Italy’s Garante Claims OpenAI’s ChatGPT Violates Data Protection Rules

Italy’s data protection authority, Garante, has escalated its scrutiny of OpenAI’s ChatGPT, asserting that the artificial intelligence chatbot breaches data protection rules. The development follows Garante’s earlier ban on ChatGPT over alleged violations of European Union (EU) privacy regulations, a ban it lifted after OpenAI moved to address its concerns. With fresh objections now raised, OpenAI is preparing its defense.

Garante has been proactive in assessing whether AI platforms comply with the EU’s data privacy regime. Last year, the authority banned ChatGPT over alleged breaches of EU privacy rules, then reinstated the service after OpenAI addressed user consent requirements for the use of personal data in algorithm training. Despite the reactivation, Garante continued its investigation and has now announced that it has identified elements pointing to potential data privacy violations.

In response to Garante’s claims, OpenAI has maintained that its practices are aligned with EU privacy laws. The organization asserted that it actively works to minimize the use of personal data in training its systems, such as ChatGPT. OpenAI also expressed its commitment to cooperate constructively with Garante during the ongoing investigation.

Next steps and legal implications

Garante has given Microsoft-backed OpenAI 30 days to present its defense. This window will be crucial for OpenAI to clarify its data protection practices and demonstrate compliance with EU regulations. Garante’s investigation will also consider the findings and input of a European task force comprising national privacy watchdogs.

The backdrop to this investigation is the EU’s General Data Protection Regulation (GDPR), which took effect in 2018. Under the GDPR, a company found to have violated data protection rules can face fines of up to 4% of its global annual turnover. Garante’s actions make clear that data protection authorities within the EU take violations seriously and are willing to enforce penalties when necessary.

Broader regulatory trends

The ChatGPT case is not isolated; it underscores broader regulatory trends surrounding AI systems in the EU. In December, EU lawmakers and member-state governments agreed on provisional terms for regulating AI systems such as ChatGPT, bringing the bloc closer to comprehensive rules governing the use of AI technology, with a particular focus on safeguarding data privacy and ensuring ethical AI practices.

The ongoing investigation by Italy’s Garante into OpenAI’s ChatGPT highlights the importance of compliance with data protection regulations in the European Union. OpenAI’s willingness to cooperate and to address concerns about its use of personal data will be pivotal to the outcome of the case. The broader regulatory picture, meanwhile, points to a growing emphasis on comprehensive guidelines for AI systems that address both data protection and ethical considerations. As the investigation unfolds, it remains to be seen whether OpenAI can satisfy Garante that ChatGPT complies with EU data privacy rules.

Source: https://www.cryptopolitan.com/chatgpt-violate-data-protection/