Italy’s data protection authority has warned users and providers of artificial intelligence (AI) tools, such as Grok and ChatGPT, over the risks to “fundamental rights and freedoms” posed by AI deepfakes. Meanwhile, a furor in the United Kingdom over Grok creating sexually explicit images of children and women has led to condemnation by the government and to X limiting Grok’s image function to paying customers.
On January 8, the Garante per la Protezione dei Dati Personali, or ‘Garante’—Italy’s data protection watchdog—published a press release warning users of AI-based services that let them generate and share content based on real people’s images or voices, commonly known as deepfakes.
It said that such services “may lead, in addition to potential criminal offenses, without the data subjects’ consent, to serious violations of the fundamental rights and freedoms of the individuals involved.”
Garante singled out several services by name: Grok, ChatGPT, and Clothoff—the latter of which was already temporarily blocked once, in October 2025, for enabling users to create hyper-realistic fake images or videos that depict real people, including minors, in nude or sexually explicit poses.
The data protection authority added that an investigation it launched has revealed that these services, in many cases, “make it extremely easy to misuse images and voices of third parties, without any legal basis.”
As well as warning users about such services, Garante reminded providers “to design, develop, and make available applications and platforms in such a way as to ensure that users can use them in compliance with privacy regulations.”
The Italian authority added that it is already working with its Irish counterpart, the Data Protection Commission (DPC), to combat harmful deepfake use; the Irish watchdog oversees the services provided by X—Elon Musk’s social media platform, which offers the chatbot Grok—as Ireland is the company’s main base of operations in Europe.
Garante said it “reserves the right to take further action” against operators of chatbots that allow harmful deepfakes.
This latest warning comes amid a growing backlash against AI platforms for enabling non-consensual sexualized imagery. The same day the Italian data protection agency published its warning, the European Commission ordered Elon Musk’s X to retain all documents relating to Grok for a longer period while the bloc verifies the platform’s compliance with its rules, following condemnation of the chatbot for producing sexualized images.
“This is saying to a platform, keep your internal documents, don’t get rid of them, because we have doubts about your compliance… and we need to be able to have access to them if we request it explicitly,” said Thomas Regnier, spokesman for tech sovereignty at the European Commission, as reported by Reuters on January 8.
According to the EU’s AI Act, which entered into force on August 1, 2024, AI systems that create synthetic content (like deepfakes) must mark their outputs as artificially generated, while AI systems that manipulate, deceive, or exploit people’s vulnerabilities in ways likely to cause significant harm are prohibited. Sexually explicit or exploitative deepfakes likely fall into the latter category, as was the case with Clothoff.
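For illustration only: the Act’s transparency rule requires a machine-readable indication that content is synthetic, but it does not prescribe a particular mechanism. The sketch below, which assumes Python with the Pillow imaging library and uses invented key names and a hypothetical model label, shows one simple way a provider could embed such a marker in a generated PNG’s metadata.

```python
# Illustrative sketch only. The AI Act requires synthetic content to be marked
# as artificially generated but does not prescribe this mechanism; the key
# names, values, and model label below are hypothetical. Assumes Pillow.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_ai_label(image: Image.Image, path: str, generator: str) -> None:
    """Save a PNG carrying a machine-readable marker that it was AI-generated."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical key name
    meta.add_text("generator", generator)   # which tool produced the image
    image.save(path, pnginfo=meta)


if __name__ == "__main__":
    # A plain gray image stands in for real model output in this example.
    placeholder = Image.new("RGB", (256, 256), color="gray")
    save_with_ai_label(placeholder, "output.png", generator="example-image-model")
    print(Image.open("output.png").text)  # {'ai_generated': 'true', 'generator': ...}
```

In practice, providers are more likely to rely on provenance standards such as C2PA content credentials or invisible watermarks, which are harder to strip, but the principle is the same: the output itself carries a declaration that it is artificially generated.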
Meanwhile, the EU’s Digital Services Act (DSA) imposes obligations on platforms related to transparency and content moderation, as well as holding platforms accountable for the dissemination of illegal and disinformation content.
Regnier recently spoke to the BBC about the issue of AI deepfakes, saying “we don’t want this in the European Union… it’s appalling, it’s disgusting.”
He added that “The Wild West is over in Europe. All companies have the obligation to put their own house in order – and this starts by being responsible and removing illegal content that is being generated by your AI tool.”
Grok in hot water
Meanwhile, Grok has been the source of further unwanted headlines in the U.K., where reports emerged that the AI chatbot’s image function was being used to create and disseminate explicit images of children and of women with their clothes digitally removed.
The U.K.’s communications watchdog, Ofcom, said on January 5 that it had made “urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the U.K.”
This was followed by U.K. Prime Minister Keir Starmer, in an interview with Greatest Hits Radio on Thursday, promising to “take action” on “disgraceful and disgusting” reports around child abuse imagery on Grok AI.
“This is wrong, it’s unlawful, we are not going to tolerate it,” said Starmer. “I have asked for all options to be on the table.”
In response, on January 9, Grok switched off its image creation function for the vast majority of users, posting on X that “image generation and editing are currently limited to paying subscribers.”
However, the U.K. government was less than pleased with this measure; the PM’s spokesperson described the response as weak and insulting.
“[Today’s move] simply turns an AI feature that allows the creation of unlawful images into a premium service. It’s not a solution. In fact, it’s insulting to victims of misogyny and sexual violence,” said the spokesperson, in a briefing later on Friday. “The point here is we must stop these abhorrent images being made on Grok, and we will prioritise action that puts an end to this. As the prime minister said yesterday, it’s disgraceful, it’s disgusting and it’s not to be tolerated.”
The spokesperson reiterated that “all options” were on the table for the government as potential solutions to the problem and, in this regard, gave Ofcom the government’s “full support to take any action it sees fit.”
Source: https://coingeek.com/italy-warns-on-ai-deepfakes-uk-critiques-grok-for-explicit-images/