The Italian Data Protection Authority has imposed an immediate restriction on OpenAI's processing of user data. The independent administrative authority has simultaneously opened an investigation into OpenAI, the developer of the AI chatbot ChatGPT.
ChatGPT is a widely used conversational artificial intelligence service that simulates and processes human conversation. On March 20, it suffered a data breach that exposed user conversations and the payment details of subscribers to its premium service.
ChatGPT Raises Data Privacy Concerns
In its decision, the Data Protection Authority notes the lack of information provided to users and other data subjects whose data OpenAI collects. More significantly, it highlights the absence of any legal basis justifying the collection and storage of personal data to “train” ChatGPT.
Although OpenAI’s terms state that the service is intended for users aged 13 and older, the Italian Authority points out the lack of age verification. This exposes minors to responses that are unsuitable for their degree of development and self-awareness.
OpenAI does not have a headquarters in the European Union. Still, the firm must report the measures taken in response to the Authority’s demands within 20 days. Failure to do so may result in a fine of up to €20 million or 4% of its annual global turnover, whichever is higher.
The Italian Data Protection Authority’s recent temporary restriction on OpenAI’s ChatGPT coincides with growing concerns about AI systems.
Experts Urge Halt to ‘AI Experiments’
An open letter published by the nonprofit Future of Life Institute calls for a pause in the training of AI systems more powerful than GPT-4. It cites the “profound risks to society and humanity” such systems pose.
Signatories of the letter include Elon Musk, Yuval Noah Harari, Steve Wozniak, Jaan Tallinn, and Andrew Yang, alongside other influential figures in technology and AI research.
While the ChatGPT restriction focuses on data protection and privacy issues, the open letter emphasizes the “out-of-control race” to develop machine learning systems that are unpredictable even to their creators.
The authors of the letter propose a six-month pause on AI development, urging AI labs and independent experts to use this time to establish shared safety protocols for AI design and development. These protocols should be audited and overseen by independent outside experts to ensure that the systems adhering to them are safe beyond a reasonable doubt.
Brian Armstrong, CEO of Coinbase, expressed his disagreement with the idea of pausing AI development. He argued that no “experts” can adjudicate the issue, and reaching a consensus among many disparate actors would be impossible. Armstrong believes that committees and bureaucracy will not provide a solution. Instead, the development of AI technology should continue, as the benefits outweigh the potential risks.
Armstrong emphasized the importance of the marketplace of ideas, which he claims leads to better outcomes than central planning. He urged people not to let fear hinder progress and cautioned against attempts to centralize control in a single authority.
Still, recent events demonstrate growing opposition to the fast-paced development of AI systems without consideration for safety, ethics, and data protection. More regulatory authorities may step in to enforce restrictions, emphasizing the need for responsible deployment of AI.
Source: https://beincrypto.com/italy-halts-chatgpt-elon-musk-pause-ai/