
Italy Bans ChatGPT Temporarily, Investigates OpenAI’s Data Practices


Italy’s national data protection authority has imposed a temporary ban on ChatGPT, the widely used artificial intelligence service developed by OpenAI and backed by Microsoft. The decision follows an investigation into OpenAI’s collection of personal information from users in Italy.

The regulatory action was prompted by a cybersecurity breach last week that exposed user conversations and certain payment details. For a period of nine hours, the exposed information included names, billing addresses, credit card types, expiration dates, and the last four digits of credit card numbers. OpenAI disclosed these details in an email to a customer affected by the breach, as reported by the Financial Times.

OpenAI, led by CEO Sam Altman, has been given 20 days to respond to the ban and demonstrate the measures it has taken to address these concerns. Should the company fail to comply within this timeframe, it may face a fine of up to €20 million or 4% of its global annual turnover, the maximum penalty under the EU’s General Data Protection Regulation (GDPR).

This marks the first instance of regulatory action against the popular chatbot, as authorities worldwide grapple with the increasing prevalence of generative AI services. Industry experts have raised concerns over the vast amounts of data collected by language models like ChatGPT. Within two months of its launch, OpenAI reported over 100 million monthly active users. Microsoft’s Bing search engine, which also utilises OpenAI technology, garnered more than 1 million users in 169 countries within two weeks of its January release.

OpenAI has previously claimed to have addressed cybersecurity issues related to information leaks. However, during the ongoing investigation, the company is prohibited from processing data from Italian users via ChatGPT.

The Italian regulator initiated the investigation, citing the lack of a legal basis for the large-scale collection and storage of personal data used to train the algorithms underlying ChatGPT. The regulator also noted that ChatGPT does not always provide accurate information, which can result in personal data being processed inaccurately.

Furthermore, the regulator criticised OpenAI for lacking any age-verification filter to keep children under 13 off the service. The watchdog asserted that underage users were being exposed to content and answers unsuitable for their degree of development and self-awareness.

This development comes as prominent figures like Elon Musk and AI pioneer Yoshua Bengio call for a six-month moratorium on the development of systems more powerful than the recently launched GPT-4, citing significant societal risks. Some experts argue that this call is hypocritical, aimed at allowing AI “laggards” to catch up with OpenAI while major tech companies aggressively compete to release AI products like ChatGPT and Google’s Bard.

Currently, generative AI technologies are governed by existing data and digital laws such as the GDPR and the Digital Services Act. However, the European Union is preparing dedicated legislation, the AI Act, to regulate AI usage in Europe, with potential fines of up to €30 million or 6% of global annual turnover for companies that violate its provisions.




