Italy’s data protection authority has told OpenAI that its artificial intelligence chatbot application ChatGPT breaches data protection rules, the watchdog said on Monday as it presses ahead with an investigation started last year.
The authority, known as Garante, is one of the European Union’s most proactive in assessing AI platform compliance with the bloc’s data privacy regime. Last year it banned ChatGPT over alleged breaches of EU privacy rules.
The service was subsequently reactivated after OpenAI addressed issues concerning, among other things, the right of users to decline to consent to its use of personal data to train its algorithms.
At the time, the regulator said it would continue its investigations. It has since concluded that there are elements indicating one or more potential data privacy violations, it said in a statement, without providing further detail.
OpenAI did not immediately respond to a request for comment.
The Garante on Monday said that Microsoft-backed OpenAI has 30 days to present defence arguments, adding that its investigation would take into account work done by a European task force comprising national privacy watchdogs.
Italy was the first Western European country to curb ChatGPT, whose rapid development has attracted the attention of lawmakers and regulators.
Under the EU’s General Data Protection Regulation (GDPR) introduced in 2018, any company found in breach of the rules faces fines of up to 4% of its global turnover.
In December EU lawmakers and governments agreed provisional terms for regulating AI systems such as ChatGPT, moving a step closer to setting rules governing the technology.