Italy gives OpenAI first to-do list to lift ChatGPT suspension order

Image credit: STEPHANIE REYNOLDS/AFP/Getty Images

Italy’s data protection watchdog has laid out what OpenAI must do to lift an order against ChatGPT issued late last month — when it said it suspected the AI chatbot service of breaching the EU’s General Data Protection Regulation (GDPR) and ordered the US-based company to stop processing local people’s data.

The EU’s GDPR applies whenever personal data is processed, and there is no doubt that large language models such as OpenAI’s GPT have absorbed vast amounts of it from the public internet in order to train generative AI models that can respond in a human-like way to natural language prompts.

OpenAI responded to the Italian data protection authority’s order by quickly geoblocking access to ChatGPT. In a brief public statement, OpenAI CEO Sam Altman also tweeted confirmation that the company had stopped offering the service in Italy — doing so alongside the usual Big Tech disclaimer that it “believe[s] we comply with all privacy laws.”

Italy’s Garante clearly has a different view.

The short version of the regulator’s new compliance requirements is this. OpenAI must:

- become transparent and publish an information notice detailing its data processing;
- immediately adopt age gating to prevent minors from accessing the technology, and move to more robust age verification measures;
- clarify the legal basis it is claiming for processing people’s data to train its AI (and it cannot rely on performance of a contract — meaning it has to choose between consent and legitimate interests);
- provide ways for users (and non-users) to exercise rights over their personal data, including requesting corrections of misinformation generated about them by ChatGPT (or else having their data deleted);
- give users the ability to object to OpenAI’s processing of their data to train its algorithms; and
- conduct a local awareness campaign to inform Italians that it is processing their information to train its AIs.

The DPA has given OpenAI a deadline of April 30 to get most of this done. (The local radio, TV and internet awareness campaign has a slightly more generous timeline of May 15.)

There’s also a little more time for the additional requirement to migrate from the immediately required (but weak) age gating to a harder-to-bypass age verification system. OpenAI has been given until May 31 to submit a plan for implementing age verification technology to filter out users under 13 (and users aged 13 to 18 who have not obtained parental consent) — with the deadline for having the more robust system in place set for September 30.

In a press release detailing what OpenAI must do to lift the temporary suspension on ChatGPT, ordered two weeks ago when the regulator announced it was launching a formal investigation into suspected GDPR violations, it writes:

OpenAI must comply by April 30 with the measures set out by the Italian SA [supervisory authority] concerning transparency, the rights of data subjects — including users and non-users — and the legal basis of the processing for algorithmic training relying on users’ data. Only in that case will the Italian SA lift its order that placed a temporary limitation on the processing of Italian users’ data, there no longer being the urgency underpinning the order, so that ChatGPT will be available once again from Italy.

To elaborate on each of the required “concrete measures”, the DPA stipulates that the mandatory information notice must describe “the arrangements and logic of the data processing required for the operation of ChatGPT, along with the rights afforded to data subjects (users and non-users)”, adding that it “will have to be easily accessible and placed in such a way as to be read before signing up for the service.”

Users connecting from Italy must be presented with this notice before registering and must also confirm that they are over 18, the regulator further requires. Users who signed up before the DPA’s stop-data-processing order must be shown the notice when they access the reactivated service, and must also be pushed through an age gate to filter out underage users.

On the question of the legal basis for OpenAI’s processing of people’s data to train its algorithms, the Garante has narrowed the available options to two: consent or legitimate interests — stipulating that OpenAI must immediately remove all references to performance of a contract “in line with the [GDPR’s] accountability principle.” (OpenAI’s privacy policy currently cites all three grounds, but appears to lean most heavily on performance of a contract for providing services like ChatGPT.)

“This will be without prejudice to the exercise of the SA’s investigative and enforcement powers in this respect,” it adds — confirming it is reserving judgment on whether the two remaining grounds can lawfully be used for OpenAI’s purposes either.

Additionally, the GDPR gives data subjects a suite of access rights, including the right to rectification or deletion of their personal data. Hence the Italian regulator has also demanded that OpenAI implement tools so that data subjects — meaning both users and non-users — can exercise their rights and get falsehoods the chatbot generates about them corrected. Or, if correcting AI-generated lies about named individuals proves “technically unfeasible”, the DPA stipulates that the company must provide a way for their personal data to be deleted.

“OpenAI will have to make available easily accessible tools to allow non-users to exercise their right to object to the processing of their personal data as relied upon for the operation of the algorithms. The same right will have to be granted to users if legitimate interest is chosen as the legal basis for processing their data,” it adds — referencing another of the rights the GDPR affords data subjects when legitimate interests is relied upon as the legal basis for processing personal data.

All the measures the Garante has announced are provisional, based on its preliminary concerns. And its press release notes that its formal inquiries — “aimed at establishing possible infringements of the legislation” — continue, and could lead it to decide to take “further or different measures if this proves necessary upon completion of the fact-finding exercise under way.”

We reached out to OpenAI for a response, but the company had not responded to our email by press time.
