
Yes We Trust moves to Didomi

We are excited to share that going forward, Yes We Trust content will be incorporated into Didomi, where we will continue to post relevant, educational content that helps you make sense of data privacy today, including our flagship newsletter and opinion pieces. Thank you for your continued support, and see you there!

    • company-news
    • industry-news

    Published on April 13, 2023; last updated on April 14, 2023

    OpenAI will propose corrective measures regarding ChatGPT ban in Italy

    OpenAI, the maker of ChatGPT, plans to present measures to remedy the problems that led to the chatbot's ban in Italy in the last week of March, the Italian data protection agency, the Garante, said.

    OpenAI, backed by Microsoft Corp, took ChatGPT offline in Italy after the Garante temporarily restricted the service and opened an investigation into an alleged violation of privacy rules.

    In March, the agency accused OpenAI of failing to verify the age of ChatGPT users and "the absence of any legal basis to justify the massive collection and storage of personal data."

    On Thursday, April 6, the agency said it had no intention of curbing the development of AI, but reiterated the importance of respecting rules that protect the personal data of Italian and European citizens.

    In a video conference late on Wednesday, April 5, attended by CEO Sam Altman, OpenAI pledged to be more transparent about how it handles user data and verifies users' ages, the Garante reported.

    The company said it would send Garante a document about the steps it has taken to respond to its requests.

    The data authority said it would evaluate the proposals made by OpenAI. A source familiar with the matter said it would likely take several days to evaluate the contents of the letter.

    On Thursday, April 6, the company published a blog post titled "Our Approach to AI Safety," in which it said it is working to develop "nuanced policies against behaviors that pose a real risk to people."

    "We don't use data to sell our services, advertise or profile people. We use data to make our models more useful to people. ChatGPT, for example, gets better by training more on the conversations people have with it."

    The company said it removes personal information from its datasets whenever possible, refines its models to reject prompts from users requesting such information, and will respond to individual requests to remove their data from its systems.

    "While some of our training data includes personal information available on the public internet, we want our models to learn about the world, not private individuals."

    The ban in Italy has sparked interest among other privacy regulators in Europe, who are weighing the need for stronger measures against chatbots and whether to coordinate such actions.



    Melissa Walehiane

    Content writer at Didomi