Your Privacy Hub

Yes We Trust moves to Didomi

We are excited to share that, going forward, Yes We Trust content will be incorporated into Didomi, where we will continue to post relevant, educational content that helps you make sense of data privacy today, including our flagship newsletter and opinion pieces. Thank you for your continued support, and see you there!

    • company-news
    • industry-news

    Published on June 15, 2023; last updated on August 9, 2023

    Europe's AI Act moves forward

    Earlier this week, European lawmakers approved what could be one of the first major laws to regulate Artificial Intelligence (AI). The European Parliament passed a draft for the AI Act, a law that would implement new restrictions on AI technologies.

    Note: On August 29th, join Didomi for a webinar about AI and Data Privacy, where speakers will discuss the AI landscape, the impact of AI on data privacy, and how to make safe use of AI for your business.

    In a press release issued last month, the European Parliament outlined the objectives of the Act and the reasoning behind the effort to regulate AI technologies, while reassuring businesses about potential consequences for technological progress in the field:

    "Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy and safe. We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement."

    - Dragos Tudorache, Co-rapporteur - Renew, Romania (source: European Parliament)

    As it currently stands, the Act would take a "risk-based" approach, with lawmakers defining restrictions based on how dangerous an application of AI technology could be. Specific applications cited include fake news, facial recognition software, and uses in law enforcement contexts.

    Earlier this year, writer Sarah Barker took to the Privacy Soapbox to discuss international cooperation on artificial intelligence, highlighting the recent case of Clearview AI, the facial recognition software company hit with over 60 million euros in collective fines from the UK, France, Greece, and Italy for unlawful processing of personal data and failure to effectively respect the rights of individuals:

    "As Clearview continues its operating methods, a dangerous precedent is being set. The company is far from the only one harvesting sensitive data for profit. Chat GPT, Dall-E and countless other AI systems are fed by highly personal online content. The global failure to bring Clearview to account sends a clear message to others - that whatever domestic regulators may say, their current data protection laws are toothless across borders when it comes to AI technology."

    - Sarah Barker, A pivotal moment: the case for urgent international cooperation on AI (Source: Yes We Trust)

    Do you think the AI Act is the answer? After the vote earlier this week, a final version of the law will now be negotiated, with regulators saying they hope to reach a final agreement by the end of the year.

    Continue the conversation in the Yes We Trust community

    Yes We Trust