In the Privacy Soapbox, we give the stage to privacy professionals, guest writers, and opinionated industry members to share their unique points of view, stories, and insights about data privacy. Authors contribute to these articles in their personal capacity. The views expressed are their own and do not necessarily represent the views of Didomi.

Do you have something to share and want to take over the Privacy Soapbox? Get in touch at blog@didomi.io.

Note: This article was originally published on April 25, 2023, on the Yes We Trust blog.

As artificial intelligence (AI) starts to outpace global data protection laws, the shortfalls of fragmented regulation are becoming impossible to ignore. We may be on the precipice of losing control of our privacy for good - and international cooperation is the only answer.

The tipping point

It may sound like the stuff of science fiction - but as life increasingly mimics art in the world of artificial intelligence, it’s hard to shake the feeling that we are approaching a pivotal moment in history. With some experts predicting that AI and robotics will be integrated into almost every aspect of our lives by 2025, time is running out for world leaders to make some critical decisions. What do we really want this new era of technology to look like? Which fundamental rights and values do we need to safeguard - and how?

With so many of these questions still unanswered, alarm bells are finally starting to go off for regulators. Between the Italian data protection authority banning ChatGPT, Canada opening investigations into OpenAI, and rumblings of TikTok bans following shadowy data exchanges with China, the first quarter of 2023 has echoed with the sound of screeching brakes throughout global tech communities.

Meanwhile, just last week, many of those closest to the fold - from Elon Musk to Apple co-founder Steve Wozniak - joined over 1,000 artificial intelligence experts in calling for an immediate six-month pause on the training of AI systems more powerful than GPT-4 until the risks can be properly managed. When even the creators of the monster are worried enough to hit pause, you can’t help but feel that things may not be quite under control.

Regulators are reaching their limit

Despite the red flags popping up across the globe like moles in a game of whack-a-mole, many high-risk AI systems continue to proliferate unchecked in the hands of corporations and governments. Perhaps the most widely publicized recent example is Clearview AI - the facial recognition company that has attracted significant attention for its controversial gathering of online facial images without consent.

From Europe and Australia to parts of the US and Canada, the list of jurisdictions that have declared Clearview’s software to be in breach of data privacy laws is growing. Some have gone as far as to (try to) ban it outright. Why, then, have court orders against Clearview to delete gathered data been so ineffective? How is a company subject to so much regulatory criticism able to continue selling its software to US and Ukrainian military and law enforcement agencies?

These are pressing questions, and the answers signal the need for an urgent change of strategy if global regulators want to maintain control over mass data collection via AI. The starting point is to understand how AI represents a tipping point for privacy law - before taking a closer look at the Clearview case itself and what it can teach us about the current state of AI governance.

As a tangled web of regulatory gaps emerges, it becomes clear that we are still barely speaking the same language when it comes to AI - and that fragmented attempts to regulate a technology that knows no borders are at the heart of the problem. If left unresolved, this impasse between data protection and AI is, at best, a major regulatory challenge - and, at worst, could spell the end of privacy as we know it.

AI - an unprecedented data challenge

It is widely accepted that AI poses an unprecedented challenge to data privacy. At its core, the foundation of every machine learning model is data - often personal data. For an AI system to be commercially useful, it usually demands data collection on a massive scale. Some of the biggest concerns about this level of data processing are, by now, well voiced - perhaps most prominently, the inherent bias in automated systems, which can seriously affect people’s lives.

Whilst these ethical questions are important, they are, for the most part, outside the scope of this article. That is, save for one key point: the most commonly touted solution to AI bias is to obtain bigger datasets. The argument is simple - the more data you input into a system, the more accurate the output will be - and the less it will be skewed in favor of any particular group.

However, it also follows that the more data you collect, the greater the privacy risk, as the degree of separation between data processors and data subjects increases - especially when the processing is automated.

Automated decision-making

“Automated decision-making” is a familiar phrase in privacy law, borrowed from the GDPR. Automated algorithms have featured in our daily lives for some time, in everything from targeted advertising to computer-generated decisions about finance and insurance. In its simplest form, AI effectively represents the next generation of automated data processing - far more powerful and sophisticated, it operates on a level that simply dwarfs anything that has come before it.

Given the natural evolution of primitive automated decision-making into full machine learning, many early regulatory attempts have started from the assumption that existing privacy laws can simply be adapted to apply to AI. There is some element of truth to this - most of the fundamental end goals of ethical AI governance are already established by the GDPR and its global successors.

There is also a clear overlap between AI and basic privacy principles - any automated decision should, as a minimum, have a lawful basis, be fair and secure, and provide accountability to data subjects. Many organizations are already trying to navigate the current regulatory purgatory in this way - for example, by adapting privacy impact assessments to work as “AI impact assessments” as best they can.

However, with systems operating at unprecedented scale and increasingly without human intervention, the reality is that achieving basic privacy outcomes in AI is an incredibly complex exercise. How do you seek meaningful consent for images scraped by the billion from the public domain? How do you explain to data subjects how those images will be used? How do you provide legal recourse if their data rights are violated?

The Clearview case has pushed these questions urgently to the forefront of the AI conversation.

The Clearview case - and why it matters 

State-sanctioned surveillance 

Now infamous in the privacy space, Clearview AI is a US-based company that has received hefty regulatory fines around the world over recent years for its processing of online images through its facial recognition software. The company has, so far, scraped over 20 billion facial images from “open” online sources (by way of context, there are only around 8 billion people on the face of the earth). 

Originally designed to help law enforcement agencies identify criminal suspects, Clearview claims that the more data it collects, the better it gets at identifying human faces with pinpoint accuracy. Having run over a million searches for US law enforcement to date, the company continues to power on - despite huge regulatory fines for violations of data protection laws around the world.

In Europe alone, the company has been hit with collective fines of over 60 million euros from the UK, France, Greece, and Italy - several of these being the maximum financial penalty that can currently be handed down under the GDPR. European regulators are clearly throwing their full weight at the problem, with Canadian, US, and Australian authorities following suit.

A dangerous precedent

Regulatory fines are all well and good - except that these clampdowns have been almost completely ineffective. It is unclear whether most of Clearview’s fines have even been paid, and the company’s position is clear - it cannot, or will not, comply with orders from regulators outside its own jurisdiction to delete images of their citizens.

The response of Clearview to the fine issued by the UK Information Commissioner was unequivocal: 

“...the decision to impose any fine is incorrect as a matter of law. Clearview AI is not subject to the ICO’s jurisdiction, and Clearview AI does no business in the UK at this time.”

- Lee Wolosky of Jenner & Block, Clearview’s legal representatives (Source: The Verge)

As Clearview continues to operate as before, a dangerous precedent is being set. The company is far from the only one harvesting sensitive data for profit. ChatGPT, DALL-E, and countless other AI systems are fed by highly personal online content. The global failure to hold Clearview to account sends a clear message to others - that whatever domestic regulators may say, their current data protection laws are toothless across borders when it comes to AI technology.

A new regulatory landscape 

Although undoubtedly on the back foot, global regulators are far from sitting still on the issue. From China to Europe and Canada, governments are racing to develop their own legislative responses to the novel challenges of machine learning. The European AI Act, hailed by many as the global standard in AI regulation, leads the charge, with the US Blueprint for an AI Bill of Rights shadowing across the Atlantic.

These two draft frameworks offer some insight into the difficulties caused by nations simply “going their own way” on AI - and why the Clearview case may soon be just the tip of the iceberg. 

The European Artificial Intelligence Act

First introduced over two years ago, the EU Artificial Intelligence Act is currently in the final stages of review in the European Parliament. A landmark piece of legislation intended to work alongside the GDPR, the Act takes a top-down approach to AI regulation. Designed to provide sweeping and comprehensive coverage for every product touched by AI technology in both the private and public sectors, it delegates the heavy lifting of interpretation and enforcement down to individual regulators.

With a strong emphasis on protecting innovation, the AI Act creates a tiered risk system, with outright bans on “unacceptable risk” activities, strict regulation of “high-risk” activities, and all remaining activities left relatively unregulated.

The European Commission describes the Act as follows:

"To be future-proof and innovation-friendly, the proposed legal framework is designed to intervene only where this is strictly needed and in a way that minimises the burden for economic operators, with a light governance structure."

- The European Commission (Source: Communication: Fostering a European approach to artificial intelligence)

In short, only the most serious risks are subject to real red tape, with the remainder treated with a relatively soft touch.

The US Blueprint for an AI Bill of Rights

With individual states managing their own data protection laws, the US has no comprehensive federal oversight of privacy matters - and so far, the situation with AI is no different, the only federal movement to date being the release of the “Blueprint for an AI Bill of Rights.” The blueprint is just one small part of the US picture, with California, Illinois, and New York all passing their own sector-specific legislation - particularly with respect to the use of AI in the workplace.

Not legally binding or enforceable, the US blueprint is effectively a principle-based guidance document that, in some ways, mirrors the broader outcomes-based aspects of the AI Act. However, the 76-page draft goes deeper, digging into specific use cases across various sectors and products - including surveillance. It will be followed by further piecemeal guidance and policy documents from separate federal agencies - all of which are vulnerable to reversal or amendment under the next administration.

Why aren’t these laws working?

Of course, the EU and the US are not the only governments producing governance frameworks for AI - similar movements are occurring across the world (most notably in China, where a highly granular approach is being taken, with a raft of individual regulations written for specific AI applications). However, an analysis of the European and US responses is enough to illustrate why it is currently almost impossible to meaningfully regulate the AI activities of companies like Clearview.

Prevention is better than cure

The first - and simplest - reason for the failure of regulatory attempts against Clearview so far has perhaps been best summarized by the company itself. In an email to the UK Information Commissioner in response to an order requiring the deletion of all data belonging to UK citizens, the CEO asserted point-blank that such a thing could not be done:

"It is impossible to determine the residency of a citizen from just a public photo from the open internet. For example, a group photo posted publicly on social media or in a newspaper might not even include the names of the people in the photo, let alone any information that can determine with any level of certainty if that person is a resident of a particular country."

- Hoan Ton-That, Clearview CEO (Source: Time)

This is tough logic to argue against. To make things worse, any effort to identify a person’s residency could presumably result in further privacy violations (for example, running additional algorithms to infer a person’s name or country of origin). In short, if what Clearview is telling us is accurate, it is impossible to reverse an AI-generated data breach against the citizens of any given country - at least, not without committing further breaches that undermine the entire exercise.

Even in the US, where Clearview is officially based, the problem trickles down to the state level. An order to erase statewide citizen data (as has been done in Illinois) is useless if there is no practical way to filter out images of those citizens. It follows that to be effective, any mechanism to compel deletion would have to be globally enforceable. Even if every state and country produced identical orders, it is difficult to see how they could get around the challenge of ascertaining national identity.

In the face of such mind-boggling complexity, it is little surprise that national regulators are already throwing up their hands at the problem. In the UK Clearview case, the Information Commissioner openly admitted that “It’s all one big soup, and frankly, we didn’t pursue that angle.” Not only is this an ominous sign of the international mood on the subject, but it also exposes how ill-equipped regulators are for after-the-event approaches to AI data compliance.

We are not all speaking the same language

With any realistic prospect of remedying AI privacy breaches fading fast, the need for robust and cohesive preventative measures becomes even more vital. Unfortunately, from enforcement approaches down to the language that we use to describe fundamental AI concepts, there are currently substantial differences in the way that the world is approaching AI governance. 

By way of example, the EU and US drafts differ even in their conceptual approach. The EU draft is strictly risk-based, providing only broad principles and leaving the job of specific application to individual regulatory bodies. The US approach, while it too contains broad elements, adds complex “use-case” based guidance designed to target specific applications of AI. Whilst neither approach is necessarily more effective than the other, it is incredibly hard to see how the two will mesh in practice.

The US/EU Trade and Technology Council (TTC) roadmap - effectively a statement of intent towards regulatory cohesion between the two powers - acknowledges many of these issues:

"The EU and United States affirm the importance of a shared understanding and consistent application of concepts and terminology that include, but are not limited to - risk, risk management, risk tolerances, risk perception, and the socio-technical characteristics of trustworthy AI. "

 

- The European Commission (Source: TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management)

Whilst all the right noises are being made, the real work is still to be done - and failure to do it will result in an impenetrable regulatory mess across the Atlantic and beyond.

Lawfulness and the “AI arms race” 

It is hard to overstate the extent to which AI is likely to infiltrate every layer of the global economy over the coming five to ten years. The financial and strategic rewards at stake for those who take control of the market are vast beyond imagination - and it’s a perilous game as global powers jostle for position. Given the financial and political advantages of light-touch regulation, there is a high risk that governments may choose to let law and ethics take a back seat.

In practice, this is unlikely to look like carte blanche for AI companies. Instead, it is far more likely to come in the form of pro-innovation “lip service” regulation which is not, or cannot be, meaningfully enforced. The UK government is already a case in point - in its white paper on artificial intelligence published last month, it openly states an intent to take an adaptable approach to regulating AI:

"However, a heavy-handed and rigid approach can stifle innovation and slow AI adoption. That is why we set out a proportionate and pro-innovation regulatory framework."

- UK Department for Science, Innovation and Technology (Source: A pro-innovation approach to AI regulation)

In light of what is currently playing out in the Clearview case, it is easy to see the kind of trouble this could lead to.

The concept of lawfulness is fundamental to every privacy framework. Under the GDPR, personal data cannot be processed without some lawful basis that protects individual rights - and in most cases, this comes in the form of explicit consent from the data subject. However, as the Clearview case shows us, in the context of mass automated data processing by AI, obtaining meaningful and informed consent from data subjects is becoming all but impossible.

Because most AI companies are well aware of this fact, they tend to claim the only other realistic justification for their activities - public interest. In Clearview’s case, the company has claimed that the public benefit of its technology (the identification and capture of dangerous criminals) justifies the scraping of personal images without consent. This argument has been rejected by multiple European regulators, who hold that any such public interest claim is outweighed by the privacy rights of data subjects. However, as we know, these rulings have not had their desired effect.

The problem once again comes down to international cohesion - rulings like this only work if all regulators draw the same line in the sand. All it takes is for one government to regulate loosely - or, as in the US blueprint, exclude the regulation of facial recognition technology for the purposes of “national security” - and immediately, any efforts elsewhere to enforce meaningful controls are undermined. As of today, the US government continues to use Clearview’s technology with no apparent intention of stopping.

Unless this tension is resolved, any legal “agreements” will amount to little more than lip service. 

The AI Convention - the answer?

Thankfully, the rising chorus of voices clamoring for some form of transnational treaty on AI is not just shouting into the void. In Europe, efforts are being made towards some form of international regulation which could, in theory, be extended beyond the borders of the continent. The Committee on Artificial Intelligence (CAI), set up in 2022, has been working on an International Convention on AI (or AI Treaty) for the past year.

A hugely ambitious undertaking, the convention is intended to create and implement binding principles across the AI landscape to secure the protection of human rights and the rule of law as the technology evolves. Though it began as a European initiative, the hope is that others (including the UK and US) will sign up too.

Will this change the picture for AI regulation? In short - it’s too early to tell. The task is monumental, and serious international diplomacy will be needed to get big players like the US on board (not to mention those on the other side of the globe). Initial discussions point towards coverage of public bodies only, which also limits the scope of the instrument considerably. Finally, progress has been notably slow - although a draft was initially expected by 2024, recent sources suggest that the treaty is on hold pending the finalization of the AI Act.

It feels premature to place too much expectation on this embryonic treaty - but as perhaps the most concrete attempt we have seen so far at international cohesion on AI, it is certainly something to watch closely as the year progresses.

What happens next?

There is no doubt that the Clearview case represents a watershed moment for privacy and AI governance, exposing vast gaps in our existing and upcoming regulatory frameworks. Our current enforcement mechanisms are swiftly becoming all but futile in the face of a technology that does not observe geopolitical borders - and those lining up to replace them are desperately lacking in cohesion.

Without some form of heavyweight, binding transnational agreement or treaty on AI, we are heading into extremely murky waters. As the shadow of political and financial interests looms large, there is a very real risk that we will soon pass the point of no return when it comes to protecting our personal privacy, with serious consequences for individuals and businesses across the globe.

One thing is certain - the time for kicking the can down the road when it comes to AI is well and truly over. For now, we can but watch, wait, and hope that those in power will do the right thing - before it is too late.