
Why the EU AI Act Poses Greater Challenges Than Privacy Laws

In an age bursting with technological advances, the European Union has taken a pioneering step toward shaping the future of Artificial Intelligence (AI) governance. Enter the landmark Artificial Intelligence Act, a comprehensive regulatory framework designed to strike a delicate balance between fostering innovation, protecting fundamental rights, and ensuring ethical AI deployment. As proposed regulations emerge around the globe, they all boil down to four things: transparency, privacy, automated decision-making, and training.

At first glance, the EU AI Act gives the impression that organizations have plenty of time to prepare, but AI compliance and governance have a much larger scope than privacy. Privacy is typically managed by a small, specialized team, while AI touches the entire organization. AI governance and compliance require a cross-departmental approach, scrutinizing systems and technologies company-wide, and involve far more processes than privacy does. With just a year to prepare, your organization will need to start planning for compliance now. Let’s dive into what the EU AI Act itself entails.

Learn More About Truyo’s Comprehensive AI Governance Platform

About the EU Artificial Intelligence Act

  • Assuming ratification, which seems very likely, the EU AI Act will go into effect in 2025.
  • The Act covers everything from generative AI to AI-powered facial recognition surveillance systems (the latter effectively prohibited, with a carve-out for police use).
  • Transparency, training, and automated decision-making were all covered in the 2021 version and remain in the 2023 version.
  • The Act seeks to prohibit social scoring, in which AI classifies people based on sensitive characteristics and personal profiles to manipulate their behavior, an intrusive practice that can lead to discrimination and exclusion.
  • Includes fines for violations of up to €35 million (about $38 million USD) or 7% of a company’s global turnover.
  • The Act will be voted on early next year, subject to refinement, and is likely to have extensive elements around bias, discrimination, privacy, and training.

Guidelines for Generative AI

General-purpose AI (GPAI) systems must adhere to transparency requirements, including providing detailed technical documentation, complying with copyright law, and publishing summaries of the content used for training. Stricter obligations apply to high-impact GPAI models, which must evaluate and mitigate systemic risks.

What’s Prohibited?

The Act bans specific AI applications that pose potential threats to citizens’ rights and democracy. It prohibits systems that categorize individuals based on sensitive characteristics like political beliefs or race, the untargeted scraping of facial images for recognition databases, emotion recognition in workplaces and schools, social scoring systems, AI that manipulates human behavior, and systems that exploit people’s vulnerabilities.

Sanctions and Next Steps

Non-compliance could result in fines ranging from €35 million or 7% of global turnover for the most serious violations down to €7.5 million or 1.5% of turnover for lesser ones. Once the Act is ratified, we expect a penalty trajectory similar to the one seen when GDPR went into effect. In July 2018, GDPR saw one sanction with a fine of €400,000. Six months later, in January 2019, nine sanctions were levied with a combined total of €50,437,276 in fines, and the numbers have grown exponentially since, reaching 1,926 sanctions and €4,418,042,604 in fines as of this month. We anticipate the same progression for EU AI Act sanctions beginning in 2025.

Law Enforcement Safeguards

For law enforcement purposes, the Act sets safeguards for the use of remote biometric identification (RBI) systems in public spaces, requiring prior judicial authorization and limiting “post-remote” and “real-time” RBI to targeted searches related to serious crimes or specific and present terrorist threats.

Obligations for High-Risk Systems

High-risk AI systems, which pose significant potential harm, must undergo fundamental rights impact assessments, a requirement that extends to the insurance and banking sectors. Citizens can lodge complaints about decisions made by these high-risk systems that affect their rights.

Measures for Innovation and SMEs

To support innovation, regulatory sandboxes and real-world testing are promoted for developing AI solutions before market placement.

Global Implications and Impact on Generative AI

The AI Act, while applicable to the EU’s 450 million residents, could influence global AI regulations, as the EU often sets standards in tech directives. The Act notably addresses general-purpose AI systems like ChatGPT, imposing transparency and stricter rules for advanced AI models.

Comparison with Other Countries’ Approaches

The US and China have initiated their own AI regulations. The US focuses on voluntary commitments from tech companies, while China restricts AI for private use but employs facial recognition on a large scale for state purposes.

While the EU’s AI Act aims to steer AI in a human-centric direction, criticisms regarding its scope and efficacy have emerged, with calls for clearer guidelines and stronger regulations. The Act has also sparked debate over how to regulate a developing technology, with French President Emmanuel Macron suggesting that it risks impeding European tech companies and causing them to fall behind innovative companies in the US, UK, and China, where regulation has yet to be finalized.

The above summary is a condensed overview of the EU’s AI Act based on publicly available information and does not encompass all the detailed provisions within the legislation as they are yet to be released. We will continue to update you on information as it becomes available.


Author

Dan Clarke
President, Truyo
December 14, 2023

Let Truyo Be Your Guide Towards Safer AI Adoption

Connect with us today