With the passage of the EU AI Act, the European Union has finalized the world's first comprehensive regulatory framework for artificial intelligence.

Significantly, the act will not only affect European companies; it will also have extraterritorial implications for any business offering AI-based products or services on the continent, regardless of where that business is located. For this reason, it's critical to understand the legislation and devise a response strategy.

This is not a trivial task, as the EU AI Act has many moving parts; some uses of AI are prohibited outright, a thicket of regulations covers others, and a penalty scheme has been established for entities that violate these rules. Compliance with the EU AI Act demands an informed approach to data governance and the creation of a single source of truth, among other things.

In this article, we hope to make the task easier by exploring the act's basic framework, the challenges it poses, and how effective data governance and AI-driven automation can help businesses stay ahead.

The Risks of Unregulated AI

AI is a fascinating, powerful technology that has exploded in popularity in recent years; global private investments in GenAI alone skyrocketed from approximately $3 billion in 2022 to $25 billion in 2023. Though its transformative potential is largely seen in a positive light, that same potential has also created the need for clear and consistent legal rules governing its use. From the spread of deepfakes and misinformation to the rising threat of AI-driven cybersecurity attacks, it's clear why so many, including some in the industry itself, have called for guardrails on what AI companies can and cannot do with this technology.

To understand these concerns, it's useful to briefly cover how generative AI systems are trained: a "foundation" model of the kind underlying ChatGPT is created by feeding an algorithm vast amounts of text scraped from the internet, books, and other sources. The model is trained to predict the next word in a sequence (such as "It was love at first sight") and, in doing so, learns patterns in language. Once it's adept enough at that, it can be prompted to perform a staggering variety of text-based tasks.
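
To make that training objective concrete, here is a minimal sketch of next-word prediction. It assumes the open-source Hugging Face transformers library and the small public GPT-2 checkpoint, chosen purely for illustration; the models behind ChatGPT are far larger, but the prediction step works the same way:

```python
# Minimal next-token prediction sketch (assumes the "transformers" and "torch"
# packages are installed and uses the public "gpt2" checkpoint).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "It was love at first"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The highest-scoring candidate for the word that follows the prompt.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode(next_token_id))  # most likely " sight"
```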

One problem, however, is that a model can also learn biases present in its training data. To take a well-known example, if a basic model is given a sentence like "[He/She] went to school to become a _____," it might finish with "nurse" when the sentence begins with "she" and "doctor" when it begins with "he."
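
This kind of probe is easy to reproduce. The sketch below assumes the Hugging Face transformers library and the public bert-base-uncased masked language model (an illustrative choice, not a formal bias audit); it simply prints the model's top completions for the two versions of the sentence:

```python
# Bias probe sketch: compare the model's top completions for "He" vs. "She".
# Assumes the "transformers" package and the public "bert-base-uncased" model.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for pronoun in ("He", "She"):
    sentence = f"{pronoun} went to school to become a [MASK]."
    completions = fill(sentence, top_k=3)
    print(pronoun, [c["token_str"] for c in completions])
```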

Issues can go beyond bias, and AI limitations can manifest in surprising ways. In one well-known incident, ChatGPT was prompted to describe a particular law professor and accused him of grave professional misconduct, despite there being no evidence or record of any such misconduct.

In another case, an engineering team at Samsung got into trouble for pasting proprietary code into ChatGPT, potentially exposing sensitive intellectual property to individuals outside the company, because OpenAI models are sometimes trained on user-provided inputs.

As these stories illustrate, there's a demonstrable need for transparency when using AI, especially when generating content. Equally important is ensuring data accuracy, as flawed or misleading data can lead to unreliable AI outputs and unintended consequences. (The EU AI Act addresses both of these among its provisions.) Selecting a reliable AI system provider is also crucial, as deployers are held accountable for any shortcomings or incidents related to the AI systems they use.

“If you don’t have a solid grasp on how large language models are built, it’s tempting to think of them as almost magical,” says Mark Kettles, Senior Product Marketing Manager, Data & AI Governance and Privacy at Informatica. “But an AI model is only as good as the data it’s trained on, which is why data governance is the cornerstone of using this groundbreaking technology responsibly.”

A final wrinkle that must be addressed is the intersection between the EU AI Act and other regulatory frameworks, such as the GDPR (the EU's data protection and privacy law, in force since 2018), intellectual property laws, and sector-specific regulations governing financial services, healthcare, and other industries. Companies building or using generative AI remain subject to these other requirements, which makes compliance a complex and potentially costly affair; the operational efficiency of regulatory compliance is a key challenge for many organizations. For general-purpose AI models, especially those that may pose systemic risks, compliance requirements include maintaining technical documentation and conducting ongoing evaluations once the models are on the market.

Of course, the judicious use of tools like master data management and data governance software can go a long way toward reducing the burden of compliance, particularly when they automate workflows and functions across environments to create a unified source of truth, as we'll cover below.

Understanding the EU AI Act and Its Risk Tiers

Some find it difficult to grasp the implications of the EU AI Act because of certain of its unique features. The act establishes obligations for the various organizations involved with AI systems on the EU market, and any organization placing AI products on that market is subject to the act, regardless of where it is located. The questions answered below will shed light on these matters.

Additionally, the EU AI Act addresses systemic risks associated with general-purpose AI (GPAI) models. Providers of GPAI models with systemic risk must fulfill additional obligations, including assessing and mitigating those risks, maintaining technical documentation, and ensuring adequate cybersecurity.

What Is the Purpose of the EU AI Act?

In broad strokes, the EU AI Act is intended to ensure that AI systems are:

  • Safe, transparent, and traceable so that users needn't fear toxic or harmful output and interested parties can understand how and why a model took a particular action
  • Non-discriminatory (i.e., that they don't make assumptions about people based on immutable characteristics)
  • Environmentally friendly (to mitigate the major energy demands of training and serving these models)
  • Respectful of existing privacy laws and people's fundamental rights, to avoid undue reputational harm
  • Protective of democratic processes, ensuring that AI deployment does not undermine democratic values and fundamental rights

These are all laudable goals that AI critics and enthusiasts have discussed for years, but now they have the force of law in one of the world's most influential regulatory environments. The EU AI Act defines an AI system as one designed to operate with varying levels of autonomy, pursuing explicit or implicit objectives and inferring from the inputs it receives how to generate outputs such as predictions or recommendations.

What Will the Jurisdictional Impact of the EU AI Act Be?

One major misconception is that this legislation is only relevant to companies based in the European Union. This is not true; the act has potentially global reach, given that it covers non-EU companies whose AI products or services are used in the EU.

More concretely, the EU AI Act can compel non-EU companies to evaluate and adjust their AI strategies and services if they wish to operate in the European market. Enforcement within the EU falls to national competent authorities in each member state, with the European AI Office playing a crucial role in overseeing compliance and fostering voluntary codes of conduct for AI systems, particularly those not deemed high-risk.

Market surveillance authorities are responsible for enforcing compliance: individuals and entities can file complaints regarding noncompliance, and penalties can be imposed for violations, including supplying incorrect or misleading information to regulators.

This is not unlike how the EU's General Data Protection Regulation (GDPR) operates: any company that handles the personal data of users in the EU must comply, regardless of where it is located.

What Are the EU AI Act’s Risk Tiers?

An aspect of the act that needs to be understood clearly is its "risk tiers." The tiers are:

  • Unacceptable: Unacceptable risks include those with the potential for grave, large-scale downsides, such as governments building automated social scoring systems or subtly manipulating people's behavior. These are banned.
  • High: High risks mostly apply to the use of AI in products already regulated in one form or another or to certain kinds of independent AI applications. For example, automated systems for gathering biometric and emotional data, algorithms deployed in energy infrastructure, credit scoring, health insurance risk assessment, and similar sensitive domains would all count as high-risk. This is where the law is most complicated and requires the sharpest expertise to untangle.
  • Limited: Limited risks are generally those related to the use of AI systems to deceive others, such as deepfakes and (some) chatbots.
  • Minimal: Minimal risks are those not covered in the categories above, an example of which might be an overzealous spam filter. Even here, however, it's wise to deploy basic oversight, such as keeping humans in the loop and prioritizing fairness.

Whatever the tier, AI systems must comply with the requirements specific to their risk classification, a structure intended to balance support for innovation with the protection of fundamental rights.
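
As a purely hypothetical illustration of how a team might start mapping its own systems to these tiers, an internal inventory could look something like the sketch below. The tier names come from the act itself; the example systems, field names, and helper code are assumptions for illustration, not part of any official tooling:

```python
# Hypothetical AI-system inventory keyed to the act's four risk tiers.
# The example records below are illustrative only.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # heaviest documentation and oversight duties
    LIMITED = "limited"            # transparency obligations (e.g., disclosing AI use)
    MINIMAL = "minimal"            # basic good-practice oversight

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    tier: RiskTier

inventory = [
    AISystemRecord("credit-scoring-model", "consumer credit decisions", RiskTier.HIGH),
    AISystemRecord("support-chatbot", "customer Q&A with AI disclosure", RiskTier.LIMITED),
    AISystemRecord("spam-filter", "inbox filtering", RiskTier.MINIMAL),
]

for record in inventory:
    print(f"{record.name}: {record.tier.value}")
```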

No matter the details of a particular team's situation, one principle will remain consistent for everyone: this new regulation will be easier to comply with if a product is built around clean, reliable data. This is especially true for firms that deal with sensitive domains like finance, healthcare, or retail.

DORA and the EU AI Act

One more piece of legislation worth mentioning is the Digital Operational Resilience Act (DORA), which took effect in January 2025. It's targeted at shoring up the critical digital infrastructure used by financial companies that operate in Europe, such as insurance companies, investment firms, and banks.

Among other provisions, DORA requires the implementation of certain risk management frameworks, resilience testing, and the sharing of information related to cyber threats. Financial-sector companies already weighing the implications of the EU AI Act should begin thinking about DORA compliance as well.

How to Respond to the EU AI Act

The EU AI Act was published in July 2024 and officially entered into force in August of the same year. Select prohibitions take effect in February 2025, and more requirements will continue to roll out from there. So if your organization is active in Europe, it's time to start preparing if you haven't already. Note that general-purpose AI models, which are designed to handle many distinct tasks, may pose systemic risks that could significantly affect public health, safety, security, or fundamental rights; providers of such models face additional obligations.

Your next concrete steps should be the following (a brief illustrative sketch of how these steps might be tracked appears after the list):

  1. Create a unified data strategy to navigate compliance requirements for general-purpose AI models

  2. Conduct comprehensive risk assessments

  3. Ensure data transparency

  4. Investigate data platforms that can automate and streamline the creation of robust integration solutions
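
As a rough, purely illustrative sketch of how these four steps might be tracked per AI system in a single source of truth, consider something like the following; the field names and example record are assumptions, not prescribed by the act or by any particular platform:

```python
# Hypothetical per-system compliance tracker covering the four steps above.
from dataclasses import dataclass

@dataclass
class ComplianceRecord:
    system_name: str
    data_sources_documented: bool = False    # step 1: unified data strategy
    risk_assessment_completed: bool = False  # step 2: risk assessment
    transparency_notes: str = ""             # step 3: data transparency
    integration_platform: str = ""           # step 4: automation/integration tooling

    def open_items(self) -> list:
        """Return the steps that still need attention for this system."""
        gaps = []
        if not self.data_sources_documented:
            gaps.append("document data sources")
        if not self.risk_assessment_completed:
            gaps.append("complete risk assessment")
        if not self.transparency_notes:
            gaps.append("record transparency notes")
        if not self.integration_platform:
            gaps.append("choose an integration/automation platform")
        return gaps

record = ComplianceRecord("credit-scoring-model", data_sources_documented=True)
print(record.open_items())
```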

“In addition to the challenges of creating advanced AI tools, businesses in this space must now contend with safely deploying and managing their models in accordance with the EU’s strictures,” says Kettles. “That’s not easy, but there are excellent platforms built for that exact purpose, and investing in a good one will lighten the load considerably.”

Complying with the EU AI Act

The EU AI Act represents a major change in the regulatory landscape for AI-focused companies. There has long been a question of how best to deal with the seismic shifts brought about by generative AI, and the EU AI Act is one of the first major answers. Enterprises across the world must now reckon with how to comply with it, and data governance, transparency, and risk assessment will be critical to that undertaking.

While compliance can be challenging, data integration technologies can help reduce the associated burden. Informatica's AI-powered CLAIRE provides a strong foundation for navigating these difficulties by automating data management tasks such as lineage tracking and metadata exploration, streamlining data discovery tasks like searching for assets, and supporting cross-functional work across environments. It can also help ensure transparency and accountability by allowing non-technical users to easily query and understand data through its natural language interface.

To stay ahead of compliance, learn How to Create an Effective Data Governance Strategy for the EU AI Act. For more information, check out Informatica's resources for AI governance and responsible AI.