The EU AI Act’s Compliance Challenges
Author: Marco Saywell
This memo is a brief breakdown of the impact of the EU AI Act (the ‘Act’) on compliance, drawing on the opinions of industry leaders I met at the Legal Technology Conference. It focuses on where the Act may offer businesses providing compliance services (‘compliance companies’) the greatest opportunities to provide value to their customers.
EU AI Act Definitions – uncertainty of scope
The Act defines an AI system as a machine-based system designed to operate with varying degrees of autonomy. Generative AI, however, is not dealt with under this definition alone: its release caught the EU off guard, leading to a separate category of rules specifically for general-purpose AI models (GPAIs). The definitions are broad, but the breadth of the systems that will actually be caught is disputed. Industry leaders argue that the definition of an AI system is too broad and that the GPAI definition is flawed on a technical level, so a literal reading of these technical definitions seems dubious. That breadth, and the resulting uncertainty over which systems the Act covers, may nonetheless create opportunities for compliance companies.
Risk-Based Categorisation – uncertainty of categorisation
AI systems placed on the EU market or put into service in the EU, whether supplied from a third country or by providers established in the EU, are categorised as unacceptable risk, high risk, limited risk, or minimal risk.
Unacceptable-risk systems cover a list of generally invasive or harmful uses and are banned, with some exceptions.
High-risk systems are not banned but regulated. They fall into two groups: systems intended for use as products, or as safety components of products, covered by EU product safety legislation requiring conformity assessments; and systems intended for a listed set of high-risk uses, unless the use is limited to narrow procedural steps or simply automates existing tasks. These requirements have also produced uncertainty as to what falls under ‘high-risk systems’, again creating opportunities for compliance companies because the scope of the risk categories is unclear.
Minimal/no risk systems include every other system and are exempt from regulations. While the EU has stated that it expects most AI systems to fall within this category, industry leaders seem to doubt this.
GPAIs are categorised as either GPAI models or GPAI models with systemic risk. ‘Systemic-risk’ GPAIs are those with ‘high-impact capabilities’ whose effects can propagate at scale across the value chain and have a significant impact on the EU market (in practice, advanced systems somewhat more powerful than ChatGPT).
Some uses, such as for key national security purposes, are exempt from this regulation.
A GPAI or AI system may fall within multiple risk categories and must comply accordingly.
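To make the categorisation concrete for a compliance product, the minimal Python sketch below shows one way the risk tiers, the separate GPAI track, and the point that a single system may sit in several categories at once could be modelled. It is an illustration only, under my own assumptions; the class names, enum values, and example system are not terms defined in the Act.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class RiskTier(Enum):
    """The Act's four risk tiers, plus the separate GPAI track (illustrative labels)."""
    UNACCEPTABLE = auto()   # banned uses, with some exceptions
    HIGH = auto()           # regulated products / listed high-risk uses
    LIMITED = auto()        # transparency obligations only
    MINIMAL = auto()        # exempt from the Act's requirements
    GPAI = auto()           # general-purpose AI model obligations
    GPAI_SYSTEMIC = auto()  # GPAI model with systemic risk


@dataclass
class AISystem:
    """Hypothetical record a compliance tool might keep per system."""
    name: str
    tiers: set[RiskTier] = field(default_factory=set)

    def obligations(self) -> set[RiskTier]:
        # A system can sit in several tiers at once and must satisfy the
        # requirements of each (e.g. a GPAI embedded in a limited-risk chatbot).
        return self.tiers


chatbot = AISystem("customer-service assistant", {RiskTier.LIMITED, RiskTier.GPAI})
print(sorted(t.name for t in chatbot.obligations()))  # ['GPAI', 'LIMITED']
```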
Compliance Structure
The compliance requirements for each of these categories depend on the actor’s role: provider, deployer, importer, distributor, or other operator.
An actor may fall under multiple roles; for example, a provider may also be a deployer if it develops the AI and integrates it into its own company’s service. However, a deployer that modifies, rather than develops, another provider’s AI is unlikely to be classified as a provider unless it puts its trademark on the AI or places it on the market. What types of changes deployers can make without being treated as developing the AI system, and thereby becoming providers, remains unclear.
Here is an overview of the compliance requirements relevant to compliance companies (a simple illustrative mapping of roles to obligations is sketched after the list):
Operators of “high-risk” AI systems will have to comply with specific requirements, along with “technical requirements”.
The “technical requirements” include risk management systems, data governance practices, technical documentation, record-keeping logs, information for deployers, human oversight, and appropriate cybersecurity measures.
The specific requirements include packaging requirements, conformity assessments, registration, management systems, storage time frames, procedures for EU AI Act violations, compliance with reasoned information requests, adherence to accessibility legislation, and a monitoring system for deployers. There are additional requirements for providers established outside the EU.
Deployers of “high-risk” AI systems will only have to comply with their own specific requirements. These involve putting in place appropriate technical and organisational measures to ensure the AI system is used in accordance with the provider’s instructions, including inputting only data relevant to the AI’s intended use. Deployers must also monitor the AI system and inform the provider through its monitoring system, as well as follow fundamental rights risk declaration procedures and automatic log storage requirements. There are additional registration and impact assessment requirements for specific uses by public or private bodies.
Distributors of “high-risk” AI systems must verify that the system is properly marked, packaged, and accompanied by clear instructions for use.
Importers of “high-risk” AI systems must verify that the system bears the required marking, has the technical documentation and a completed conformity assessment (with storage requirements), meets the transparency requirements, and that the provider has appointed an authorised representative.
Providers of “limited-risk” AI systems that interact with humans must inform the human users that they are interacting with an AI system. Generative systems (e.g. text generators or deepfakes) must do the same for their outputs.
Deployers of “limited-risk” AI systems for emotion recognition or biometric categorisation must inform the individuals exposed to the system. Generative systems, especially those used for public-interest matters, must inform users of the artificial origins of the outputs.
Providers of all “GPAI models” must compile technical documentation, follow transparency requirements towards other providers integrating their GPAIs, and put in place a copyright-compliance policy. There are additional training-data transparency requirements for paid or non-open-source GPAIs.
Providers of “GPAI models with systemic risk” must notify the Commission when the risk arises, mitigate the risk, report and document any serious incidents, and provide cybersecurity protections.
All users of AI systems are encouraged to ensure a sufficient and appropriate level of AI literacy among both staff and system users. The Commission also encourages organisations to sign up to three voluntary pledges.
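As a rough illustration of how a compliance service might operationalise the overview above, the Python sketch below maps (role, risk tier) pairs to simplified obligation checklists. The entries merely paraphrase this memo’s summary and are not an exhaustive statement of the legal requirements; the data structure and function names are my own assumptions.

```python
# Illustrative mapping of roles and risk tiers to simplified obligation checklists.
CHECKLISTS: dict[tuple[str, str], list[str]] = {
    ("operator", "high"): [
        "risk management system and data governance practices",
        "technical documentation and record-keeping logs",
        "information for deployers, human oversight, cybersecurity",
        "conformity assessment, registration, monitoring system",
    ],
    ("deployer", "high"): [
        "use the system per the provider's instructions",
        "input only data relevant to the intended use",
        "monitor the system and report to the provider",
        "store automatic logs",
    ],
    ("importer", "high"): [
        "verify marking, technical documentation, conformity assessment",
        "verify an authorised representative has been appointed",
    ],
    ("distributor", "high"): [
        "verify marking, packaging and clear instructions for use",
    ],
    ("provider", "limited"): [
        "tell users they are interacting with an AI system",
        "label generated or deepfake outputs",
    ],
    ("provider", "gpai"): [
        "technical documentation",
        "transparency towards downstream providers",
        "copyright-compliance policy",
    ],
}


def checklist(role: str, tier: str) -> list[str]:
    """Return the illustrative checklist for a role/tier pair, or an empty list."""
    return CHECKLISTS.get((role.lower(), tier.lower()), [])


for item in checklist("deployer", "high"):
    print("-", item)
```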
The compliance obligations apply from 2 August 2026, with some exceptions.
EU AI Act and the GDPR
AI systems and GPAIs will also be subject to industry-specific and broader legislation, such as the GDPR, and the Act will interact with these existing regulations. For example, OpenAI has already faced compliance issues regarding the GDPR’s principle of data minimisation.
The guidelines for the Act have been delayed and are now expected to be released in August 2025. This memo has so far outlined multiple points of uncertainty in the Act that the guidelines may clear up. Unlike the GDPR, the Act is not a principles-based regulation but a highly prescriptive one, so clarification from the guidelines is essential.
In the face of uncertainty surrounding the interpretation of the Act, industry leaders suggested that constructing a compliance system that provides a good-faith interpretation of the legislation, with guidance on the finer details from other legislative systems, is the optimal approach.
The fines for violating the EU AI Act depend on the severity of the breach, ranging from €7.5 million (or 1% of global turnover) up to €35 million (or 7% of global turnover), whichever is higher. Whether these fines will be applied in a ‘fine first, ask questions later’ manner, as with the GDPR, or through a more restrained approach is unclear. There will be more opportunities for compliance companies if the EU AI Act fines operate differently from the GDPR, because pre-established GDPR audit services would have to be modified.
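For a sense of scale, each maximum fine is the higher of a fixed cap and a share of worldwide annual turnover, so the applicable ceiling can be sketched as follows (the turnover figure is a made-up example and the calculation is illustrative only):

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Maximum fine: the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)


# Hypothetical company with EUR 2bn worldwide annual turnover.
turnover = 2_000_000_000
print(fine_ceiling(turnover, 35_000_000, 0.07))  # most serious breaches -> 140,000,000.0
print(fine_ceiling(turnover, 7_500_000, 0.01))   # least serious breaches -> 20,000,000.0
```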
Potential Customers
While there is a lot of buzz in the legal industry about the Act, the handful of legal tech companies I asked, such as the industry-leading platform “Harvey”, have not been considering the Act’s impact on their business. This suggests that there are likely many companies that could be approached to take on the Company’s compliance services without having been previously contacted about the matter.