Sponsored Content

5 top takeaways from the EU AI Act: Is your organisation ready?


By Ravi Patel, AuditBoard Director of Alliances, EMEA

Phased implementation of the EU Artificial Intelligence (AI) Act has started. However, most organisations are still in the early stages of preparedness, and internal audit teams are keen for guidance. What are the key requirements, to whom do they apply, and how can organisations set a path for compliance?

Fortunately, leaders from the front lines of AI implementation and regulation are ready to help. Nishobika Sivakumaran, Responsible AI Manager at Ernst & Young LLP (EY), helps design and implement responsible AI practices across EY globally and for its clients, including in a recently published case study, How Mott MacDonald accelerated responsible AI (EY UK). Frank Heldens, Senior IT Auditor at Achmea, uses AI, data, and technology to drive internal audit innovation. Frank has co-authored several ECIIA papers, including The AI Act: Road to Compliance, A Practical Guide for Internal Auditors. I was privileged to lead an illuminating panel discussion with Nish and Frank, who generously shared their insights and recommendations to help organisations navigate the uncharted territory of EU AI Act compliance. Below are five top takeaways.

1. Even some non-EU companies need to comply

The EU AI Act covers all EU citizens and all AI systems affecting EU citizens, regardless of where those systems are developed. This extraterritorial effect is similar to that of the EU’s General Data Protection Regulation (GDPR), which applies to non-EU organisations if they offer goods or services to, or monitor the behaviour of, EU citizens. Furthermore, as Nish pointed out, “businesses should consider implementing consistent, unified approaches to AI governance across their organisations.” For multinationals with EU operations, that may mean adopting the AI Act’s requirements on a global basis.


2. Most organisations aren’t prepared

An AuditBoard flash poll of 800+ internal audit leaders found that only 11% are mostly prepared and confident in their approach to complying with the Act and managing AI risk, with 38% not prepared at all and 51% partially prepared and in need of guidance. 

Leaders identified a range of top challenges, including defining AI, mapping and identifying AI systems, risk tiering and assessment, establishing ownership and accountability, and integrating AI into existing frameworks. Nish and Frank stressed the importance of these AI governance foundations, including:

       Agreeing on a clear and consistent definition of AI across the organisation. Nish noted, “If you adopt a definition that’s too broad and vague, it can make it really challenging to implement downstream governance requirements because it isn’t clear what AI needs to be monitored, evaluated and assured.”

       Integrating AI Act requirements into existing risk management processes. “A lot of the Act’s requirements are not new. Security is always important. Model performance is key to every model you build, and so on. So it’s really important to make sure AI Act requirements are not bolted on top, but integrated into existing ways of working,” explained Frank. “You need to look at the processes that include AI systems from all relevant risk perspectives.”

       Assessing AI current state and maturity. Nish recommended identifying AI risk and governance gaps by assessing what your organisation already has in place, determining what can be built upon, and prioritising any key gaps.


3. The Act’s role- and risk-based approach dictates varying requirements

The EU AI Act distinguishes the roles of “providers,” those who develop and place AI systems on the EU market or put such systems into service, and “deployers,” the organisations using the AI systems. Different requirements apply to each role, with the majority applying to providers. However, organisations may shift between roles based on how a given AI system is being used, modified, or marketed.

As illustrated below, the Act classifies AI systems into distinct risk-based categories.

[Figure: the EU AI Act’s risk-based classification of AI systems]

Image Credit: ECIIA, "The AI Act: Road to Compliance," 2025

       Only “unacceptable risk” systems, those considered a threat to people’s safety, livelihoods, and rights (e.g., social scoring, manipulative or deceptive AI), are prohibited.

       “High risk” systems have the most stringent requirements around risk management, data governance, quality management, accuracy, robustness, cybersecurity, documentation, recordkeeping, human oversight, and more.

       “Limited risk” systems have transparency obligations primarily focused on ensuring that end-users know they’re interacting with AI.

       “Minimal risk” systems are unregulated and not subject to any specific obligations under the AI Act.

General-purpose AI (GPAI) models have transparency requirements laid out in the EU AI Act, including technical documentation and copyright compliance. More stringent requirements apply to GPAI models posing systemic risk.
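To make the tiering concrete, the sketch below shows one hypothetical way an internal audit team might encode the four risk tiers and map each to a simplified obligation set in Python. The tier names follow the Act, but the obligation lists are condensed illustrations, not legal requirements, and the code is a starting point for an internal register rather than a compliance tool.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # most stringent requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations under the Act

# Simplified, illustrative obligation sets per tier. The Act's actual
# requirements are far more detailed and also vary by role
# (provider vs. deployer); treat these entries as placeholders.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market or used"],
    RiskTier.HIGH: [
        "risk management system",
        "data governance and quality management",
        "technical documentation and recordkeeping",
        "accuracy, robustness, and cybersecurity",
        "human oversight",
    ],
    RiskTier.LIMITED: ["transparency: tell end-users they are interacting with AI"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation set for a given risk tier."""
    return OBLIGATIONS[tier]

for tier in RiskTier:
    print(f"{tier.value}: {obligations_for(tier) or ['no specific obligations']}")
```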

4. Phased implementation has begun

Implementation is proceeding gradually on a timeline extending to 2031, with two key requirements in force as of February 2025:

       “Unacceptable risk” AI systems can no longer be produced, bought, sold, or used in EU markets.

       AI literacy requirements apply, including inventorying and classifying AI systems using the Act’s risk categories, establishing policies to evaluate future AI systems, and ensuring sufficient literacy for staff dealing with AI systems (a minimal sketch of such an inventory follows below). As Frank said, “To have the human in the loop to manage AI risks, education and training really are key.”
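As a hypothetical starting point for that inventorying step, the sketch below records each AI system with its role under the Act, risk tier, and accountable owner, then flags prohibited systems and queues high-risk systems for review. The field names, example entries, and review logic are illustrative assumptions, not prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str       # internal system identifier
    role: str       # "provider" or "deployer" under the Act
    risk_tier: str  # "unacceptable", "high", "limited", or "minimal"
    owner: str      # accountable business owner

def review_inventory(systems: list[AISystem]) -> None:
    """Flag prohibited systems and queue high-risk systems for full review."""
    for s in systems:
        if s.risk_tier == "unacceptable":
            print(f"STOP: {s.name} is prohibited and must be withdrawn")
        elif s.risk_tier == "high":
            print(f"ASSESS: {s.name} ({s.role}) needs a full high-risk review; owner: {s.owner}")
        else:
            print(f"OK: {s.name} - apply transparency checks where applicable")

# Hypothetical entries for illustration only.
review_inventory([
    AISystem("cv-screening-model", "deployer", "high", "HR"),
    AISystem("website-chatbot", "provider", "limited", "Marketing"),
])
```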


5. The AI Act is central to a shifting regulatory landscape

The EU AI Act is the first comprehensive AI regulation issued by a major regulator, but not all regulators are taking the same approach. Individual US states and countries such as South Korea have followed the EU’s example, developing comprehensive regulations. In contrast, countries such as Japan, Australia, and Singapore are taking a softer approach, adopting more voluntary guidelines or principles to promote ethical use of AI. Frank noted the rapid development of regulations and guiding principles across the globe, stressing that keeping up with these developments and complying with all requirements can be a challenge, especially for companies operating globally.

As Nish outlined, the UK’s approach to AI has been shifting. “The UK has taken a pro-innovation approach but signalled its intent to move towards a more unified and stringent AI regulatory framework. This is likely inspired by developments in the EU and globally, alongside advances in AI including generative and agentic AI.” Recognising the massive potential of the AI assurance market, the UK government is positioning the country as a global leader in both AI innovation and AI security. Said Nish, “The goal here is engineered governance, so that we’re not stifling innovation but innovating safely at pace.”

The EU AI Act will not be a static regulation. It will move and change as AI evolves, requiring organisations to adapt accordingly. As Frank and Nish emphasised, the challenge for every jurisdiction — as well as every organisation — is finding a balance between innovation and regulation. Whether or not the EU AI Act applies to your business, make sure it’s on the path to achieving that balance.


About the author

Ravi Patel leads the EMEA Alliances team at AuditBoard, bringing over 15 years of experience in software, technology, and strategic partnerships with a focus on technology’s role in business transformation. His expertise lies in building partner ecosystems that drive innovation and enable businesses to navigate today’s complex technological landscapes.

AuditBoard