Strategic value of the EU AI Act
July 2025 | SPOTLIGHT | BOARDROOM INTELLIGENCE
Financier Worldwide Magazine
Artificial intelligence (AI) is a transformational technology. But only if you use it.
Despite much talk about overreaching regulation, the European Union’s Artificial Intelligence Act can double as a practical manual for pushing more AI into business. It can be a way out of the ‘trough of disillusionment’ that firms fall into. That is quite a claim. So how can this work?
Based on experience, some businesses have strategic plans to maximise their use of AI but are falling behind in practice because usage is throttled by concern about risk and governance. For those firms, the Act’s framework can inform how to close the gap, integrating AI use into existing governance, risk and assurance structures. Board-level issues of corporate culture and ethical AI leadership can also be addressed through the framework of the Act.
The EU AI Act is the world’s first comprehensive AI law. Although UK companies are not directly subject to EU law post-Brexit, aligning with the framework of the Act offers strategic advantages.
Overview of the EU AI Act framework and obligations
The Act is logically structured: it introduces a tiered, risk-based approach to AI regulation, with obligations proportional to the level of risk.
Unacceptable-risk AI systems. These are what the Act calls prohibited AI practices: AI applications that threaten safety or fundamental rights. Subliminal techniques that manipulate behaviour to individuals’ disadvantage, exploitative systems targeting vulnerable groups, indiscriminate facial-recognition surveillance and social scoring by governments are all prohibited.
High-risk AI systems. AI applications with significant implications for safety or individual rights, such as in medical devices, hiring decisions, credit scoring and critical infrastructure, are classified as ‘high-risk’. Even for high-risk AI systems, users have only limited obligations: proper use, monitoring and reporting of incidents. They must ensure the AI is used as intended and subject to human oversight.
Providers of high-risk AI systems face much stricter obligations before they can put them on the EU market. They must implement a risk management system and adhere to requirements on data governance, technical documentation, record-keeping, transparency to users, human oversight, accuracy, robustness and cyber security. High-risk AI systems also undergo conformity assessments (often involving external audits or certification) to verify compliance before deployment.
The point here is that it is critical for firms to know where the line between user and provider sits. Most providers are big tech firms. But a user that trains and tunes models needs to take care not to be reclassified as a provider, with the stricter obligations that follow.
Limited-risk AI systems. For certain AI systems that are not classified as high risk, the Act imposes transparency obligations. If an AI system interacts with humans or could be mistaken for a human, such as an AI chatbot or a deepfake generator, customers must be told that they are interacting with AI. AI systems that generate manipulated content, such as AI-generated images or video impersonations, must disclose that the content is AI-generated. The point is to avoid misleading humans, so AI-generated content should come with simple explanations or labels. Limited-risk AI systems do not require prior approval and so are quick to get to market.
Minimal-risk AI systems. Most AI systems, such as AI in spam filters, productivity tools or video games, are in the minimal or low-risk category. The Act does not mandate any new obligations. Firms’ existing governance and compliance systems are sufficient, but of course firms should use AI responsibly.
Making this practical
For UK firms that are users (not providers) of AI systems, the Act’s approach can be turned into a practical list to guide AI use. This is the first stage in adoption – what have you got and how are you using it?
First, identify AI systems used by the company, then maintain logs of those uses to enable traceability.
Second, document the purpose for which each AI model will be used, then determine where and how human intervention is built into all uses.
Third, classify the risk of AI systems on a system-by-system and use-by-use basis, then write a list of the regulatory obligations that apply to each use.
Fourth, determine what data is training the systems, then apply data standards to record that it can be used lawfully and is of high quality.
Fifth, update policies and contracts to accommodate this approach, then update compliance policies and risk management systems to include AI-related risks and AI-specific controls as real-time, ongoing obligations.
Lastly, assign clear AI oversight roles, such as an AI compliance officer, then integrate the role into the firm’s existing governance structure; this is where education of board-level directors on AI risks sits.
All firms have existing risk and governance policies into which they can build these simple points. This is how to start.
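To make this concrete, the sketch below shows one possible way of recording these six points as a simple AI-use register. It is purely illustrative: the field names, risk tiers and example entry are assumptions made for this sketch, not terms taken from the Act.

from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # Risk tiers mirroring the Act's categories
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    # One entry in a firm's AI-use register (illustrative fields only)
    name: str                     # internal name of the system (step one)
    purpose: str                  # documented business purpose (step two)
    risk_tier: RiskTier           # classification per system and use (step three)
    human_oversight: str          # where human intervention is built in (step two)
    training_data_sources: list = field(default_factory=list)  # data provenance (step four)
    obligations: list = field(default_factory=list)            # applicable obligations (step three)
    owner: str = "AI compliance officer"                       # accountable oversight role (step six)

# Hypothetical example entry for a high-risk use
register = [
    AISystemRecord(
        name="cv-screening-assistant",
        purpose="Shortlisting job applications for HR review",
        risk_tier=RiskTier.HIGH,
        human_oversight="Recruiter reviews and can override every recommendation",
        training_data_sources=["licensed historical hiring data"],
        obligations=["use as intended", "monitor outputs", "report incidents"],
    ),
]

for record in register:
    print(f"{record.name}: {record.risk_tier.value} risk, owner: {record.owner}")

Even a structure this simple makes the later steps, updating policies and assigning oversight roles, easier to evidence.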
Alongside this governance work, business teams should be tracking compliance costs, cost savings from using AI, and improvements in business performance and customer experience. This is where the practicalities of making AI pay come in.
Strategic benefits of aligning with the EU AI Act
As UK firms that do not have operations in the EU are not required to comply with the Act, many believe that there is no reason to do so. While strict compliance with the Act would impose unnecessary costs, adopting the policy and practical lessons of the Act is valuable to UK firms. In this author’s view, the most important aspects are those outlined below.
Best practice for algorithmic governance. The more that data gives competitive advantage to firms that have it and use it well, the more that algorithms run the business world. Using them well, both from the perspective of getting the most out of them and ensuring that they do what they are meant to do, is now one of the defining characteristics of successful firms.
So, what does best practice look like?
First, track the data supply chain. Data that trains algorithms should be identifiable and subject to clear rules.
Second, track algorithm development and launch. How algorithms are used should be the subject of transparent practices.
Third, track internal accountability. This should be clear and traceable through documentation, oversight and bias checks.
Fourth, track goals related to algorithms. These may be to reduce errors, improve consistency or optimise business outcomes. They should be clear throughout the chain of algorithm development and launch.
Lastly, be prepared to answer customer and regulator questions. Preparation for current and future regulatory compliance should be built into these systems.
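To show what this kind of traceability can look like in practice, the sketch below records dataset registrations, model releases and objectives as an append-only audit log. Every file name, event type and field here is an illustrative assumption rather than a requirement of the Act or of any standard.

import json
from datetime import datetime, timezone

def log_event(log_path, event_type, **details):
    # Append one audit event as a JSON line (illustrative format only)
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        **details,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

# Hypothetical events covering the tracking points above
LOG = "algorithm_audit.log"
log_event(LOG, "dataset_registered",
          dataset="customer_transactions_2024", licence="internal use permitted",
          quality_check="passed")
log_event(LOG, "model_released",
          model="credit-scoring-v3", trained_on=["customer_transactions_2024"],
          approved_by="model risk committee", bias_check="completed")
log_event(LOG, "objective_recorded",
          model="credit-scoring-v3", goal="reduce manual review errors")

A log of this kind is what allows a firm to answer a customer or regulator question with evidence rather than assertion.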
Framework for responsible AI development and deployment. The benefits of working to a framework that is applied across the organisation are becoming clear as we see the first enforcement actions and failures causing reputational damage.
The headlines for avoiding trouble can, however, be easily summarised as: (i) promoting compliance by design, including assurance and monitoring of AI through internal audits to evaluate AI systems for fairness, transparency and bias; (ii) ensuring rigorous data governance mechanisms; (iii) adopting published technical standards for accuracy, security and explainability, with standards and frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework facilitating structured compliance; (iv) having processes to minimise risks from discrimination, privacy breaches and safety issues; and (v) operationalising ethical AI, the most effective way to build customer trust and market differentiation.
International regulatory alignment and market access. While the UK is not in the EU, the Act is considered an international standard, so voluntary compliance with its principles has benefits for those firms that choose to align with it. For example, it facilitates the ability of firms to access and compete within EU (and global) markets. Additionally, it provides futureproofing against anticipated international AI regulations. It is also attractive to multinational clients that want to work with partners that have high regulatory compliance. Updating incident response and third-party management systems, so that robust AI incident plans are in place within the firm and through its procurement processes, demonstrates a public and structured commitment.
Improved relationships with suppliers, customers and regulators. Soft law is increasingly recognised in business, and AI use is one of the best illustrations of its importance. Using the framework of the Act is a straightforward way to lean into procurement questions, customer concerns and regulator outreach. For example, following the Act strengthens supply chains that use AI by requiring suppliers to meet high standards, builds constructive dialogue with regulators that can lead to lighter-touch regulatory oversight, and positions firms as preferred partners and industry leaders.
Using AI, and specifically using AI in a safe way, is brand enhancing among customers, investors, employees and the public. It is a way to differentiate a firm operating in a sensitive industry, such as healthcare, finance or HR. It is a way to demonstrate leadership in responsible business and therefore to attract talent and partners by showcasing the firm as an ethical and transparent organisation.
The UK
The UK’s approach to AI regulation somewhat mirrors the EU’s AI Act, more so under the current government. Both share an emphasis on safety, transparency, fairness, accountability and mechanisms for redress. Guidance from the Information Commissioner’s Office echoes EU principles on transparency and accountability. Sector-specific watchdogs such as the Financial Conduct Authority, Competition and Markets Authority and medical regulators take an approach that is consistent with the Act, and go further to embed ethical priorities. Firms that align proactively with European standards are likely to satisfy UK regulatory requirements and to surpass them. As policy on AI regulation in the UK evolves, driven by inevitable public scandals, we can expect more, not less, regulation in future. Engaging with a comprehensive rulebook like the Act will likely futureproof firms’ AI operations.
Culture
Corporate culture is the single most significant predictor of how AI is used in a firm. The tone set by the board shapes values across the organisation and is by far the most effective way to create compliance and accountability in AI initiatives. Leading firms have dedicated ethics committees to evaluate the risk of AI deployments, but committees alone cannot change an organisation’s existing culture. Adjusting performance incentives to reward responsible use of AI can lead to change. After all, people do what you pay them to do, not what you tell them to do. Ultimately, companies that cultivate an ethical AI culture are not only reducing compliance risks but also staking out a position in their markets.
Charles Kerrigan is a partner at CMS. He can be contacted on +44 (0)20 7067 3437 or by email: charles.kerrigan@cms-cmno.com.
© Financier Worldwide
BY Charles Kerrigan, CMS