Stewarding AI: governance needs to catch up

SPOTLIGHT | RISK MANAGEMENT

Financier Worldwide Magazine, August 2025 Issue


We are firmly entrenched in the era of intelligent machines. Not long ago, artificial intelligence (AI) was barely a footnote in the working lives of most professionals. But that changed rapidly from late 2022, with the release of powerful generative AI (GenAI) models. These tools not only mimic human language but produce work outputs that rival – and often surpass – those of seasoned professionals.

AI is no longer just an operational assistant; it is a strategic disruptor. As AI becomes central to how work is done, a critical shift is unfolding – one that calls for boards and senior leaders to radically rethink leadership, oversight and responsibility.

The question confronting boardrooms now is urgent and profound: how do leaders guide organisations when machines can make decisions, design strategy and even challenge human authority?

AI’s surge into the mainstream

The velocity of AI’s progress is striking. In late 2022, GPT-3.5, the model behind the original ChatGPT, could not pass basic accounting exams. By early 2023, GPT-4 was outperforming human candidates in certified public accountant and certified management accountant assessments. By some estimates, AI could automate up to 60 percent of the tasks performed by degree-holding professionals – and perhaps up to 98 percent by 2030.

This is not just about efficiency. It is a redefinition of professional excellence. For boards and executive leaders, the implications are existential. Competence, judgment and foresight must be re-evaluated in light of what machines can do.

The human-AI performance gap

Human thought operates at around 10 bits per second, while our senses absorb billions of bits per second; a typical Wi-Fi connection alone transfers data at 50 million bits per second – roughly 5 million times the bandwidth of conscious thought. AI systems, by contrast, analyse immense data sets in parallel, rendering decisions in milliseconds. In chess, for example, a grandmaster evaluates a handful of possible future moves; an AI engine considers millions – simultaneously.

This raw processing power does not just change what AI can do; it alters how people must work alongside it. The real challenge is not learning to use the tools; it is adapting the very structure of human cognition and decision making to engage meaningfully with machines that ‘think’ at speeds we cannot.

Preparing for an AI-infused future

Upskilling is necessary, but insufficient. AI-enhanced work demands more than learning how to prompt a model or query a system. It requires a deep recalibration of how professionals approach communication, leadership, ethics and adaptability.

Writing, for instance, is evolving, rather than disappearing. In an AI world, writing must be strategic, purposeful and ethical. Leaders must use writing not just to convey information, but to inspire, navigate ambiguity and build trust in machine-mediated communication.

A new leadership test – ethics, vision and responsibility

Boards are now facing a fundamental test of their leadership. As AI becomes embedded across business functions, from supply chain optimisation through to marketing analytics and strategic forecasting, oversight cannot be an afterthought.

Ethical stewardship is no longer a ‘nice to have’; it is a business imperative. This begins with data privacy. Boards must be accountable for how customer and employee data is collected, used and protected. It extends to algorithmic bias, which can skew decisions in recruitment, lending and service provision. And it includes the impact on jobs, culture and employee relationships.

Oversight in the AI age demands moral courage and strategic clarity. Boards cannot just be technology-aware – they must be ethically grounded and future-oriented. Boards must ensure that AI governance incorporates a clear framework for ethical evaluation, whether through virtue ethics (acting from good character), deontology (following rules and duties) or consequentialism (judging actions by their outcomes). Decisions must align with the organisation’s purpose, stakeholder interests and societal values.

The regulatory void

Despite AI’s pervasive impact, regulation has lagged. With the exception of targeted rules for autonomous vehicles and China’s pioneering AI laws of 2023, most jurisdictions remain unprepared. Yet regulation is essential – not to stifle innovation, but to protect human dignity, a core value that underpins democratic societies. AI’s capacity for autonomous decision making and unpredictability places it beyond the scope of traditional regulatory models designed for static IT systems.

AI is not just another digital tool. Its autonomy and opacity mean we must rethink the foundations of how we regulate, evaluate risk and assign responsibility. Key regulatory challenges include: (i) foreseeability – AI’s unpredictable behaviour can lead to unintended consequences; (ii) control – systems may act beyond the authority of their developers or legal owners; (iii) modularity – AI components can be developed by dispersed actors, limiting oversight; and (iv) opacity – regulators often lack visibility into AI systems’ inner workings.

The combined effect is a ‘fallibility gap’, a space in which AI decisions can go unanticipated, unregulated and unaccounted for. Without adequate safeguards, we risk outsourcing critical decisions to systems that cannot be questioned or held responsible.

Boards must lead on ethics

In the face of regulatory delay, boards must act pre-emptively. Leadership, especially at the board level, must model ethical engagement with AI technologies. This includes creating a culture where AI’s decisions are explainable; ensuring AI benefits are shared, not hoarded by the few; designing AI that works for everyone – across age, ability and demographic; and assigning clear ownership for AI outcomes, even when they are automated.

Boards should push for internal ethical standards that exceed existing laws. Think of it as a corporate conscience: self-regulation guided by values, not just profit. Employees need reassurance that AI will not be used simply to replace them, but to empower them and extend their potential.

Workplace culture in an AI era

AI changes more than tasks – it changes relationships. Trust, the currency of organisational cohesion, can erode when expectations are not met. The introduction of AI into workflows brings new psychological dynamics. Who is the decision maker – the manager or the algorithm? Can data be trusted? Will AI impact career trajectories?

The informal, often unspoken expectations employees have of their employers are at risk of fracturing. If these are breached, performance suffers and disengagement rises. Boards must recognise AI as a participant in this ecosystem and ensure that AI-enhanced workplaces are human-centric.

Visionary leadership: human values in a machine world

The irony of AI’s rise is that it requires more humanity in leadership, not less. As machines automate logic, what remains profoundly human are empathy, creativity, judgment and purpose. Boards must encourage these traits – not just in the C-suite, but throughout the organisation.

Leadership in the AI era should embrace understanding employee anxieties and aspirations; encouraging innovative problem solving beyond the algorithm; facilitating human-AI teams that amplify, not replace, people; and investing in skills, values and wellbeing, not just efficiency.

AI has enormous potential to elevate performance. But this only happens when systems are deployed ethically, relationships are nurtured, and leadership is grounded in trust and vision.

Policy and collaboration: a call for global governance

AI’s global nature complicates national regulation. No single government can regulate AI in isolation, and failure to act collectively leaves humanity vulnerable to systemic risks, from economic inequality to algorithmic injustice.

Legislatures offer democratic legitimacy; agencies provide technical expertise. But both must work together – and internationally – to build robust frameworks that protect people and foster innovation.

The regulatory goal should be to preserve human dignity – ensuring that as AI expands, we do not reduce individuals to mere data points. Our capacity for autonomy, moral agency and self-worth must remain intact.

Are boards ready?

GenAI is not a passing trend; it is a fundamental transformation. Boards that understand this will lead with both clarity and conscience. That means building AI strategies that are inclusive, transparent and ethically anchored. It means empowering teams, not replacing them. And it means shaping a digital future where humans and machines coexist – not in competition, but in collaboration.

So, the question stands: are today’s boards ready for this new reality? The answer will define not just the success of individual organisations, but the integrity and resilience of the society we are building.

 

Andrew Kakabadse is professor of governance & leadership and Nada Kakabadse is professor of policy, governance and ethics at Henley Business School. Mr Kakabadse can be contacted on +44 (0)1491 418 776 or by email: a.kakabadse@henley.ac.uk. Ms Kakabadse can be contacted on +44 (0)1491 418 786 or by email: n.kakabadse@henley.ac.uk.

© Financier Worldwide



