Speed without certainty: compliance management in an AI world
August 2025 | SPOTLIGHT | RISK MANAGEMENT
Financier Worldwide Magazine
While enforcement practice differs globally, many regulators have retooled their playbooks – penalties linked to sustained risk management and control failures now dwarf those for one-off technical lapses in most major jurisdictions.
Questions once settled by pointing to a statute are increasingly answered by pointing to the firm’s governance, risk and compliance frameworks. A single technical lapse may be defensible, but repeated lapses suggest an organisation never truly understood its duties in the first place.
Into this climate marches artificial intelligence (AI) and software that digests entire legal libraries, drafts policies in seconds and promises to ‘map controls automatically’. The allure is often irresistible – particularly to overextended compliance teams. Yet experience shows that the promise can turn quickly to peril.
A useful way to look at it: when a graduate joins a law firm, no one expects that individual to argue a complex enforcement matter on their first day. A prudent manager limits their scope, checks every output and slowly expands their responsibility once their reliability is proven.
Should AI be treated the same way? On present evidence, while it excels at gathering information and arranging it in coherent language, it lacks the insight, judgement and experience that seasoned professionals develop over decades. Organisations that elevate AI directly to partner level will court disappointment; but those that supervise it like a talented intern are already harvesting efficiencies without diluting accountability.
The technology is powerful at gathering information and reformatting it. But while it may appear competent at first glance, it is often grossly unreliable on nuance, context and judgement, and those are precisely the qualities that regulators scrutinise when deciding whether a firm took reasonable steps to prevent misconduct or systemic failures in compliance.
Back to source
Most organisations claim they ‘know the law’, but after a breach, root-cause reviews repeatedly show the opposite: obligations were misread, updated guidance slipped past unnoticed or the language remained so dense that operational staff could not translate it into tangible action.
Compliance specialists respond by converting complex legislation into plain-language obligation libraries – turning ‘section 74 of the Gaming Act’ into ‘verify age before entry’. That may sound straightforward, but it rarely is: interpreting the law requires a nuanced understanding of how it is applied in practice.
A case in point involves a global gaming industry operator that was struggling to identify its frontline control against under-age patrons. The obligation register auto-prepared by a third-party vendor using AI-assisted logic mentioned the possibility of electronic ID scanners or pricey biometric gates. In reality, its bouncers (seven feet tall and stationed at every entrance) were effectively the frontline control; the human element had vanished in translation because the model could not ‘see’ guards.
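To make the point concrete, the sketch below shows what a single obligation-register entry might record so that a human control is captured explicitly rather than lost in translation. The field names and values are hypothetical illustrations, not the operator’s actual register or any vendor’s data model.

```python
# Purely illustrative: a minimal obligation-register entry that records the
# plain-language obligation alongside its frontline control, including a
# human control. Field names and values are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Obligation:
    reference: str                       # statutory source, e.g. a section of a gaming act
    plain_language: str                  # what frontline staff must actually do
    frontline_controls: list[str] = field(default_factory=list)
    control_type: str = "unspecified"    # e.g. "human", "technological", "hybrid"


entry = Obligation(
    reference="Gaming Act s 74 (illustrative citation)",
    plain_language="Verify age before entry to the gaming floor.",
    frontline_controls=["Door staff check ID at every entrance"],
    control_type="human",
)

print(entry)
```

Recording the control type alongside the obligation makes it harder for a purely technological answer to displace the door staff who actually perform the check.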
Compliance stands at the intersection of the rules (laws about what we want people to do) and human behaviour (how people actually behave in the real world). For now, unless it is fed behavioural data overlaid with domain-specific context (for example, how patrons actually gain entry to a casino or gambling venue), AI’s use is (and should be) limited.
Some vendors market AI platforms that ingest thousands of regulations and claim to output a ready-made map of obligations and related controls. In trials, however, the systems produce lists that look plausible yet are riddled with omissions, errors and contradictions because the law is more complex than probabilistic models assume.
Furthermore, translating legislation into obligations that can be understood and implemented at the frontline, and then identifying effective, fit-for-purpose controls, requires both legal and risk experience and expertise. No probabilistic model can divine those variables from statutory text and organisational documents alone.
The allure of automation grows with organisational complexity, but today’s compliance functions must still do the interpretive work if they are not to end up explaining why their shiny new tools fail a basic reality check.
A multinational organisation wrestling with niche, highly specialised rules recently invited proposals from several vendors. One bid emphasised curated obligation libraries, legal and risk oversight and ‘incremental AI assistance’. Another promised a single platform that would ‘write all obligations, map all controls and keep them updated automatically’. During due diligence, however, the platform’s developers conceded they could guarantee speed – but not accuracy.
They offered no assurance that the tool’s recommendations would satisfy a regulator asking the reasonable-steps question. The organisation’s compliance leaders pressed harder: would the vendor underwrite the output? The answer was no. The core value proposition collapsed, and with it the illusion that AI without expert oversight can placate supervisory bodies.
Large language models operate by statistical inference, which means they can produce authoritative-sounding outputs citing authorities that do not exist. Recent court filings built on hallucinated case law underline the point.
Internal red-teaming, independent assurance and clear escalation protocols must accompany every AI deployment. When mistakes emerge – and they will – organisations must demonstrate that controls were in place to catch and correct them. Otherwise, the technology intended to satisfy ‘reasonable steps’ becomes the very evidence that those steps were missing.
Where AI earns its keep
None of this makes AI a compliance mirage; it is simply vital that it is used at the right point in the chain to generate measurable gains. For example, by comparing customer complaints data with accurate obligations, AI’s superior pattern recognition capabilities could surface nascent or otherwise seemingly unconnected misconduct trends long before manual sampling would reveal them.
In addition, deployed through structured interviews, AI could use adaptive question sets to replace rigid web forms, capturing richer accounts of incidents and feeding superior root-cause analysis. Regulatory change mapping could also be made seamless – new rules can be cross-referenced against an existing obligation library to flag overlaps and highlight gaps requiring human interpretation.
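As a purely illustrative sketch of that cross-referencing idea, the example below scores each new rule against a plain-language obligation library using simple word overlap and flags low-scoring rules as gaps for human interpretation. The library entries, rule text, scoring method and threshold are all assumptions made for illustration; a real deployment would rest on curated obligations and expert-tuned matching.

```python
# Illustrative sketch only: naive word-overlap scoring to suggest whether a new
# rule is already covered by an existing plain-language obligation library.
# The example obligations, rules and threshold are hypothetical.

def tokens(text: str) -> set[str]:
    """Lower-case word set, ignoring very short words."""
    return {w for w in text.lower().split() if len(w) > 3}

def overlap(a: str, b: str) -> float:
    """Jaccard similarity between two texts' word sets."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

obligation_library = {
    "OBL-001": "Verify the age of every patron before entry to the gaming floor.",
    "OBL-002": "Report suspicious transactions to the regulator within required timeframes.",
}

new_rules = [
    "Operators must verify patron age prior to entry to any gaming area.",
    "Operators must retain CCTV footage of all entrances for 90 days.",
]

THRESHOLD = 0.2  # illustrative cut-off; real tuning needs expert review

for rule in new_rules:
    scores = {ref: overlap(rule, text) for ref, text in obligation_library.items()}
    best_ref, best_score = max(scores.items(), key=lambda kv: kv[1])
    if best_score >= THRESHOLD:
        print(f"Likely overlap with {best_ref} (score {best_score:.2f}): {rule}")
    else:
        print(f"Possible gap - needs human interpretation: {rule}")
```

Even in this toy form, the output is only a prompt for a lawyer or risk professional to investigate, not a conclusion.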
What unites all of these successes is disciplined, expert curation. The model ingests only verified sources and produces a first draft; human experts – ethically and legally accountable – then test, refine and approve. In future, the real gains may lie squarely in this space: training models with an overlay of context supplied by industry experts and risk professionals who manage the intersection of behaviour and compliance as part of their jobs.
AI offers speed, governance supplies assurance
Global watchdogs now issue penalties not merely for what happened but for what should have happened given available data. If an algorithm spots a red flag in real time and the institution fails to act, the regulator may argue the lapse was foreseeable and therefore preventable.
Linking transactional or complaints data to the obligation library allows AI to ask whether emerging patterns signal latent breaches. Early pilots in financial services suggest a sharp reduction in incident volumes when predictive monitoring is paired with swift root-cause analysis and remediation. Still, the final decision to escalate remains a human call – a fact regulators continue to emphasise in public speeches and enforcement notices.
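A minimal sketch of that linkage, under assumed data: complaint categories are mapped to obligation references (a hypothetical mapping), weekly complaint volumes are compared with a simple baseline, and any spike is surfaced for a human decision on escalation. The categories, counts and multiplier are invented for illustration.

```python
# Illustrative sketch only: flag obligations whose linked complaint volumes
# spike above a simple baseline, so a human can decide whether to escalate.
# The category-to-obligation mapping, counts and multiplier are hypothetical.

from statistics import mean

# Hypothetical link between complaint categories and obligation references.
category_to_obligation = {
    "underage_entry": "OBL-001",
    "delayed_reporting": "OBL-002",
}

# Weekly complaint counts per category (oldest first); the last value is the
# current week.
weekly_counts = {
    "underage_entry": [2, 1, 3, 2, 9],
    "delayed_reporting": [4, 5, 4, 6, 5],
}

SPIKE_MULTIPLIER = 2.0  # illustrative: current week at least double the prior average

for category, counts in weekly_counts.items():
    history, current = counts[:-1], counts[-1]
    baseline = mean(history)
    if baseline and current >= SPIKE_MULTIPLIER * baseline:
        obligation = category_to_obligation[category]
        print(f"Review {obligation}: '{category}' complaints at {current} "
              f"vs average {baseline:.1f} - human decision required on escalation")
```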
Breaches often trace back to a single weak spot: the obligation library froze while the law moved on. AI can help if it is linked to a continuously refreshed data feed, but even then a lawyer or risk professional (preferably both) must validate every critical change because regulators care less about whether a firm captured one obscure clause than about whether it maintained a living process to detect, evaluate and embed changes.
For the foreseeable future, compliance will remain a human endeavour informed by machine speed. AI should draft, collate and highlight, while experienced professionals must interpret, challenge and endorse. Firms that strike that balance will discover they can meet regulators’ rising expectations without ballooning headcount or cost. Those that do not may find themselves explaining to a sceptical supervisor why the machine was trusted long before it was tested or asked to ‘prove itself’.
Philip Hardy and Chris Baker are partners at Ashurst Australia. Mr Hardy can be contacted on +61 411 104 250 or by email: philip.hardy@ashurst.com. Mr Baker can be contacted by email: chris.baker@ashurst.com.
© Financier Worldwide
BY
Philip Hardy and Chris Baker
Ashurst Australia