Enhancing corporate transparency and combatting economic crime in the UK

September 2025  |  SPOTLIGHT | FRAUD & CORRUPTION

Financier Worldwide Magazine



With failure to prevent fraud (FTPF) legislation coming into effect on 1 September 2025, companies must consider how to leverage data analytics, technology and artificial intelligence (AI) to strengthen their fraud risk management frameworks.

In the government’s November 2024 statutory guidance, data analytics is mentioned throughout the chapter on reasonable fraud prevention procedures. And, indeed, data analytics can be used to enhance the effectiveness of all areas of a fraud risk management framework.

Practically speaking, however, the two areas where data analytics can have the greatest, immediate impact are fraud detection and monitoring and fraud risk assessment.

A data-driven approach to fraud detection

Following the UK government’s November 2024 guidance, companies are encouraged to begin leveraging data to enhance their fraud detection procedures and align with the expectations set out under the new FTPF offence.

Data analytics is central to effective fraud detection. Regulators and enforcement agencies like the UK Serious Fraud Office are leveraging data and technology to identify potential fraud, and there is an expectation that companies are doing the same with their own data.

Despite this expectation, many companies do not use data analytics as part of their fraud detection strategy. Among the most common reasons are that companies do not know where to start, they cannot get timely access to data, and they have disparate, fragmented data sources that make scaling any data-driven detection programme difficult.

Most companies will encounter these challenges to some extent, but practically speaking, a good starting point is to review the output of the company’s fraud risk assessment and, for high-risk areas, identify potential scenarios where fraud risk indicators could be created.

A fraud risk indicator is a test designed to identify specific transactions or scenarios that may be indicative of a procedural mistake, a data quality issue or potential fraud. Examples include splitting purchase orders to circumvent grant of authority limits, payments made to newly created vendors where there is no purchase order or invoice, and channel stuffing – where excessive returns of a product in subsequent financial periods distort revenue recognition.
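To illustrate, a fraud risk indicator can be expressed as a simple test over transaction data. The Python sketch below implements the first example – purchase orders apparently split to stay under an approval limit – under assumed field names and a hypothetical limit; a real indicator would be built on the company’s own ERP schema and delegation of authority matrix.

```python
from collections import defaultdict

# Hypothetical approval limit -- in practice this would come from the
# company's delegation of authority matrix.
APPROVAL_LIMIT = 10_000

def flag_split_purchase_orders(purchase_orders, limit=APPROVAL_LIMIT):
    """Flag same-day purchase orders to a single vendor that individually
    fall under the approval limit but together exceed it."""
    groups = defaultdict(list)
    for po in purchase_orders:
        # Field names (vendor_id, date, amount, po_number) are illustrative.
        groups[(po["vendor_id"], po["date"])].append(po)

    flagged = []
    for (vendor_id, day), pos in groups.items():
        total = sum(p["amount"] for p in pos)
        if len(pos) > 1 and all(p["amount"] < limit for p in pos) and total >= limit:
            flagged.append({
                "vendor_id": vendor_id,
                "date": day,
                "po_numbers": [p["po_number"] for p in pos],
                "total": total,
            })
    return flagged
```

As with any indicator, a hit is a lead for human review, not a confirmed fraud.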

Once the company has created a list of fraud risk indicators describing the fraud risk scenarios most likely to occur, the indicators should be divided into groups based on data access and availability. By doing this prioritisation exercise, businesses can create a longer-term roadmap for designing and implementing a data-driven fraud detection function. It is also important to note that companies can start modestly. Using spreadsheet extracts from the company’s enterprise resource planning (ERP) system to identify and review patterns of revenue not being correctly booked to accounts receivable can be impactful in detecting fraud.
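That modest starting point can be a few lines of code over two extracts. The Python sketch below (field names are hypothetical) lists revenue journal lines with no matching accounts receivable posting, and lines where the two records disagree on amount:

```python
def revenue_without_receivable(revenue_entries, ar_entries):
    """List revenue journal lines whose invoice reference has no matching
    accounts receivable posting. Field names are illustrative."""
    ar_refs = {entry["invoice_id"] for entry in ar_entries}
    return [entry for entry in revenue_entries
            if entry["invoice_id"] not in ar_refs]

def mismatched_amounts(revenue_entries, ar_entries):
    """List revenue lines where a matching receivable exists but the
    amounts disagree -- often a data quality issue rather than fraud."""
    ar_amounts = {entry["invoice_id"]: entry["amount"] for entry in ar_entries}
    return [entry for entry in revenue_entries
            if entry["invoice_id"] in ar_amounts
            and ar_amounts[entry["invoice_id"]] != entry["amount"]]
```

Each function works directly on rows exported from an ERP system into a spreadsheet, so no specialist tooling is needed to begin.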

From indicators to insights

Companies can incorporate data analytics into their fraud risk assessments by adopting practical approaches such as analysing transactional patterns, identifying anomalies, integrating predictive modelling, and using real-time monitoring tools to proactively detect and mitigate fraud risks.

They can use the same fraud risk indicators to support confirmation and prioritisation of risks as part of their fraud risk assessment. One of the most common ways to do this is to leverage fraud risk indicators to monitor the number of times a potential risk may have occurred.

To bring this to life a bit more, if one of the scenarios on a fraud risk register was ‘supplier bank details are being fraudulently changed to divert payments’, a collection of fraud risk indicators could be built to help understand whether this scenario may have occurred.

Some examples might include looking for payments made to an alternative vendor bank account, looking for instances where a given purchase order was used multiple times, and checking for payments made to vendors whose bank account details had been changed multiple times. By reviewing the composite results – the number of times each indicator identified something – it is possible to make a better approximation of how likely this fraud risk is to occur and adjust the fraud risk assessment accordingly.
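Reviewing the composite results can be reduced to a simple scoring step. In the Python sketch below, hit counts from the three indicators above are combined into a coarse likelihood rating for the risk assessment; the indicator names, counts and thresholds are all hypothetical:

```python
def composite_likelihood(indicator_hits, low=5, high=20):
    """Map the combined hit count across indicators to a coarse likelihood
    rating. Thresholds are illustrative and would be calibrated to the
    company's transaction volumes."""
    total = sum(indicator_hits.values())
    if total >= high:
        return "high"
    if total >= low:
        return "medium"
    return "low"

# Hypothetical hit counts for the supplier bank detail scenario.
hits = {
    "payment_to_alternative_bank_account": 3,
    "purchase_order_reused": 12,
    "bank_details_changed_multiple_times": 9,
}
```

With these counts, `composite_likelihood(hits)` returns "high" (24 hits in total), which would prompt an upward adjustment of the scenario’s likelihood in the fraud risk register.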

And, while we have seen some organisations become increasingly sophisticated with their detection techniques by introducing machine learning and artificial intelligence, the key takeaway is that it does not have to be complicated to inform a risk assessment.

Building agentic workflows for scalable, adaptive fraud control

The UK government’s November 2024 guidance highlights the potential for AI to play a significant role in identifying and preventing fraud. In response, corporates can enhance their fraud risk management frameworks by using AI to detect anomalies in large datasets, automate the analysis of transactional patterns, flag suspicious behaviours in real time, and continuously learn from new fraud typologies to improve detection accuracy over time.

The proliferation of generative AI (genAI) models has made AI much more accessible to companies. And, while companies may not be able to deploy AI as part of their fraud detection procedures today, there are benefits to understanding how AI can be practically integrated into those procedures in the future.

Although traditional fraud risk indicators have made detection more comprehensive and efficient, they still have drawbacks. Fraud risk indicators tend to ‘over flag’, meaning a reviewer must spend considerable time working through the results to determine whether each one reflects actual fraud or is a false positive. In addition, traditional fraud detection procedures tend to be retrospective, meaning that by the time a potential fraud is identified, it may be too late.

This is where the promise of AI – and specifically, agentic AI – represents a paradigm shift in fraud detection and across the wider fraud risk management framework.

Agentic AI differentiates itself from other types of AI in that it is oriented toward accomplishing a specific objective. Agentic AI is capable of independently planning a series of steps within a workflow, executing those steps, producing results for human review, and learning from human feedback.

An example of how agentic AI is already changing the face of fraud detection is AI-driven transaction testing.

Traditionally, a monitor or auditor would select a sample of transactions to review – payments from the last year, for example. The monitor would obtain supporting documentation such as contracts, invoices and bank statements, and review it against a predefined checklist or protocol. Any process deviations or anomalies identified would then be sent to the relevant team for further review.

With agentic AI, a workflow could be created to perform an end-to-end review of the purchase-to-pay process for each payment. The workflow might start by using the relevant policies and procedures to map out the purchase-to-pay process and create a monitoring checklist. It might then use investigation reports from previous incidents of payments fraud to augment each step of the checklist with known instances of control failures and confirmed frauds. The workflow would gather the relevant supporting documentation for each payment and systematically test the payment against each step of the checklist.

For auditability, the workflow could be configured so that each completed payment checklist could be requested for review, allowing a human end user to perform a second-line review of how each step was performed, what documentation was used and the rationale behind any issues identified. The workflow would then prioritise the severity of the issues identified and follow a triage process to refer cases to the relevant finance, internal audit or investigations teams. Finally, confirmed instances of fraud would be fed back into the workflow to refine its testing procedures and its understanding of how fraud might occur.
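The shape of such a workflow can be sketched deterministically. In the Python below, the checklist steps are plain predicates standing in for the document-derived steps an agentic system would draft, and every name is hypothetical; the point is the structure – test every payment, keep an auditable trail, triage, and learn from confirmed cases.

```python
def run_workflow(payments, checks, triage_threshold=2):
    """Test every payment against the checklist, record which checks failed
    for auditability, and refer payments with several failures for triage.
    In a real agentic workflow, the checks would be drafted by a model from
    policies and past investigation reports, not hand-written predicates."""
    referrals, audit_trail = [], []
    for payment in payments:
        failed = [name for name, check in checks if not check(payment)]
        audit_trail.append({"payment_id": payment["id"], "failed_checks": failed})
        if len(failed) >= triage_threshold:
            referrals.append(payment["id"])
    return referrals, audit_trail

def learn_from_confirmed_fraud(checks, new_check):
    """Feedback step: a confirmed fraud adds a new check for future runs."""
    return checks + [new_check]

# Hypothetical checklist derived from purchase-to-pay policies.
checks = [
    ("has purchase order", lambda p: p.get("po_number") is not None),
    ("has invoice", lambda p: p.get("invoice_id") is not None),
    ("bank account matches vendor master",
     lambda p: p["bank_account"] == p["master_bank_account"]),
]
```

Because the loop is cheap, it can run over every payment rather than a sample – and, run before payments are released, the same structure supports prevention rather than detection.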

The prospect of using AI in this way has a number of benefits. First and foremost, using agentic AI workflows means that there are no constraints on scale because – in this example – the workflow can review all the payments. Second, by triaging cases to the appropriate team, counter-fraud resources can better allocate their time to the most critical cases. Finally, and potentially most exciting, this workflow can be run in real time, before a payment is made, meaning that companies will be able to move from fraud detection to fraud prevention.

Although this may sound like a distant future use case, this type of AI-driven transaction testing is already being used by some companies. And with the rapid advancement of genAI models, agentic workflows will be integral to the future of fraud risk management.

 

Fran Begley is a director at EY LLP. He can be contacted on +44 (0)7468 987 342 or by email: fbegley@uk.ey.com.

© Financier Worldwide



