Project purgatory: avoiding AI failures in financial services
December 2025 | FEATURE | BANKING & FINANCE
Financier Worldwide Magazine
Most artificial intelligence (AI) projects – regardless of their nature, scope, complexity or sector – are launched with the best of intentions and the aim of progressing from conception to production.
Success, however, is far from guaranteed. In financial services (FS), while the volume of AI projects approved is substantial – driven by executives optimistic about the potential impact on profitability – successful implementation remains elusive.
Analysis by FinTellect AI indicates that 80 percent of AI projects in the FS sector fail to reach production. Of those that do, 70 percent do not deliver measurable business value. This is particularly striking given that FS firms spent an estimated $35bn on AI initiatives in 2023, according to the World Economic Forum’s 2025 white paper Artificial Intelligence in Financial Services.
“The FS sector continues to allocate a significant portion of its IT budgets to transformation initiatives aimed at improving operational efficiency, enhancing resilience and meeting evolving customer expectations,” says Matthew Gibbons, vice president of sales at Semarchy. “AI is a central investment area, with key applications including fraud detection, personalised financial advice and risk modelling.”
Semarchy’s own research suggests that not only do most AI projects fail to deliver value, but 33 percent do not progress beyond the experimentation phase. These failures are not typically due to flawed algorithms or a lack of talent, but rather to poor data quality – the classic ‘rubbish in, rubbish out’ problem.
“Unreliable data leads to inaccurate outputs, biased insights and increased regulatory risk, ultimately limiting the impact of many AI investments,” notes Mr Gibbons. “A lack of transparency, especially when AI is deployed as a ‘black box’, can erode trust and make it difficult to justify decisions.”
Further illustrating the scale of AI project failure in FS is research from the Massachusetts Institute of Technology, which found that 95 percent of generative AI pilots fail to deliver financial impact. Many projects are approved not because they address a genuine business need, but because of a perceived imperative to adopt AI.
“Financial institutions today face intense competitive pressure, growing regulatory complexity and internal demands for simplification and efficiency,” explains Jonathan Marriott, a go-to-market leader at Glean. “Many have multiple AI initiatives running concurrently, often without a shared foundation or a clear path to measurable return on investment. This fragmentation not only slows progress – it introduces risk.”
Root causes and new approaches
The high failure rate of AI projects in FS can be attributed to several factors, underscoring the importance of robust data management and governance in implementation strategies.
According to RAND, five primary root causes of AI project failure are: (i) stakeholders often misunderstand or miscommunicate the problem to be solved; (ii) organisations frequently lack sufficient data to train effective AI models; (iii) projects prioritise cutting-edge technology over solving real user problems; (iv) inadequate infrastructure hampers data management and model deployment; and (v) AI is sometimes applied to problems beyond its current capabilities.
For Mr Gibbons, the most critical factor remains the quality and integrity of the underlying data. “Without unified, trusted and well-governed data, even the most sophisticated AI systems will produce misleading results,” he asserts. “Many FS firms have adopted master data management (MDM) to address these challenges. Yet, as data volume and complexity grow, traditional MDM approaches must evolve.”
Mr Marriott advocates a platform-first approach. “This unifies cross-functional data, embeds governance from the outset and enables secure, compliant AI deployment across the enterprise,” he says. “It reduces operational complexity, mitigates shadow AI and accelerates time-to-value, while delivering the personalised, multi-channel experiences customers expect.”
Closing the ‘confidence gap’
Despite widespread inefficiencies and failures, the appetite for AI projects in FS is unlikely to diminish. Firms must therefore address the ‘AI confidence gap’ – the disconnect between project conception and successful delivery.
“Despite the challenges, demand for AI in FS is expected to grow,” agrees Mr Gibbons. “Central to this evolution is the continued development of MDM solutions, which are increasingly vital in unifying, curating and governing the data that powers AI.”
As AI adoption matures, the confidence gap should narrow. “The differentiator will not be experimentation, but execution,” suggests Mr Marriott. “Firms that build on a unified, governed foundation will be best positioned to scale AI with clarity, control and measurable impact.”
The allure of AI in FS lies not only in its promise of transformation but in its potential to redefine how institutions operate, compete and serve their customers. Yet, as the sector races to harness this power, it must also confront the uncomfortable truth: ambition alone does not guarantee success. The real challenge is not in dreaming big, but in executing wisely – with discipline, clarity and a deep respect for the data that fuels intelligent systems.
Looking ahead, the leaders in this space will be those that build resilient foundations. AI is not a silver bullet, nor a shortcut to innovation. It is a tool – powerful, yes, but only as effective as the strategy behind it. Financial institutions that treat AI as a long-term capability rather than a short-term experiment will be best placed to turn potential into performance.
© Financier Worldwide
BY Fraser Tennant