Establishing clear ethical guidelines is fundamental to delivering GenAI in financial services

May 2024  |  SPOTLIGHT | BANKING & FINANCE

Financier Worldwide Magazine

May 2024 Issue


In the global race to harness the potential of generative artificial intelligence (GenAI), financial institutions (FIs) have a remarkable opportunity to leverage their internal data to enhance product offerings and improve servicing of their clients.

Many FIs are ready to adopt GenAI extensively, but first there are several hurdles to overcome, including the establishment of ethical guidelines. Following suitable guidelines inspires confidence among clients and maintains the trust of regulators.

These guidelines would cover data suitability, explainability, accountability, fairness and transparency. Unless the industry itself establishes a common playing field, the rate at which GenAI is adopted into the mainstream will be moderated.

A multitude of opportunities, a multitude of risks

The world is witnessing a rapid increase in the use of GenAI in our daily lives, as its potential benefits are vast. In the financial services industry in particular, those benefits span multiple aspects of the business. The introduction of GenAI can create a more interconnected financial services value chain that makes businesses smarter, safer or both.

Firstly, GenAI can be used to craft tailored offerings for clients through predictive analytics, credit assessment and approval processes. Secondly, it can fortify defences against fraud and money laundering by identifying emerging patterns and criminal methodologies.

Unfortunately, these promising expansions come with inherent risks, as outlined below.

GenAI models typically operate within proprietary frameworks, offering minimal transparency into how they function or the data used to train them. While this opacity allows each model to adapt and learn from diverse experiences and data sets, it also limits interoperability and substitutability among similar models. As a result, regulators face significant challenges when attempting to validate the various models employed, which in turn complicates the establishment of a common set of rules.

Moreover, unintentional errors or misinterpretations of data can produce plausible yet erroneous outputs, commonly referred to as ‘hallucinations’, which pose an additional layer of risk.

Privacy is another significant concern, encompassing both personal and institutional fears about the leakage of sensitive or confidential data sets.

None of these concerns should be overlooked, as each weighs heavily in the evaluation of GenAI models. We need to be very mindful of how we expand the use of GenAI: it is crucial that we prioritise fiduciary responsibility and manage data responsibly.

It is vital to respect consumers’ autonomy and to ensure trustworthiness and long-term sustainability. Following these principles will help create a solid, lasting expansion of GenAI that ultimately brings increased productivity and profits.

A multitude of regulations

The challenges of using GenAI are amplified and made considerably more complex by the fact that FIs operate in multiple markets, each with a potentially unique regulatory landscape and each with regulators approaching the subject from different angles.

In June 2023, the Monetary Authority of Singapore released a toolkit for the responsible use of AI in the financial sector. Around the same time, China introduced its first recommendations on AI. In November 2023, 28 governments and prominent AI companies committed to subjecting advanced AI models to rigorous safety tests before release. The pledge coincided with the establishment of a new AI Safety Institute in the UK and a significant initiative to endorse routine, scientist-led evaluations of AI capabilities and safety measures.

The European Union has been actively crafting its own AI Act. In March 2024, the European Parliament approved the AI Act, mandating that companies ensure their products comply with the law before making them available to the public. Concurrently, the European Commission requested detailed plans from some of the world’s largest technology and social media companies outlining how they mitigate the risks associated with GenAI.

In the US, an executive order (EO) issued by President Joe Biden in October 2023 aimed to promote the “safe, secure, and trustworthy development and use of artificial intelligence”. The EO represents a substantial contribution to establishing comprehensive guardrails for AI.

Additionally, the commitment by the US under the EO to collaborate with allies and international partners is significant. However, cross-border AI regulation remains a challenge due to varying approaches and issues such as data localisation and privacy rules.

The way forward

For institutions with a presence across multiple markets and continents, navigating diverse regulatory landscapes on AI becomes a multifaceted challenge. The ability to establish clear ethical guidelines as guardrails for future development in the field is of paramount importance.

Moreover, given the critical nature of the financial services business, which prioritises safety and reliability, it is imperative to develop scalable approaches to integrating GenAI that garner trust from both regulators and the public.

A collaborative effort toward a common set of GenAI rules in the financial services industry will lay the foundations for new and improved approaches to financial services and their standard procedures. A preferred approach would be to expand existing frameworks to incorporate GenAI considerations, with a focus on accountability to stakeholders, reliability of data and fairness to customers and other agents. To that end, it is important to actively engage in ongoing discussions with policymakers, regulators, technology providers and other FIs across different countries and regions.

In conclusion, as FIs continue to embrace GenAI for enhanced decision making and customer experiences, the importance of having a common set of guidelines cannot be overstated. By adhering to these guidelines, FIs can ensure that the full potential of GenAI is unlocked while also upholding trust and confidence in the financial system.

 

Shaun Taylor is the chief financial officer at Standard Chartered Americas.

© Financier Worldwide



