Q&A: Managing AI in the financial services sector

October 2022  |  SPECIAL REPORT: FINANCIAL SERVICES

Financier Worldwide Magazine, October 2022 Issue


FW discusses AI management in the financial services sector with Zayed Al Jamil, Caroline Dawson, Jamal El-Hindi, Ling Ho and Andrea Tuninetti Ferrari at Clifford Chance.

FW: Could you provide an overview of the impact that artificial intelligence (AI) is having on the way financial institutions (FIs) conduct their operations? To what extent are you seeing growing adoption of AI in this space?

El-Hindi: Artificial intelligence (AI) is, in some ways, an embodiment of both the opportunities and challenges that exist for financial institutions (FIs) in the big data age. According to the US-based Bank Policy Institute, FIs are increasingly looking to AI and machine learning to support key functions, including, but not limited to, fraud detection and prevention, marketing, customer services, cyber security, anti-money laundering, credit underwriting and back-office processing. Not all institutions are at the same level of sophistication, and those that have the most data gain more from investing in the development of AI tools. The underlying business rationale is that AI can help institutions make evidence-based decisions more effectively and efficiently across several operations, particularly in areas where the same rote analysis is applied repeatedly to similar transactions or operations, or where data analytics can expose or predict patterns of activity that are otherwise not discernible.
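
The pattern-detection rationale can be made concrete with a toy example. The sketch below flags outlier transactions with an off-the-shelf anomaly detector; it is a minimal illustration assuming scikit-learn and invented features, not a description of any interviewee's or institution's actual systems.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic transaction features: amount, hour of day, merchant risk score
# (all invented for illustration).
normal = rng.normal(loc=[50.0, 14.0, 0.2], scale=[20.0, 4.0, 0.1], size=(1000, 3))
unusual = rng.normal(loc=[5000.0, 3.0, 0.9], scale=[500.0, 1.0, 0.05], size=(10, 3))
transactions = np.vstack([normal, unusual])

# Fit an isolation forest and flag roughly the most isolated 1 percent of points.
model = IsolationForest(contamination=0.01, random_state=0)
flags = model.fit_predict(transactions)  # -1 = anomalous, 1 = normal

print(f"Flagged {int((flags == -1).sum())} of {len(transactions)} transactions for review")
```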

Al Jamil: My experience is that adoption differs by function within FIs. For core technologies, adoption is growing organically as FIs replace systems as part of their normal procurement lifecycle. This seems to me like an area which is provider-led, with providers offering AI solutions as a component of a broader value offering. By contrast, my experience in relation to use cases like risk modelling, asset allocation and portfolio management is that FIs are much more proactive, evaluating and even overlaying multiple models, and engaging more in collaboration or in-house development to suit specific risk appetites and investment objectives. The principal headwinds to adoption are regulatory uncertainty, not just in terms of the regulation of AI itself but also as to how AI fits within the existing financial services regulatory regime, and the challenge of implementing appropriate internal governance regimes.

FIs can have a difficult time scaling AI internally, because switching to AI may require decommissioning core legacy systems.
— Andrea Tuninetti Ferrari

FW: What regulatory considerations do FIs need to take into account when utilising AI technology? How might compliance issues affect AI deployment strategies?

El-Hindi: In the US, banks are making a strong case that, among all industries, the financial sector is already the most prepared to act responsibly in the use of AI, and already has internal mechanisms in place based on years of experience with ‘model risk management’. They have good reason to suggest that they do not necessarily need new rules specific to AI technology when they have been dealing with rules governing their data use and decision making for a long time. The bankers argue that existing rules and guidance, together with very attentive regulatory oversight, have led to a strong framework governing analytic model development, implementation, use and validation, underpinned by sound policies, governance and controls. They already have platforms that can be adapted to address issues and concerns arising in the use of AI, such as explainability, data quality and the identification of bias.

Dawson: In the UK and EU, there is perhaps more of a tendency for regulators to seek to address specific issues with specific regulation. So, while there is an argument that UK and EU banks also have existing policies, governance and controls in place to address any risks associated with the use of AI technology, we are seeing new regulations in this space. Regarding the distinction between provider-led AI and AI that is client facing or market facing, the regulation broadly follows this distinction as well. Regulators expect FIs to assess the potential impact of using AI to support key functions, including the risk of disruption to those functions and the impact that this might have on the stability of the FI and its ability to continue to service its clients. They also expect FIs to undertake robust testing of any AI used to provide services to clients or to deal in the market, to ensure, for example, that consumers do not receive a lower standard of service when AI is used, or that trading algorithms do not cause market disruption. And then, of course, any use of AI would also fall within general regulatory requirements, including those around use of data, fair treatment of clients and appropriate systems and controls.

Comparable to ESG, AI risks are wide-ranging, from privacy breach to discrimination, with the potential to affect the reputation of the FI.
— Ling Ho

FW: In your experience, are the challenges associated with AI being given sufficient attention at board level among FIs? Do you believe more needs to be done at the highest levels to ensure appropriate policies and procedures are in place?

Ho: In Asia, we are seeing important progress in applying governance frameworks and controls to the use of AI in financial services. That said, according to a 2021 McKinsey survey, respondents in emerging economies, including China and India, are more likely to report that they do not have the leadership buy-in to dedicate resources toward AI risk mitigation. According to the same survey, understanding exposure to AI risks correlates with returns: only 17 percent of ‘AI high performers’ – those who attributed at least 20 percent of their earnings to their use of AI – responded that they were unclear as to their exposure, compared with 29 percent of other respondents. While AI risks are gaining board-level oversight and are increasingly becoming a standing item in corporate governance, boards must find ways to ensure that their AI strategy is embedded in their enterprise strategy. Comparable to ESG, AI risks are wide-ranging, from privacy breach to discrimination, with the potential to affect the reputation of the FI. Boards must also consider non-observable diversity, ensuring that a range of perspectives and attributes is represented at board level. Enhancing the technical skillset on the board and having the right committee structure will drive a more focused framing and implementation of the right policies and processes, as well as the identification and management of emerging trends, risks and challenges.

El-Hindi: One way to assess whether an institution is handling AI issues sufficiently is to look at the placement and capabilities of the chief data officer (CDO) in the organisation. Has the organisation defined the role well, and does the CDO report to, and have the support of, the C-suite? A CDO helps bridge data technology, culture, ethics and other facets of data use that are critical to getting the best outcomes out of AI while avoiding the worst.

A CDO helps bridge data technology, culture, ethics and other facets of data use that are critical to getting the best outcomes out of AI while avoiding the worst.
— Jamal El-Hindi

FW: When implementing and procuring AI into existing systems, what typical challenges might FIs expect to confront? What steps can they take to overcome these challenges?

Tuninetti Ferrari: FIs can have a difficult time scaling AI internally, because switching to AI may require decommissioning core legacy systems. For example, in almost every industry, maximising the value of customers’ raw data requires breaking the silos between business functions, in order to train AI with pooled, multidimensional information about customers. But, of course, if you are a bank, a ‘fail fast, fail often’ approach may be inconsistent with regulator and customer pressure to relentlessly ensure the compliance, stability and security of banking services. Outsourcing AI can help to meet both demands, provided that there is a clear strategy resting on three pillars: involving regulators in key projects, diversifying suppliers and outsourcing liability risk, and ensuring interoperability with existing systems.
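
As a minimal illustration of the silo-breaking point, the sketch below pools records held by hypothetical business functions into the kind of multidimensional customer view a model would be trained on. All table and column names are invented.

```python
import pandas as pd

# Hypothetical silos: each business function holds its own slice of the customer.
payments = pd.DataFrame({"customer_id": [1, 2, 3], "monthly_card_spend": [420.0, 95.0, 1310.0]})
lending = pd.DataFrame({"customer_id": [1, 2, 3], "outstanding_loans": [0, 1, 2]})
support = pd.DataFrame({"customer_id": [1, 2, 3], "complaints_last_year": [0, 3, 1]})

# Pooling on a shared key yields the multidimensional training view; in practice
# this step raises the compliance and governance questions discussed above.
features = payments.merge(lending, on="customer_id").merge(support, on="customer_id")
print(features)
```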

Al Jamil: A key practical challenge around the procurement of AI systems is that market norms are still developing and AI has an extensive impact on the ‘market standard’ provisions that FIs are used to in their standard procurement documents. To pick some examples, the architecture of the system may present challenges in retrieving data on exit, the autonomous nature of the system may undermine one-off acceptance procedures and instead entail continuous monitoring, and the opacity of AI decision making may make audit difficult. There are also downstream effects on contracts for other systems within a stack; for example, automation may cause an FI to breach contracted volume restrictions. All of these issues can be addressed, but there is no ‘one size fits all’ solution, as much depends on the specific characteristics of the system. The key to addressing them effectively is gathering the right information as early as possible in the process and ensuring that you have a governance framework to address the risks that are identified.

Dawson: Another challenge that we have encountered is the interaction between traditional financial services regulation, which focuses on the specific location from which services are carried on, and cloud-based or distributed systems. This can make it challenging to apply existing regulation and governance requirements to AI. For example, if the AI supports services that trigger a licensing requirement, is the FI carrying on activities ‘in’ the relevant jurisdiction for these purposes?

Regulators expect FIs to assess the potential impact of using AI to support key functions, including the risk of disruption to those functions.
— Caroline Dawson

FW: What essential advice would you offer to FIs on maximising their AI capabilities while mitigating potential related liabilities?

Ho: Key to mitigating liabilities is trust, and trust is generated by open and transparent communication. Internal governance structures and frameworks, with clear lines of accountability at every stage, from development, testing and deployment to ongoing monitoring, should be clearly communicated to stakeholders, and their feedback taken on board. Similarly, there should be transparency and auditability of AI-augmented decision-making processes, with an appropriate level of human involvement, to avoid claims of discrimination arising from biased datasets, or of financial misconduct or breach of competition laws, for example through misuse of algorithms for price-fixing or market manipulation. FIs should also understand their reliance on any third-party AI. Records should be well maintained to facilitate algorithm explainability and transparency, as well as auditability and accountability. Contracts with suppliers and customers should be reviewed, including the boundaries of exclusion clauses, in case of exposure to contractual or tortious liability from the use of AI.
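
The record-keeping and human-involvement points can be sketched as a simple decision audit trail. The snippet below is a hedged illustration only; the fields, file format and names are assumptions, not a prescribed standard or any firm's actual practice.

```python
import json
import datetime

AUDIT_LOG = "decision_audit.jsonl"  # hypothetical append-only log

def log_decision(model_version: str, inputs: dict, output: str,
                 human_reviewer: str | None = None) -> None:
    """Record one AI-assisted decision so it can later be explained and audited."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,    # supports explaining past decisions
        "inputs": inputs,                  # what the model saw
        "output": output,                  # what it decided
        "human_reviewer": human_reviewer,  # the 'level of human involvement'
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("credit-scoring-v1.3",
             inputs={"income": 52000, "existing_debt": 8000},
             output="declined",
             human_reviewer="analyst_42")
```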

El-Hindi: One thing I stress is the need for diversity in organisational thinking when approaching AI issues. Look around the room where decisions are being made and ask: who is missing? Do you have your business leaders, operational teams, information security and lawyers there? Do you have your CDO and your human capital people there? How about your political and public affairs people, and those focused on your institution’s reputation and social responsibility? And do you have voices from enough different types of communities to help underscore the importance of avoiding inadvertent bias and to help insert an appropriate human element into the ‘AI use calculus’ at the right place and time? Of course, it is not always feasible to have everyone in the room, especially if they are not actually listening to one another, but be cognisant of who is missing and then follow up to mitigate their absence.

For core technologies, adoption is growing organically as FIs replace systems as part of their normal procurement lifecycle.
— Zayed Al Jamil

FW: Looking ahead, what key issues are likely to affect FIs’ deployment of AI technology? How do you expect regulatory regimes governing this area to evolve?

Tuninetti Ferrari: Tech regulation is moving toward fair allocation of data value throughout the data monetisation chain. For example, the EU Data Act aims at fostering data reuse, allocating fair value to the internet of things device users who generate the data. All industries will be affected by this trend. For banks, for example, it is an opportunity to unlock great potential in areas where customers view them merely as a low-grade commodity for storing cash and making payments. When banks generate trust around the way data is used, customers are more inclined to share personal data, which AI can then turn into valuable insights for products and services. One trend we already see is the establishment of ‘data trusts’, whereby a third-party custodian stores data on behalf of the trust participants and grants access to it in accordance with a set of pre-established rules, which are meant to prevent data misuse and to foster collaboration, governance and transparency among the participants.
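
The core mechanic of a data trust, a custodian enforcing pre-established access rules on behalf of participants, can be sketched in a few lines. This is a toy under stated assumptions, not a real data trust implementation; every name and rule is invented.

```python
class DataTrust:
    """Toy custodian: holds participants' data, grants access only per agreed rules."""

    def __init__(self, rules: dict[str, set[str]]):
        self.rules = rules                      # participant -> permitted data categories
        self.store: dict[str, list[dict]] = {}  # category -> deposited records

    def deposit(self, category: str, record: dict) -> None:
        self.store.setdefault(category, []).append(record)

    def access(self, participant: str, category: str) -> list[dict]:
        # The custodian enforces the pre-established rules on every request.
        if category not in self.rules.get(participant, set()):
            raise PermissionError(f"{participant} may not access '{category}'")
        return self.store.get(category, [])

trust = DataTrust(rules={"bank_a": {"payments"}, "researcher": {"payments", "demographics"}})
trust.deposit("payments", {"customer": "anon-17", "spend": 120.0})
print(trust.access("researcher", "payments"))  # allowed by the rules
# trust.access("bank_a", "demographics")       # would raise PermissionError
```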

Ho: The financial services industry is highly regulated. On AI, Asia-Pacific regulators, most prominently Hong Kong and Singapore (though not China), have been taking a technology-neutral approach, building on existing frameworks and relying largely on guidance to ensure responsible use of AI. The challenge facing FIs is understanding regulators’ underlying expectations and ensuring that their latest AI, including where it is outsourced, complies. Collaboration between FIs and technology partners is key. We expect regulatory regimes to evolve through regulators collaborating with industry, adopting a ground-up approach, and coordinating with other jurisdictions. One example is the Monetary Authority of Singapore’s work with the industry-led Veritas Consortium on the development of future guidance, including a non-binding fairness assessment methodology. The value of cross-border cooperation and of aiming for regulatory consistency, while taking local contexts into account, cannot be overstated, as many FIs operate regionally or globally.
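
To illustrate the kind of thing a fairness assessment might measure, the sketch below computes one common metric, the demographic parity gap between approval rates for two groups. It is a generic illustration on invented data, not the Veritas methodology itself, which is considerably more extensive.

```python
def demographic_parity_gap(outcomes: list[int], groups: list[str]) -> float:
    """Largest difference in approval rates across groups (1 = approved, 0 = declined)."""
    rates = {}
    for g in set(groups):
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(decisions) / len(decisions)
    return max(rates.values()) - min(rates.values())

# Toy loan decisions for two applicant groups.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(f"Demographic parity gap: {demographic_parity_gap(outcomes, groups):.2f}")  # 0.50
```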

El-Hindi: Because AI presents big advantages to larger FIs that have access to big data and can afford the innovation needed to harness it, this is another area that may generate concerns about too much advantage resting in the hands of too few institutions. Combine this with general fears of AI as well as a propensity to distrust any institution that becomes ‘too big’, and you can predict possible regulatory efforts to increase opportunities for smaller FIs to innovate with AI.

 

Zayed Al Jamil is a partner in the Clifford Chance TMT Group in London whose practice areas include corporate M&A, telecommunications, and media & technology. He specialises in complex outsourcings, technology development and procurement agreements, data licensing and the separation aspects of M&A transactions. He has a particular focus on emerging technologies such as artificial intelligence, IoT and distributed ledger technology and on transactions which involve the licensing of valuable datasets. He can be contacted on +44 (0)20 7006 3005 or by email: zayed.aljamil@cliffordchance.com.

Caroline Dawson specialises in advising financial institutions and other market participants on financial market regulation, mergers and acquisitions in the financial sector, and securities and derivatives transactions. She is a member of the Bank of England’s Risk-Free Rate Working Group as well as TheCityUK’s Switzerland MAG. She was seconded to the EMEA equities team at Goldman Sachs in 2009 and to the Bank of England in 2014. She can be contacted on +44 (0)20 7006 4355 or by email: caroline.dawson@cliffordchance.com.

Jamal El-Hindi is a former deputy director of the US Treasury Financial Crimes Enforcement Network (FinCEN), having previously served as the associate director for program policy and implementation at the Office of Foreign Assets Control (OFAC). He was also the Treasury’s inaugural chief data officer, with responsibility for enhancing management and use of all Treasury data. At FinCEN, he led the operational, policy and strategic planning aspects of the bureau, overseeing rulemaking, guidance and interpretation efforts, and counselling on enforcement, regulatory analysis and industry outreach. He can be contacted on +1 (202) 912 5167 or by email: jamal.elhindi@cliffordchance.com.

Ling Ho has spent over 30 years advising clients on intellectual property (IP)-related matters in the Greater China region. Her expertise covers the full spread of contentious and non-contentious IP issues. She is the co-head of the China litigation and dispute resolution practice and is a core member of the firm’s global tech group, with particular focus on IP, cyber, tech disputes and risk management. She can be contacted on +852 28 26 34 79 or by email: ling.ho@cliffordchance.com.

Andrea Tuninetti Ferrari is a counsel at Clifford Chance. He regularly advises financial institutions in relation to complex litigations and projects. As a member of the firm’s global tech group, his focus is, among other things, on tech and data governance, monetisation and disputes. He can be contacted on +39 02 8063 4435 or by email: andrea.tuninettiferrari@cliffordchance.com.

© Financier Worldwide

