Digital ethics: the road to RAI

May 2024 | FEATURE | RISK MANAGEMENT

Financier Worldwide Magazine

May 2024 Issue


The rise of artificial intelligence (AI) is revolutionising the way organisations operate across a range of sectors and industries. Its advanced capabilities are streamlining and optimising decision making and workflows to the nth degree.

But these advantages carry the weight of responsibility, and with it probing questions surrounding the ethics of AI in all its forms. How organisations build fairness, interpretability, privacy and safety into their use of AI is guided by a set of practices and principles generally referred to as ‘responsible AI’ (RAI).

But how should RAI be defined? According to Accenture, it is the practice of designing, developing and deploying AI with good intention to empower employees and businesses, and fairly impact customers and society, allowing organisations to engender trust and to scale AI with confidence.

In another take, MIT Sloan School of Management and Boston Consulting Group (BCG) define RAI as a framework with principles, policies, tools and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact.

Studies indicate, however, that while AI initiatives are surging, RAI is lagging. In an MIT Sloan and BCG survey, 84 percent of respondents stated that RAI should be a top management priority, but only 56 percent felt it had achieved that status. The survey also found that while 52 percent of organisations practised RAI at some level, 79 percent of those admitted the practices were limited in scale and scope. Ultimately, only a quarter of organisations said they had a fully mature RAI programme in place.

Another gauge of the breadth of RAI, this time from a consumer perspective, is Accenture’s ‘The Art of AI Maturity: Advancing from Practice to Performance’, which reveals that only 35 percent of global consumers trust how AI is being implemented by organisations. The report also states that 77 percent believe that organisations must be held accountable if they misuse AI.

“Over the past five years, we have seen a noticeable uptick in awareness and engagement surrounding RAI,” says Manoj Saxena, founder of the Responsible AI Institute. “This trend picked up particularly strong momentum following the introduction of ChatGPT in 2022 and the subsequent reports highlighting various issues such as AI hallucinations and data privacy concerns, including legal actions against ChatGPT.

“RAI is not merely a consideration but a critical imperative, increasingly recognised at the highest levels of corporate leadership,” he continues. “Organisations across industries understand that ensuring ethical AI practices is fundamental to maintaining trust, integrity and accountability in their operations.”

For Sam Page, chief executive and co-founder of 7DOTS, the truth is that most organisations do not have RAI processes in place. “The speed at which AI technologies have emerged means best practice and governance have too often been sacrificed on the altar of experimentation,” he suggests. “Many chief executives have gone from not appreciating its potential to suddenly being worried about being left behind. So, they roll it out into every aspect of their businesses with limited oversight.”

Principles of RAI

When developing RAI programmes and frameworks aimed at mitigating or eliminating the risks posed by AI technologies, organisations need to understand how those technologies work, so as to ensure their RAI models are transparent, fair, secure and reliable.

In its analysis of RAI, ‘Responsible AI: Ways to Avoid the Dark Side of AI Use’, AltexSoft identifies five core principles, outlined below. Although their interpretation and operationalisation are likely to vary from one organisation to another, AltexSoft believes these principles are the heart and soul of RAI.

The first is fairness. AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways. Simply put, they should be unbiased. Humans are prone to bias in their judgments; computer systems, in theory, have the potential to be fairer when making decisions. But machine learning (ML) models learn from real-world data, which is highly likely to contain those same biases.
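
To make this concrete, below is a minimal sketch of one common fairness check, demographic parity, which compares the rate of favourable model predictions across two groups. The data and column names are hypothetical, and a real RAI programme would apply a much broader battery of fairness metrics.

    # Minimal demographic parity check: compare favourable-prediction
    # rates across two groups. Data and column names are hypothetical.
    import pandas as pd

    results = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B"],
        "prediction": [1,   0,   1,   0,   0,   1],  # 1 = favourable outcome
    })

    rates = results.groupby("group")["prediction"].mean()
    disparity = abs(rates["A"] - rates["B"])
    print(f"Favourable rate by group:\n{rates}")
    print(f"Demographic parity difference: {disparity:.2f}")  # 0 = parity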

Second is privacy and security. ML models learn from training data to make predictions on new data inputs. In the healthcare industry, for example, companies using patient data for AI purposes must carry out additional preprocessing work such as anonymisation and de-identification. AI systems must also comply with the privacy laws and regulations governing data collection, processing and storage, and ensure the protection of personal information.
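
As an illustration, the sketch below shows one simple preprocessing step of this kind: dropping a direct identifier and replacing a patient ID with a salted hash before data reaches a training pipeline. The column names are hypothetical, and genuine de-identification regimes (under HIPAA or GDPR, for instance) involve far more than this.

    # Minimal de-identification sketch: drop a direct identifier and
    # pseudonymise the patient ID with a salted hash before training.
    # Column names are hypothetical; real regimes require much more.
    import hashlib
    import pandas as pd

    SALT = "replace-with-a-secret-salt"  # kept out of source control in practice

    def pseudonymise(patient_id: str) -> str:
        return hashlib.sha256((SALT + patient_id).encode()).hexdigest()[:16]

    records = pd.DataFrame({
        "patient_id": ["P001", "P002"],
        "name":       ["Jane Doe", "John Roe"],  # direct identifier
        "age":        [54, 61],
        "diagnosis":  ["I10", "E11"],
    })

    deidentified = records.drop(columns=["name"]).assign(
        patient_id=records["patient_id"].map(pseudonymise)
    )
    print(deidentified)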

Third is reliability and safety. Organisations must develop AI systems that perform robustly and are safe to use while minimising negative impacts. To ensure reliability and safety, organisations may want to consider the scenarios most likely to occur and how a system might respond to them, establish how a person can make timely adjustments if anything goes wrong, and put human safety first.
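
One widely used safeguard in this spirit is a human-in-the-loop fallback, sketched below: predictions under a confidence threshold are routed to a person instead of being acted on automatically. The threshold, model and data here are illustrative assumptions, not a prescription.

    # Human-in-the-loop sketch: act automatically only on confident
    # predictions; route everything else to a person for review.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    CONFIDENCE_THRESHOLD = 0.90  # hypothetical policy threshold

    X, y = make_classification(n_samples=200, random_state=0)
    model = LogisticRegression().fit(X, y)

    def decide(case):
        probabilities = model.predict_proba([case])[0]
        if probabilities.max() >= CONFIDENCE_THRESHOLD:
            return {"route": "automated", "label": int(probabilities.argmax())}
        return {"route": "human_review", "label": None}  # timely human adjustment

    print(decide(X[0]))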

Fourth is transparency, interpretability and explainability. AI is often a black box rather than a transparent system, meaning it is not easy to explain how it works from the inside. Just as people sometimes cannot provide satisfactory explanations of their own decisions, complex AI systems such as neural networks can be difficult to interpret even for ML experts. This raises many questions about the interpretability and transparency of ML models.
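
Explainability tooling can help narrow this gap. The sketch below uses permutation importance from scikit-learn, a model-agnostic technique that measures how much a model’s score drops when each feature is shuffled; the dataset is synthetic and purely illustrative.

    # Model-agnostic explainability sketch: permutation importance
    # measures how much the score drops when each feature is shuffled.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, importance in enumerate(result.importances_mean):
        print(f"feature_{i}: {importance:.3f}")  # higher = more influential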

The last principle is accountability. Stakeholders involved in the development of AI systems are answerable for the ethical implications of their use and misuse. It is therefore important to clearly identify the roles and responsibilities of those who will be held accountable for their organisation’s compliance with established AI principles. The more autonomous an AI system is, the greater the accountability of the organisation that develops, deploys or uses it.

“While awareness has grown significantly, the actual implementation, workforce capacity, and oversight of AI and data usage still face considerable challenges,” warns Mr Saxena. “There remains a notable gap between awareness and action in this regard. Research reveals that only half of organisations practice some level of RAI, and nearly 80 percent of those say their implementations are limited in scale and scope.”

Implementing RAI

Although organisations are likely to take different routes to creating and implementing RAI across their range of operations, there are key guidelines that can assist in putting the aforementioned principles into practice.

“Developing an RAI programme should involve careful consideration of various factors,” affirms Mr Page. “Adopting a ‘start with a problem’ approach ensures that AI initiatives address genuine needs rather than being technology-driven.”

Of these factors, Mr Saxena suggests organisations should consider: (i) leadership support and visibility; (ii) investment in training programmes; (iii) awareness of key policies and standards being developed and enforced; (iv) engagement of stakeholders across legal, financial, product, IT and other departments; (v) access to vendor agnostic tools and frameworks; and (vi) continuous monitoring and evaluations.

In addition, certain frameworks, tools and processes should be a part of RAI programmes. It is advantageous to invest in training programmes to build and realign the skills of an organisation’s workforce, enabling them to understand and navigate ethical considerations in AI development and deployment. Organisations should also conduct regular assessments of AI systems to evaluate their ethical implications and identify areas for improvement. Benchmarks and certifications should be established to assess the ethical performance of AI systems and incentivise adherence to responsible AI practices.

Organisations are also well-advised to adhere to established ethical guidelines and standards, such as the Institute of Electrical and Electronics Engineers’ (IEEE’s) ‘Ethically Aligned Design’ framework, to guide AI development and deployment. Engaging stakeholders across sectors to co-create and refine RAI frameworks further allows diverse perspectives and expertise to be leveraged.

“Fundamentally, it is essential to recognise that AI should complement human intelligence, not replace it entirely,” adds Mr Page. “Overreliance on AI can lead to issues, including a lack of transparency regarding its usage. Any output generated by AI should be effectively watermarked for full transparency.”
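
Robust watermarking of AI output is still an active research area. As a placeholder for the idea Mr Page describes, the sketch below simply attaches provenance metadata to generated text at the application layer; the field names are hypothetical, and production systems would rely on cryptographic signatures or statistical watermarks embedded in the output itself.

    # Provenance-labelling sketch: tag AI-generated output with metadata.
    # Field names are hypothetical; real watermarking uses cryptographic
    # signatures or statistical schemes embedded in the output itself.
    import json
    from datetime import datetime, timezone

    def label_ai_output(text: str, model_name: str) -> str:
        provenance = {
            "generated_by": model_name,
            "ai_generated": True,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        return json.dumps({"content": text, "provenance": provenance})

    print(label_ai_output("Draft summary...", "example-llm-v1"))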

Neglecting RAI

While organisations may differ as to when (and to what extent) they plan to adopt an RAI strategy, many will concede that neglecting the issue would put them at considerable risk.

“Neglecting RAI practices can expose organisations to various risks,” attests Mr Page. “These risks include inaccuracies in AI-generated insights, dissemination of misinformation, governance issues arising from biased or unreliable AI algorithms and damage to brand reputation due to ethical lapses.

“It is important to remember with AI-generated insights that the output is only as good as the data you input,” he continues. “And often this is where things go wrong. Even when the data is good for the most part, organisations need to ensure output is fully examined for inaccuracies.”

Organisations that do not invest in RAI are also in danger of failing to innovate. “Irresponsible AI systems can go rogue, forcing companies to accumulate technical debt, go backwards, and spend time and money addressing the aftermath,” says Mr Saxena. “This can lead to even bigger compliance and operational issues, diminishing stakeholder trust and minimising competitive edge.”

RAI legislation

Recent, high-profile attempts to address the gap in RAI include the World Economic Forum’s (WEF’s) AI Governance Alliance, which unites industry leaders, governments, academic institutions and civil society organisations to champion responsible global design and release of transparent and inclusive AI systems.

The stated goal of the Alliance is to shape the future of AI governance, fostering innovation and ensuring that the potential of AI is harnessed for the betterment of society while upholding ethical considerations and inclusivity at every step.

Complementing the WEF’s initiative is a raft of legislative proposals, which include the European Union’s AI Act, heralded as the world’s first comprehensive AI law, and the Biden administration’s executive order (EO) on safe, secure and trustworthy AI.

“President Biden’s EO on AI safety asserts that it will protect Americans’ information, generate innovation and competition, and propel US leadership in the industry,” explains Mr Saxena. “By informing and advising how federal agencies should consider AI-related issues, President Biden’s EO will have significant effects across industries such as healthcare, financial services, manufacturing and technology.”

RAI guardrails

AI is evolving fast, with commonly cited categories ranging from today’s reactive machines and limited-memory systems to the still-theoretical theory of mind and self-aware AI. Organisations are increasingly recognising the need to introduce greater fairness, interpretability, privacy and safety into their AI systems, with RAI the key to doing so.

“To set themselves up for long-term success with AI, organisations of all sizes and types are enthusiastically welcoming guardrails,” observes Mr Saxena. “Frameworks, best practices and software tools are increasingly being introduced to help integrate RAI as soon as possible – everywhere that AI does or will touch an organisation’s activities.”

“Such efforts give me hope that over the long term, we will build a better society through ethical and safe AI,” concludes Mr Saxena. “We are confident that organisations are taking the risks of AI as seriously as the rewards. Most leaders recognise that the stakes are high, and they want to do the right thing out of the gate.”

© Financier Worldwide


BY

Fraser Tennant

