AI in business and law: balancing risk with progress

Financier Worldwide Magazine  |  September 2025 issue  |  Special report: digital transformation


Artificial intelligence (AI) is rapidly reshaping the way businesses operate across sectors. Whether driving efficiencies, informing decision making or transforming service delivery, AI has already fundamentally shifted how we work, and it shows huge potential for further efficiencies.

However, the benefits must be weighed carefully against the associated risks, particularly in areas where ethical standards, public trust and legal compliance are fundamental.

This article explores the broader risks AI poses to businesses and the specific concerns within the legal profession, especially in the context of dispute resolution. The prevailing view is clear: AI offers real potential and its adoption will continue, but it must be adopted responsibly, with appropriate safeguards and professional oversight.

Broader risks of AI in business

Despite the clear opportunities AI presents, there are significant risks to businesses that, if unmanaged, can result in regulatory breaches, reputational harm and operational disruption.

Privacy and data protection. AI systems typically require vast quantities of data to function effectively, much of which may include personal or sensitive information. This creates considerable challenges around compliance with data protection legislation such as the UK General Data Protection Regulation and the Data Protection Act 2018.

In some cases, AI systems may infer sensitive characteristics, such as health conditions, financial status or behavioural traits, without individuals’ explicit knowledge. In the workplace, AI tools that monitor employee activity, assess productivity or track sentiment can also give rise to concerns over surveillance and employee autonomy.

To address these risks, businesses must ensure robust data governance, adopt a holistic approach to privacy that is integrated seamlessly into products, services and system design, and provide transparency around AI decision-making processes.

Bias and discriminatory outcomes. AI models are trained on historical data, often drawn from specific geographical locations, which can reflect existing societal biases. If left unchecked, these biases can lead to discriminatory decisions in recruitment, financial services, insurance and customer engagement, among other areas.

For example, a recruitment algorithm may favour applicants from certain backgrounds due to biased training data, potentially breaching equality legislation and damaging organisational integrity. Mitigating this risk requires proactive testing for bias, diversifying training datasets, and ensuring that diverse human perspectives are involved in system design and review.

Cyber security and system integrity. AI systems can also be vulnerable to a range of cyber threats that businesses must be aware of. These include (but are not limited to): (i) poisoning attacks, where training data is deliberately corrupted to compromise results; (ii) model inversion, where attackers reconstruct training data; and (iii) prompt injection, where inputs are manipulated to alter outputs.

These attacks can lead to loss of confidential information, reputational damage or even misinformed business decisions. To reduce exposure, businesses should implement encryption, restrict access, conduct rigorous security testing and monitor for anomalies, just as they would for any other critical system. Recent attacks on Marks & Spencer and the Co-op in the UK demonstrate the tangible harm cyber attacks can cause to a business; that harm is not to be underestimated, and the use of AI systems opens a further avenue of attack.

Ethical and reputational risk. AI tools can produce outputs or recommendations that, while technically valid, may be ethically questionable or socially inappropriate. An AI model that delivers tone-deaf marketing content or unfair pricing structures, for example, can alienate customers and harm public perception of a business.

In sensitive sectors such as healthcare, education or finance, AI errors or misjudgements can have serious real-world consequences. While human error remains an ever-present risk, the automated scale at which AI operates arguably makes its errors more damaging for a business. It is therefore critical to ensure that AI is used in ways that align with ethical expectations, social norms and professional standards. A human-in-the-loop approach, combined with clear accountability, is essential to uphold public confidence.

Unapproved use and third-party risk. Employees may begin using publicly available AI tools such as generative AI (genAI) platforms or writing assistants without organisational approval. This ‘shadow AI’ phenomenon introduces risk around data privacy, use of confidential information, consistency and unauthorised access.

Simultaneously, reliance on third-party AI vendors introduces dependency risks and potential exposure if providers fail to meet legal, ethical or security standards. Contracts with AI providers must include clear provisions around data protection, intellectual property, liability and audit rights. Businesses should develop clear policies on approved AI usage, carry out appropriate trialling and testing, and maintain a full inventory of systems in operation.

Sector-specific risks: legal practice and dispute resolution

The legal profession has traditionally been cautious in adopting new technologies, and rightly so. Legal work requires precise reasoning, a deep understanding of precedent and the exercise of judgement, all of which sit uneasily with the current capabilities of AI.

However, interest in AI is growing, particularly as legal professionals seek ways to streamline complex and time-intensive processes. It is essential that this adoption is carried out carefully, with clarity around the boundaries of acceptable use.

Supportive use, not substitution. AI is well-suited to tasks such as document management, timeline creation, basic legal research and template generation. Used correctly, it can reduce administrative burden and allow solicitors, barristers and support teams to focus on value-added work.

However, there is a clear consensus in the legal profession that AI should not be used to determine case strategy, assess credibility or reach legal conclusions. These decisions require human judgement, professional experience and ethical consideration. Legal professionals must remain ultimately responsible for any work output, regardless of whether AI tools are involved.

Categorising AI use by risk. To ensure AI is used appropriately within the legal sector, it is helpful to categorise applications by their risk level. Low risk would include administrative and procedural tasks such as bundling, meeting scheduling and summarising basic documentation. Medium risk may involve drafting early-stage legal documents, proposing clause structures or suggesting relevant case law – tasks which must be carefully reviewed by a qualified legal professional. High risk would include activities involving legal advice, strategic analysis or access to confidential case materials, which are generally not appropriate for AI systems in their current form.

This tiered approach supports safe, responsible adoption without undermining professional duties or client expectations. Businesses in other sectors can and should adopt a similar approach to categorising use: consistent industry norms help develop good practice and informed usage. ‘Evolution rather than revolution’, where possible, is a good mantra for adopting AI.

AI ‘hallucinations’ and legal consequences. One of the most significant dangers with genAI tools in the legal sector is their tendency to ‘hallucinate’, producing fictitious but convincing case law, references or facts. In recent months, there have been various high-profile examples of lawyers facing potential disciplinary action for filing AI-generated submissions that contained fabricated citations.

Given the high stakes in legal proceedings, there can be no room for such errors. Legal professionals must verify all AI-generated outputs manually and treat these tools as aids to efficiency, not sources of truth.

Failure to do so not only risks poor legal outcomes but could amount to professional negligence. These sorts of incidents are not technological failures; they are failures of judgement and due diligence by the lawyers using the technology. Nor is this a problem confined to the legal sector; it is one we are all tackling across different business sectors.

Managing client expectations. Clients increasingly arrive with AI-generated content, whether contract drafts, initial pleadings or AI-derived legal analysis. While this can expedite early engagement, it may also lead to misunderstandings, inaccurate expectations and increased cost.

AI tools often lack an understanding of nuance and of the specific complexities of the issues in play. Time spent responding to, or debating with clients, these quick-fix ‘AI legal answers’ carries a knock-on time and cost; clients may be better served by allowing practitioners to use AI tools to their advantage and to focus on the legal issues themselves.

Confidentiality and ethical obligations. Confidentiality is (rightly) king for clients, and entering client data into public or cloud-based AI tools risks breaching confidentiality duties, potentially exposing sensitive material or contravening data protection legislation.

Legal professionals must obtain informed consent if AI tools are being used as part of client work, and must be competent in understanding and explaining those tools’ limitations. Firms should adopt private, secure AI systems for internal use, apply encryption and access controls, and provide clear policies around confidentiality, consent and data handling. In this fast-changing landscape it is critical that lawyers do not forgo or forget the fundamentals of why clients engage lawyers and the relationship of trust this requires.

Conclusion: moving forward responsibly

AI is no longer a future consideration; it is a present reality. Its use is already transforming how work is carried out in the legal sector and beyond. However, the risks it poses must not be underestimated. From data misuse to algorithmic bias, and from reputational harm to legal exposure, the stakes are high.

In legal practice especially, a cautious and considered approach is paramount. AI has a supporting role to play, but it cannot replace human judgement, professional integrity or legal accountability. The guiding principle for any organisation adopting AI should be clear: proceed with ambition, but act with care. Progress is important, but prudence is essential.

 

Alex Sharples is a partner at Trowers & Hamlins LLP. He can be contacted on +44 (0)7816 091 813 or by email: asharples@trowers.com.

© Financier Worldwide

