Challenges posed by the digital economy

November 2025  |  SPOTLIGHT | RISK MANAGEMENT

Financier Worldwide Magazine

November 2025 Issue


The digital economy has reshaped the global landscape, offering unparalleled opportunities for innovation, efficiency and connectivity. However, alongside its transformative potential, it presents complex challenges that demand careful navigation.

Among these, artificial intelligence (AI) stands out as both a game changer and a source of profound uncertainty. This article explores the broader challenges of the digital economy, with a particular focus on the role of AI, and highlights why businesses and policymakers must act strategically to address these issues.

Regulatory and legal complexities

The digital economy transcends borders, creating a labyrinth of legal and regulatory challenges. Digital businesses often operate globally without a physical footprint in many jurisdictions, making it difficult for governments to ensure fair taxation.

The Organisation for Economic Co-operation and Development’s global minimum tax framework is a promising step, but implementation remains fraught with political and logistical hurdles.

The digital economy thrives on data, but this dependence raises significant privacy concerns.

Regulations like the General Data Protection Regulation and the California Consumer Privacy Act set high standards for data protection, yet businesses face challenges in balancing compliance with the cross-border nature of their operations. The ease of sharing and replicating digital content has made intellectual property protection increasingly difficult. Businesses risk losing competitive advantages as ideas and innovations are rapidly disseminated online.

Cyber security threats

The digital economy is under constant threat from cyber attacks, which are growing in sophistication and scale. Data breaches expose sensitive customer and corporate data, undermining trust and causing significant financial and reputational damage. Cyber criminals are targeting organisations across industries, with ransomware attacks often spreading through interconnected supply chains. This creates vulnerabilities that can disrupt entire sectors. The rise of AI has enabled more sophisticated cyber attacks, such as deepfakes and automated phishing campaigns, which are harder to detect and counter.

In addition, with AI systems becoming increasingly valuable to businesses, they are themselves becoming attractive targets for cyber criminals. Large language models (LLMs) are vulnerable to prompt injection attacks, which are designed to make the models behave in unintended ways, including disclosing confidential information contained within them.

Recent developments have included the use of malicious prompts embedded in macros which are intended to mislead AI systems into classifying malware as safe. As more companies use AI systems to detect and prevent cyber threats, this type of attack is developing to counter those measures.

Given that LLMs are increasingly being used to transfer data to other applications and services, the risk of malicious prompt injections being used to disrupt and interfere with them will grow in parallel. Data poisoning attacks are also an increasing threat, whereby attackers interfere with the data on which AI models are trained in order to corrupt their outputs.

One can imagine that just as ransomware gangs have monetised software that encrypts systems and locks out their users, their next move may well be extortion based on attacks against critical AI systems. As AI technology develops and is increasingly integrated into businesses’ key systems, so the threat landscape will evolve.

Ethical and legal risks

AI is arguably the most transformative force within the digital economy. Its potential to revolutionise industries is matched only by the ethical, legal and operational challenges it creates. For businesses, AI is no longer a ‘nice to have’: it will become part and parcel of everyday working life, and a potential risk if mishandled.

AI has the power to unlock unprecedented efficiencies and insights. From automating manufacturing to customer service, AI is streamlining processes and reducing costs. It is also enhancing decision making: machine learning algorithms can analyse vast datasets to identify trends, better predict outcomes and inform strategic decisions.

AI also enables businesses to deliver enhanced personalised experiences, improving customer engagement and loyalty. However, the digital economy has the potential to exacerbate existing inequalities in access to technology: rural and underserved communities often lack the infrastructure to participate fully in the digital economy.

The rapid adoption of AI raises critical issues, including those outlined below.

Bias and discrimination. AI systems are only as good as the data they are trained on. If the data is biased, the outcomes will be too. This has already been seen in certain AI-driven tools and systems, which have perpetuated discrimination.

Accountability. Who is responsible when AI makes a mistake? For example, if an autonomous vehicle causes an accident, does liability fall on the manufacturer, the software developer or the user?

Job displacement. While AI creates new opportunities, it also threatens to displace millions of jobs, particularly in sectors reliant on routine or manual tasks. Retraining swathes of individuals will need to start sooner rather than later. Workers without digital skills are at risk of being left behind, creating a divide between those who can thrive in the digital age and those who cannot.

Regulatory uncertainty. Governments are struggling to keep pace with AI advancements, leaving businesses in a grey area regarding compliance and ethical standards.

AI and the boardroom

For decision makers, the question is no longer whether to adopt AI, but how to do so responsibly. Businesses must consider how AI can be integrated in a way that aligns with their strategic goals, what safeguards are needed to mitigate risks such as bias, misuse or regulatory non-compliance, and how AI can be leveraged to create competitive advantages without alienating employees or customers. These are not easy questions, and the answers will vary depending on the organisation, industry and jurisdiction. What is clear, however, is that businesses that fail to engage with these issues risk falling behind.

The digital economy has given rise to technology giants with unprecedented market power. This concentration of influence raises concerns about monopolistic practices and heavy reliance on platform dependency, potentially leaving smaller businesses vulnerable to changes in algorithms or policies.

The digital economy, while often seen as ‘clean’, has a significant environmental footprint. Data centres and cryptocurrency mining consume vast amounts of energy, contributing to carbon emissions. The rapid obsolescence of digital devices leads to growing electronic waste, which is challenging to recycle or dispose of sustainably.

Why companies need a strategic approach to AI

AI is not just another tool in the digital economy – it is a transformative force that will define the future of business. However, its complexity and potential for misuse mean that adopting AI without a clear strategy can do more harm than good.

Companies should be asking critical questions. How can our organisation harness AI to drive growth while minimising risks? How do we develop a responsible AI culture and train our people to uphold it? What ethical frameworks and safeguards should we implement to ensure responsible AI use? How can we futureproof our business against regulatory changes and complex compliance requirements?

Whether a company is just beginning its AI journey or looking to refine its existing strategy, the key is to act now. The digital economy is evolving rapidly and those who fail to adapt risk being left behind.

The digital economy offers immense opportunities, but it also presents complex challenges that require proactive and strategic responses. Among these, AI stands out as both a revolutionary tool and a potential source of risk.

By addressing the regulatory, ethical and operational dilemmas posed by AI, businesses can position themselves as leaders in the digital age. For companies grappling with the challenges of AI or seeking to understand how it can transform their organisation, now is the time to act. Navigating the complexities of AI and the digital economy with confidence can help businesses not only survive but thrive in this rapidly changing landscape.

 

Lisa Lee Lewis is a partner and James Moss is a director at Addleshaw Goddard. Ms Lewis can be contacted on +44 (0)20 7160 3042 or by email: lisalee.lewis@addleshawgoddard.com. Mr Moss can be contacted on +44 (0)161 934 6874 or by email: james.moss@addleshawgoddard.com.

© Financier Worldwide



