AI and cyber security: risks and opportunities for organisations

April 2026  |  SPOTLIGHT | RISK MANAGEMENT

Financier Worldwide Magazine



Artificial intelligence (AI) is reshaping the cyber security landscape in notable ways. On one hand, AI-powered tools are enabling cyber attacks that are more effective and sophisticated, presenting challenges for organisations of all sizes. On the other hand, AI is being explored as a defensive tool, offering new ways to detect, prevent and respond to cyber threats.

For organisations navigating this evolving terrain, understanding both dimensions of AI's impact on cyber security has become essential to managing risk and maintaining operational resilience.

How AI amplifies cyber security risks

The same capabilities that make AI valuable for a wide range of purposes also make it useful to malicious actors. Cyber criminals and threat actors are increasingly leveraging AI to expand the scale and effectiveness of their attacks.

One area of concern is the use of AI to generate more convincing phishing and social engineering attacks. Traditional phishing emails often contained telltale signs of fraud, such as grammatical errors, awkward phrasing or generic content that alert recipients could identify. AI-powered language models now enable attackers to produce contextually appropriate messages that can be difficult to distinguish from legitimate communications.

These tools can analyse a target’s publicly available information, including social media profiles, professional networks and corporate communications, to craft personalised messages that reference real colleagues, ongoing projects or recent company events. This can make phishing campaigns more persuasive and harder to detect.

Deepfake technology represents another application of AI in the cyber threat landscape. Audio and video synthesis capabilities have advanced to the point where attackers can create convincing impersonations of executives, board members or trusted business partners.

In one reported incident, criminals used AI-generated voice cloning to impersonate a company’s chief executive, reportedly convincing a finance officer to transfer funds to a fraudulent account. As these tools become more accessible and the quality of synthetic media continues to improve, organisations face risks of fraud and manipulation through fabricated audiovisual content.

Another area where AI is being applied by threat actors is credential-based attacks. AI systems can analyse patterns in leaked password databases to generate more effective password guesses, identify likely password variations based on known user information, and automate credential stuffing attacks.

These capabilities can make brute-force and dictionary attacks more targeted and efficient, potentially increasing the success rate of unauthorised access attempts against user accounts and corporate systems.
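The pattern described above also suggests what defenders can monitor for. As a minimal, illustrative sketch (the log format and threshold are hypothetical assumptions, not a production detection rule), the distinguishing signature of credential stuffing is many failed logins from one source spread across many distinct accounts:

```python
from collections import defaultdict

# Hypothetical failed-login events: (source_ip, username) pairs.
FAILED_LOGINS = [
    ("203.0.113.5", "alice"), ("203.0.113.5", "bob"),
    ("203.0.113.5", "carol"), ("203.0.113.5", "dave"),
    ("198.51.100.7", "alice"), ("198.51.100.7", "alice"),
]

def flag_credential_stuffing(events, min_distinct_accounts=3):
    """Flag source IPs whose failed logins span many *distinct* accounts --
    a classic credential-stuffing signature, as opposed to one user
    repeatedly mistyping their own password."""
    accounts_by_ip = defaultdict(set)
    for ip, user in events:
        accounts_by_ip[ip].add(user)
    return {ip for ip, users in accounts_by_ip.items()
            if len(users) >= min_distinct_accounts}

print(flag_credential_stuffing(FAILED_LOGINS))  # only 203.0.113.5 is flagged
```

Real deployments would add time windows and allow-lists, but the core distinction between account spraying and a single struggling user is exactly this grouping.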

Experts also predict that AI will transform the vulnerability landscape by accelerating both the discovery and exploitation of security flaws in software. AI-enabled tools can review extensive codebases and identify potential weaknesses far more quickly than traditional manual analysis.

Furthermore, once a vulnerability is detected, AI can assist in developing exploit code, reducing the time between discovery and potential misuse. This compressed timeline would require security teams to adapt their detection and remediation processes in order to respond more efficiently.

The emergence of AI agents, which are AI systems capable of executing multistep tasks autonomously, introduces additional considerations. Unlike traditional AI tools that respond to individual prompts, AI agents can plan, adapt and take actions across systems with limited human oversight.

In 2025, an AI company disclosed that it had detected and disrupted a cyber espionage operation in which a threat actor manipulated an AI tool not only for guidance but to conduct the operation itself. Posing as a cyber security firm performing defensive testing, the actor used the AI system to analyse leaked credentials and corporate data, identify and test security weaknesses in target systems, and develop exploit code.

With the agent’s assistance, the attacker identified high‑privilege accounts, established backdoors and exfiltrated data. The threat actor was reportedly able to automate an estimated 80 to 90 percent of the attack. This incident illustrates how AI capabilities may be leveraged for more advanced cyber operations.

Harnessing AI as a cyber security defence

While AI-enabled attacks present challenges, AI may offer useful defensive capabilities. The application of AI to cyber security defence is a relatively new and still-developing field, and organisations that explore these tools may find useful applications in protecting their digital assets and responding to incidents.

AI-powered threat detection represents one notable defensive application. Traditional security systems rely on signature-based detection, which identifies known malware and attack patterns based on previously catalogued indicators of compromise. This approach, while valuable, may struggle against novel threats and sophisticated attackers who deliberately avoid known signatures.
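To see why signature-based detection struggles with novel threats, consider its simplest form: comparing a payload's hash against a catalogue of previously seen indicators. The sketch below is illustrative only (the "known bad" hash is simply the SHA-256 of an empty payload, used as a stand-in); the point is that a single changed byte produces a different hash and evades the match entirely.

```python
import hashlib

# Hypothetical indicator catalogue. The entry below is the SHA-256 of an
# empty byte string, used here purely as a stand-in for a real indicator.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def matches_signature(payload: bytes) -> bool:
    """Signature-based check: flag payloads whose hash appears in the
    catalogue of previously observed indicators of compromise."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

print(matches_signature(b""))        # matches the catalogued indicator
print(matches_signature(b"variant")) # any modification evades the match
```

This exact-match behaviour is what makes signature databases valuable against known malware yet blind to freshly generated variants.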

Machine learning (ML) models can complement signature-based detection by analysing patterns of network activity, user behaviour and system operations to identify anomalies that may indicate malicious activity. For instance, they can identify when a user account begins accessing resources outside its normal pattern, when data exfiltration attempts are disguised as routine transfers or when an attacker is conducting reconnaissance within a network.
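The baseline-and-deviation idea behind such models can be illustrated with a deliberately simple statistical stand-in (the data, metric and three-sigma threshold are illustrative assumptions; production systems use far richer behavioural features):

```python
from statistics import mean, stdev

# Hypothetical baseline: megabytes a user transfers on a typical day.
baseline_mb = [12, 15, 9, 14, 11, 13, 10, 12, 16, 11]

def is_anomalous(observed_mb, history, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations from
    the user's established baseline -- a minimal stand-in for the
    behavioural models described in the text."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed_mb - mu) > threshold * sigma

print(is_anomalous(13, baseline_mb))   # an ordinary day is not flagged
print(is_anomalous(900, baseline_mb))  # a sudden bulk transfer is flagged
```

The same principle, applied to logins, file access and network flows with learned rather than hand-set thresholds, is what lets these systems surface a disguised exfiltration that no signature would catch.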

By establishing baseline patterns and monitoring for deviations, AI systems may help surface potential threats earlier in the attack lifecycle.

The automation of routine security operations through AI may also help organisations address the persistent challenge of resource constraints in cyber security. Security operations centres are often overwhelmed by the volume of alerts generated by their monitoring tools.

AI can assist in triaging alerts, correlating related events and prioritising the incidents most likely to represent high-risk threats. This could allow human analysts to focus their attention on the cases where their expertise is most needed, potentially improving efficiency.
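In essence, triage reduces to scoring and ordering the alert queue. The following toy example (fields, weights and data are hypothetical assumptions, not any vendor's model) shows the shape of such a prioritisation:

```python
# Hypothetical alerts with the attributes a triage model might weigh.
alerts = [
    {"id": "A1", "severity": 2, "asset_critical": False, "related_events": 1},
    {"id": "A2", "severity": 5, "asset_critical": True,  "related_events": 7},
    {"id": "A3", "severity": 4, "asset_critical": False, "related_events": 3},
]

def triage_score(alert):
    """Toy priority score: base severity, weighted up when the affected
    asset is business-critical or the alert correlates with other events."""
    score = alert["severity"]
    if alert["asset_critical"]:
        score += 3
    score += min(alert["related_events"], 5)  # cap the correlation bonus
    return score

queue = sorted(alerts, key=triage_score, reverse=True)
print([a["id"] for a in queue])  # highest-priority alert first
```

Real systems learn these weights from analyst feedback rather than hard-coding them, but the output is the same: a ranked queue that directs human attention to the likeliest genuine incidents.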

Vulnerability management is another area where AI is being applied. ML models can analyse software configurations, code repositories and threat‑intelligence data to predict which vulnerabilities are most likely to be exploited. By processing these large and diverse datasets, AI can identify patterns and risk indicators that point to vulnerabilities attackers are most likely to target. This may help security teams to prioritise their patching efforts based on likelihood of exploitation, allowing them to focus limited resources on the highest‑risk issues rather than attempting to remediate all weaknesses simultaneously.
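A stripped-down sketch makes the prioritisation logic concrete. The CVE identifiers, fields and multipliers below are illustrative placeholders, not real records or a real exploit-prediction model:

```python
# Hypothetical vulnerability records; fields are illustrative, not a real feed.
vulns = [
    {"cve": "CVE-0000-0001", "cvss": 9.8, "exploit_public": True,  "internet_facing": True},
    {"cve": "CVE-0000-0002", "cvss": 7.5, "exploit_public": False, "internet_facing": False},
    {"cve": "CVE-0000-0003", "cvss": 5.3, "exploit_public": True,  "internet_facing": False},
]

def exploit_risk(v):
    """Crude stand-in for an exploit-likelihood model: base severity score,
    boosted when exploit code circulates publicly or the affected asset
    is reachable from the internet."""
    score = v["cvss"]
    if v["exploit_public"]:
        score *= 1.5
    if v["internet_facing"]:
        score *= 1.3
    return score

patch_order = sorted(vulns, key=exploit_risk, reverse=True)
print([v["cve"] for v in patch_order])
```

Note the outcome: a medium-severity flaw with public exploit code can outrank a higher-severity one that is unlikely to be attacked, which is precisely the reordering that likelihood-based prioritisation buys over raw severity scores.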

AI can also be applied to identity and access management through continuous authentication and risk-based access controls. Rather than relying solely on static credentials, AI-powered systems can assess the risk associated with each access request based on factors such as location, time of access and behavioural patterns. High-risk requests can trigger additional authentication requirements or be blocked, offering an additional layer of consideration in access decisions.
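The decision flow of such risk-based controls can be sketched as follows. The factors, weights and threshold are illustrative assumptions standing in for what a deployed system would learn from behavioural data:

```python
from datetime import datetime

def access_decision(request):
    """Toy risk-based access check: score each request on contextual
    factors and step up authentication when the score crosses a
    threshold. Factors and weights are illustrative assumptions."""
    risk = 0
    if request["country"] != request["usual_country"]:
        risk += 2                      # unfamiliar location
    hour = request["time"].hour
    if hour < 6 or hour > 22:
        risk += 1                      # outside normal working hours
    if request["new_device"]:
        risk += 2                      # unrecognised device
    return "require_mfa" if risk >= 3 else "allow"

req = {"country": "BR", "usual_country": "FR",
       "time": datetime(2026, 4, 1, 3, 0), "new_device": True}
print(access_decision(req))  # high-risk request triggers step-up authentication
```

The design point is that valid credentials alone no longer suffice: the same password succeeds silently from a familiar context but triggers additional verification from an anomalous one.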

In the area of incident response, AI tools may assist organisations in investigating and remediating breaches. Natural language processing can assist analysts in querying security logs and threat intelligence databases, while ML models can help reconstruct attack timelines and identify the full scope of a compromise.

Furthermore, AI could be used to simulate attacker tactics, techniques and procedures in controlled environments, enabling red teams to test defences more comprehensively and identify weaknesses before adversaries discover them.

Navigating the relationship between AI and cyber security

The dual nature of AI as both a threat vector and a defensive tool presents organisations with strategic choices. While AI offers potential contributions to cyber security, deploying AI-based security tools also requires consideration of their limitations. For instance, AI-driven systems can produce false positives that undermine their effectiveness, they rely on large datasets that raise privacy challenges, and their accuracy ultimately depends on the quality of the underlying training data.

This dual nature is also reflected in emerging regulatory frameworks. The European Union AI Act, for example, acknowledges the relationship between AI and cyber security from both perspectives. On one hand, the regulation recognises that cyber security is important for ensuring AI systems remain resilient against attempts by malicious third parties to alter their use, behaviour or performance. On the other hand, the regulation also acknowledges the potential benefits of AI for cyber security purposes.

For instance, biometric systems intended solely for cyber security and personal data protection measures should not be classified as high-risk AI systems, and therefore would benefit from a lighter regulatory framework. This approach reflects an understanding by European legislators that AI can be both a potential vulnerability requiring protection and a tool for enhancing digital security.

From a practical standpoint, organisations considering AI-based security tools may find it useful to start with pilot programmes or proof-of-concept deployments to build experience before broader implementation. Collaboration with technology partners, participation in industry information-sharing initiatives and workforce training may also be relevant considerations for a successful AI cyber security strategy.

AI’s role in cyber security, both as a risk factor and as a defensive tool, is likely to continue evolving. New applications will emerge, and the capabilities of both attackers and defenders will shift over time. For organisations, this suggests an ongoing need to monitor developments, reassess priorities periodically and adapt security strategies as circumstances change.

 

Ahmed Baladi and Vera Lukic are partners and Billur Cinar is an associate at Gibson Dunn. Mr Baladi can be contacted on +33 (1) 56 43 13 00 or by email: abaladi@gibsondunn.com. Ms Lukic can be contacted on +33 (1) 56 43 13 00 or by email: vlukic@gibsondunn.com. Ms Cinar can be contacted on +33 (1) 56 43 13 00 or by email: bcinar@gibsondunn.com.

© Financier Worldwide

