How AI powers cyber crime – and protects against it
March 2026 | SPECIAL REPORT: DATA PRIVACY & CYBER SECURITY
Financier Worldwide Magazine
In 2025, artificial intelligence (AI)-enabled cyber attacks rose 47 percent globally, according to DeepStrike’s ‘AI Cyber Attack Statistics 2025, Trends, Costs, Defense’ analysis. Cyber criminals are increasingly using AI across the attack lifecycle – both to find and exploit weaknesses faster and to deepen and scale intrusions once inside a victim’s environment.
Legally, businesses are often judged by a ‘reasonable security’ standard, which is a moving target requiring risk-based controls, integration of AI into security programmes with human oversight, employee training on AI-enabled threats, and defensive AI tools that are properly configured, monitored and regularly reviewed.
AI-enhanced cyber attacks
In 2025, each of the top 10 vulnerabilities exploited by cyber criminals either had publicly available exploit code or was actively exploited in the wild, with roughly 60 percent becoming exploitable within two weeks of public disclosure, according to IBM’s ‘X-Force 2025 Threat Intelligence Index’.
AI tooling allows cyber criminals to automate internet-wide scanning and accelerate the identification of these exploitable vulnerabilities, including those present in internet-facing applications and application programming interfaces (APIs). In 2024, Akamai observed more than 311 billion web application and API attacks (a 33 percent increase from 2023), correlated with rapid adoption of cloud services, microservices and AI. Furthermore, in 2025, a widely used AI application disclosed a critical vulnerability allowing a malicious prompt injection to trigger remote code execution.
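To illustrate the class of flaw behind such disclosures, the sketch below shows, in deliberately simplified and hypothetical form, how an AI assistant that executes model output as code becomes exposed to prompt injection. The function names and logic are invented for this example and do not represent any specific product.

    # Hypothetical illustration only: an assistant that feeds model output to the
    # Python interpreter. If attacker-controlled content (e.g. a poisoned document)
    # can influence that output, the attacker gains code execution.

    def fake_model_response(prompt: str) -> str:
        """Stand-in for an LLM call; a real model processing untrusted content
        could return attacker-chosen text rather than a benign summary."""
        return "print('summarising document...')"

    def summarise_unsafely(prompt: str) -> None:
        generated = fake_model_response(prompt)
        exec(generated)  # vulnerable pattern: model output executed with full privileges

    def summarise_safely(prompt: str) -> str:
        # Safer pattern: treat model output strictly as data; if tool use is needed,
        # dispatch only to an allowlist of functions with validated arguments,
        # executed in a sandboxed, least-privilege environment.
        return fake_model_response(prompt)

    if __name__ == "__main__":
        summarise_unsafely("Summarise the attached document")
        print(summarise_safely("Summarise the attached document"))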
The speed at which cyber criminals carry out attacks has continued to increase; it took cyber criminals on average nine days to exfiltrate data in 2021 compared to only two days in 2024. Cyber criminals are now using AI to automate and further accelerate their attacks from initial compromise through reconnaissance, lateral movement and exfiltration.
To demonstrate the power of AI automation, a security firm simulated a full AI-driven ransomware attack chain, from initial intrusion through exfiltration and encryption, in just 25 minutes. Another group developed an AI-powered ransomware proof of concept (‘PromptLock’) that carried out an entire attack on its own, suggesting attacks may become even more efficient.
AI also enables highly tailored phishing, deepfake audio and video, and other ‘lures’ that mimic trusted senders. AI-driven social engineering obscures familiar ‘tells’ such as poor grammar or generic language, leaving employees more vulnerable to convincing attacks.
In 2024, social engineering and fraud claims rose by 233 percent, driven in part by AI-enhanced business email compromises, deepfake-enabled fraud and mass phishing. According to Aon’s ‘Cyber Risk: Turning Uncertainty into Opportunity, Global Risk Management Survey’, AI-driven deepfake attacks contributed to a 53 percent year-over-year increase in social engineering incidents.
Reports of AI-generated ‘deepfake’ attacks are becoming more common. In 2024, cyber criminals used deepfake video to impersonate multiple senior executives on a video call and execute a $25m fraudulent transaction. Throughout 2024 and 2025, suspected nation state cyber criminals used AI-generated identities to gain remote employment at multiple organisations, positions they then used to access corporate systems and steal data.
Microsoft’s ‘Cyber Signals’ observed a 46 percent rise in AI-generated phishing content in 2025, including attacks routing users to credential-harvesting pages disguised as internal or vendor portals. AI-generated phishing is now sophisticated enough to bypass many traditional filters, with an approximate 25 percent increase in successful evasion, notes DeepStrike’s analysis.
AI-enhanced defensive tactics
As AI-enabled attacks spread, security professionals are beginning to implement AI-aided security controls and defences of their own, and organisations are taking advantage of AI in many forms.
Security teams use AI-driven solutions for anomaly detection, threat hunting, event triage and automated response. Approximately 51 percent of organisations use some form of AI security or automation, and those that do save, on average, $1.8m in breach costs compared with those without such capabilities, according to IBM.
Organisations increasingly deploy endpoint detection and response and intrusion detection and prevention systems that use machine learning to establish baselines of normal activity, flag anomalies and block attacks in real time. To address AI-enhanced social engineering, organisations can use advanced email security capable of in-depth content, URL and attachment analysis, identity verification mechanisms, and, where practical, tools for detecting manipulated or synthetic audio and video.
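The core baselining idea is straightforward, as the illustrative sketch below suggests: train a model on features drawn from normal activity, then flag new observations that deviate from it. The example uses scikit-learn’s IsolationForest on invented features (logins per hour, outbound bytes, distinct hosts contacted); production detection systems rely on far richer telemetry and proprietary models.

    # Illustrative sketch of baseline-and-flag anomaly detection; the data and
    # feature choices are synthetic and chosen purely for demonstration.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Baseline: synthetic "normal" sessions described by three features:
    # logins per hour, outbound bytes and distinct hosts contacted.
    normal_activity = rng.normal(loc=[5, 2000, 3], scale=[2, 500, 1], size=(1000, 3))

    # Fit on the baseline so the model learns what normal activity looks like.
    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

    # New observations: one typical session and one resembling bulk exfiltration.
    new_sessions = np.array([
        [6, 2100, 3],      # close to the baseline
        [40, 95000, 60],   # many logins, heavy outbound traffic, many hosts
    ])

    # predict() returns 1 for inliers and -1 for anomalies to surface for triage.
    for session, label in zip(new_sessions, detector.predict(new_sessions)):
        status = "anomaly: flag for triage" if label == -1 else "normal"
        print(session, status)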
Use of AI can also help organisations more rapidly detect software vulnerabilities and decrease the time between detection and the creation and deployment of patches.
These tools can significantly improve detection and response. However, experts caution that AI-based systems sometimes produce false positives, depend heavily on the quality and relevance of training data, and are better at mitigating incidents than at preventing them outright.
AI and legally defensible security
The use of AI for both offensive and defensive purposes creates challenges for legal departments charged with complying with information security laws and building out ‘reasonable’ and legally defensible security programmes.
Secure AI adoption and the expanded attack surface. Regulators view asset inventories, access controls, logging and monitoring, vulnerability management and tested incident response processes as necessary components of a ‘reasonable’ security programme.
The Federal Trade Commission has long framed ‘reasonable security’ as a risk assessment process, specifically highlighting employee awareness, logging and monitoring, and ongoing risk assessment. The California Consumer Privacy Act’s cyber security audit regulations similarly require covered businesses to maintain a documented cyber security programme as part of their obligation to employ “reasonable security procedures and practices”.
New York’s Department of Financial Services also mandates continuous monitoring, vulnerability management, and a written incident response plan as part of a regulated entity’s cyber security programme, reinforcing that these controls are now benchmarks rather than ‘nice to haves’.
As organisations face AI-fuelled threats and adopt AI-oriented defences, their legal teams will need to coordinate with relevant stakeholders to understand these risks and whether (and how) the company’s security programme mitigates them.
Just as encryption and multifactor authentication became the norm, AI-supported countermeasures may become a necessary tool for companies facing automated, fast-moving AI-driven attacks. These tools not only help organisations secure their operations but also enable legal teams to defend the company’s AI-related security choices before regulators or in court.
Govern and tailor AI defensive tools. Establishing ‘reasonable security’ and properly assessing risk requires lawyers and other business stakeholders to understand how AI-defensive tools work and the risks they are designed to mitigate. Organisations should tailor their models to address the specific risks in their environment, with clear objectives for each AI-based security tool – what threats it detects, what actions it may automate, and how outputs are assessed and refined over time in line with legal and regulatory expectations.
Companies should also consider the scope of human oversight, including defining protocols for when and when not to rely on automated decision making. Involving legal counsel in designing these governance structures helps align controls, use restrictions and review processes with evolving regulatory standards and litigation risk.
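One way to make such an oversight protocol concrete is sketched below: automated response is permitted for low-impact actions, while higher-impact actions are escalated for human approval. The threshold, action names and scores are invented for illustration; real security orchestration platforms express these policies in their own configuration formats.

    # Hypothetical human-oversight gate for automated response actions.
    from dataclasses import dataclass

    HUMAN_APPROVAL_THRESHOLD = 0.7  # above this estimated impact, a person decides

    @dataclass
    class ProposedAction:
        name: str            # e.g. "quarantine_endpoint", "disable_account"
        impact_score: float  # 0.0 (negligible) to 1.0 (business critical)

    def dispatch(action: ProposedAction) -> str:
        """Decide whether the tool may act autonomously or must escalate."""
        if action.impact_score < HUMAN_APPROVAL_THRESHOLD:
            return f"auto: executing '{action.name}' and logging it for later review"
        return f"escalate: '{action.name}' queued for analyst approval"

    for proposal in [ProposedAction("quarantine_endpoint", 0.3),
                     ProposedAction("disable_executive_account", 0.9)]:
        print(dispatch(proposal))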
Address AI-aided social engineering. Employee training and well-documented procedures for high-risk activities are repeatedly cited by regulators as core elements of ‘reasonable security’, particularly in the face of phishing and social engineering attacks that are increasingly enhanced by AI.
Given the growing effectiveness of deepfakes and the significant challenges employees face in detecting them, regulators are likely to view AI-aware phishing and social engineering training, together with robust, documented verification protocols for high-risk use cases, as necessary factors for achieving a ‘reasonable security’ posture.
Additional measures for companies to consider include scenario-based training, along with ongoing simulations and reminders that reinforce verifying urgent or secretive requests through trusted, out-of-band channels.
Ultimately, the dual use of AI creates both opportunity and risk that organisations must factor into ‘reasonable security’ programmes. Those that treat AI tools as infallible or deploy them without proper configuration and oversight may create new vulnerabilities or miss critical signals, while those that integrate AI thoughtfully – alongside robust governance, skilled personnel and realistic training – can significantly strengthen resilience and reduce incident impact.
Dave Navetta is a partner and Kaitlin Clemens and Bianca Nalaschi are associates at Troutman Pepper Locke. Mr Navetta can be contacted on +1 (312) 529 0893 or by email: david.navetta@troutman.com. Ms Clemens can be contacted on +1 (215) 981 4342 or by email: kaitlin.clemens@troutman.com. Ms Nalaschi can be contacted on +1 (215) 981 4612 or by email: bianca.nalaschi@troutman.com.
© Financier Worldwide