Legal strategies for responding to cyber intrusions around the world
March 2017 | PROFESSIONAL INSIGHT | RISK MANAGEMENT
Financier Worldwide Magazine
In the early 1990s, international police forces estimated that less than 10 percent of computer crime was reported worldwide. Today, estimates suggest that a whopping 18 percent of cyber intrusions are reported. If that second estimate seems low to you, given the number of cyber intrusions that make the news every day, you are right – few businesses want to admit that they allowed a criminal to invade their information technology (IT) assets. With the vast proliferation of the use of cloud services and storage, Cisco estimates that by 2020, 92 percent of all data will be in the cloud, which means that this remotely hosted data will become the target of opportunity for many cyber criminals. The Cisco study reinforces the old joke that banks are robbed because that is where the money is, so it is highly likely that the cloud will be the target of cyber intrusions, since that is where the data is.
As a result of the proliferation of businesses using the cloud around the world, the responsibility to report cyber intrusions has grown dramatically. Eighty-nine countries currently require reporting, and without question that number will only increase in the future. With the increase in requirements also comes additional complexity for lawyers attempting to advise their clients.
In the e-commerce world, there are nearly half a trillion credit card transactions each year, and even though many would think that there would (or should) be laws regulating the management of credit card data, it is, in fact, the credit card companies that have established their own rules. In 2006, the five largest credit card companies (Visa, MasterCard, American Express, Discover and JCB) established the Payment Card Industry (PCI) Security Standards Council, which in turn promulgated the Data Security Standards (DSS). The PCI-DSS requires companies that handle credit card transactions to maintain secure IT operations and to report any intrusions to the PCI.
Also, in 2017 the scope of cyber intrusions has continued to escalate. According to a recent IBM-sponsored global study covering 383 companies in 12 countries, “$4m is the average total cost of a data breach”, and the number of data breaches is likely to continue to increase.
There is no rocket science involved when we say that it is a matter of ‘when, not if’ a company is likely to suffer a cyber intrusion. Given the volume of cloud data now exposed to cyber criminals, it is very reasonable to expect all businesses to have IT security policies and incident response plans (IRP) in place so that there is a game plan for when the cyber intrusion occurs.
How does shadow IT fit in?
Shadow IT has been defined as “a term often used to describe information-technology systems and solutions built and used inside organisations without explicit organisational approval”. It is also used, along with the term ‘stealth IT’, to describe solutions specified and deployed by departments other than the IT department. Gartner estimates that 30 percent of a company’s annual IT spend is on shadow IT, which unfortunately leads to the very grim conclusion that most companies have no idea where their data might be. Hence, if there is an intrusion into systems or data maintained outside of IT, companies may be unaware that they even had a duty to report the intrusion or PCI-DSS problem. But, in virtually all instances, report it they must.
Where, when and how to report a data breach is a complex mix of state and international law. Forty-seven out of the 50 states in the US and 89 countries worldwide currently have breach reporting requirements. If credit card data is included in the data intrusion, PCI-DSS has its own separate reporting regime. Some of these laws require reporting only to the affected entity (be it a consumer or business), while others include reporting to state consumer protection agencies or law enforcement. Some require specificity when reporting the details of the breach, while others require only a general statement. Some demand that the breach be reported right away, while others give leeway for law enforcement investigations or the restoration of system integrity.
To detail the ins and outs of these various breach reporting laws would be beyond the scope of this article. However, the EU has taken a positive step toward harmonising the current patchwork of breach reporting requirements of its Member States by enacting the General Data Protection Regulation (GDPR). While most US state data intrusion notification laws are triggered only upon exposure of information that can lead to fraud or identity theft (such as financial account information), under the GDPR, a ‘personal data breach’ is more broadly defined as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed”. If such a breach occurs, the data controller must notify supervisory authorities (the agencies in each Member State responsible for privacy regulation and enforcement) “without undue delay and, where feasible, not later than 72 hours after having become aware of it”. However, notice is not required if “the personal data breach is unlikely to result in a risk for the rights and freedoms of natural persons”, so companies must be prepared for that to be a source of debate for data controllers and supervisory authorities. In terms of content, a notification to the authority must, at a minimum: (i) describe the nature of the personal data breach, including the number and categories of data subjects and personal data records affected; (ii) provide the data protection officer’s contact information; (iii) “describe the likely consequences of the personal data breach”; and (iv) describe how the controller proposes to address the breach, including any mitigation efforts. If not all information is available at once, it may be provided in phases.
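The Article 33 notification requirements described above can be sketched as a simple data structure and deadline check. This is an illustrative sketch only – the field and function names are our own, not any prescribed format, and the 72-hour test is a simplification of the “without undue delay and, where feasible, not later than 72 hours” language:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch of the minimum content of a GDPR Article 33
# notification to a supervisory authority; field names are illustrative.
@dataclass
class BreachNotification:
    nature_of_breach: str        # (i) nature of the breach
    data_subjects_affected: int  # (i) number/categories of data subjects
    records_affected: int        # (i) number/categories of records
    dpo_contact: str             # (ii) data protection officer's contact details
    likely_consequences: str     # (iii) likely consequences of the breach
    proposed_measures: str       # (iv) measures taken or proposed, incl. mitigation

def within_notification_window(became_aware: datetime, now: datetime) -> bool:
    """True if notification now would fall within 72 hours of the
    controller becoming aware of the breach (a simplified check)."""
    return now - became_aware <= timedelta(hours=72)
```

Because the GDPR allows information to be provided in phases, a real record of this kind would also need to track which fields were supplied in which submission.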
Ironically, reporting requirements to data subjects (the individuals whose personal data may have been compromised) are less onerous than those to supervisory authorities. Data subjects only need to be notified if the data controller determines that the breach “is likely to result in a high risk to the rights and freedoms of individuals”. Even then, there are exceptions based upon the personal data being encrypted and subsequent remedial measures that eliminate or severely minimise the risk to the rights and freedoms of the affected data subjects. Additionally, if notification to each data subject would “involve disproportionate effort”, alternative communication measures may be used. All of these exceptions, however, are subject to the supervisory authority’s right under Article 34(4) of the GDPR to require or not require notification to data subjects, in its own discretion.
Fines for violations of the GDPR are steep – up to €10m or 2 percent of annual gross turnover, whichever is higher, for lower tier violations (which include violations of the breach notification requirements). However, the potential for steep penalties for non-compliance is likely outweighed by the potential cost savings that uniform compliance in all Member States will provide. The burden will be difficult, at first, for companies seeking to conform their privacy practices to the new regulations, but once done, the GDPR looks to provide a more efficient way to protect privacy and notify affected individuals.
Incident response planning
So how do you respond to these types of events? Plan, test and review. Then improve and test again. An IRP is not an IT document, but a business document. It is a game plan for actions that the business will take, not a technical standard or guideline. It is as critical as any other business document you have.
Working with the business units, in-house and outside counsel, the communications team and executive leadership, the key roles, responsibilities and contacts can be captured. If nothing else, this process will highlight the reporting requirements that need to be addressed, based on the type of data breached.
Do you need to plan for every possibility? No. But the plan should contemplate real and common types of events (for example, phishing attacks, ransomware, and lost or stolen devices). A good incident response plan is streamlined and easy to follow. It includes templates for notifications to vendors, clients and partners. It also includes contact information for key personnel and the steps the board and general counsel will take to communicate with regulators.
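The plan elements just described can be captured as data rather than buried in prose, so the plan can be versioned, reviewed and tested alongside the rest of the business's documents. The skeleton below is a hypothetical illustration of one way to do that – every name, path and scenario in it is an example, not a prescribed format:

```python
# Illustrative skeleton of an incident response plan's key elements.
# All contacts, template paths and scenarios are hypothetical placeholders.
INCIDENT_RESPONSE_PLAN = {
    "scenarios": ["phishing attack", "ransomware", "lost or stolen device"],
    "contacts": {
        "incident_lead": {"name": "TBD", "phone": "TBD"},
        "general_counsel": {"name": "TBD", "phone": "TBD"},
        "outside_counsel": {"name": "TBD", "phone": "TBD"},
        "communications": {"name": "TBD", "phone": "TBD"},
    },
    "notification_templates": {
        "vendors": "templates/vendor_notice.txt",
        "clients": "templates/client_notice.txt",
        "partners": "templates/partner_notice.txt",
    },
    "regulator_steps": [
        "board convenes and reviews incident summary",
        "general counsel determines reporting obligations by data type",
        "notify regulators within required timeframes",
    ],
}

def plan_covers(scenario: str) -> bool:
    """Check whether a given event type is contemplated by the plan."""
    return scenario in INCIDENT_RESPONSE_PLAN["scenarios"]
```

A structure like this also makes tabletop testing easier: the facilitator can walk through each scenario and verify that the contacts, templates and regulator steps are current.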
Testing the response plan can be as simple as running through a ‘tabletop’ scenario with key stakeholders. The test should include a neutral facilitator who can provide an after-action report detailing areas of the plan needing improvement. As the plan (and the organisation) matures, additional testing methods can be implemented.
In some ways, building and testing the plan is more important than the plan itself as the planning and testing build muscle memory. When something goes wrong, the organisation can execute the plan without panic or confusion.
The idea that any company can simply avoid a data breach by leaving the problem to the IT leaders and hoping for the best is an antiquated one and is doomed to failure. The information age demands that everyone do their part to ensure that personal and confidential data is protected. Understanding the complex patchwork of state and international cyber intrusion reporting requirements, and developing and testing an IRP, will not stop a data intrusion from happening, but it can go a long way toward minimising the financial and reputational fallout that will inevitably occur.
Peter S. Vogel is a partner, and Eric S. Levy and Edward H. Block are senior attorneys, at Gardere Wynne Sewell LLP. Mr Vogel can be contacted on +1 (214) 999 4422 or by email: firstname.lastname@example.org. Mr Levy can be contacted on +1 (214) 999 4918 or by email: email@example.com. Mr Block can be contacted on +1 (214) 999 4052 or by email: firstname.lastname@example.org.
© Financier Worldwide