Data remediation in M&A divestments

January 2018  |  TALKINGPOINT  |  MERGERS & ACQUISITIONS

Financier Worldwide Magazine

January 2018 Issue


FW moderates a discussion on data remediation in M&A divestments between Darren Pauling and Paul Hough at KPMG.

FW: Could you outline why data remediation is such an important consideration in M&A divestments?

Pauling: For many transactions, the intellectual property (IP) and commercially sensitive information (CSI) represent a significant proportion of the actual value. Putting this material at risk, or allowing it to leak, fundamentally erodes that value. Correctly identifying IP and CSI is therefore key to ensuring the continuity of the business, maintaining client focus and healthy market competitiveness, and meeting regulatory expectations. Only on implementation does the full complexity of such a project become apparent. Understanding the data landscape and building a ‘book of evidence’ of where the material sits, including the originator, the users and the pollination of copies across the global landscape, requires detailed attention to ensure this material is protected appropriately. The business acquiring the CSI and IP must ensure it will be able to operate and manage the business with the security of an appropriate split, overseen by a monitoring trustee. If the divestment is not managed correctly, the business potentially heightens the risk of litigation, increased cost and disruption to business as usual.

Hough: From both a monetary and a strategic standpoint, businesses are beginning to realise the hidden value which can be extracted from their data, and so we are starting to see a much more focused approach to dealing with data in M&A divestments. It is no longer acceptable to allow the retained business’s IP, CSI and personally identifiable information (PII) to leak outside the perimeter through divestment activities. The risk of competitive advantage being gained through the inadvertent transfer of data, and the requirement on the buy side to ensure they have the relevant information to continue business as usual after completion of the transaction, are risks that are now being addressed at board level. Furthermore, the increase in analytics and insight being extracted from structured and, more recently, unstructured data sources means that data has become a key asset in divestment transactions.

FW: What insights can you provide as to the specific difficulties companies face when separating data in M&A divestments?

Pauling: Identifying where the IP and CSI is located can be very difficult. For example, is it in the systems and therefore structured data? Is it in the unstructured data? Is it in the heads of the people in the organisation? Is it on paper records? There needs to be a robust and structured plan to identify and then transfer the IP assets and CSI as part of the divestment. This can be a daunting prospect when first considering the strategy to adopt. However, with careful planning, a proven workflow and an innovative approach and methodology, the process can be streamlined. Scoping is paramount, in terms of what to include and what to exclude. Identify priority custodians and functions of the business. Determine who is most likely to have created, used or held the material. The definition process is probably the lengthiest part of the strategy and the most important to get right. Communication follows closely behind.

Hough: Consensus within the business as to what data can and must be divested or retained can often be the first stumbling block for a business in data separation activities. Bringing together business functions, HR, legal and IT, each with a differing opinion on data requirements, in order to define a standardised approach to data identification and separation ensures a much smoother transition, as well as a time and cost reduction. Understanding the flow of data, both within the business and outside the perimeter where relevant data continues to sit with third parties, can be a daunting task. We regularly find tranches of data that sit with third-party providers or are retained by business functions outside of the managed infrastructure. Mapping these data sources and third-party applications ensures business as usual continues seamlessly on day one, licences and contracts are appropriately transferred and the business has an opportunity to address potential data leakage. Data under legal hold, regulatory retention requirements, Transition Services Agreement (TSA) obligations and data required for audit or tax reporting are further examples that present challenges to businesses in divestment activities.

Defining a set of guiding principles for classifying data in-scope for retention and divestment, which has been ratified by legal and, where appropriate, the board, is key to developing a robust methodology for data separation.
— Paul Hough

FW: How can companies ensure they are making an accurate determination of what data needs to be remediated?

Pauling: There is no hard and fast rule around this and there are lots of grey areas. For example, both the retained business and the divested business can be part of the supply chain. In this example, who owns the supplier? The reality is that both businesses will need the supplier, and it then becomes difficult to determine how much and exactly what supplier data (granularity, history and so on) comes to the divestment. Working with the business and internal stakeholders is important to help define what constitutes the data to be remediated. Structured workflows should be put in place to agree with the business the data identification process, with regular staging points to satisfy the regulator or new buyer that the workflow is appropriate. This should include application owner workshops, questionnaires and meetings with creators and users of the material. Finally, create a book of evidence and a statement of fact, both of which can be used to respond to external stakeholders and regulators if required.

Hough: Defining a set of guiding principles for classifying data in-scope for retention and divestment, which has been ratified by legal and, where appropriate, the board, is key to developing a robust methodology for data separation. While the classification of data will vary from platform to platform and will be influenced by external factors, such as business as usual activities and data dependencies, a set of guiding principles ensures stakeholders will have a clear understanding of the requirements and can execute data separation using their specific business knowledge. Deploying technology solutions, coupled with subject matter experts, to build a set of rules which can classify documents based on data content, such as keywords and phrases, concepts and themes within the data, date ranges and unique identifiers, such as company and transaction numbers or location in the business infrastructure, ensures a standardised determination of data for remediation. This approach has the added benefit of dealing with large tranches of data quickly and in a uniform manner, with minimum business impact.
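The rules-based classification Mr Hough describes can be sketched, in its simplest form, as a set of named patterns run over document text. This is a minimal illustration in Python; the keywords and the transaction-number format are invented for the example, and a production system would draw its rules from the ratified guiding principles rather than a hard-coded list.

```python
import re
from dataclasses import dataclass


@dataclass
class Rule:
    """A named classification rule: a label plus a compiled pattern."""
    name: str
    pattern: re.Pattern


# Illustrative rules only: the keywords and the 'TXN-' identifier
# format are hypothetical, not taken from any real rule set.
RULES = [
    Rule("csi_keyword",
         re.compile(r"\b(confidential|pricing model|customer list)\b", re.I)),
    Rule("transaction_id", re.compile(r"\bTXN-\d{6}\b")),
]


def classify(text: str) -> list:
    """Return the names of all rules a document matches."""
    return [r.name for r in RULES if r.pattern.search(text)]


doc = "Attached is the confidential pricing model for TXN-123456."
print(classify(doc))  # ['csi_keyword', 'transaction_id']
```

Because every document is tested against the same rule set, the determination is uniform across large tranches of data, which is the standardisation benefit described above.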

FW: What methodologies and protocols are companies deploying to organise, categorise, migrate or cleanse unstructured data?

Pauling: Ultimately, decisions need to be made earlier in the cycle to first identify the material in scope. Once the business has a clear idea of the material that will ultimately need to be remediated, a workflow can be developed. This will include the initial definition exercise, followed by scoping of the data landscape. Experience has shown that data in these matters pollinates across the business, starting with a creator, followed by a ‘user’ and then the receiver. Finding a ‘seed’ set of data from the creator will allow a digital fingerprint of the material to be created, which can then be used to scan key data sources. The benefit of this approach is a reduction in the volume of data to be collected and in the number of systems that need to be taken offline or copied across the network. This approach can also be adapted for SharePoint collections through the use of text similarity searching.
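One simple way to realise the ‘seed set’ fingerprint idea is word n-gram shingling with Jaccard similarity: documents sharing many shingles with the seed material are flagged for collection, so only likely matches need to be pulled from source systems. This Python sketch is a deliberately minimal stand-in for the proprietary similarity tooling the interview alludes to, and the sample strings are invented.

```python
def fingerprint(text, n=3):
    """Build a word n-gram 'fingerprint' of a document as a set of shingles."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}


def similarity(a, b):
    """Jaccard similarity between two fingerprints (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)


# Hypothetical seed document from the material's creator.
seed = fingerprint("quarterly pricing model for the divested business unit")

# Candidate documents found while scanning a key data source.
near_copy = fingerprint(
    "draft quarterly pricing model for the divested business unit v2")
unrelated = fingerprint(
    "minutes of the annual health and safety committee meeting")

print(similarity(seed, near_copy) > similarity(seed, unrelated))  # True
```

Thresholding on this score is what keeps collection volumes down: only documents above the cut-off are copied off the network for review.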

Hough: With the continued exponential growth of data being generated by businesses on a daily basis, companies are becoming overwhelmed by the volume of data sitting within their environments and lack a detailed understanding of it. It is only when there is a requirement to understand this data, either driven by a regulatory mandate or through M&A activities, that businesses tackle these issues. Information governance frameworks are becoming more relevant as they define retention periods, classification processes and guiding principles for the lifecycle of data. Coupling these principles with technology-driven solutions allows the business to manage its data more efficiently. Technology solutions allow businesses to automate the classification, migration and cleansing processes by designing a rules-based approach to data management in M&A activities.

FW: How would you characterise the usefulness of the Electronic Discovery Reference Model (EDRM) for handling the remediation requirements on unstructured data?

Pauling: The EDRM model provides a set of steps that can be applied to many situations, including remediation projects. In a typical deployment of the model, the requirement will be to manage the data, identify material relevant to the situation, preserve and collect the material, process and analyse the data, produce material and potentially present that material. Applying the model to a remediation project follows a very similar pattern: define the data in scope, identify the data landscape, identify the key producers and users of the material, preserve or extract the material in scope, review and identify material, remediate material, present a statement of fact and hold material in escrow. Both examples follow a very similar outline methodology, backed up with reporting and management information (MI) and communication strategies.

Hough: The EDRM model is a guiding set of processes or steps to be utilised when addressing data sets requiring classification and ultimately production to a third party. On this basis, the workflows and principles are very similar to those utilised in M&A data divestment. As with a standard e-discovery matter, data divestments have many of the same requirements, such as understanding the existing information governance in the business and how it can be utilised, undertaking data collection, processing and analysis for categorisation and, ultimately, producing data to be delivered to the other parties in the transaction. While the EDRM guiding principles apply, there are many additional specialist elements which are required in M&A data divestment, including understanding data dependencies, regulatory constraints such as retention requirements or data cleansing activities, and the impact on structured data applications.

FW: What legal and regulatory considerations need to be made during a divestment-related data remediation process?

Pauling: Ask the right questions when considering the legal and regulatory positions. Your specialist IT provider can deploy the technology, but it must do so under the right strategy, in turn guided by the legal team. Considerations include both protecting your business and ensuring the bidder will receive IP and CSI on change of control. The key questions include: How do I identify and document the IP and CSI? What information is required to sign the deal? What are the strategic requirements and employee commitments needed to secure the divestment? Do I have a plan in place to demonstrate key workstreams, milestones and completeness? Have the plans and assumptions had legal sign-off during both planning and delivery? This final question drives the complete workflow, as it is vital to carry out regular health checks with the legal advisers so that, on completion, they can provide the necessary level of legal approval.

Hough: The key to ensuring a proportionate and robust response to data divestment is ensuring that clear guidance and legal opinion has been provided by the regulator, monitoring trustee or legal counsel ahead of any activity. In many cases we have seen businesses either over-engineer their response due to lack of consultation, or allow IP and CSI leakage after dismissing remediation options as too complex or disruptive. Getting that clarity from the outset ensures a proportionate response can be deployed that mitigates the risk for the business. Other considerations from a legal and regulatory standpoint relate to data retention by all parties. GDPR, MiFID II, legal hold in contemplation of litigation and tax and audit reporting, for example, all put legal and regulatory requirements on divesting parties. Consultation with the appropriate specialists and appropriate mitigation needs to be built into the data divestment methodologies to ensure a compliant programme.

Far too often we see data as a theme not being prioritised early enough. We fully recommend a separate data work team as part of a divestment.
— Darren Pauling

FW: What advice can you offer to companies on achieving a successful data remediation process while managing the associated risks? To what extent is a successful divestment transaction dependent on an effective division of data?

Pauling: Far too often we see data as a theme not being prioritised early enough. We fully recommend a separate data work team as part of a divestment. This team will need to work closely with most of the other teams, including IT, legal, marketing, branding, and so on, and there will be lots of interdependencies. Collaboration, senior stakeholder involvement and clear lines of communication have to be built in from the start. The internal teams hold the SME experience, the senior team can provide the right level of engagement with the business and the communication can knit the entire workflow and methodology together to achieve a successful outcome. The experience of a team that has delivered such projects, coupled with innovative use of technology, will enhance these three priorities. In my experience, successful delivery requires all of the players to work together. In many situations, the removal of the material will be a requirement to allow the divestment to take place. A fully documented workflow, signed off by appropriate legal representation and possibly the monitoring trustee, becomes paramount.

Hough: Consult early would be my advice, whether that is with a specialist service provider, your internal or external counsel, or directly with the regulator or monitoring trustee. Understanding the legal and regulatory requirements around your data, and the expectations of the parties involved in the transaction, will allow you to build out a proportionate response to address data divestment. Once you have a clear understanding of the data requirements, engage with the business early. Content knowledge and information flows sit within the business functions and their employees, and so engagement is key to understanding data sources, dependencies, applications and business practices. As more and more business-critical information now sits within structured and unstructured data sources, sensible and well-planned data divestment is key to a successful divestment transaction and continued business as usual on day one.

FW: How do you envisage the data remediation process evolving in the months and years ahead? Are there any trends and developments emerging which may reduce the risk and cost of remediating data?

Pauling: The risks around this are amplifying as we all hold more and more data, in more and more forms. In an ideal world, it would be great to have implemented an automated cleansing capability that intelligently files all material with appropriate coding by function. This, in turn, is de-duplicated by way of automation and intelligently culled in line with the business data governance strategy. In short, we store only what we require for the business and remove any data surplus to the data governance strategy adopted. The automated deletion process would scan the networks and individual PCs and ensure that local filing and storage systems are regularly searched and data outside of the governance requirements is purged. I do see this intelligent system approach arriving, but for the moment the data is pollinated across business systems, and until a full clean-up and data governance approach is put in place, the cost of remediating data will continue to rise.

Hough: We are seeing a trend whereby businesses are pre-emptively addressing their data through the adoption of information governance frameworks which address data at its creation. This is being driven by many factors, such as GDPR compliance, but it has the added benefit of allowing businesses to understand their data content and ensure categorisation and retention policies are upheld. It also means that when a data divestment is required, much of the ‘heavy lifting’ is already complete. As with many other aspects of the workplace, we are constantly developing and utilising automation and machine learning to undertake data separation. Applied by the right practitioners, technology-driven solutions ensure a consistent approach across a business. This will drive reductions in cost as well as completion timelines, and requires less investment of business resources.

 

Darren Pauling is a managing director in KPMG’s forensic technology investigation practice in the UK. He has in excess of 20 years’ investigative experience and has overseen the development of KPMG’s forensic technology practice over the past 12 years. He has led multiple global litigation matters, investigations and more recently global remediation cases in relation to data separation and IP remediation. He can be contacted on +44 (0)20 7694 5565 or by email darren.pauling@kpmg.co.uk.

Paul Hough is a director in the KPMG risk consulting practice in the UK. His background is predominantly in data security, computer forensics and e-discovery. With over 12 years’ experience in his field, Mr Hough has led the delivery of some of the largest data separation programmes globally over the past number of years, as part of the KPMG data remediation service offering. He can be contacted on +44 (0)20 7311 3856 or by email paul.hough@kpmg.co.uk.

© Financier Worldwide



