The Fundamental Review of the Trading Book: is institution-specific compliance activity viable?
October 2016 | PROFESSIONAL INSIGHT | BANKING & FINANCE
Financier Worldwide Magazine
Financial institutions are increasingly leveraging shared services, from enabling Know Your Customer (KYC) compliance to post-trade reference data management, in order to reduce both cost and the compliance resources required. And, as the complexity of the new data requirements associated with the Fundamental Review of the Trading Book (FRTB) becomes clearer, whether in the new risk models or the depth of historical information required, there is growing industry concern about the challenges ahead and the tight timescales.
From quote collection to risk factor approval, organisations are beginning to question the viability of institution-specific compliance activity. While there are without doubt challenges to address in areas such as instrument classification and determining the modellability of risk factors, the potential upsides of a single-service approach that leverages data pooling and data sharing to mutualise risk factor creation and modellability approval are compelling.
It has become abundantly clear over the past decade that early collaboration with regulators is now an essential part of the compliance process. As organisations progressively look for commonalities in regulatory data requirements, it is the industry’s feedback and input into the procedures and standards needed to realise each specific requirement that are now underpinning the necessary change management programmes.
The Fundamental Review of the Trading Book (FRTB) is a prime example. Since its finalisation in January 2016, organisations have started to get to grips with the data requirements associated with the new market risk calculation and reporting obligations and the refreshed risk modelling methodology. FRTB’s replacement of Value-at-Risk (VaR) with expected shortfall (ES) as the standard risk measure has very significant data implications.
Most notably, the concept of non-modellable risk factors (NMRF) will require banks to demonstrate that the data going into their risk models is real, derived from actual transactions or committed quotes. The expected shortfall measure itself will be calibrated on a 10-year history. Regulators have become more prescriptive, not only about the content of the data (length of history and modellability), but also about enterprise-wide integration and the explicit links to P&L and Prudent Valuation.
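To make the shift from VaR to ES concrete, the following is a minimal, illustrative sketch of the two measures computed on the same loss history. It uses simulated data and a simple historical method; FRTB calibrates ES at the 97.5 percent confidence level, and the full rules add liquidity-horizon scaling and stress-period calibration that are not shown here.

```python
import numpy as np

def var_and_es(returns, confidence=0.975):
    """Historical VaR and expected shortfall (ES) at a given confidence
    level. VaR is a single quantile of the loss distribution; ES averages
    all losses beyond that quantile, which is why ES is more sensitive to
    the quality and depth of tail data."""
    losses = np.sort(-np.asarray(returns))[::-1]  # largest loss first
    n = len(losses)
    k = int(np.ceil(n * (1 - confidence)))        # number of tail observations
    var = losses[k - 1]                           # the VaR cutoff loss
    es = losses[:k].mean()                        # average loss beyond the cutoff
    return var, es

# Illustrative only: simulated heavy-tailed daily returns, not real market data.
rng = np.random.default_rng(42)
returns = rng.standard_t(df=4, size=2500) * 0.01  # roughly 10 years of daily data
var, es = var_and_es(returns)
print(f"97.5% VaR: {var:.4f}, 97.5% ES: {es:.4f}")  # ES >= VaR by construction
```

Because ES depends on every observation in the tail rather than one quantile, gaps or errors in the historical record feed directly into the measure, which is what drives the data-quality demands discussed here.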
The depth, range, volume and quality of information now required is unprecedented. FRTB requires price histories to be managed as risk factors, which implies an understanding of their behaviour and relationships. Moreover, data quality is not just for modellability; the increased computation requirements mean that data errors become harder and more expensive to correct.
From a data management perspective, this will demand the collection, analysis, validation and reporting of information across multiple product silos, organisational entities and risk areas. And it raises two key issues – the need for a common data foundation and access to a depth of historical time series information. However, FRTB is just one component of a reinvigorated focus on historical data.
From identifying gaps in history to flagging history that does not qualify for use due to inaccuracy and adding external data sources and proxies, institutions need to create a strong information management architecture to support the growing regulatory focus on historical time series data.
Does it, however, make sense for each and every institution to collect transactional data, identify gaps, introduce new sources and validate 10 years of history across every single risk factor? Few, if any, institutions routinely store real price data; collaboration will therefore be required at some level to fill the gaps. If each bank seeks to solve this data gap separately, not only will costs rise, but a risk of data gaps and inconsistency will remain.
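The gap-identification task each institution faces can be sketched as a simple real-price test over a rolling window. The thresholds below (at least 24 real price observations per year, with no gap between consecutive observations longer than roughly one month) follow a common reading of the January 2016 FRTB text and are assumptions for illustration; the final rules and local supervisory guidance should be checked.

```python
from datetime import date, timedelta

def is_modellable(observation_dates, window_end, max_gap_days=31, min_obs=24):
    """Sketch of a real-price modellability test over a one-year window:
    the risk factor needs at least `min_obs` observations, and no gap
    between consecutive observations longer than `max_gap_days`.
    Thresholds are illustrative, not authoritative."""
    window_start = window_end - timedelta(days=365)
    obs = sorted(d for d in observation_dates if window_start <= d <= window_end)
    if len(obs) < min_obs:
        return False
    gaps = [(b - a).days for a, b in zip(obs, obs[1:])]
    return all(g <= max_gap_days for g in gaps)

# Illustrative: weekly quotes pass comfortably; a sparse series fails.
weekly = [date(2016, 1, 4) + timedelta(weeks=i) for i in range(52)]
sparse = [date(2016, 1, 4), date(2016, 7, 4), date(2016, 12, 5)]
print(is_modellable(weekly, date(2016, 12, 31)))  # True
print(is_modellable(sparse, date(2016, 12, 31)))  # False
```

Running this test across every risk factor, every day, for 10 years of history is exactly the kind of repetitive, identical workload that a pooled service could perform once for all contributors.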
Clearly, there is both the case and opportunity for a shared service model, where one provider undertakes to consolidate this information and provide it as a service to the market.
The challenges in creating this unified model will be in defining a common understanding of risk factors and then mapping and cross-referencing the data. The role of Enterprise Data Management (EDM) will be key, enabling the collection and reconciliation of quotation data in multiple formats from numerous banks, and the cross-referencing of different instrument classes and of alternative ways of labelling the same financial product types.
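The cross-referencing step can be sketched as a mapping from each contributor's own instrument labels onto a canonical risk factor key before quotes are pooled. All identifiers, field names and bank names below are invented for illustration; a real EDM layer would maintain this mapping as governed reference data rather than a hard-coded table.

```python
# Hypothetical mapping from (source, local label) to a canonical risk factor.
# In practice this cross-reference is itself a managed, audited dataset.
CANONICAL = {
    ("BANK_A", "EUR-SWAP-5Y"): "IR.EUR.SWAP.5Y",
    ("BANK_B", "EURIBOR_SWP_05"): "IR.EUR.SWAP.5Y",
    ("BANK_A", "DE0001102317"): "GOVT.DE.BUND.10Y",
}

def pool_quotes(raw_quotes):
    """Group contributed quotes by canonical risk factor, setting aside
    any quote whose source label has no mapping (itself a useful
    data-quality signal)."""
    pooled, unmapped = {}, []
    for q in raw_quotes:
        key = CANONICAL.get((q["source"], q["label"]))
        if key is None:
            unmapped.append(q)
        else:
            pooled.setdefault(key, []).append(q["price"])
    return pooled, unmapped

quotes = [
    {"source": "BANK_A", "label": "EUR-SWAP-5Y", "price": 0.412},
    {"source": "BANK_B", "label": "EURIBOR_SWP_05", "price": 0.409},
    {"source": "BANK_B", "label": "UNKNOWN_XYZ", "price": 1.0},
]
pooled, unmapped = pool_quotes(quotes)
print(pooled)         # {'IR.EUR.SWAP.5Y': [0.412, 0.409]}
print(len(unmapped))  # 1
```

Once two banks' differently labelled quotes land under one key, their contributions reinforce the same price history, which is the mechanism by which pooling closes the gaps no single institution can fill alone.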
With a common data foundation and a common basis on which to create or derive the various risk factors, the contribution of quotes to the shared service by multiple organisations would resolve the data acquisition problem. Gaps should be largely eliminated, and with them the need for complex estimates. The shared service can then leverage this data foundation to undertake risk factor mapping and provide proof of modellability. The resultant ‘on-demand’ service would deliver institutions a cost-effective risk data foundation, overcoming the traditional data collection and data supply chain costs and integration issues.
The benefits would extend beyond financial institutions. Regulators would have to approve this shared facility but, once risk factors and definitions are agreed, only the shared service would require audit, not each individual bank, significantly reducing the burden on each regulator.
The way the market has responded to other regulatory requirements – such as KYC – with new, consolidated data providers clearly demonstrates the industry’s appetite for shared services. Given the challenges now faced by financial institutions in meeting the FRTB reporting requirements, there is clearly a strong case for collaboration in the middle office.
Martijn Groot is vice president of product strategy at Asset Control. He can be contacted on +44 (0)20 7743 0320 or by email: email@example.com.
© Financier Worldwide