Background
Targeted Review of Internal Models (TRIM) is an ECB (European Central Bank) supervisory exercise designed to bring a common understanding and consistency across capital models and to reduce the variability of risk-weighted assets calculated using those internal models. TRIM is similar to SR 11-7 in that it defines regulatory expectations and requirements for internal models; however, the scope of TRIM is limited to capital models, whereas SR 11-7 applies to all models. Specifically, TRIM covers internal models for credit, market and operational risk capital.
Current State
Execution of TRIM started in Q2 2017 and was split into two phases; the first phase was completed at the end of Q2 2018 and the second phase will finish by the end of 2019:
- Phase 1 (Q2 2017 – Q2 2018) – covered retail credit, counterparty credit and market risk models
- Phase 2 (Q3 2018 – 2019) – still ongoing; covers credit risk models for low-default portfolios
Results
Results after completion of the first phase were published by the ECB last year. Last month, the ECB published updated results that incorporate findings from 2018 and add further detail. These results are insightful because they provide a view of the common shortcomings across all financial institutions under the ECB's supervision in Europe. The table below shows the most common shortcomings in the general category.
In Market Risk models, the top three areas with the most gaps noted by ECB examiners are:
- methodology of stressed VaR and IRC (100% exception rate)
- scope of internal model approach (100% exception rate)
- internal validation and internal backtesting (100% exception rate)
Within Market Risk, the common gaps noted include incomplete validation of risk factors, the use of pricing models for VaR and stressed VaR, actual and hypothetical backtesting, and the PD values used in IRC, besides the all-too-common gaps in data quality and model documentation.
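To make the backtesting item concrete, the sketch below compares actual or hypothetical daily P&L against the reported 1-day VaR and counts exceptions under the standard Basel traffic-light approach. This is an illustration only: the function names, the simulated P&L and the 2.33 VaR figure are assumptions for the example, not values or code taken from the ECB's review.

```python
# Illustrative sketch of regulatory VaR backtesting (actual vs. hypothetical P&L).
# The 250-day window and traffic-light zone boundaries are the usual Basel
# defaults; all inputs below are simulated for the example.
import numpy as np

def count_exceptions(var_1d, pnl):
    """Count days on which the loss exceeds the reported 1-day VaR.

    var_1d : array of positive VaR figures (one per day)
    pnl    : array of daily P&L (negative = loss); pass actual P&L
             (including fees/intraday trading) or hypothetical P&L
             (static portfolio revalued with the next day's prices)
    """
    var_1d = np.asarray(var_1d)
    pnl = np.asarray(pnl)
    return int(np.sum(-pnl > var_1d))

def traffic_light_zone(exceptions):
    """Map a 250-day exception count to the Basel traffic-light zone."""
    if exceptions <= 4:
        return "green"
    elif exceptions <= 9:
        return "amber"
    return "red"

# Example with simulated data: a 99% VaR should be exceeded roughly 2.5 times
# in 250 trading days if the model is well calibrated.
rng = np.random.default_rng(0)
daily_pnl = rng.normal(loc=0.0, scale=1.0, size=250)   # hypothetical P&L
reported_var = np.full(250, 2.33)                      # 99% VaR under normality

n_exc = count_exceptions(reported_var, daily_pnl)
print(n_exc, traffic_light_zone(n_exc))
```

Running both the actual and the hypothetical P&L series through the same check is what distinguishes the two backtests the examiners refer to; gaps arise when one of the two series is missing, incomplete, or inconsistently defined.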
In the area of Credit Risk models, the top three areas with the most gaps noted by ECB examiners were:
- long-run average PD calculation (85% exception rate)
- long-run average default rate (97% exception rate)
- lack of differentiation and granularity of PD models (81% exception rate for PD and 86% for LGD)
What is interesting is that all of the credit risk gaps above are driven by data: the accuracy of the long-run averages for PD and LGD depends on the extent of the data and the number of downturn credit cycles it covers, and the granularity of the PD models is often determined by the data available for that specific portfolio. To add weight to this point, 91% of the ECB's investigations noted gaps related to data management and data quality.
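As a rough illustration of why the observation window matters so much for the long-run average, the sketch below averages one-year default rates over two different windows, one that includes downturn years and one that does not. The yearly default rates, the rating grade and the window choices are invented for the example; they are not figures from the ECB results.

```python
# Minimal sketch of a long-run average default rate, to illustrate how the
# length of the observation window (and whether it covers downturn years)
# drives the resulting PD estimate. All figures are made up.
import numpy as np

# One-year observed default rates for a hypothetical rating grade, 2006-2018.
observed_default_rates = {
    2006: 0.010, 2007: 0.012, 2008: 0.035, 2009: 0.041,  # 2008-09: downturn years
    2010: 0.022, 2011: 0.018, 2012: 0.020, 2013: 0.015,
    2014: 0.012, 2015: 0.011, 2016: 0.010, 2017: 0.009, 2018: 0.009,
}

def long_run_average(rates, years):
    """Simple (unweighted) average of the one-year default rates over `years`."""
    return float(np.mean([rates[y] for y in years]))

full_window = long_run_average(observed_default_rates, range(2006, 2019))
short_window = long_run_average(observed_default_rates, range(2013, 2019))  # misses the downturn

print(f"Long-run average incl. downturn: {full_window:.3%}")   # ~1.7%
print(f"Long-run average excl. downturn: {short_window:.3%}")  # ~1.1%
```

A bank whose data history starts after the last downturn can easily end up with a long-run average roughly a third lower than one calibrated on a full cycle, which is exactly the kind of data-driven gap the examiners flag.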
Conclusion
These findings on data and on the gaps in PD and LGD calculation published by the ECB show the extent of the effort needed to improve data quality within financial institutions, and the impact this has on meeting regulatory expectations for internal capital models. Data is the oil on which the information economy runs, yet in banks the data is often not even adequate to meet the basic needs of capital calculation and internal models, let alone to be used for competitive advantage. It is time that the accuracy and availability of relevant data is viewed as a basic need, and that spending on data quality and infrastructure is viewed as an investment.