A New Era of Elevated MRM and Data Governance
The pronounced economic changes and challenges of the past year are triggering renewed industry attention on Asset/Liability Management (ALM) modeling and its ability to credibly quantify interest rate risk for ALCO decision making. In addition to the growing use of more sophisticated “feeder” models and studies, institutions are also broadening their ALM modeling-related capabilities to perform more integrated stress testing for liquidity and credit, profitability measurement, and even CECL.
Heightened emphasis on model risk management and data governance practices, combined with a growing number of poor model performance issues identified by model risk management (MRM) validators and examiners, has triggered an elevated level of review and validation. Key areas of concern include the fundamental source data, the use and validity of feeder models or “studies” developed internally or by third-party providers, the sufficiency of ongoing performance-monitoring processes, ALCO’s involvement and oversight with regard to assumption management and decisions, and risk’s overall role at ALCO.
On the data side, many institutions implementing more structured data governance programs are realizing that the data management processes associated with their ALM models fall short. Some of this can be attributed to the fact that many of the models in production today were developed and implemented well before the advent of model risk management and this new era of data governance. As a result, many institutions have no record or documented confirmation that the data they rely on is accurate and complete, or that it is being interpreted, classified, transformed, and aggregated correctly and as the model owner expects. It is important to note that during the early days of MRM, data quality and testing were considered an “audit thing,” and most MRM-driven validations focused on the modeling process, assumptions, output, and overall governance. With the growing number of performance issues identified by ALM validation experts and examiners of late, more rigorous initial and ongoing data testing and documentation is now emphasized and expected. Consequently, validation rigor is expanding to include deeper dives into data management and ongoing testing practices.
This issue has become especially noticeable at institutions that have recently implemented a new ALM model and chosen to rely on their legacy data sources and processing routines without challenging whether past practices remain sufficient for the new technology. In some cases, different modeling systems require different fields to compute cash flows, or interpret the same field differently; a “payment amount,” for example, can mean one thing in one system and something else in another. As a result, we are seeing a marked increase in high-risk findings related to data quality and governance, which has resulted in a number of models having to be redeveloped and revalidated before being approved for use.
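One practical way to surface this kind of interpretation gap is a field-level reconciliation between the legacy extract and the new model’s input file before go-live. The sketch below is purely illustrative – the field names, account keys, and tolerance are assumptions, not any specific core system or vendor layout:

```python
# Hypothetical sketch: reconcile one field ("payment_amount") between a
# legacy extract and a new model's input, keyed by account. A nonzero
# tolerance allows for rounding; anything beyond it is flagged for review.

def reconcile_field(legacy_rows, new_rows, key, field, tolerance=0.01):
    """Return a list of (key, legacy_value, new_value) mismatches."""
    legacy = {r[key]: r[field] for r in legacy_rows}
    mismatches = []
    for r in new_rows:
        old = legacy.get(r[key])
        if old is None or abs(old - r[field]) > tolerance:
            mismatches.append((r[key], old, r[field]))
    return mismatches

# Made-up example records: account 002's payment amount is interpreted
# differently by the new system (e.g., includes escrow).
legacy = [{"acct": "001", "payment_amount": 1250.00},
          {"acct": "002", "payment_amount": 980.50}]
new    = [{"acct": "001", "payment_amount": 1250.00},
          {"acct": "002", "payment_amount": 1030.50}]

print(reconcile_field(legacy, new, "acct", "payment_amount"))
```

Running a check like this for each material field, and documenting the results, provides the kind of evidence validators increasingly ask for when a model conversion reuses legacy feeds.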
Before this becomes an issue with your model, invest time in developing or enhancing documentation of your data management process. Fully document which specific fields are extracted from your core system(s), the logic behind the data transformation processes, how you confirmed that the “right” ingredients are incorporated into your model, and how you continue to ensure quality and consistency each period. In addition to, and in advance of, model risk management asking, establish some means of ongoing monitoring with specific thresholds and defined actions for breaches (and, of course, document it all).
This project may require involvement from your IT area, as the data extract process is often programmatic, with logic embedded in a database or programming language you may not be familiar with. By the end of the process, you should have a document that specifically describes each field being used, how it is transformed, what testing was initially performed to confirm you are using the right information, and what ongoing monitoring process is in place, including acceptable testing thresholds and the action steps to be taken when thresholds are breached. If you are managing your own data extraction and transformation process, consider migrating it to your IT or data/business intelligence group, especially if simpler tools (such as Excel) are currently being used. For institutions implementing a new ALM model, plan to invest the time to redevelop and test your data feeds – chances are, your new model will also benefit from additional fields or logic your current extract process does not include.
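The ongoing monitoring described above can be as simple as comparing each period’s extract metrics against the prior period and flagging breaches of documented thresholds. The following is a minimal sketch under assumed metric names and threshold values – your own thresholds should be set and approved through your MRM framework:

```python
# Minimal sketch of a periodic data-quality check: compare this period's
# extract metrics to the prior period's and flag any relative change that
# breaches its documented threshold (values here are illustrative).

THRESHOLDS = {"record_count": 0.05, "total_balance": 0.10}  # max relative change

def check_extract(prior, current, thresholds=THRESHOLDS):
    """Return {metric: relative_change} for each metric breaching its threshold."""
    breaches = {}
    for metric, limit in thresholds.items():
        change = abs(current[metric] - prior[metric]) / prior[metric]
        if change > limit:
            breaches[metric] = round(change, 4)
    return breaches

# Made-up period-over-period figures: balances moved ~18%, a breach that
# should trigger the documented follow-up actions (investigate, escalate).
prior   = {"record_count": 100_000, "total_balance": 2.50e9}
current = {"record_count": 100_800, "total_balance": 2.95e9}
print(check_extract(prior, current))
```

Logging each period’s results, including the “no breach” periods, creates exactly the documented evidence trail that validators and examiners now expect to see.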
From an assumptions perspective, more rigorous development, testing, and ongoing monitoring of key assumptions are being emphasized to a greater degree these days. This focus emanates from two key factors: 1) growing issues identified by examiners and validators through poor assumption back-testing results and the increased reliance on qualitative overlays and overrides to compensate for deficient assumption models, and 2) the increased use of more sophisticated quantitative efforts that rise to the level of being a “model.”
As a result, there is a bright spotlight shining on your key assumptions and how they are derived, applied, and supported. This enhanced scrutiny has institutions revisiting their ALM models, identifying the “feeder” models or “studies” that drive the key assumption parameters being applied, and expanding the scope of their validations and effective challenge processes. For many institutions, validations now go deeper into the assumption feeder models, including those used for investment cash flow and valuation, prepayments, deposits, growth/volume, and pricing. In addition, if the ALM model is also used for liquidity forecasting, capital planning, and credit stress testing, the associated economic scenario models, credit loss models, and operational expense models are being individually challenged, along with any assumptions that feed them.
The key to preparing for this increased challenge and scrutiny is to identify and list all of the potential feeder models and studies, discuss these potential “models” or “tools” with your model risk management or audit area, and develop a mutual game plan to address them. In many instances, additional documentation will need to be written. Keep in mind that your MRM framework likely has development, testing, and documentation standards you will want to follow.
Just recently, we were asked to perform a conceptual review of a newly developed in-house deposit study that MRM suspected was actually a model. During the evaluation, it was evident that the study was indeed a model (a single-factor linear regression using institution-specific historical deposit data). However, while documentation existed, it simply described what was done without making the case for the theory applied, the relevance of the data used (only the past three years), the variables considered and rejected (only one independent variable was considered), the segmentation (which was not as granular as it needed to be), or the sufficiency of the performance results (which were limited to a simple R-squared). In fact, the R-squared for several segments was less than 40%, and in many instances overrides had to be applied. In addition, there was no ongoing monitoring process that would allow the ALM modeler to track actual deposit activity against the study to confirm whether model performance was stable or improving over time, or whether redevelopment was needed.
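For readers unfamiliar with the mechanics, a single-factor study of this kind reduces to an ordinary least-squares fit and its R-squared. The sketch below uses entirely made-up numbers (it is not the institution’s data) simply to show what is being computed when a validator asks about “the R-squared”:

```python
# Illustrative only: single-factor OLS fit (y = alpha + beta * x) and
# R-squared, the statistic the deposit study relied on. All data is made up.

def ols_r2(x, y):
    """Return (alpha, beta, r_squared) for a one-variable least-squares fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = my - beta * mx
    ss_res = sum((yi - (alpha + beta * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return alpha, beta, 1 - ss_res / ss_tot

# Hypothetical observations: market rate (x) vs. offered deposit rate (y).
x = [0.25, 0.50, 1.00, 1.50, 2.00, 2.25]
y = [0.05, 0.10, 0.22, 0.30, 0.46, 0.50]
alpha, beta, r2 = ols_r2(x, y)
print(f"beta={beta:.3f}, R-squared={r2:.3f}")
```

An R-squared below 40%, as in several segments of the study described above, means the single factor explains less than half the variation in deposit behavior – on its own, that metric cannot support the model without further analysis, better segmentation, or additional variables.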
In the case of third-party feeder models, it is no longer acceptable to rely on unvalidated “black box” models for which no documentation is provided (even from the most reputable sources). To address this, vendor risk assessments now confirm a vendor’s openness to MRM challenge and validation, the existence of robust documentation that includes evidence of initial and ongoing testing, and affirmation that the firm has appropriately structured ongoing development and enhancement processes.
When you combine the uncertain economic landscape brought about by COVID-19, the expanding complexity and use of your ALM model, and increased reliance on more sophisticated “feeder” models and tools with the evolving expectations for model risk management and data governance, many institutions are heading toward a perfect storm.
As the first line of defense, you have a responsibility to ensure you are using the best ingredients available for your models, that you are monitoring the performance of your models (and any sub-models that directly or indirectly impact results), and that you have processes in place to identify and address performance issues if and when they occur. If you believe your current practices fall short of these new and emerging expectations, now is the time to develop a game plan to address them – before your model risk management area or regulators do.
Instilling confidence in the modeling your ALCO is relying on for strategic decision-making could make all the difference in our highly competitive, consolidating industry.
ABOUT THE AUTHOR
Michael R. Guglielmo is a Managing Director at Darling Consulting Group. With over 30 years of experience in strategic risk management, Mike has provided technical and strategic consulting to a diverse group of financial institutions. Mike is also a frequent author and top-rated speaker on a variety of balance sheet and model risk management and operational risk management topics.
Contact Michael Guglielmo: firstname.lastname@example.org or 978-499-8159 to learn how DCG can help you navigate through the new era of elevated MRM and data governance.
© 2021 Darling Consulting Group, Inc.