Ongoing performance monitoring is a trending topic, not only for risk management professionals seeking to better define internal standards for Model Risk Management (MRM), but also for model owners who want to better understand their models’ behavior. Specifically for Current Expected Credit Losses (CECL) models, constructing a sound monitoring program and creating periodic reporting can be challenging.
Supervisory Guidance on Model Risk Management (SR 11-7) clearly defines the responsibilities of model owners as encompassing the full lifecycle of the model: development, implementation, and use.
During the early days of CECL models, many institutions elected to engage third-party vendors. The goal of management teams was to approach the new model with as much support as possible to meet audit and regulatory expectations. These relationships brought substantial time, attention, expertise, and cost to bear on the development and implementation phases.
Yet the use of the CECL model, which for many model owners began more than a year ago, has largely been a “do-it-yourself” endeavor that has not enjoyed nearly the same level of intensity, energy, and support. Now, institutions are faced with the need to monitor and assess their model’s performance. Simply assuming that it is working as intended is a dangerous game.
Ongoing performance monitoring is the formal process for model owners to demonstrate – to themselves and to key stakeholders – that a model continues to be fit for use. This is accomplished through three distinct analyses:
Process verification – This includes items that may typically be viewed as controls (e.g., Are the data still correct? Is the math still working as it did during development?). It also extends to the key supporting assumptions, analyses, and processes. For example, CECL models often rely heavily on estimated loan prepayment rates, which must be incorporated into the ongoing monitoring framework even if the estimation of those assumptions is not the direct responsibility of the CECL model owner.
Sensitivity analysis – This includes identifying inputs that are subject to variability and deliberately changing them, individually and in tandem, to observe the change in the results. Not only does this allow the model owner to evaluate model stability through changing environments, it also allows them to identify model limitations – that is, finding where the model should not be used before those conditions occur in real time. (A minimal sketch of this exercise follows this list.)
Benchmarking – This includes comparing results, inputs, and other related metrics with the behaviors of the industry or relevant peer groups. By evaluating both the current level and the recent historical trend of these factors, management can identify whether model performance can be explained by the results of other models attempting to measure the same or similar conditions and expectations. (A sketch of a simple peer comparison also appears below.)
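To make the sensitivity exercise concrete, here is a minimal sketch in Python. It assumes the CECL model can be wrapped in a single function; `estimate_reserve`, its inputs, and every figure are hypothetical stand-ins for illustration, not any particular institution’s methodology.

```python
from itertools import product

# Hypothetical stand-in for the institution's CECL model: reserve expressed
# as a function of an annual prepayment rate (CPR) and an annualized loss
# rate. In practice this would call into the actual model.
def estimate_reserve(prepayment_rate: float, loss_rate: float) -> float:
    balance = 500_000_000                            # illustrative portfolio balance
    avg_life = 5.0 / (1.0 + 10.0 * prepayment_rate)  # faster prepayments shorten life
    return balance * loss_rate * avg_life

BASE = {"prepayment_rate": 0.08, "loss_rate": 0.0035}
SHOCKS = {"prepayment_rate": [-0.02, 0.02], "loss_rate": [-0.001, 0.001]}

base_reserve = estimate_reserve(**BASE)

# One-at-a-time shocks: vary each input while holding the others at base.
for name, deltas in SHOCKS.items():
    for delta in deltas:
        shocked = dict(BASE, **{name: BASE[name] + delta})
        change = estimate_reserve(**shocked) / base_reserve - 1.0
        print(f"{name} {delta:+.3f}: reserve {change:+.1%}")

# Combined ("in tandem") shocks: every pairing of the shocked inputs,
# to surface interactions that a one-at-a-time sweep would miss.
for cpr_d, loss_d in product(SHOCKS["prepayment_rate"], SHOCKS["loss_rate"]):
    shocked_reserve = estimate_reserve(BASE["prepayment_rate"] + cpr_d,
                                       BASE["loss_rate"] + loss_d)
    print(f"CPR {cpr_d:+.3f}, loss rate {loss_d:+.4f}: "
          f"reserve {shocked_reserve / base_reserve - 1.0:+.1%}")
```

The one-at-a-time sweep shows which inputs the reserve is most sensitive to, while the combined shocks surface interactions and candidate model limitations.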
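Benchmarking lends itself to the same kind of lightweight tooling. The sketch below compares an institution’s reserve coverage ratio with a peer group, quarter by quarter; the coverage figures and the 10-basis-point review tolerance are illustrative assumptions only, and peer data would come from call reports or a vendor benchmarking feed.

```python
# Illustrative quarterly reserve coverage ratios (allowance / total loans).
own_coverage  = {"2023Q1": 0.0112, "2023Q2": 0.0115, "2023Q3": 0.0121, "2023Q4": 0.0138}
peer_coverage = {"2023Q1": 0.0118, "2023Q2": 0.0119, "2023Q3": 0.0120, "2023Q4": 0.0122}

TOLERANCE = 0.0010  # 10 bps of divergence from peers triggers a review (illustrative)

for quarter in own_coverage:
    gap = own_coverage[quarter] - peer_coverage[quarter]
    flag = "REVIEW" if abs(gap) > TOLERANCE else "ok"
    print(f"{quarter}: own {own_coverage[quarter]:.2%} vs peer "
          f"{peer_coverage[quarter]:.2%} (gap {gap * 10_000:+.0f} bps) -> {flag}")

# Trend check: a gap that widens every quarter warrants investigation even
# if no single quarter breaches the tolerance.
gaps = [own_coverage[q] - peer_coverage[q] for q in own_coverage]
if all(later > earlier for earlier, later in zip(gaps, gaps[1:])):
    print("Gap to peers has widened every quarter -- investigate the drivers.")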
When developing a monitoring framework to capture these items, management should consider an iterative approach: start with the more basic analyses, then incorporate increasingly sophisticated analyses over time. To that end, there are core components that should be consistent across all analyses within the framework:
Thresholds and interpretation. Once an analysis is complete, the model owner should be able to identify what constitutes a good (or bad) outcome and why. (The sketch following this list shows one way to encode such thresholds.)
Scheduling. Every analysis within the framework should be conducted at a set frequency appropriate for the work being performed. For example, if underlying loss rates are updated quarterly, they should be monitored quarterly as well. For more intensive analyses, a less frequent schedule may be appropriate if the criticality of the analysis allows.
Governance. A monitoring framework must be visible to be effective. This requires the model owner to conduct analysis, report results through higher levels of management (including summary information to the Board), and establish a pathway for quickly escalating critical issues for more targeted oversight.
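One way to tie these three components together is a simple monitoring inventory in which every analysis carries its own frequency, thresholds, and escalation rule. The sketch below is illustrative: the two checks, their placeholder metrics, and the threshold values are all hypothetical, and a production framework would compute live metrics and route RED items through the institution’s actual escalation pathway.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    frequency: str                 # aligned with how often the inputs refresh
    metric: Callable[[], float]
    yellow: float                  # breach -> flag in the next scheduled report
    red: float                     # breach -> escalate immediately

def backtest_error() -> float:
    # Placeholder: relative error of predicted vs. realized losses last year.
    return 0.12

def reconciliation_breaks() -> float:
    # Placeholder: count of loans failing source-system reconciliation.
    return 3

INVENTORY = [
    Check("Backtest error", "quarterly", backtest_error, yellow=0.10, red=0.25),
    Check("Data reconciliation breaks", "quarterly", reconciliation_breaks,
          yellow=5, red=20),
]

for check in INVENTORY:
    value = check.metric()
    if value >= check.red:
        status = "RED: escalate outside the normal reporting cycle"
    elif value >= check.yellow:
        status = "YELLOW: note in the scheduled report to management"
    else:
        status = "GREEN: within expectations"
    print(f"[{check.frequency}] {check.name} = {value} -> {status}")
```

Keeping thresholds next to each metric forces the interpretation question up front, and the frequency field makes the schedule auditable rather than implicit.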
To help your organization get started in building an ongoing performance monitoring framework for your CECL model, or to augment an existing framework, the DCG team has constructed a quarterly monitoring report that incorporates many of the items discussed above, including sensitivity analysis and benchmarking, to help provide confidence that your model remains “fit for use.”
To learn more about Ongoing Performance Monitoring for CECL and receive a complimentary sample report featuring your institution’s data, contact the DCG CECL team.
ABOUT THE AUTHOR
J. Chase Ogden is a Quantitative Consultant with Darling Consulting Group's Quantitative Risk Analysis and Strategy team. As a practitioner at large and mid-sized financial institutions, Chase has experience in a wide array of modeling approaches, applications, and techniques, including asset/liability models, pricing and profitability, capital models, credit risk and reserve models, operational risk models, deposit studies, prepayment models, branch site analytics, associate goals and incentives, customer attrition models, householding algorithms, and next-most-likely product association.
Chase is a graduate of the University of Mississippi and holds master’s degrees in international commerce policy and applied statistics from George Mason University and the University of Alabama, respectively. A teacher at heart, Chase frequently serves as an adjunct instructor of mathematics and statistics.
© 2024 Darling Consulting Group, Inc.