Tim Daniels

Effective Challenge: Necessary Evil or Valuable Opportunity?



MRM Framework | MRM Performance Monitoring | Model Validation



Everyone has that one friend, relative, or neighbor who always seems to ask the one question you don’t have an answer for. Though we appreciate and, in some cases, may even love these folks, that lingering question can be pretty irritating. Even worse, a simple, off-the-cuff chat can easily morph into an extended interview that leaves you wondering whether you understood what you were talking about in the first place. While modelers and business line managers may not necessarily love their risk managers, examiners, or model validators, these are the folks who have been asking about “effective challenge” in the modeling process (i.e., the one question you may not have an answer for) over the years. As it turns out, they have been doing you a favor.


Supervisory Guidance identifies effective challenge as a bedrock principle of model risk management and defines it as a “critical analysis by objective, informed parties who can identify model limitations and assumptions and produce appropriate changes.” But what does that really mean? It means you need someone to ask tough questions about the model you built (or the one you rely on) to make critical business decisions. In general, any financial institution that uses a model, no matter how simple or complex, should have a process for providing effective challenge to that model. The process is often performed in-house by individuals and/or committees with relevant expertise, independence, and authority. This could be Audit, Risk Management, a Management Committee, or some combination of these groups. Some financial institutions also engage a third-party validator to provide an independent analysis of a model in support of the effective challenge effort. As expectations for model risk management have increased over time, so have expectations regarding the level of effective challenge an institution should apply to its models. One way management can demonstrate effective challenge is to develop a more robust model validation program that asks deeper questions than the traditional, “check-box” validation or audit program of yesteryear did.


For example, a check-box model validation might simply verify that a relevant policy requires key model assumptions to be approved no less than annually. A model validation that helps demonstrate effective challenge would seek to determine if the governance and control processes related to the model are appropriate.


Specifically, the validation would ask:

  • Who is responsible for reviewing and approving key model assumptions (e.g., an individual, a committee, the board)?

  • When is a review and approval required (e.g., before the model run, after output is presented, at the next committee meeting, on an annual basis)?

  • How is the approval documented (e.g., meeting minutes, email chain, electronic sign-off)?


More broadly, an institution can demonstrate effective challenge through a combination of efforts (e.g., the audit plan, periodic model validations, rigorous assumption approval, and model performance monitoring processes), as long as those efforts are coordinated and designed to consider the key elements found in most (if not all) modeling processes, including:

  • Governance and control

  • Model inputs (data and assumptions)

  • Model processing

  • Model output

  • Performance monitoring

For example, a data audit can support effective challenge of the institution’s data quality by reconciling a representative sample of the data in a model back to a data source and then determining whether model documentation adequately describes the processes used to extract, transform (if applicable), and load that data into the model. Similarly, the scope of a model validation could be designed to include a review of the support for key model assumptions and the assumption approval process to determine whether the assumptions are reasonable and the approval is adequately supported and sufficiently documented. Depending on the complexity of the model, effective challenge could also be supported by testing components of model processing, such as the logic, lookup functions, and calculations used in the model, to determine whether they are properly designed and accurately executed. To tie these efforts together, institutions should develop an effective ongoing performance monitoring program that includes steps such as process verification, sensitivity analysis, and back-testing to ensure the model is functioning as intended and the output continues to make sense.
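To illustrate the back-testing step mentioned above, the sketch below compares a model’s prior forecasts against subsequently observed actuals and flags any period whose error exceeds a tolerance threshold. This is a minimal, hypothetical example: the function name, the 10% tolerance, and the sample figures are illustrative assumptions, not drawn from any particular institution’s monitoring program.

```python
# Hypothetical sketch of a simple back-testing check for ongoing
# performance monitoring: compare prior model forecasts to observed
# actuals and flag any period whose error exceeds a set tolerance.
# The 10% tolerance and the sample figures are illustrative only.

def back_test(forecasts, actuals, tolerance=0.10):
    """Return a list of (period, pct_error, breached) tuples."""
    results = []
    for period, (forecast, actual) in enumerate(zip(forecasts, actuals), start=1):
        pct_error = abs(forecast - actual) / abs(actual)
        results.append((period, round(pct_error, 4), pct_error > tolerance))
    return results

# Illustrative quarterly forecasts vs. actuals (in $ millions)
forecasts = [100.0, 105.0, 98.0, 120.0]
actuals = [102.0, 110.0, 85.0, 118.0]

for period, err, breached in back_test(forecasts, actuals):
    status = "BREACH - investigate" if breached else "within tolerance"
    print(f"Q{period}: error {err:.1%} ({status})")
```

In practice, a breach would trigger the kind of follow-up questions discussed in this article: whether the assumptions driving the forecast remain reasonable, and whether the exception and its resolution are documented for the approving committee.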


Model owners can also support the institution’s effective challenge efforts by enhancing documentation of the model and the modeling process. This often means the model owner will be required to provide more precise detail for key model inputs (i.e., data and assumptions) to ensure they are well understood, well supported, and used appropriately in the model. It also requires model owners to understand and acknowledge the dependencies and limitations of a model so they can be addressed during the effective challenge process.


In practice, a well-executed effective challenge process would include robust model validations that push model owners to enhance the effectiveness of the institution’s models by driving improvements in data quality, assumption development and documentation, and ongoing performance monitoring requirements. Such validations also serve to reduce the likelihood of inconsistencies between model documentation, model inputs, model processing, and the model output that is ultimately provided to a Management Committee and/or the Board.


In the end, you may never be able to answer the one question your friend, relative, or neighbor always seems to stump you with, but you should feel pretty confident the next time anyone asks you to demonstrate how you carry out effective challenge.



Learn more about Model Risk Management.



ABOUT THE AUTHOR


Tim Daniels is a Senior Risk Management Consultant for Darling Consulting Group. Over the past 10 years, his work has focused on understanding the framework different financial institutions have developed to measure, model, and monitor liquidity. This includes reviewing and assessing the systems and processes a financial institution uses to forecast and stress test liquidity. Over the course of his career, Tim has worked with institutions that range widely in terms of asset size, level of complexity, and core business.




© 2021 Darling Consulting Group, Inc.

DCG Insights