4 Questions You're Not Asking About Model Validators, But Should Be


By Mark Haberland · 5 min read
Going Beyond the "Tick & Tie"

When you are scheduling the next validation of your key risk models, what’s going through your mind? Do you consider improving model performance? Ensuring your data is complete and accurate, and that it reasonably supports your assumptions? Does the validator’s experience with the model impact your decision?

The term “validation” means different things to different people. For some, it’s a compliance exercise that “checks a box,” while for others it’s a way to improve a process through independent review and effective challenge.

DCG’s experience has been that institutions that perform validations simply to comply with regulatory requirements typically look for the lowest-cost option (a “check the box” approach). When the validation decision is driven more by risk exposures and complexity, a more robust, strategic approach takes precedence. So how do you go about finding the right fit for your next model validation?

The key to knowing the best approach lies in understanding how the models are expected to perform as well as their limitations. But model owners often don’t know what to ask to get started. As you plan for your model’s next validation, consider these four questions that most practitioners don’t ask potential validators, but should:

1. What type of data review is included in the validation?

Most validators will say that a review of data is included as part of the validation, but it is important to understand exactly what they mean by “review.” Often, a validation’s data component includes (1) confirmation that files balance to the general ledger and (2) a review of column headings to ensure proper fields are included in the data downloads.

But that does not come close to telling the whole story of how important accurate data is to building a model’s foundation.

Say, for example, the files reconcile and include the proper fields; however, when the data was loaded into the model, the model owner inadvertently mapped the “Payment Frequency” field into “Repricing Frequency,” resulting in an overly asset-sensitive risk profile. An “audit” would not pick that up as an error, and looking at quarter-to-quarter comparisons also would not identify it as an issue, as the error was made in both periods.

If the validation included data benchmarking to compare cashflows and repricing characteristics from a different model, that error would be quickly identified and remedied.
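As an illustration, a lightweight benchmarking check might compare a single repricing field across two independent extracts keyed by a shared instrument ID. This is a minimal sketch; the field names, IDs, and data shapes are hypothetical, not from any specific ALM system:

```python
# Sketch of a data-benchmarking check: compare one field across two
# independent model extracts, keyed by a shared instrument ID.
# Field names and values below are illustrative assumptions only.

def benchmark_field(primary, benchmark):
    """Return instruments where the two extracts disagree on a field.

    A "Payment Frequency" column mistakenly mapped into "Repricing
    Frequency" would surface here immediately, even though both files
    reconcile to the general ledger.
    """
    return {
        iid: (primary[iid], benchmark[iid])
        for iid in primary.keys() & benchmark.keys()
        if primary[iid] != benchmark[iid]
    }

# Repricing frequency per instrument, as loaded from each system.
model_extract = {"L001": "monthly", "L002": "monthly", "L003": "monthly"}
benchmark_extract = {"L001": "monthly", "L002": "quarterly", "L003": "monthly"}

mismatches = benchmark_field(model_extract, benchmark_extract)
# mismatches -> {"L002": ("monthly", "quarterly")}
```

A reconciliation-only review would pass both extracts; only the field-level comparison exposes the mismapped instrument.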

In a recent DCG webinar, only half of the attendees who indicated that data review was included in the scope of their validations said the review extended beyond a high-level review and reconciliation; for the other half, no benchmarking was performed.

So be careful when you see “data review” when comparing validators, as not all approaches are the same.

2. How do you quantitatively test how accurately customer behaviors are reflected in the model? 

When it comes to model assumptions, a primary challenge of the model owner is capturing customer/member behaviors.

Studies (either internal or outsourced) may provide value for developing key assumptions. But if not regularly reviewed and updated, they can quickly become stale.

Occasionally, organizations incorporate vendor-supplied assumptions, which can be dangerous as they typically do not reflect the behaviors of specific customers/members.

Regardless of how model owners develop assumptions, it is important to understand that for models to be most beneficial, those assumptions must be sensitivity-tested.

Most critically, scenario testing and stress testing of key model assumptions not only provide a range of potential model outcomes, but they also can identify those assumptions that have the biggest impact on model results. Effective validations consider the breadth and depth of the testing and determine whether it is appropriate given current conditions.
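To make the idea concrete, here is a minimal sketch of sensitivity-testing one behavioral assumption, a non-maturity deposit decay rate. The constant-decay runoff formula and all numbers are deliberately simplified illustrations, not a production ALM methodology:

```python
# Minimal sensitivity test of one behavioral assumption: a non-maturity
# deposit decay rate. The constant-decay runoff formula and the numbers
# below are simplified illustrations, not a production methodology.

def projected_balance(start_balance, annual_decay, years):
    """Balance remaining after applying a constant annual decay rate."""
    return start_balance * (1 - annual_decay) ** years

def decay_sensitivity(start_balance, base_decay, shocks, years=5):
    """Re-run the projection under shocked decay assumptions and report
    the change in the ending balance versus the base case."""
    base = projected_balance(start_balance, base_decay, years)
    return {
        shock: projected_balance(start_balance, base_decay + shock, years) - base
        for shock in shocks
    }

# Shock the base 10% decay assumption by +/- 5 percentage points.
impacts = decay_sensitivity(1_000_000, 0.10, shocks=[-0.05, 0.05])
```

Runs like this show which assumptions move results the most, which is exactly the breadth and depth of testing an effective validation should probe.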

3. How do you help ensure models continue to perform as intended?

When a model is first implemented, owners should take care to ensure it is set up appropriately, inputs are sound, and it is fit for use and working as intended.

Over time, however, model performance can deteriorate – whether through environmental challenges, technology not keeping up, or turnover in modeling personnel, among other reasons.

How can model owners ensure the models they rely upon to make the most important decisions in their organizations are providing reliable and timely information?

Ongoing performance monitoring is a key facet of model risk management and should be part of each model’s review process. An effective validation should confirm that models are properly back-tested and/or benchmarked to ensure that model output continues to be consistent with actual results and reasonable as compared to peer groups or an alternative analysis.
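A back-test of this kind can be as simple as comparing each period's projection to the actual result and flagging errors beyond a tolerance. In this sketch the 5% threshold and the NII figures are illustrative assumptions, not regulatory standards:

```python
# Hypothetical back-testing check: flag periods where the relative error
# between projected and actual results exceeds a tolerance. The 5%
# threshold is an illustrative assumption, not a regulatory standard.

def backtest_breaches(projected, actual, tolerance=0.05):
    """Return {period: relative_error} for periods beyond the tolerance."""
    breaches = {}
    for period, forecast in projected.items():
        error = abs(forecast - actual[period]) / abs(actual[period])
        if error > tolerance:
            breaches[period] = round(error, 4)
    return breaches

# Projected vs. actual net interest income, in $ thousands (made-up data).
projected_nii = {"2024Q1": 4_100, "2024Q2": 4_250, "2024Q3": 4_300}
actual_nii = {"2024Q1": 4_050, "2024Q2": 3_600, "2024Q3": 4_280}

breaches = backtest_breaches(projected_nii, actual_nii)
# Q2's ~18% miss is flagged for investigation; Q1 and Q3 are within band.
```

Monitoring like this, run each period rather than once at validation time, is what keeps model output consistent with actual results between validations.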

4. How will my team benefit beyond just a validation?

An “audit” type approach to model validation can provide feedback on “what” needs improvement with a model and modeling process.

However, a more rigorous validation that provides effective challenge should also provide valuable context to findings that include “why” they are important to remedy and the potential impact of not addressing them, as well as “how” to implement the changes.

This educational aspect of the validation engagement can be the most long-lasting, as it can have a cultural impact on the organization relative to modeling and model risk that changes the focus of how models are constructed and how they are used.

The ongoing benefit can improve model performance, increase efficiency, and provide more useful reporting and output. Including industry “best practices” allows for additional process enhancements beyond what would be required by regulatory guidance. And, by having a better understanding of the process through effective challenge, a seasoned validator can help to customize process enhancements to optimize performance.

In the end, looking at a validation as an investment in the risk management process rather than an expense can lend a different perspective when choosing a validator. A low-cost alternative may seem like an efficient business practice, but consider the potential ramifications should that approach lead to models that provide incomplete or inaccurate information. A faulty model costs the organization far more, whether in dollars or in reputation, than a comprehensive validation.

Asking the right questions can be the difference between a validation that simply checks the box and one that strengthens models and decision-making. The next time you evaluate potential validators, consider digging deeper to understand what is behind the process.



Contact DCG to identify and ask the right questions that help ensure that your validation goes beyond mere compliance to deliver real business performance.



ABOUT THE AUTHOR


Mark Haberland is a Managing Director at Darling Consulting Group. Mark has over 25 years of experience providing balance sheet and model risk management education and consulting to the community and mid-size banking space. A frequent author and top-rated speaker on a wide array of risk management topics, Mark facilitates educational programs and workshops for numerous financial institutions, industry and state trade associations, and regulatory agencies.


Contact Mark Haberland: mhaberland@darlingconsulting.com or 978-499-8152 to learn more about DCG's approach to model validations and Model Risk Management.

© 2025 Darling Consulting Group, Inc.
