Introduction

Our previous article "IFRS 9: What to expect of an expectation" generated significantly more interest and feedback than usual. In this article we address some of the common themes from respondents' correspondence and consider whether firms' methodological choices create arbitrage opportunities in the market.

We would first like to extend our thanks to everyone who took the time to send their comments (publicly as well as privately). Correspondence focused on three important themes, which we discuss below:

  • Scenario Design;
  • How to estimate the probabilities of loss data points; and
  • The principle of parsimony.

We then explore whether firms' methodological choices lead to arbitrage opportunities for market participants.

Scenario Design

The IFRS 9 standard (as well as any credit manager looking to quantify and price risk) is interested in the expectation of a distribution of possible credit losses. The random variable of interest is credit loss, and by definition the expectation requires integration over the distribution of credit loss, as illustrated in our previous article. We labelled the loss data points as (credit loss quantum, cumulative likelihood of that loss occurring) and sought to interpolate the underlying distribution in order to recover the expectation.
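
To make this concrete, the sketch below, which uses entirely made-up loss data points, interpolates a cumulative distribution through such (loss, cumulative probability) pairs and recovers the expectation by numerical integration. Linear interpolation is used purely for brevity; other schemes could equally be used.

```python
import numpy as np

# Made-up loss data points: (credit loss quantum, cumulative likelihood of a loss
# no greater than that quantum). Real inputs would come from a firm's scenario suite.
losses = np.array([0.0, 10.0, 25.0, 60.0, 150.0])
cum_prob = np.array([0.0, 0.25, 0.50, 0.75, 0.95])

# Interpolate the cumulative distribution function on a fine grid of loss values.
grid = np.linspace(0.0, losses[-1], 10_000)
cdf = np.interp(grid, losses, cum_prob)

# Recover the expectation by integrating over the distribution: for a non-negative
# loss variable, E[L] is the integral of (1 - F(x)) dx. The tail beyond the largest
# data point is ignored in this toy example.
survival = 1.0 - cdf
ecl = float(np.sum(0.5 * (survival[:-1] + survival[1:]) * np.diff(grid)))  # trapezoidal rule
print(f"Interpolated ECL estimate: {ecl:.1f}")
```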

A common misconception is that the scenarios themselves require probabilities to be assigned. Whilst many banks are accustomed to terminology to the effect of a "1-in-10 scenario", this would only correspond to a 1-in-10 credit loss severity if a monotonic relationship existed between quantiles of scenario severity (however that may be measured) and quantiles of credit loss. It can be proven that this relationship is not monotonic, and the result is intuitive. A particularly severe recession followed by a recovery would in effect "take out" losses from the reporting date balance sheet early (and in the limiting case, wipe out the entire book), to the extent that quantiles of GDP, unemployment or bond spread in the outer years have next to zero impact on overall credit losses for that scenario.
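
A toy calculation illustrates the point. The sketch below uses made-up default rates, an opening exposure of 100 and a loss-given-default of 100%, and shows that once a severe year-one recession has removed most of the book, the severity assumed for year two barely moves the total loss.

```python
# Toy two-year run-off: defaulted exposure leaves the book and cannot default again.
# All default rates are made up; LGD is taken as 100% purely for simplicity.
def two_year_loss(pd_year1, pd_year2, exposure=100.0):
    loss_year1 = exposure * pd_year1
    surviving = exposure - loss_year1
    loss_year2 = surviving * pd_year2
    return loss_year1 + loss_year2

# After a severe year-one recession (90% default rate), year-two severity barely matters:
for pd2 in (0.01, 0.10, 0.50):
    print(pd2, two_year_loss(0.90, pd2))   # 90.1, 91.0, 95.0: a narrow range

# With a benign year one (2% default rate), year-two severity drives the outcome:
for pd2 in (0.01, 0.10, 0.50):
    print(pd2, two_year_loss(0.02, pd2))   # 2.98, 11.8, 51.0: a wide range
```

Because outer-year severity matters greatly in one state of the world and hardly at all in another, no single ranking of scenarios can line up one-for-one with a ranking of credit losses.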

Techniques exist for generating plausible macro paths by applying historically observed (or indeed any defined) exogenous shocks onto current conditions. Such techniques have the attractive properties of being explainable and reasonably objective. Whilst it may be reasonable to assume that such-and-such a macro outturn might occur once every X years, we reiterate that P(scenario) does not align to P(credit loss) in the general case.
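
A minimal sketch of the shock-overlay idea follows. The "historical" series here is simulated stand-in data, and a real implementation would use actual macro history, typically for several variables jointly; the eight-period window and the worst-episode selection rule are likewise assumptions made only for illustration.

```python
import numpy as np

# Stand-in for a real historical macro series (e.g. a GDP index); replace with actual data.
rng = np.random.default_rng(42)
history = 100.0 * np.cumprod(1.0 + rng.normal(0.005, 0.01, size=120))
current_level = history[-1]

# Historically observed period-on-period shocks, expressed as log changes.
shocks = np.diff(np.log(history))

def overlay_path(start_level, shock_window):
    """Apply a window of observed shocks onto current conditions to build a macro path."""
    return start_level * np.exp(np.cumsum(shock_window))

# Example: replay the worst contiguous 8-period episode in the history onto today's level.
horizon = 8
window_sums = np.array([shocks[i:i + horizon].sum() for i in range(len(shocks) - horizon + 1)])
worst_start = int(np.argmin(window_sums))
severe_path = overlay_path(current_level, shocks[worst_start:worst_start + horizon])
print(severe_path)
```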

In the next section, we explore whether probabilities can, or indeed should, be assigned at all.

Probabilities

In our previous article, we invoked the assumption that the data points fed into the interpolations are precise (i.e. they can be treated as free parameters in the overall ECL model). We also mentioned that, in practice, the cumulative probability of each loss data point cannot be determined with anything near certainty. In this section, we discuss this assumption further.

The only thing that can be said with certainty about each loss data point is that its probability of occurring lies between 0 and 1. With no historical observations of a given scenario, we arrive at the intuitive result that the cumulative probability for a given loss data point can only be set by human judgement. Perhaps unsurprisingly, the judgement around "was this loss a 1-in-10, or a 1-in-7, or maybe a 1-in-12?" is highly subjective, and leads to such large variance in estimates of probabilities that they cannot be treated as free parameters in the model. We would instead need to model another layer of distributions: the distribution of plausible values for each loss data point's probability. From a scientific perspective, this kind of model is unidentifiable, akin to an unobservable black box. Thus, it is a mathematical certainty that the use of subjective probability inputs to ECL calculations will lead to bias.
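
To see why this matters, the toy calculation below (re-using the made-up loss points from earlier) shows how the interpolated ECL moves as the judgement on a single severe loss point shifts between a 1-in-12, a 1-in-10 and a 1-in-7 view.

```python
import numpy as np

# Sensitivity of the interpolated ECL to the judgement attached to one loss data point.
# All figures are made up purely for illustration.
def ecl_from_points(losses, cum_prob, n=10_000):
    grid = np.linspace(0.0, losses[-1], n)
    cdf = np.interp(grid, losses, cum_prob)
    survival = 1.0 - cdf
    return float(np.sum(0.5 * (survival[:-1] + survival[1:]) * np.diff(grid)))

losses = np.array([0.0, 10.0, 25.0, 60.0, 150.0])
for one_in_n in (12, 10, 7):                      # subjective view on the severe loss point
    cum_prob = np.array([0.0, 0.25, 0.50, 1.0 - 1.0 / one_in_n, 0.95])
    print(f"1-in-{one_in_n}: ECL = {ecl_from_points(losses, cum_prob):.1f}")
```

In this toy example that one judgement call moves the ECL by roughly ten per cent, before any uncertainty in the other data points is even considered.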

An alternative is to use time-series techniques to model future uncertainty and solve directly for the distribution of credit losses; a minimal sketch of such an approach is given below. In the next section, we explore whether this would lead to a "better" model.
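
The sketch assumes, purely for illustration, that GDP growth follows an AR(1) process and that the annual default rate is a logistic function of growth; the ECL is then simply the mean of the simulated loss distribution. None of the parameter values are calibrated to any real portfolio.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, horizon, exposure = 20_000, 3, 100.0
phi, mu, sigma = 0.5, 0.02, 0.02   # assumed AR(1) persistence, long-run growth, shock volatility
a, b = -4.0, -25.0                 # assumed logistic link from growth to annual default rate

losses = np.zeros(n_paths)
for i in range(n_paths):
    growth, surviving, loss = mu, exposure, 0.0
    for _ in range(horizon):
        growth = mu + phi * (growth - mu) + rng.normal(0.0, sigma)
        pd = 1.0 / (1.0 + np.exp(-(a + b * growth)))   # lower growth -> higher default rate
        loss += surviving * pd                         # LGD taken as 100% for simplicity
        surviving *= 1.0 - pd
    losses[i] = loss

print(f"ECL (mean of the simulated loss distribution): {losses.mean():.2f}")
print("Loss quantiles (50th/90th/99th):", np.quantile(losses, [0.5, 0.9, 0.99]))
```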

The Principle of Parsimony

As the philosopher William of Ockham put it, "Frustra fit per plura quod potest fieri per pauciora": it is futile to do with more what can be done with less. In the modern application of scientific and statistical theory, we are perhaps more familiar with this as the principle of parsimony: we tend to favour the model that fits best with the least complexity, and use measures of complexity-adjusted goodness-of-fit to compare candidate models.

Metrics such as the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) have achieved widespread acceptance in the scientific community as a means of applying a "penalty" to pure goodness-of-fit, in proportion to the number of free parameters in each model (and, in the case of the BIC, the sample size).
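
As a reminder of the mechanics, the sketch below applies the standard definitions (AIC = 2k - 2 log-likelihood; BIC = k ln(n) - 2 log-likelihood) to two hypothetical candidate models. The log-likelihoods and parameter counts are invented, simply to show how a richer model can win on raw fit yet lose once penalised.

```python
import math

def aic(log_lik, k):
    """Akaike Information Criterion: 2k - 2 * maximised log-likelihood (lower is better)."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian Information Criterion: k * ln(n) - 2 * maximised log-likelihood (lower is better)."""
    return k * math.log(n) - 2 * log_lik

# Hypothetical comparison: the richer model fits slightly better but uses more free parameters.
n_obs = 200
candidates = {"simple": (-310.0, 3), "rich": (-305.0, 9)}
for name, (log_lik, k) in candidates.items():
    print(f"{name}: AIC = {aic(log_lik, k):.1f}, BIC = {bic(log_lik, k, n_obs):.1f}")
# Here the simple model wins on both criteria despite its poorer raw fit.
```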

It seems reasonable to question whether market approaches to estimating ECL deliver superior complexity-adjusted goodness-of-fit. For example:

  • The additional free parameters required to estimate hazard curves may (or may not) outweigh any improvement in goodness-of-fit with respect to a Markovian migration assumption;
  • The additional free parameters required in the use of cubic splines may (or may not) outweigh any improvement in goodness-of-fit with respect to having simply binned the explanatory variables; or
  • The additional free parameter required to model rating-level cyclicality (sometimes referred to as "PITness") may (or may not) outweigh any improvement in goodness-of-fit with respect to having assumed that ratings are 100% through-the-cycle.

Returning to the use of fixed scenarios, it is reasonable to ask how their AIC or BIC might compare with those of the time-series techniques seen in scientific, engineering and econometric applications. Two plausible interpretations present themselves:

  • A strict interpretation would be that the fixed-scenario approach does not model randomness at all. Therefore, the likelihood of any macro outturn that does not perfectly match a given scenario is precisely zero.
  • A less strict interpretation would be that the randomness has been shifted into the uncertainty around the probability of the loss associated with each scenario. As discussed above, the parameters of such a model are unidentifiable and its likelihood therefore cannot be computed.

Conclusions

To summarise, we have discussed:

  • That scenario probabilities are mathematically inconsistent with loss probabilities;
  • That loss probabilities cannot be estimated with any certainty; and
  • That distributional modelling approaches may deliver better complexity-adjusted goodness-of-fit.

It would be easy to surmise that many IFRS 9 models are not fit for purpose. We therefore draw particular attention to the fact that the IFRS 9 standard only requires unbiased estimates. Minimum variance and parsimony are not mentioned, leaving significant scope for firms to achieve material compliance with approaches that are mathematically sub-optimal.

Lastly, we pose the question: what if materially compliant yet mathematically sub-optimal estimates are also market-inconsistent? In this situation, market participants could in effect trade away excess impairment in the banking system until arbitrage opportunities disappear, releasing balance sheet to support lending growth and, ultimately, the macro-economy.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.