The distinction between problem-finding and problem-solving has been widely discussed in the modern Corporate Governance literature, but experience has taught me that people are often more concerned with providing a solution to a detected “anomaly” than with lucidly identifying and understanding its underlying causes. In this post, I would like to show that detecting an anomaly is as decisive as finding a remedy, if not more so.
Let’s start with a basic illustration. As a Risk analyst, you are in charge of managing and supervising a market risk tool that assesses the overall volatility of a portfolio of securities. The purpose of the tool is to generate an alert when an internal threshold is breached, and the model is highly sensitive to intra-day market volatility. One day, the bankruptcy of a major institution sends whipsaw effects through financial markets, and the model’s threshold is breached. You could act in any of the following ways:
1. produce statistics about the breach and report them to line management;
2. report nothing, and contact the Investment Manager so that the securities with the highest marginal contributions are excluded from the portfolio;
3. adjust the model parameters so that the tolerance level increases and no breach is reported;
4. do nothing, since the model serves a purely ornamental purpose.
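To make the scenario concrete, here is a minimal sketch of what such an alert tool might compute. All figures are hypothetical assumptions for illustration: three securities, fixed weights, ten days of daily returns including a bankruptcy-driven shock, and an assumed internal limit of 15% annualised volatility.

```python
import numpy as np

# Hypothetical daily returns of a three-security portfolio over ten
# trading days; the two marked rows mimic the whipsaw around a major
# institution's bankruptcy. Figures are illustrative assumptions.
returns = np.array([
    [ 0.001, -0.002,  0.003],
    [ 0.002,  0.001, -0.001],
    [-0.001,  0.002,  0.002],
    [ 0.003, -0.001,  0.001],
    [-0.045, -0.060, -0.052],   # day of the bankruptcy
    [ 0.030,  0.041,  0.025],   # whipsaw rebound
    [ 0.001,  0.000,  0.002],
    [-0.002,  0.001, -0.001],
    [ 0.002, -0.001,  0.000],
    [ 0.001,  0.002,  0.001],
])
weights = np.array([0.5, 0.3, 0.2])

THRESHOLD = 0.15  # assumed internal limit on annualised volatility

# Daily portfolio returns, then annualised volatility (252 trading days).
portfolio_returns = returns @ weights
annualised_vol = portfolio_returns.std(ddof=1) * np.sqrt(252)

if annualised_vol > THRESHOLD:
    print(f"ALERT: annualised volatility of {annualised_vol:.1%} "
          f"breaches the {THRESHOLD:.0%} threshold")
```

With these assumed figures, the two extreme days dominate the sample standard deviation and the alert fires, which is exactly the situation the analyst must now respond to.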
Among those possibilities there is, in my view, one optimal decision: the first. The last choice can be dismissed immediately; there is no point in managing a model that serves no purpose. The second option leads the Risk analyst to transfer their oversight responsibility to another department and undermines their own analytical role, so it is suboptimal. The third option is an easy way to avoid short-term complications; since the bankruptcy of a major institution is an extreme event, you might expect it not to recur, yet permanently loosening the tolerance to accommodate a one-off event blunts the model’s sensitivity going forward. That leaves the first option, in which the analyst reports and evaluates the anomaly. Here are some thoughts on why I believe it is the best one:
- You give yourself a longer time window to perform an in-depth portfolio analysis; you then have the opportunity to analyse how the model behaves not only when an extreme event occurs, but also in the days before and after it;
- You can analyse the marginal contribution of each security in the portfolio; it is a common pitfall to consider only what is most visible and has the highest marginal effect, and to downplay the other effects;
- You leave room for line management to make decisions, for instance about the model’s validity, or the significance to attach to the event;
- You understand how the model can or could be fine-tuned, by asking yourself: are we happy with the current model? Could specific parameters be added to make it more reliable if a similar event occurs in the future?
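The marginal-contribution analysis mentioned above can be sketched as follows. This is a standard risk decomposition, not the author’s specific tool: with an (assumed, illustrative) annualised covariance matrix and fixed weights, each security’s marginal contribution to portfolio volatility is the corresponding component of the covariance-weighted gradient, and the weighted contributions sum exactly to the portfolio volatility.

```python
import numpy as np

# Hypothetical annualised covariance matrix and weights for a
# three-security portfolio; all figures are illustrative assumptions.
cov = np.array([
    [0.040, 0.010, 0.008],
    [0.010, 0.090, 0.020],
    [0.008, 0.020, 0.060],
])
weights = np.array([0.5, 0.3, 0.2])

portfolio_var = weights @ cov @ weights
portfolio_vol = np.sqrt(portfolio_var)

# Marginal contribution of each security to portfolio volatility
# (d sigma / d w_i), and its total risk contribution (w_i times the
# marginal); the risk contributions sum to the portfolio volatility.
marginal = (cov @ weights) / portfolio_vol
contribution = weights * marginal

for i, (m, c) in enumerate(zip(marginal, contribution)):
    print(f"security {i}: marginal {m:.4f}, contribution {c:.4f} "
          f"({c / portfolio_vol:.1%} of total)")
```

A decomposition like this makes the pitfall above visible: one security may dominate the headline contribution while the others still carry a material share that excluding the “loudest” name would not remove.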
With this post, I want to highlight that anomaly detection requires more than simply acting so that things return to normal. Extreme values should be seen as an opportunity to ask ourselves WHY they occurred, WHAT precisely generated them, and HOW, if needed, our processes and models could be improved.