We audit a lot of models here at Numeritas and this question comes up from time to time. Some clients want their model to be ‘right’ and others seem not to care whether the model is right or not, as long as they get a ‘tick in the box’.

The difference often boils down to whether the model is a long-life model or a single-use model. A single-use model is one built for a very specific and limited purpose – for example, a model supporting a bid for a contract, or a transaction model. Once the deal is done the model is not expected to be used again. In contrast, a long-life model may be used as a corporate decision-making tool over several years, during which time it may support many decisions or be used to monitor the ongoing performance of a business or project.

So it is reasonable for there to be a higher expectation for long-life models than for single-use models. Curiously, long-life models are often used exclusively in-house, and since there is no third party demanding an audit, they are rarely subjected to one. The figures will be examined closely by people very familiar with the business, and the old ‘sniff test’ will often find many an error. Without a formal review, however, it is very likely that these models harbour unknown errors for years.


We consider there to be different categories of error (which I’ll explain later), so how worried should you be about errors in your model? Some types of error will never manifest themselves (being in an unused branch of logic) and many will not be material, so you may be lucky and never get caught. But would you overtake on a blind bend on a quiet country road at night? The chances of meeting something coming the other way are slim, but the consequences if it happens don’t bear thinking about.

So let’s consider a situation where we know there are errors, but they aren’t currently a problem. This is where we sometimes find ourselves with single-use models. Let’s take an example: a model assumes a construction schedule for a windfarm, commencing on 1 January following financial close in two months’ time.

The model auditor, who is on top of his game, identifies that this date is fixed in the model’s structure, rather than being an input that affects the logic of the calculations (what we refer to as ‘hard-structuring’). The model auditor points out that this is a limitation of the model – ie many of the calculations will be invalidated if the project is delayed. The modeller responds that ‘we intend to complete on schedule’.
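The distinction can be sketched in code (a hypothetical Python stand-in for a spreadsheet, with made-up figures and dates): when the start date is baked into the calculation logic, a slipped financial close silently invalidates the results, whereas an input-driven date flows through the calculations correctly.

```python
from datetime import date

# Hard-structured: the construction start date is baked into the logic.
# If financial close slips, the period calculations silently become wrong.
def revenue_hard_structured(monthly_revenue):
    start = date(2025, 1, 1)  # fixed in the model's structure (hypothetical date)
    months_operating = 12 - start.month + 1  # only valid for a 1 January start
    return monthly_revenue * months_operating

# Input-driven: the start date is an assumption that flows through the logic.
def revenue_input_driven(monthly_revenue, start):
    months_operating = 12 - start.month + 1  # valid for any start month in year one
    return monthly_revenue * months_operating

print(revenue_hard_structured(100))                 # 1200, correct only if the 1 Jan start holds
print(revenue_input_driven(100, date(2025, 4, 1)))  # 900: a delayed start is reflected
```

In a spreadsheet the equivalent fix is moving the date out of the formulas and into a labelled input cell that everything else references.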

The audit continues and eventually concludes that all other issues have been resolved. Then there is an unexpected turn of events – the turbine manufacturer wants to renegotiate the terms by which they share in the upside if the windfarm produces more energy than the base case forecast. This renegotiation takes a while and requires some changes to the revenue calculations. By the time this is resolved, the financial close date has had to be pushed back and the original schedule is now unlikely to be met. This is when our hard-structured date changes from being an ‘issue’ to being a ‘problem’ – assuming anyone remembers that the issue was ‘explained’ at the time because it wasn’t causing an error in the base case (we have approaches to make sure these things aren’t forgotten).

This is one fictional example, but all of these issues have arisen in projects I have worked on, and there are plenty more examples of an issue that was thought not to be important rearing its ugly head when something else changes.


I said I’d come back to this. We categorise errors into three main types.


Category 1 errors are those that are currently giving you a wrong answer in your model. Unless they are immaterial, they will certainly need to be fixed before an auditor will sign off.


Category 2 errors are more nuanced, and are the type of error we saw in the example above. These are errors that don’t currently produce a wrong output, but could do so if an assumption changed.

There are many situations where these can occur, for example:

  • a branch of logic is not currently in use (eg a setting in the model causes it to be excluded)
  • a zero input value returns the correct result despite a logic error – eg indexation category 3 (which has an error) has zero sales
  • the error only occurs if a threshold is breached – eg the accumulated losses calculation is incorrect, but the project remains profitable in all years.
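The zero-input case above can be sketched as follows (a hypothetical Python stand-in for a spreadsheet, with invented rates and sales figures): a logic error in indexation category 3 is invisible while that category has zero sales, and only surfaces when the input changes.

```python
# Hypothetical indexation rates per sales category.
rates = {1: 1.02, 2: 1.03, 3: 1.05}

def indexed_sales(sales_by_category):
    total = 0.0
    for cat, sales in sales_by_category.items():
        if cat == 3:
            rate = rates[2]  # BUG: category 3 wrongly picks up category 2's rate
        else:
            rate = rates[cat]
        total += sales * rate
    return total

# With zero category-3 sales the bug is latent: the output is correct.
print(indexed_sales({1: 100, 2: 200, 3: 0}))    # 308.0 - correct despite the bug
# As soon as category 3 has sales, the error manifests.
print(indexed_sales({1: 100, 2: 200, 3: 100}))  # 411.0, but should be 413.0
```

This is exactly why a base case that happens to produce the right numbers is no evidence that the logic is sound.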

These category 2 errors should ideally be corrected in any long-life model, and in single-use models if they are to be used for anything other than the current base case.


Category 3 errors are ‘best practice’ infringements. These might include:

  • missing, poor or incorrect labels
  • ‘hard-coded numbers’ – ie constant numbers mixed with cell references (eg 12 representing months in a year)
  • inconsistent use of columns for the same time period on different sheets (this is much more of a problem if you use named ranges)
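The ‘hard-coded numbers’ point is easy to illustrate (a hypothetical Python sketch; in a spreadsheet the equivalent is a labelled input cell): a constant 12 buried in a formula gives the right answer today, but a named constant documents intent and gives one place to change it.

```python
MONTHS_PER_YEAR = 12  # named constant: self-documenting, defined in one place

def monthly_cost_bad(annual_cost):
    return annual_cost / 12  # hard-coded 12 mixed into the formula

def monthly_cost_good(annual_cost):
    return annual_cost / MONTHS_PER_YEAR

print(monthly_cost_bad(1200))   # 100.0
print(monthly_cost_good(1200))  # 100.0 - same answer, but the intent is explicit
```

The risk with the hard-coded version is not that it is wrong now, but that a reviewer cannot tell what the 12 means, and a future change (eg moving to quarterly periods) has to hunt down every buried constant.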


Since most model audits are required for single-use models that support a transaction, whether these errors matter really depends on the agreed scope of the model audit. For this reason, it is important to agree the scope upfront and understand the implications. If you change the scope part way through the audit, the modeller may not have done the work required to support the revised scope, or the auditor may have to re-perform some of the tests to satisfy it.

What else should you be asking your model auditor?

Download our whitepaper “10 questions to ask your model auditor”