Most people think their spreadsheets are more accurate than they really are. Research shows that when modellers work alone, they estimate an 18% chance that their work contains an error. The reality is very different: 86% of them make mistakes.
At Numeritas, our own experience backs this up. Out of more than 200 model audits, we’ve only ever seen one model with no numerical errors.
Why are spreadsheets so error-prone, and what can be done about it?
The danger of alpha bias
The culprit is what we call alpha bias. It’s the overconfidence that comes from believing your work is correct because you’re competent and experienced.
Paradoxically, research shows that when people do spot errors in their models, they often become more confident rather than less. The logic goes something like this: “I’ve found a mistake, so I must be good at checking my work – and if I caught this one, I’ve probably caught them all.”
That’s rarely true. Errors hide in unexpected places, and overconfidence blinds us to them.
Why spreadsheets aren’t like software
When we use software, we expect it to work reliably. That’s not because software is inherently less error-prone than spreadsheets — in fact, it’s often far more complex. The difference lies in the process.
In commercial software development, testing is non-negotiable. It can consume up to half of the total project time, and in some approaches, such as test-driven development, tests are written before the first line of code. Testing isn’t an afterthought; it’s a core discipline.
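As a rough illustration of the test-first idea, here is a minimal Python sketch; the payment function, its figures, and the benchmark value are invented for this example, not drawn from any particular project. The expected answer is pinned down in a test before the calculation itself is written.

```python
# Test-first sketch: the expected behaviour is fixed in a test
# before the calculation exists. All names and figures are illustrative.

def test_payment():
    # Known benchmark: 100,000 borrowed at 6% over 30 years
    # costs roughly 599.55 per month.
    assert abs(payment(100_000, 0.06, 30) - 599.55) < 0.01

def payment(principal, annual_rate, years):
    """Level monthly payment on an amortising loan."""
    r = annual_rate / 12    # monthly interest rate
    n = years * 12          # number of payments
    return principal * r / (1 - (1 + r) ** -n)

test_payment()  # fails loudly if the formula is wrong
```

The order matters: because the benchmark exists first, the formula has to earn its place by passing it, rather than being trusted because it looks right.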
By contrast, in spreadsheet modelling, testing is often minimal or skipped altogether. We tend to trust our own logic and assume the model is right – until something goes wrong.
Building better models
The good news is that spreadsheet modelling can learn a lot from software engineering. A few principles make a huge difference:
1. Assume there are errors
The safest mindset is to believe your model contains mistakes until testing proves otherwise. Overconfidence is the enemy of accuracy.
2. Test at multiple levels
- Formula level: check individual calculations.
- Module level: confirm that sections of the model work as expected in isolation.
- Analytical review: ask whether the outputs make sense in the real-world context of the business.
Each level catches a different type of issue.
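To make the three levels concrete, here is a sketch of what they might look like if a model's logic were written out in Python. The mini-model, its names, and its numbers are all invented for illustration; the same checks can be built in a spreadsheet itself with test cells and sanity flags.

```python
# Hypothetical mini-model: revenue = units * price,
# gross margin = revenue - total unit costs.

def revenue(units, price):
    return units * price

def gross_margin(units, price, unit_cost):
    return revenue(units, price) - units * unit_cost

# 1. Formula level: check a single calculation against a hand-worked value.
assert revenue(10, 5.0) == 50.0

# 2. Module level: check a section of the model in isolation.
assert gross_margin(units=100, price=12.0, unit_cost=7.0) == 500.0

# 3. Analytical review: do the outputs make real-world sense?
# A margin above revenue, or a negative one on profitable assumptions,
# signals a structural error somewhere upstream.
m = gross_margin(units=1_000, price=12.0, unit_cost=7.0)
assert 0 <= m <= revenue(1_000, 12.0), "margin outside plausible range"
```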
3. Scope before you build
Define the purpose of the model upfront. What questions does it need to answer? What scenarios should it test? If you need to rewrite logic later just to get the outputs you want, complexity rises and the risk of error multiplies.
4. Structure inputs, calculations, and outputs
A well-constructed model separates assumptions (inputs), processing (calculations), and results (outputs). This makes it easier to adapt and test without tampering with the underlying engine.
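As a sketch of that separation outside a spreadsheet (every name and figure here is hypothetical; the structure is the point):

```python
from dataclasses import dataclass

# Inputs: assumptions live in one clearly labelled place.
@dataclass(frozen=True)
class Assumptions:
    opening_cash: float
    monthly_revenue: float
    monthly_costs: float
    months: int

# Calculations: a pure "engine" that only reads the assumptions.
def cash_balance(a: Assumptions) -> list:
    balance, path = a.opening_cash, []
    for _ in range(a.months):
        balance += a.monthly_revenue - a.monthly_costs
        path.append(balance)
    return path

# Outputs: reporting reads results without touching the engine.
base = Assumptions(opening_cash=50_000, monthly_revenue=20_000,
                   monthly_costs=18_000, months=12)
print(f"Closing cash after {base.months} months: {cash_balance(base)[-1]:,.0f}")
```

Because the assumptions are the only thing that varies, running a downside scenario means supplying a new set of inputs and rerunning the same untouched engine, which is exactly what a well-separated spreadsheet allows.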
5. Consider independent review
When the stakes are high – whether for an investment, valuation, or strategic decision – a fresh set of eyes is invaluable. Independent model audits catch what original builders often miss.
The payoff of discipline
The best models aren’t those with the most complex logic or clever formulas. They’re the ones you can trust when the numbers matter most.
By applying the disciplines of scoping, structuring, testing, and independent review, you move from fragile spreadsheets that invite risk to robust tools that inspire confidence.
In financial modelling, accuracy isn’t just about numbers – it’s about decisions. And decisions built on a foundation of testing are always stronger.