What can an ox tell us about how to run meetings better?
In 1906, Francis Galton studied the judgments made by contestants in a competition to guess the weight of an ox at a farmers' fair. Galton analysed the guesses of 787 participants. The crowd average was almost spot on: 1,197 lb for an ox that weighed 1,198 lb. However, the individual guesses varied widely, with a range of about 20%. Galton's term 'vox populi' has since morphed into "the wisdom of crowds". It is true that in some situations, averaging the judgments of a number of people will lead to a more accurate result. This, however, relies on some key factors which I'll come to in a minute.
In my last article, I discussed how 'Noise' causes inconsistency in decision making within a 'system' (for example, different analysts in the same company). A quick recap: any 'system' (which in a business context can be a company, a division, a department, etc.) has variability wherever multiple people do similar work that results in judgments or decisions. Examples might include what premium to charge for insurance, or what value to put on an asset or business.
So what steps can we take to reduce the variability in judgments within a system?
Galton demonstrated that making use of the knowledge in a group can improve judgments, so in trying to reduce noise it might seem like a good idea to have decision makers collaborate, discussing cases amongst themselves to arrive at a consensus.
This is an idea that on the face of it has some merit, but the way in which the collaboration is done can introduce our old nemesis, bias, into the mix. Imagine a typical meeting to discuss an investment. The chair of the meeting expresses their views to the group, then asks for input from the others present. The next person to speak will have taken into account the chair's comments and almost certainly modified their original view – perhaps out of respect for the chair's knowledge and experience, or through deference or fear of appearing foolish. In any case, they are likely to move closer to what they heard the chair say. Anyone following the first two will now look a little off-base if their views differ, and so the bias intensifies. Kahneman refers to this type of situation as an "informational cascade".
There is a lot at play here in terms of group dynamics and social pressure. But isn't this how most meetings are carried out? Without realising it, a group of rational people can arrive at an irrational conclusion – the dreaded 'groupthink'.
The essential factor for a crowd to be wise is independence. Each person should make their assessment independently before any discussion takes place, and ideally they should 'submit' their assessment to avoid the informational cascade. Of course, this can only happen in an open, honest environment. In most situations it is also the case that competence matters, i.e. uninformed members of a 'crowd' are less likely to judge well, and in specialist fields this can result in a poor judgment even where the crowd's answers are averaged.
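To see why independence matters, here is a minimal simulation. The error sizes, the chair's opening bias, and the 70/30 deference split are all illustrative assumptions, not data from Galton's study: independent guessers cancel out each other's errors when averaged, while guessers who anchor on what they have already heard inherit the chair's early error.

```python
import random

random.seed(42)
TRUE_WEIGHT = 1198  # lb, the ox from Galton's example

# Independent crowd: 787 private guesses, each with its own error.
independent = [TRUE_WEIGHT + random.gauss(0, 100) for _ in range(787)]

# Informational cascade: the chair opens well off the mark, and each
# later speaker mostly defers to the running average of what they heard.
cascade = [TRUE_WEIGHT + 200]  # chair's opening guess, 200 lb too high
for _ in range(786):
    heard = sum(cascade) / len(cascade)          # what the group has said
    own = TRUE_WEIGHT + random.gauss(0, 100)     # private, unbiased view
    cascade.append(0.7 * heard + 0.3 * own)      # deference to the group

err_independent = abs(sum(independent) / len(independent) - TRUE_WEIGHT)
err_cascade = abs(sum(cascade) / len(cascade) - TRUE_WEIGHT)
# The cascade's average typically stays much further from the true weight.
print(err_independent, err_cascade)
```

Under these assumptions, 787 independent guesses average to within a few pounds of the truth, while the deferential group never fully sheds the chair's opening error.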
Another way to sidestep the informational cascade and the social influence of group members on each other is to nominate one member to play 'devil's advocate'. This person is then given the explicit mission of challenging the views of the other group members, making it acceptable to do so (though this may not always be a popular role to fill!).
Other techniques can also help. Using a standardised process with objective criteria reduces variability. This is why some algorithms and AI models have been found to produce more reliable results than human decision makers. However, the training of these algorithms can 'bake in' any bias which might have existed in the criteria used in their design or 'training'.
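As a sketch of what a standardised process might look like, here is a hypothetical scoring rubric for an investment decision. The criteria and weights are invented for illustration: every analyst scores the same fixed criteria and combines them with the same fixed weights, so the process, rather than the individual, drives the result.

```python
# Hypothetical rubric: fixed criteria, fixed weights, scores 0-5.
CRITERIA_WEIGHTS = {"market_size": 0.4, "team": 0.3, "margins": 0.3}

def score(assessment: dict) -> float:
    """Combine an analyst's per-criterion scores into one weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * assessment[c] for c in CRITERIA_WEIGHTS)

# Two analysts applying the same rubric to the same case can only
# differ in their criterion scores, not in how they are combined.
print(score({"market_size": 4, "team": 3, "margins": 5}))
```

The point is not this particular rubric but the discipline: the judgment is decomposed into parts that are assessed separately and combined mechanically, which is also roughly what an algorithm does – along with whatever bias was built into the choice of criteria.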
What if you are a 'solopreneur', or you just have to make a judgment on your own, without the benefit of a team? One technique you can apply even when you are flying solo is the 'crowd within'. The idea is that you make an initial judgment, then sleep on it and start from the assumption that you were off the mark and made a poor judgment. From this revised starting point, you consider why you were wrong and reforecast, then you average the two estimates.
More on Francis Galton: https://galton.org/cgi-bin/searchImages/galton/search/pearson/vol2/pages/vol2_0469.htm