FAR Masterclass by Anna Gold: Judgment biases in auditing
23 June 2019
Anna Gold is professor of auditing at the Vrije Universiteit Amsterdam and adjunct professor at the Norwegian School of Economics in Bergen. Anna’s research interests lie in the area of judgment and decision-making, primarily applied to the field of auditing. Her work predominantly uses experimental methods, complemented by archival and qualitative research methods, and she has published in prestigious journals such as The Accounting Review, Auditing: A Journal of Practice & Theory, Journal of Accounting Literature, Journal of Business Ethics, and the International Journal of Auditing.
Although auditors are well-educated professionals, they are also human beings, which means they are susceptible to judgment biases. Drawing on psychological theories, four judgment biases were discussed during Anna Gold’s well-attended FAR Masterclass on March 29, 2019:
- Availability bias: the tendency to consider information that is easily retrievable from memory as being more likely, more relevant, and more important for a judgment.
- Anchoring bias: the tendency to insufficiently adjust away from an initial anchor.
- Overconfidence bias: the tendency to place greater confidence in our judgments than their objective accuracy warrants.
- Confirmation bias: the tendency to seek and overweight confirming evidence.
It cannot be stressed enough: awareness of the existence of these judgment biases is an important first step in reducing their adverse effects.
What are judgments and decisions?
Sarah Bonner defines a judgment as ‘forming an idea, opinion, or estimate about an object, an event, a state, or another type of phenomenon’ and a decision as ‘making up one’s mind about the issue at hand and taking a course of action’. Hence, judgments are an important ‘ingredient’ of decision-making. Ultimately, the judgment and decision-making process leads to issuing an audit opinion.
Important auditor decisions that require extensive professional judgment include establishing materiality, assessing risks, evaluating the effectiveness of controls, selecting substantive procedures, and assessing compliance with professional standards.
Often a predefined process is used for decision-making: (1) identify the decision problem; (2) identify decision criteria; (3) assign weights to the decision criteria; (4) identify decision alternatives; (5) rate the decision alternatives against the decision criteria; and finally (6) select an alternative.
During the FAR Masterclass, this predefined process was illustrated and discussed using the confirmation of receivables as an example. In a perfect world, you could walk through this kind of decision tree for every single decision during an audit and make rationally justified decisions, as the sketch below illustrates.
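To make steps (3) through (6) concrete, here is a minimal sketch of a weighted decision matrix applied to the receivables-confirmation example. The criteria, weights, and scores are hypothetical illustrations, not figures used in the masterclass:

```python
# Minimal sketch of a weighted decision matrix (steps 3-6 of the
# process above). Criteria, weights, and scores are hypothetical.
criteria_weights = {"reliability": 0.5, "cost": 0.2, "timeliness": 0.3}

# Score each alternative on each criterion (1-10 scale, illustrative).
alternatives = {
    "positive confirmation": {"reliability": 9, "cost": 4, "timeliness": 5},
    "negative confirmation": {"reliability": 6, "cost": 7, "timeliness": 7},
    "alternative procedures": {"reliability": 5, "cost": 8, "timeliness": 8},
}

def weighted_score(scores: dict[str, float]) -> float:
    """Sum each criterion score multiplied by its weight."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name, scores in alternatives.items():
    print(f"{name}: {weighted_score(scores):.1f}")
best = max(alternatives, key=lambda a: weighted_score(alternatives[a]))
print(f"Chosen alternative: {best}")
```

The alternative with the highest weighted score wins; note that changing the weights in step (3) can change the outcome, which is why those weights deserve explicit justification.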
Audit judgments and audit quality
Of course, the quality of the audit is only as good as the quality of the auditor’s judgment. The problem is that (unintentional) errors in those judgments can be detrimental to audit quality.
Robert Knechel sums up some examples of important auditing judgments and what can go wrong with them:
- Client acceptance and retention:
- accepting a client while the audit firm lacks adequate expertise, resources, access to audit evidence, or independence;
- accepting an engagement that will not be profitable;
- in the worst case: reputation loss and future lawsuits.
- Assessing the risk of material misstatement:
- assessing the risk too high: inefficient audits;
- assessing the risk too low: ineffective audits.
Audit researchers consider the constraints imposed by professional standards, ethics, codes of conduct, and legal liability when determining the ‘allowable decision space’ within auditing. However, a set of cognitive limitations is also at work, influencing the auditor’s decision-making. These cognitive limitations are known as ‘bounded rationality’: people often use simplified approaches (‘heuristics’) and, as a consequence, make systematic judgment errors (i.e. they have ‘cognitive biases’).
Thinking fast and slow
According to Nobel-prize winner Daniel Kahneman, individuals use two types of thought as they form judgments: System 1 thinking and System 2 thinking. System 1 thinking is fast, unconscious, and automatic; it drives everyday decisions and is error-prone. System 2 thinking is slow, conscious, and effortful; it handles complex decisions, is reliable, and (ideally) takes all information into account. Many decisions are based on a mixture of System 1 and System 2 thinking.
To give an extreme example, Anna referred to the decision to scratch your head if you’re experiencing an itch. That is most probably a System 1 process: you won’t follow a conscious decision path. A lot of decisions that are made during the audit also have such an automatic component. Auditors use shortcuts. Often these heuristics are useful, but we should be aware of them and of what potential problems they may cause.
Therefore, four common judgment biases were covered during the session: availability bias, anchoring bias, overconfidence and confirmation bias.
Availability bias
At several points during the masterclass, online input from the participants was requested and used to illustrate the biases. For the availability bias, Anna Gold conducted the following experiment.
‘Your friend has just described one of his neighbors as: “Steve is very shy and withdrawn, invariably helpful but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.” Is Steve more likely to be (a) a librarian; or (b) a farmer?’
In making such a judgment, we use stereotypical identifiers to draw conclusions instead of looking at probability. Sixty-seven percent of the participants thought that Steve is more likely to be a librarian, because being introverted is seen as more typical of a librarian. But there are 20 times more farmers in the world, so in terms of probability it is far more likely that Steve is a farmer than a librarian.
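A quick Bayes’ rule calculation makes this concrete. The 20:1 base rate comes from the text above; the stereotype-fit probabilities are assumed purely for illustration:

```python
# Illustrative base-rate calculation for the Steve example.
# The 20:1 farmer-to-librarian ratio is from the text; the
# stereotype-fit probabilities below are assumed for illustration.
p_librarian = 1 / 21          # prior: 1 librarian per 20 farmers
p_farmer = 20 / 21
p_fit_given_librarian = 0.9   # assumed: most librarians fit the description
p_fit_given_farmer = 0.1      # assumed: few farmers fit it

# Bayes' rule: P(librarian | fits description)
p_fit = p_fit_given_librarian * p_librarian + p_fit_given_farmer * p_farmer
p_librarian_given_fit = p_fit_given_librarian * p_librarian / p_fit
print(f"P(librarian | description) = {p_librarian_given_fit:.2f}")  # ~0.31
```

Even with a strong stereotype fit, the posterior probability that Steve is a librarian comes out at only about 31 percent under these assumptions: the base rate dominates.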
This relates to a common description of availability bias as the tendency to consider information that is easily retrievable from memory as being more likely, more relevant, and more important for a judgment.
Another example of the availability bias is fear of flying caused by the ‘news explosion’ that follows an airplane crash.
But what does this have to do with auditing? Some examples were discussed. During analytical procedures, client-provided explanations for unexpected variances are easy to recall, which may keep the auditor from looking for other explanations. The availability bias may also be active when identifying areas of risk while planning an audit: most auditors work on multiple audits concurrently, and risks encountered at those other clients come to mind easily.
Anchoring bias
In a second experiment during the Masterclass, the participants had to assess the height of Mount Everest. Few people know the exact figure, so everyone has to make some kind of estimate. The participants were split into two groups. Group A members had to assess whether the mountain is more or less than 13,500 meters high; Group B members, whether it is more or less than 600 meters high. Group A members on average estimated the height to be 7,642 meters; Group B members, 6,177 meters. This striking difference between the two averages shows that the reference point (the ‘anchor’) drives the answer. This is called the anchoring bias. By the way, the actual height of Mount Everest is 8,848 meters.
In professional settings too, people make assessments by starting from an initial value that is frequently arbitrary and (over-)relying on it to form a final judgment. The problem is that the adjustment away from the anchor is typically insufficient. This bias is particularly powerful in auditing, and hence highly relevant: audits are full of numerical anchors (e.g. management estimates, unaudited account balances). If auditors are unaware of this bias, they are not only subject to it but also vulnerable to manipulation by others (particularly their clients).
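The mechanism is often described as anchoring-and-adjustment: the final estimate only partially closes the gap between the anchor and the true value. Here is a minimal sketch with a hypothetical adjustment factor; the outputs illustrate the directional effect, not the masterclass results:

```python
# Minimal sketch of insufficient adjustment from an anchor.
# The adjustment factor k is hypothetical: k = 1 would mean full,
# unbiased adjustment; k < 1 models insufficient adjustment.
TRUE_HEIGHT = 8848  # meters, Mount Everest

def estimate(anchor: float, k: float = 0.6) -> float:
    """Adjust from the anchor toward the true value, but only partially."""
    return anchor + k * (TRUE_HEIGHT - anchor)

print(estimate(13500))  # high anchor -> 10708.8
print(estimate(600))    # low anchor  ->  5548.8
```

Whatever the exact value of k, as long as adjustment is incomplete (k < 1), a higher anchor mechanically produces a higher estimate, which is exactly the pattern observed in the experiment.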
Overconfidence bias
An overconfidence bias exists when ‘a person’s subjective confidence in his or her judgments is reliably greater than the objective accuracy of those judgments’. Most people are overconfident in their judgment abilities and do not acknowledge the actual level of uncertainty that exists. The participants were asked to assess their driving skills. In earlier studies, 87.5 percent of the population rated their driving skills above 5 (on a 10-point scale), with 60 percent ranking themselves at eight or above. In the audience, too, 76 percent of the participants assessed their driving skills as above average, which is similar to existing research evidence. This is statistically impossible: by definition, no more than half of a population can be better than the median driver. Hence, people are subject to overconfidence bias.
This bias is also potentially problematic for auditing, for example when auditors are overconfident in the quality of their judgments. Professional discretion should of course remain possible, but overconfidence can turn it into a problem. Audit research shows that auditors are generally overconfident in their own technical knowledge and abilities and in those of their staff. Auditors’ self-perceived abilities are sometimes not even correlated with actual performance. Fortunately, overconfidence does seem to decrease for more effective auditors.
A complicating phenomenon is that our society mainly rewards people for being overconfident.
Confirmation bias
The last bias that was discussed is confirmation bias: the tendency to search for, interpret, favor, and recall information in a way that confirms one’s preexisting beliefs, expectations, or hypotheses. People thus mainly look for corroborating evidence for a hypothesis instead of looking for disconfirming evidence. This is highly relevant for auditors. For example, research on analytical procedures shows that auditors who are strongly committed to a hypothesis place more weight on confirming than on disconfirming evidence. Furthermore, auditors who begin an audit judgment process with the perception that material errors are unlikely are more likely to discount new evidence suggesting that material errors exist. In addition, auditors given an indication of company failure select more cues suggesting failure than cues suggesting viability.
Break-out session and mitigation of bias effects
During a break-out session, small groups discussed which strategies are available for avoiding or mitigating the biases. A common conclusion was that time pressure suppresses System 2 thinking, potentially giving System 1 thinking greater influence.
Awareness and education are mentioned as crucial ingredients for auditors to be able to cope with cognitive biases. One participant notes that checklists might also help: although they have a bad reputation, checklists can force auditors to think more carefully about issues. Another participant stresses the importance of combining auditors with different personalities and backgrounds in audit teams. Furthermore, a proper tone at the top and the sharing of experiences are indicated as relevant. Auditors should consult and brainstorm with others, including external specialists.
Anna emphasizes that critical questions should be asked. Consider why something comes to mind. Does that properly represent the actual setting? Consider all the relevant information. Take it all in before developing hypotheses and don’t jump to conclusions. Make the opposing case and consider alternative explanations. Hence, look at an issue from different angles. Challenge subjective estimates and underlying assumptions. Make an independent judgment or estimate, instead of (only) using anchors.
As mentioned, awareness that people can make judgment errors might be the most important take-away. Making errors is inevitable and should be allowed; a proper error management culture is therefore highly valuable.