Cognitive Processes in Decision-Making

We make thousands of decisions every day, and most of them are made without our even being aware of it. The human brain processes an enormous stream of stimuli as we navigate the world, and its workings remain opaque to most of us. If we can better understand how these decisions are made, we may be able to create better outcomes for ourselves, or at least know where to start looking in order to avoid mistakes.

This short article summarizes some of the most influential ideas in the psychology of decision-making from the past few decades.


The Scope of the Topic

The need to make “good” decisions (those which result in the desired outcome) only becomes meaningful in situations where there can be said to be an “optimal”, or at least appropriate, use of the available information. Israeli-American psychologist Daniel Kahneman has termed such situations “high-validity” environments. They include games of perfect information, such as chess, and situations where the immediate effects of actions are predictable, such as building construction. Low-validity environments are, correspondingly, situations where the relationship between the available information and the resulting outcome is weak or unpredictable; one example would be stock-picking based only on a stock’s historical price. In low-validity environments, the question of choosing a decision-making method becomes moot, since a method that produces a good outcome once may not do so the next time around. For the bias of believing one is operating in a high-validity environment when one is not, Kahneman coined the term “illusion of validity”.[1]

Nor are we concerned with high-validity environments where mathematical models are available. In such cases, data-driven models have repeatedly proven more effective at making predictions than human experts. For example, in a 2000 study which has since been cited over 1,800 times (according to Google Scholar), a team of American researchers conducted a systematic review of 136 clinical studies from 1966-1988 comparing the predictions made by physicians and psychologists about their clients’ expected behavior against statistical models. The predicted behaviors spanned many domains, including outcomes as varied as college academic performance, job turnover at companies, business failures, and suicide attempts. The review found that the data-driven mathematical models outperformed expert judgement in 63 studies, performed equally well in 65, and did worse in only 8.[2] In a more recent study from 2015 (cited 57 times), a team of American researchers reviewing articles published from 1970-2011 found that clinicians’ expert predictions were only weakly correlated (r = 0.15) with actual patient outcomes, well below the accuracy the experts attributed to themselves; this gap between self-reported and actual accuracy is called the overconfidence bias.[3] Clearly, if statistical models are available, one would be wise to use them rather than relying on intuition alone, even as an “expert”.
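
To make the idea of “mechanical” prediction more concrete, the sketch below fits a fixed linear rule to synthetic historical data and applies it uniformly to every case, then compares it with a noisier, holistic stand-in for expert judgement. Everything here (the predictors, the outcome, and the simulated “expert”) is an illustrative assumption of mine, not data or methods from the cited studies; it is only meant to show what a simple data-driven model looks like in practice.

    import numpy as np

    # Synthetic "historical" cases: two predictors and an outcome they largely drive.
    rng = np.random.default_rng(0)
    n = 500
    gpa = rng.normal(3.0, 0.4, n)
    test_score = rng.normal(60.0, 10.0, n)
    outcome = 0.8 * gpa + 0.03 * test_score + rng.normal(0.0, 0.3, n)

    # Mechanical prediction: fit one linear rule, then apply it identically to all cases.
    X = np.column_stack([gpa, test_score, np.ones(n)])
    weights, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    model_pred = X @ weights

    # Stand-in for holistic expert judgement: same cues, but weighted inconsistently.
    expert_pred = 0.8 * gpa + 0.03 * test_score + rng.normal(0.0, 0.8, n)

    print("model r with outcome: ", round(float(np.corrcoef(model_pred, outcome)[0, 1]), 2))
    print("expert r with outcome:", round(float(np.corrcoef(expert_pred, outcome)[0, 1]), 2))

In this toy setup the model’s advantage comes purely from consistency: it applies the same weights to every case, which is one commonly cited reason why mechanical prediction tends to match or beat holistic expert judgement.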

Where does this leave human decision-making strategy? Still very relevant. In most situations, decisions must be made without much historical data to draw on. Not every outcome, or the chain of decisions that led to it, is recorded, to say nothing of whether a mathematical model has been derived from it. Furthermore, even if the data and expertise needed to construct such a model were on hand, few opportunities to make decisions would find it useful: in time-sensitive situations where the cost of a bad outcome is only small to moderate, building and consulting a model would be more trouble than it is worth. These rapid-fire, everyday decisions in high-validity situations, where the costs of incorrect evaluations are nonetheless palpable, are the target of this inquiry.

The Case for Expert Opinion

So how do good decisions get made? The previous discussion noted that in high-validity situations, human experts perform worse than statistical models. But how much worse? In many situations, not by much. As noted above, studies published over the past 40 years have found expert decisions to be significantly positively correlated with the outcomes actually observed.[3] In a more concrete example, a 1993 study by American researchers Crandall and Getchell-Reiter found that some neonatal intensive care unit nurses were able to detect sepsis in infants before blood test results were available, even though the nurses were initially unable to articulate how they knew. It later emerged that they had unknowingly recognized symptoms of sepsis, making this a classic case of expert intuitive decision-making being right.[4]

The “recognition” of scenarios and their associated outcomes has been identified by some psychologists as the essential ingredient of intuition. “Intuition”, as American psychologist Herbert Simon put it (and as American psychologist Gary Klein reiterates), “is nothing more and nothing less than recognition”.[5], [6] Taking the NICU example and the meta-analysis above together, what appears to be required for “adequate” predictive ability is simply a great deal of exposure to the desired domain of knowledge. In other words, experience. It should be noted that experience improves predictive ability only within the specific domain in which it was gained; as Klein notes, expertise is fractionated.[6] For example, weather forecasters do a better job predicting typical weather events like rain than predicting events which happen less often, such as hail.[7]

Dual Process Thinking

Once intuition has been built up through experience, judgements that would require real effort from a layperson may begin to feel effortless. For experts, decisions in complex situations can be made on the spur of the moment, without much deliberate thought. For example, it has been suggested that a large part of the skill of expert chess players comes from their ability to rapidly recall a large repertoire of previously seen situations. In one experiment, expert chess players could recall arbitrary chess positions with 93% accuracy after being exposed to them for only 2-15 seconds.[8] Unskilled players, on the other hand, play almost exclusively by simulating the game forward through trees of possible moves.[8] Being able to make accurate judgements rapidly is a definite advantage, so what mental process causes this split to occur?
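
As a rough illustration of the difference between exhaustive forward simulation and recognition, the sketch below searches the full move tree of a toy take-one-or-two-stones game instead of chess. The game, and the contrast between the search and the pattern it uncovers, are illustrative assumptions of mine, not material from the cited chess study.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def current_player_wins(stones):
        """Deliberate, Type-2-style analysis: exhaustively simulate the move tree.

        A position is winning if at least one legal move (take 1 or 2 stones)
        leaves the opponent in a losing position; taking the last stone wins."""
        if stones == 0:
            return False  # no stones left: the player to move has already lost
        return any(not current_player_wins(stones - take)
                   for take in (1, 2) if take <= stones)

    # The shortcut an experienced player would simply "recognize" rather than compute:
    # the position is lost exactly when the pile size is a multiple of three.
    print([n for n in range(1, 13) if not current_player_wins(n)])  # [3, 6, 9, 12]

The search and the recognized pattern give the same answers; the difference is that the pattern, once learned, costs almost nothing to apply, which is roughly the advantage the chess experiments attribute to experts.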

The modern conversation around these ideas stems from the work of British psychologists Peter Wason and Jonathan Evans. In a 1974 article, Wason and Evans described an experiment with 24 individuals involving cards with a letter printed on one side and a number printed on the other. Subjects were asked to identify the only cards that needed to be turned over in order to determine whether a given rule held for every card in the set (e.g. “if there is a B, then there will be a three on the other side”), and then to give a reason for their choice. The authors hypothesized that, due to a matching bias (a cognitive bias manifesting itself here as a tendency to choose exactly the cards named in the rule), rules of one form would usually be answered incorrectly, while rules of the other form would be answered correctly. The results were consistent with this hypothesis, but they also revealed that while subjects who answered the easier form correctly cited similar reasons, subjects who answered the harder form incorrectly gave quite different justifications for their incorrect answers. This suggested to Wason and Evans that in some situations people make decisions through conscious reasoning and in others on subconscious grounds (i.e. via the bias), with a justification constructed after the fact.[9]
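
The logic of the selection task itself is easy to state in code. The sketch below works out which cards must be turned over to test an affirmative rule of the kind quoted above; the particular visible faces used are my own illustrative choice, not the stimuli from the original experiment.

    def must_flip(face, antecedent="B", consequent="3"):
        """A card needs turning over only if its hidden side could falsify the rule
        'if <antecedent letter>, then <consequent number> on the other side'."""
        if face.isalpha():
            # Only the antecedent letter can falsify the rule (its hidden number
            # might not be the consequent); other letters are irrelevant.
            return face == antecedent
        # Only numbers other than the consequent can falsify the rule (their
        # hidden letter might be the antecedent); the consequent itself cannot.
        return face != consequent

    visible_faces = ["B", "K", "3", "7"]
    print([f for f in visible_faces if must_flip(f)])  # ['B', '7']

Matching bias pulls subjects toward the cards named in the rule, B and 3, whereas the logically required choices are B and the non-matching number 7; this is the mismatch between intuitive choice and correct choice that Wason and Evans exploited.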

These ideas eventually developed into what is now called “dual process thinking” or “dual process theory”, today a clinically relevant area of research.[10], [11] Dual process theory holds that cognition occurs in two modes: a rapid, intuitive mode called “Type 1” and a slower, analytical mode called “Type 2”. Type 1 thinking is used when making an initial evaluation of a problem, generating ideas about where the solution might lie based on experience; Type 2 thinking is employed when following a systematic procedure. As noted in a 2017 review by a group of Canadian, American, and Dutch researchers, errors can arise from both modes, despite Type 2 being the more “careful” of the two.[11] For example, a 2014 study by a team of Canadian researchers found no statistically significant difference in diagnostic accuracy between equally skilled second-year medical students (n = 204) told “to be careful, thorough, and reflective” and those told to work “as quick as possible but not make mistakes” on diagnostic case studies.[12]

The authors of the review note that Type 1 errors arise from the heuristics and biases inherent in cognition (shortcuts taken to reduce the mental effort of decision making), whereas Type 2 errors are attributed more to the limits of working memory.[6], [11] Working memory refers to the limited capacity the brain has for holding new information or objects in mind while solving a problem. For visual working memory, a 2016 study by Canadian researchers found evidence that the mind can keep the visual features of only three or four objects in mind while problem solving (Type 2), and that this capacity appears to differ between individuals.[13]

Anchoring and Availability Bias

Because Type 1 reasoning occurs so quickly, it is unfortunate that the heuristics and biases to which it is prone are so numerous; one textbook on the psychology of decision making lists 53 of them.[14] There is one bias, however, on which the reviewers above note a significant amount of published research: the availability bias. First described by Tversky and Kahneman in 1973, the availability bias is the tendency to judge the probability of an event by how easily similar, more familiar events come to mind. In the clinical setting, a team of Dutch researchers found in 2010 that medical residents (n = 36) who had been asked to evaluate the quality of diagnoses for a set of cases tended to reproduce those same diagnoses when asked to diagnose new, similar cases right afterwards.[15] A related effect, “anchoring”, can be exploited in advertising: a 2005 study by American researchers found that comparing a cheaper car to a more expensive car in an advertisement increases the cheaper car’s perceived value.[16]

More information about availability bias, and the dozens of other cognitive biases, can be found in the textbook cited above.[14]

Conclusion

This short article has described the value of finding successful decision-making methods in high-validity situations. In most such cases, better performance can be expected simply from accumulating experience at making decisions within the desired domain, because experience builds pattern recognition. Once built, intuitive judgements are made quickly via the so-called “Type 1” system, whereas systematic effort to solve a new problem engages the “Type 2” system. Neither system is better in all cases, and each suffers from its own problems; in particular, Type 1 thinking is more prone to cognitive biases such as the availability bias. One should therefore be aware of the shortcuts the mind employs to ease decision making, and set them aside when the situation requires it.

References

[1] A. Tversky and D. Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science, vol. 185, no. 4157, pp. 1124–1131, Sep. 1974.
[2] W. M. Grove, D. H. Zald, B. S. Lebow, B. E. Snitz, and C. Nelson, “Clinical versus mechanical prediction: A meta-analysis,” Psychol. Assess., vol. 12, no. 1, pp. 19–30, 2000.
[3] D. J. Miller, E. S. Spengler, and P. M. Spengler, “A meta-analysis of confidence and judgment accuracy in clinical decision making.,” J. Couns. Psychol., vol. 62, no. 4, p. 553, 2015.
[4] B. Crandall and K. Getchell-Reiter, “Critical decision method: A technique for eliciting concrete assessment indicators from the intuition of NICU nurses,” Adv. Nurs. Sci., vol. 16, no. 1, pp. 42–51, 1993.
[5] H. A. Simon, “What is an ‘Explanation’ of Behavior?,” Psychol. Sci., vol. 3, no. 3, pp. 150–161, May 1992.
[6] D. Kahneman and G. Klein, “Conditions for intuitive expertise: A failure to disagree,” Am. Psychol., vol. 64, no. 6, pp. 515–526, 2009.
[7] T. R. Stewart, P. J. Roebber, and L. F. Bosart, “The Importance of the Task in Analyzing Expert Judgment,” Organ. Behav. Hum. Decis. Process., vol. 69, no. 3, pp. 205–219, 1997.
[8] N. Charness, E. M. Reingold, M. Pomplun, and D. M. Stampe, “The perceptual aspect of skilled performance in chess: Evidence from eye movements,” Mem. Cognit., vol. 29, no. 8, pp. 1146–1152, 2001.
[9] P. C. Wason and J. S. B. T. Evans, “Dual processes in reasoning?,” Cognition, vol. 3, no. 2, pp. 141–154, 1974.
[10] J. S. B. T. Evans, Hypothetical Thinking: Dual Processes in Reasoning and Judgement. Taylor & Francis, 2007.
[11] G. R. Norman, S. D. Monteiro, J. Sherbino, J. S. Ilgen, H. G. Schmidt, and S. Mamede, “The Causes of Errors in Clinical Reasoning: Cognitive Biases, Knowledge Deficits, and Dual Process Thinking,” Acad. Med., vol. 92, no. 1, 2017.
[12] G. Norman et al., “The Etiology of Diagnostic Errors: A Controlled Trial of System 1 Versus System 2 Reasoning,” Acad. Med., vol. 89, no. 2, 2014.
[13] J. M. Gaspar, G. J. Christie, D. J. Prime, P. Jolicœur, and J. J. McDonald, “Inability to suppress salient distractors predicts low visual working memory capacity,” Proc. Natl. Acad. Sci., vol. 113, no. 13, pp. 3693–3698, Mar. 2016.
[14] J. Baron, Thinking and Deciding, 4th ed. Cambridge University Press, 2006.
[15] S. Mamede et al., “Effect of Availability Bias and Reflective Reasoning on Diagnostic Accuracy Among Internal Medicine Residents,” JAMA, vol. 304, no. 11, pp. 1198–1203, Sep. 2010.
[16] S. Van Auken and A. J. Adams, “Validating across-class brand anchoring theory: Issues and implications,” J. Brand Manag., vol. 12, no. 3, pp. 165–176, 2005.
