Cognitive biases are errors in human thinking, deviations from rational judgment that lead to illogical conclusions. When such conclusions stem from our tendency to evaluate arguments in light of our pre-existing beliefs, the phenomenon is called belief bias. This paper is concerned with the way cognitive science scholars, in particular Keith Stanovich, view the phenomenon and its components.
A person who uses the resources at their disposal to behave in ways that help them obtain what they most want engages in instrumental rationality; the rational mind is geared toward maximizing expected utility in choice. However, human beings often display astounding degrees of irrationality when attempting to draw logical conclusions. Consider the well-known rose problem: all living things need water; roses need water; therefore, roses are living things.
The conclusion, though believable, is not valid, because it does not follow logically from the two premises. Now consider the mice problem: all insects need oxygen; mice need oxygen; therefore, mice are insects. In this case, our prior knowledge that mice are not insects makes the logical invalidity of the argument easy to see, whereas in the rose problem the same invalid structure is masked because one already believes that roses are living things. Problems arise, then, when such beliefs become implicated in deductive reasoning.
When asked to evaluate an argument that lies outside the scope of one's factual knowledge, one can judge whether the conclusion follows, because pre-existing beliefs cannot influence the reasoning. All in all, the constraints that underlying heuristics impose on the decision-making process cause systematic errors in rational thinking. The faulty inference arises because the respondent accepts a conclusion whenever it is true in all of their preferred models of the premises.
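To make the shared structure of these errors explicit, both the rose and the mice problems instantiate the same logically invalid form, sketched here in standard first-order notation (the predicate letters A, B, and C are illustrative placeholders, not drawn from the sources discussed):

\[
\forall x\,(A(x) \rightarrow B(x)),\;\; \forall x\,(C(x) \rightarrow B(x)) \;\not\models\; \forall x\,(C(x) \rightarrow A(x))
\]

A single counterexample model, one in which some C is a B without being an A, shows that the premises can be true while the conclusion is false. Belief bias amounts to checking the conclusion only against the models one already finds plausible, so the counterexample is never constructed.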
According to Dutilh Novaes (2012), these preferred models serve to reflect one's current beliefs, as well as commitments that are supported by false information. Stanovich considers this tendency to automatically use all the contextual information pertinent to a problem, even when instructed to ignore the real-world believability of the conclusion, to be one of the fundamental computational biases of the autonomous set of systems. Moreover, on this default-interventionist approach, beliefs generate a fast, default response that subsequent logical processing is often unable to overturn.
In contrast, a study by Trippas, Thompson, and Handley (2017) proposes a parallel-processing model in which belief-based and logic-based responses are generated concurrently. The results of the first experiment, which crossed believability with logical validity in a deductive reasoning task, indicated that the degree of interference depends on the complexity of the argument (Trippas et al., 2017). For simple arguments, the belief-logic conflict interfered more with belief judgments than with logic judgments, and this effect was eliminated as the logical arguments became more complex.
The second experiment tested the hypothesis that a further increase in complexity would lead to a complete reversal of the effect. The participants were given a syllogistic reasoning task with arguments of varying complexity. The findings confirmed the hypothesis: with the more complex syllogisms, beliefs interfered with logic judgments to a greater degree. Thus, the assessment of validity proceeds alongside the assessment of believability, and the direction and extent of the interference hinge on the relative complexity of the corresponding stages of analysis (Trippas, Pennycook, Verde, & Handley, 2015).
The study by Dutilh Novaes and Veluwenkamp (2017) on preferred models has demonstrated that in our reasoning we take into account only the models that are most plausible given our beliefs and prior knowledge. The findings have also shown that, from the perspective of preferential logic, certain assumptions and beliefs contribute to the preference ordering over the identified models, while screened beliefs simply fix that preference ordering from the very start (Dutilh Novaes & Veluwenkamp, 2017, p. 50). Screened revision relies on core beliefs that must stay unchanged, whereas in standard preferential reasoning the pre-existing knowledge may be revised.
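As a rough sketch of the distinction, in standard preferential semantics (the notation here is illustrative and not quoted from the paper), a conclusion is accepted when it holds in the most plausible, rather than in all, models of the premises:

\[
\Gamma \mathrel{|\!\sim} \varphi \quad \text{iff} \quad \varphi \text{ holds in every } \preceq\text{-minimal model of } \Gamma,
\]

where the ordering \(\preceq\) ranks models by plausibility in light of the reasoner's background beliefs. Screened revision then adds the constraint that a designated core of beliefs is held fixed: incoming information may reorder the remaining models, but it can never overturn the core.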
The study by Trippas, Handley, and Verde (2013) on the role of cognitive ability in mediating the effects of believability demonstrated that response bias is one of the components of belief bias. The researchers note that "although higher cognitive capacity seems to be associated with the use of reasoning strategies, … the best reasoners ignore these strategies and apply optimal reasoning to all problems regardless of believability" (Trippas et al., 2013, p. 1400). Cognitive ability is shown to be one of the factors mediating reasoning strategies, alongside task familiarity, reasoning expertise, and motivation. Most importantly, beliefs are stated to affect a range of outcomes beyond a possible response bias; for instance, they may change the choice of strategies used for argumentation (Trippas et al., 2013).
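The signal detection analysis underlying this claim can be stated compactly. In a minimal sketch of the standard decomposition (the exact parameterization used by Trippas et al., 2013 may differ), accuracy and response bias are estimated separately from the hit rate \(HR\) (valid arguments endorsed) and the false alarm rate \(FAR\) (invalid arguments endorsed):

\[
d' = z(HR) - z(FAR), \qquad c = -\tfrac{1}{2}\bigl[z(HR) + z(FAR)\bigr],
\]

where \(z\) denotes the inverse of the standard normal cumulative distribution function. On this view, belief bias as response bias appears as a shift in the criterion \(c\) between believable and unbelievable arguments, whereas a genuine effect of believability on reasoning quality appears as a difference in sensitivity \(d'\).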
Continuing their research on the belief bias phenomenon, Trippas et al. (2015) conducted a study on the effects of analytic cognitive style on reasoning about unbelievable conclusions, that is, on motivated reasoning. The findings demonstrated that analytical thinking is linked to the latter, a component of belief bias. The researchers note that "interpreted in terms of dual-process theory (DPT), these findings suggest that the response bias component of belief bias is a marker of T1 processing. In contrast, reasoning accuracy and motivated reasoning appear to be determined by T2 processing" (Trippas et al., 2015, p. 442).
Keith Stanovich, the author of The Robot’s Rebellion: Finding Meaning in the Age of Darwin, distinguishes two types of processes in his DPT model: rapid, autonomous processes that produce default responses (Type 1), and slower, higher-order processes that can interrupt and override them (Type 2) (Evans & Stanovich, 2013). The latter support hypothetical thinking and place heavy demands on working memory. As to the belief bias paradigm, Stanovich rejects the view of it as a simple conflict between the two types, stating that both kinds of processing may be affected, albeit in different ways. Type 1 processing produces a response bias, while Type 2 processing “motivates the selective search for supporting and refuting models of the premises” (Evans & Stanovich, 2013, p. 232).
Strong deductive reasoning instructions combined with higher cognitive ability reduce belief-based responding, while time pressure increases it. Most importantly, “differences in working memory capacity and intelligence can influence responsiveness to instructions and resistance to belief biases” (Evans & Stanovich, 2013, p. 234). Reason-based and belief-based responses originate in different parts of the brain, and their conflict can be observed via neuroimaging; this qualitative distinction thus supports the default-interventionist forms of dual-process theory (Dutilh Novaes & Veluwenkamp, 2017).
The significance of Stanovich’s contribution to the discussion of belief bias is that he acknowledges both the human evolutionary need to contextualize all available information and the fact that humans no longer inhabit the kind of natural environment in which that tendency was adaptive. Most importantly, he asserts the need for deductive reasoning and a temporary suspension of pre-existing knowledge in order to re-evaluate our beliefs. Personally, I understand the importance of decontextualization: in today’s technological society, I am constantly faced with information overload and must employ analytical reasoning to distinguish truth from falsehood. Today’s post-truth, post-facts society would certainly benefit from a better understanding of how beliefs influence cognition.
References
Dutilh Novaes, C. (2012). Formal languages in logic: A philosophical and cognitive analysis. Cambridge, England: Cambridge University Press.
Dutilh Novaes, C., & Veluwenkamp, H. (2017). Reasoning biases, non-monotonic logics and belief revision. Theoria, 83(1), 29-52. Web.
Evans, J. S. B., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8(3), 223-241. Web.
Trippas, D., Handley, S. J., & Verde, M. F. (2013). The SDT model of belief bias: Complexity, time, and cognitive ability mediate the effects of believability. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(5), 1393-1402. Web.
Trippas, D., Pennycook, G., Verde, M. F., & Handley, S. J. (2015). Better but still biased: Analytic cognitive style and belief bias. Thinking & Reasoning, 21(4), 431-445. Web.
Trippas, D., Thompson, V. A., & Handley, S. J. (2017). When fast logic meets slow belief: Evidence for a parallel-processing model of belief bias. Memory & Cognition, 45(4), 539-552. Web.