Cognitive Biases
A cognitive bias is an error in reasoning, that is, a deviation from how one should reason.[1] Cognitive biases can lead to errors in judgment, resulting in bias and noise in forecasts. Many cognitive biases have been found to be systematic, meaning that when placed in the same context, many people commit the same error in the same way.[2] The existence of cognitive biases is therefore a central concern for anyone trying to make better forecasts.
Errors in reasoning
A cognitive bias is not necessarily a deviation from the correct answer, but an error in reasoning. While there is no consensus on what correct reasoning is, cognitive biases can be identified by comparing observed reasoning with so-called normative models (models of how one should reason). One such normative model is Bayes' Theorem, and any deviation from it can be considered a cognitive bias. A cognitive bias is present regardless of the truth value of the resulting judgment; to say that someone is wrong simply because they exhibited a cognitive bias would be an instance of the fallacy fallacy. This distinction matters because in forecasting the correct answer is not yet known, but it is still possible to identify cognitive biases, that is, errors in reasoning. This distinguishes cognitive biases from bias, which can only be measured once the true answer is known, though it is generally assumed that cognitive biases will lead to bias in forecasts.
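As a point of reference, Bayes' Theorem for a hypothesis H and evidence E is usually written as

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

A forecaster whose updated probability systematically departs from this quantity, for example by ignoring the prior P(H), is exhibiting a cognitive bias (base-rate neglect) even if their final answer happens to be correct.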
Because of disagreements about how one should reason, what counts as a cognitive bias from one perspective can be correct reasoning from another. There is ongoing research and debate into the best ways to reason, but the literature generally focuses on a few normative models, including Bayes' Theorem, formal logic, probability theory, and expected utility theory, among others. This list of models has been heavily criticized by various theorists and researchers, such as Nassim Taleb, Gerd Gigerenzer, Gary Klein, and many others. Strictly speaking, some variation in the normative models used can even improve forecasts, so long as the errors of the models are not correlated and the models are able to pick up on signal. See Fox vs Hedgehog and the Diversity Prediction Theorem.
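The Diversity Prediction Theorem mentioned above can be stated for squared error as follows (a standard formulation; here $s_1, \dots, s_n$ are the individual forecasts of a true value $\theta$ and $c$ is their average):

$$(c-\theta)^2 \;=\; \underbrace{\frac{1}{n}\sum_{i=1}^{n}(s_i-\theta)^2}_{\text{average individual error}} \;-\; \underbrace{\frac{1}{n}\sum_{i=1}^{n}(s_i-c)^2}_{\text{diversity of the forecasts}}$$

Because the diversity term is never negative, the crowd forecast can never do worse than the average individual forecast under squared error, and the more the forecasts disagree, the larger the improvement.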
Causes of bias
The most common reason for a cognitive bias is that the individual is using a heuristic, because it is less cognitively demanding, and faster, than using a normative model. For example, rather than finding a good base rate, a forecaster may consult their memory for similar cases they can think of. This is known as the Availability Heuristic. Many such heuristics have been identified in the literature.
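As a rough illustration of the difference, and not a model taken from the literature, the sketch below contrasts an estimate built from a reference-class base rate with one built from a small sample of easily recalled cases; the numbers, the salience_boost parameter, and the function names are all hypothetical:

```python
import random

random.seed(0)

# Hypothetical ground truth: 3% of cases in the reference class are failures.
BASE_RATE = 0.03

def base_rate_estimate():
    """Normative-style estimate: start from the reference-class base rate."""
    return BASE_RATE

def availability_estimate(n_recalled=10, salience_boost=5.0):
    """Heuristic estimate: poll a handful of remembered cases.

    Failures are assumed to be more memorable, so they are over-represented
    in recall by a salience_boost factor (an illustrative assumption).
    """
    recall_failure_prob = min(1.0, BASE_RATE * salience_boost)
    recalled = [random.random() < recall_failure_prob for _ in range(n_recalled)]
    return sum(recalled) / n_recalled

print("Base-rate estimate:   ", base_rate_estimate())
print("Availability estimate:", availability_estimate())
```

The heuristic estimate is both biased upwards (memorable cases are over-sampled) and noisy (the recalled sample is small), which is the general pattern discussed in the sections below.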
Examples of Cognitive Biases
Since a cognitive bias is defined as an error in reasoning, and there is no widespread agreement on how one should reason, a comprehensive list of biases is not possible. Indeed, the list of biases keeps growing as researchers discover new ways in which people do not reason according to some normative model. The list below therefore only includes a few examples:
- Disbelief in the law of large numbers (see the simulation sketch below)
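To make the example above concrete, the short simulation below (a minimal sketch, not taken from any cited study) shows why the law of large numbers should make forecasters cautious about small samples: the proportion of heads in 10 fair coin flips swings widely, while in 10,000 flips it stays close to 0.5.

```python
import random

random.seed(42)

def proportion_heads(n_flips):
    """Proportion of heads observed in n_flips fair coin flips."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

# Repeat each experiment five times to show how much the result varies.
for n in (10, 100, 10_000):
    runs = [proportion_heads(n) for _ in range(5)]
    print(f"n={n:>6}: " + ", ".join(f"{p:.2f}" for p in runs))
```

A forecaster who treats the small-sample result as just as reliable as the large-sample one is exhibiting disbelief in the law of large numbers.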
Effects of cognitive bias
Cognitive biases usually lead to bias when making a forecast, or bias and noise when combining forecasts.
Cognitive biases likely lead to flawed forecasts. The problem is then multiplied if multiple forecasts are combined and all the forecasters reason in the same flawed way, for example by using the same heuristic. The more people commit the same error in reasoning, the more problematic it is for the combined forecast. Since such correlation of reasoning patterns is expected, debiasing is an important component of improving forecasts.
Alternatively, the errors in reasoning that forecasters commit may not be correlated. For example, one forecaster may be overconfident and another underconfident. When this happens, the biases may cancel each other out in the aggregate.
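A minimal simulation of this difference (illustrative only; the error sizes are arbitrary assumptions) shows that averaging preserves a shared bias but largely cancels independent, zero-mean errors:

```python
import random
import statistics

random.seed(1)

TRUE_PROBABILITY = 0.30
N_FORECASTERS = 100

# Case 1: correlated errors, e.g. everyone uses the same overconfident heuristic
# and ends up roughly 15 percentage points too high.
shared_bias = [TRUE_PROBABILITY + 0.15 + random.gauss(0, 0.02)
               for _ in range(N_FORECASTERS)]

# Case 2: independent errors, some forecasters too high and some too low.
independent = [TRUE_PROBABILITY + random.gauss(0, 0.15)
               for _ in range(N_FORECASTERS)]

print("True probability:               ", TRUE_PROBABILITY)
print("Average with correlated errors: ", round(statistics.mean(shared_bias), 3))
print("Average with independent errors:", round(statistics.mean(independent), 3))
```

In the correlated case the crowd average inherits the bias of the individuals, which is why debiasing matters; in the independent case the average lands close to the true value.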
Reducing noise and bias
The crowd forecasting approach
If forecasts are sufficiently diverse and independent, they can be pooled, or aggregated, into a single crowd forecast that performs at least as well as the average individual forecast and often outperforms even the best individual forecaster. Crowd forecasting in this manner has helped public health experts at the Johns Hopkins Center for Health Security to predict the spread and severity of infectious diseases such as COVID-19.[3]
Methods range from a simple average to an optimized weighted average in which the best-performing forecasters (as measured by their Brier score or calibration) receive more weight. Additional tweaks can further improve a crowd forecast, such as applying a form of time decay in which recent forecasts are weighted more heavily, or forecasts older than a given threshold are discarded.
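A sketch of these aggregation options, assuming a simple setup where each forecast carries a probability, its forecaster's historical Brier score, and its age in days (the data structure, weighting scheme, and half-life value are illustrative, not a description of any particular platform):

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    probability: float  # forecast probability for the event
    brier_score: float  # forecaster's historical Brier score (lower is better)
    age_days: float     # how old this forecast is

def simple_average(forecasts):
    """Unweighted mean of the forecast probabilities."""
    return sum(f.probability for f in forecasts) / len(forecasts)

def weighted_average(forecasts, half_life_days=7.0):
    """Weight forecasts by past performance and recency.

    Skill weight: 1 / (brier_score + epsilon), so better forecasters count more.
    Decay weight: halves every half_life_days, so stale forecasts count less.
    """
    epsilon = 1e-6
    total_weight = weighted_sum = 0.0
    for f in forecasts:
        skill = 1.0 / (f.brier_score + epsilon)
        recency = 0.5 ** (f.age_days / half_life_days)
        weight = skill * recency
        total_weight += weight
        weighted_sum += weight * f.probability
    return weighted_sum / total_weight

forecasts = [
    Forecast(probability=0.60, brier_score=0.10, age_days=1),
    Forecast(probability=0.40, brier_score=0.30, age_days=2),
    Forecast(probability=0.80, brier_score=0.20, age_days=30),
]
print("Simple average:  ", round(simple_average(forecasts), 3))
print("Weighted average:", round(weighted_average(forecasts), 3))
```

In this toy example the skilled, recent forecast dominates the weighted result, which is the intended effect of performance weighting and decay.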
Individual forecasting training
References
- [1] Baron, J. (2006). Thinking and Deciding. https://doi.org/10.1017/cbo9780511840265
- [2] Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
- [3] https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-021-12083-y