Predicting at Near-Certainty

From Forecasting Wiki
Predictions of absolute certainty and near-certainty exhibit some noteworthy philosophical properties. In practice, however, such predictions should rarely if ever result in anything worse than a badly injured Brier score for the predictor who makes them, should they be proven incorrect.


== Absolute Predictions in Absolute Reality ==
Let us consider the special case of ''Absolute Predictions''. An absolute prediction, as I'm using the term, simply means a prediction with a probability of 0 or 1<ref>Yudkowsky, Eliezer. [https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-are-not-probabilities ''0 and 1 are not probabilities''] LessWrong, 2008-01-10</ref>. Either way means the predictor is infinitely confident in their prediction.
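One way to see why probabilities of 0 and 1 amount to infinite confidence is to convert them to log-odds, the scale on which Bayesian evidence accumulates additively. A minimal Python sketch (the function name here is illustrative, not from any forecasting library):

```python
import math

def log_odds(p: float) -> float:
    """Log-odds of a probability: the scale on which evidence adds linearly."""
    return math.log(p / (1 - p))

print(log_odds(0.5))   # 0.0 -- even odds
print(log_odds(0.99))  # ~4.6 -- strong but finite confidence

# Probabilities of 0 and 1 correspond to minus/plus infinity on this
# scale: no finite amount of evidence can ever move a predictor off them,
# and the conversion itself blows up.
for p in (0.0, 1.0):
    try:
        log_odds(p)
    except (ValueError, ZeroDivisionError):
        print(f"p={p}: no finite log-odds")
```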


If someone claims to be 100% confident in a proposition, in some technical sense they're claiming that no possible evidence could convince them the proposition wasn't true. This becomes obviously absurd once we consider the claim after a resolution date. For example, suppose we're predicting whether Prince Charles will ascend to the throne by 2021-04-21 (Queen Elizabeth's 95th birthday). Someone who predicted 100% is saying this is definitely going to happen. Before 2021-04-21, this sort of makes sense. However, the date came and went and [https://en.wikipedia.org/wiki/Elizabeth_II Elizabeth's reign continued]. The person who truly believed with 100% certainty is now caught in a strange position: either they must claim that the thing they predicted actually happened (which seems like a strong basis for founding elaborate conspiracy theories), or admit that their credence wasn't really 100%--after all, some evidence (namely, the event failing to occur) convinced them that it didn't happen.


We don't even need the event to fail to occur to push the infinitely confident predictor into a contradictory state. Suppose something happens that causes them to lower their stated credence to 99%. In other words, something convinced them that it was at least ''physically possible'' that Charles wouldn't ascend to the throne. While this is epistemically healthier, the contradiction is already locked in--regardless of how the question resolves! They claimed that no evidence could convince them of a negative outcome, then some piece of evidence ''did''.
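For concreteness, the Brier score for a single binary question is the squared difference between the stated probability and the 0-or-1 outcome, so lower is better and a 100% forecast that resolves No takes the worst possible score. A short illustrative sketch:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a forecast probability and the 0/1 outcome.
    Lower is better: 0.0 is perfect, 1.0 is maximally wrong."""
    return (forecast - outcome) ** 2

# A 100% forecast that resolves No takes the worst possible score...
print(brier_score(1.0, 0))   # 1.0

# ...while a hedged 99% forecast on the same miss scores only
# slightly worse than a 97% one, leaving room to recover over
# many questions.
print(brier_score(0.99, 0))  # ~0.9801
print(brier_score(0.97, 0))  # ~0.9409
```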


Seen in this light, we should be extremely cautious about ever believing anyone who claims absolute confidence in anything.
== Absolute Predictions in Forecasting Tournaments ==


Practical forecasting can afford to be a little more forgiving. All forecasts are stored imprecisely in one way or another. If a forecasting platform offers 0% and 100% as valid inputs, it's probably best to think of them as rounding your probability to the nearest whole percentage point. In other words, you aren't claiming that no evidence could convince you otherwise (contra above), but that your confidence is within half a percentage point of absolute. It isn't hard to envision forecasts for which this should be true (e.g. will the sun rise tomorrow?).

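The rounding interpretation can be sketched in a few lines (the helper name is hypothetical, not any platform's actual API):

```python
def platform_input(credence: float) -> int:
    """Round a credence to the nearest whole percentage point,
    as a platform with 1% input resolution effectively does."""
    return round(credence * 100)

# Entering "100" need not mean literal certainty: any credence within
# half a percentage point of 1 rounds to it.
print(platform_input(0.997))      # 100
print(platform_input(0.9999999))  # 100 -- e.g. "will the sun rise tomorrow?"
print(platform_input(0.994))      # 99
```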


== References ==
<references />

Revision as of 04:06, 3 May 2022
