Predicting at Near-Certainty

Predictions made with absolute or near-absolute certainty have some noteworthy philosophical properties. In practice, though, such predictions should rarely if ever cost the predictor anything worse than a badly injured Brier score when they turn out to be wrong.

Absolute Predictions in Absolute Reality
Let us consider the special case of absolute predictions. An absolute prediction, as I'm using the term, is simply a prediction with a probability of 0 or 1. Either way, the predictor is claiming infinite confidence in their prediction.

If someone claims to be 100% confident in a proposition, in a technical sense they're claiming that no possible evidence could convince them the proposition is false. This becomes obviously absurd once we consider the claim after a resolution date. For example, suppose we're predicting whether Prince Charles will ascend to the throne by 2021-04-21 (Queen Elizabeth's 95th birthday). Someone who predicted 100% is saying this will definitely happen. Before 2021-04-21, that sort of makes sense. But 2021 came and went, and Elizabeth's reign continued. The person who genuinely believed with 100% certainty is now caught in a strange position. Either they must claim that the thing they predicted actually happened (which seems like a strong basis for founding elaborate conspiracy theories), or they must admit that their actual credence wasn't 100% after all: some evidence (namely, the event not happening) convinced them that it didn't happen.
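The pathology of probability-1 claims shows up sharply under the logarithmic scoring rule: a 100% forecast that resolves "no" earns a score of negative infinity, from which no streak of good forecasts can recover. A minimal sketch (the function name here is my own, not from any platform):

```python
import math

def log_score(p, outcome):
    """Logarithmic score of forecast p for a binary outcome.

    Higher is better; a perfect forecast scores 0, and a forecast
    that assigned zero probability to what happened scores -inf.
    """
    q = p if outcome else 1.0 - p
    return math.log(q) if q > 0 else float("-inf")

print(log_score(1.0, True))    # 0.0 -- a vindicated absolute prediction
print(log_score(1.0, False))   # -inf -- an unrecoverable score
print(log_score(0.99, False))  # about -4.6 -- bad, but finite
```

The asymmetry is the point: moving from 99% to 100% buys a vanishingly small improvement if you're right and an infinite penalty if you're wrong.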

We don't even need the event to fail to occur to push the infinitely confident predictor into a contradictory state. Suppose something happens that causes them to lower their stated credence to 99%. In other words, something convinced them that it was at least physically possible that Charles wouldn't ascend to the throne. While this is epistemically healthier, the damage to their credibility is permanent, regardless of how the question resolves: they claimed that no evidence could convince them of a negative outcome, and then some piece of evidence did exactly that.
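For concreteness, here is the Brier arithmetic behind these stakes. The Brier score for a single binary question is the squared difference between the forecast and the outcome, so 0 is a perfect score and 1 is the worst possible. A hedged sketch (helper name is my own):

```python
def brier_score(p, outcome):
    """Brier score of forecast p for a binary outcome: (p - o)^2.

    Lower is better; 0 is perfect, 1 is maximally wrong.
    """
    o = 1.0 if outcome else 0.0
    return (p - o) ** 2

print(brier_score(1.0, True))    # 0.0 -- the absolute forecast vindicated
print(brier_score(1.0, False))   # 1.0 -- the worst score the rule allows
print(brier_score(0.99, False))  # 0.9801 -- barely better than total failure
```

Unlike the log score, the Brier penalty for a wrong absolute prediction is bounded, which is why the opening paragraph could say the practical cost is "rarely worse than a badly injured Brier score."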

Seen in this light, we should be extremely cautious about ever believing anyone who claims absolute confidence in anything.

Absolute Predictions in Forecasting Tournaments
Practical forecasting can afford to be a little more forgiving. All forecasts are stored imprecisely in one way or another. If a forecasting platform restricts you to whole-number inputs between 0% and 100% inclusive (as, for example, Good Judgment Open does), it's probably best to think of it as rounding your probability to the nearest whole percentage point. In other words, entering 100% doesn't claim that no evidence could convince you otherwise (contra above), but that your confidence is within half a percentage point of absolute. It isn't hard to envision forecasts for which this should be true (e.g. will the sun rise tomorrow?).
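The rounding interpretation can be checked with a little expected-value arithmetic. If an event's true probability really is within half a percentage point of certain, entering 100% on a whole-percent platform is not only defensible but can minimize your expected Brier score. A sketch under an assumed true probability of 0.9999 for a sunrise-like question (the function and the 0.9999 figure are illustrative assumptions, not platform rules):

```python
def expected_brier(forecast, true_prob):
    """Expected Brier score of a forecast when the event's
    true probability of resolving 'yes' is true_prob."""
    return true_prob * (forecast - 1.0) ** 2 + (1.0 - true_prob) * forecast ** 2

# "Will the sun rise tomorrow?" with assumed true probability 0.9999:
print(expected_brier(1.0, 0.9999))    # 0.0001 -- tiny expected penalty
print(expected_brier(0.995, 0.9999))  # about 0.000125 -- hedging is worse here
```

When the true probability sits above 99.5%, rounding up to 100% actually beats hedging at 99% or 99.5% in expectation, which is exactly the sense in which the platform's whole-percent grid makes an entered "100%" a reasonable, non-absolute claim.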