Calibration

{{Banner|WIP}}

Calibration refers to the tendency of a forecaster's forecasts to occur at approximately the frequency they were predicted. For example, a forecaster who forecasts 10 events at 40% each, of which 4 ultimately occur, exhibits good calibration. If 3 or 5 of those events occur, the forecaster may still be exhibiting reasonable calibration.
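
One way to make that check concrete is to group forecasts made at (roughly) the same probability and compare the stated probability with the fraction that actually resolved positively. A minimal sketch in Python, using a hypothetical list of resolutions rather than any real track record:

<syntaxhighlight lang="python">
# Hypothetical example: 10 events, each forecast at 40%.
forecast_probability = 0.40
outcomes = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 1 = event occurred, 0 = it didn't

# Observed resolution frequency for this probability level.
observed_frequency = sum(outcomes) / len(outcomes)

print(f"forecast: {forecast_probability:.0%}, observed: {observed_frequency:.0%}")
# For a well-calibrated forecaster, the observed frequency should be close to 40%.
</syntaxhighlight>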
 
One common way to assess calibration visually is the ''Calibration Plot''. Calibration plots are, roughly, vertical box-and-whisker diagrams showing the distribution of resolution frequencies across a forecaster's track record, grouped by predicted probability. For example:
 
[[File:Metaculus Calibration Plot.png|frameless|center|700px]]
 
Here we can see a clear correlation between Metaculus' predictions and the observed resolution frequencies. Moreover, the errors are not systematic (i.e., the boxes don't sit consistently above or below the dotted "perfect calibration" line), so a simple linear correction could not be used to improve upon Metaculus' forecasts.
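
A plot of this general kind can be approximated from raw data by binning forecasts on their predicted probability and computing the resolution frequency within each bin. A minimal sketch, assuming NumPy and matplotlib are available, using randomly generated (perfectly calibrated) data in place of Metaculus' track record, and drawing a simpler point-per-bin line rather than the box-and-whisker style shown above:

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
predictions = rng.uniform(0, 1, size=500)             # hypothetical forecast probabilities
outcomes = rng.uniform(0, 1, size=500) < predictions  # hypothetical, perfectly calibrated resolutions

bins = np.linspace(0, 1, 11)                    # ten equal-width probability bins
bin_ids = np.digitize(predictions, bins[1:-1])  # bin index (0-9) for each forecast
centres = (bins[:-1] + bins[1:]) / 2
freqs = [outcomes[bin_ids == i].mean() for i in range(10)]  # resolution frequency per bin

plt.plot([0, 1], [0, 1], "k--", label="perfect calibration")
plt.plot(centres, freqs, "o-", label="observed frequency")
plt.xlabel("predicted probability")
plt.ylabel("resolution frequency")
plt.legend()
plt.show()
</syntaxhighlight>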
 