Aggregation of Binary Predictions

== Trained aggregation methods ==
Trained aggregation methods learn parameters, such as weights for individual forecasters, from the data. Many of these approaches use a technique called "extremizing", which shifts the aggregate probability toward one of the extremes (0 or 1).
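A common form of extremizing transforms the aggregate to log-odds, scales it by a factor greater than 1, and transforms back. The sketch below assumes this logit-scaling variant; the function name <code>extremize</code> and the default factor are illustrative, not from the article.

```python
import math

def extremize(p, d=2.0):
    """Push an aggregate probability p toward 0 or 1.

    Works in log-odds space: scale logit(p) by d > 1, then map back.
    d is a hypothetical tuning parameter; d = 1 leaves p unchanged.
    """
    log_odds = math.log(p / (1 - p))
    return 1 / (1 + math.exp(-d * log_odds))
```

Note that 0.5 is a fixed point: an aggregate with no lean stays at 0.5, while anything above it moves closer to 1 and anything below it closer to 0.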
 
=== Logistic Regression ===
The outcome is modeled as Bernoulli(p), where ''logit(p)'' is a linear combination of ''logit(p_i)'' for each forecaster ''i''. By default, one can expect the weights to sum to roughly 1 (approximately unbiased), and more predictive forecasters will tend to get larger coefficients. However, since this is a regression model, each coefficient measures predictiveness conditional on every other forecast: if two forecasters are identical, one of them may get zero weight because the other is doing all the work (a phenomenon known as "collinearity").
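The fitting step can be sketched with plain gradient descent on the log loss. This is a minimal illustration, assuming logit-transformed forecasts as features and no intercept term; the function names and hyperparameters are hypothetical, not from the article.

```python
import numpy as np

def logit(p):
    """Map probabilities in (0, 1) to log-odds."""
    return np.log(p / (1 - p))

def fit_weights(forecasts, outcomes, lr=0.1, steps=5000):
    """Fit one weight per forecaster so that
    sigmoid(logit(forecasts) @ w) predicts the binary outcome.

    forecasts: array of shape (n_events, n_forecasters), entries in (0, 1).
    outcomes:  array of shape (n_events,), entries 0 or 1.
    Uses batch gradient descent on the logistic log loss (illustrative).
    """
    X = logit(forecasts)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))          # current aggregate forecasts
        grad = X.T @ (p - outcomes) / len(outcomes)
        w -= lr * grad
    return w

def aggregate(forecasts, w):
    """Combine forecasts for new events using the learned weights."""
    return 1 / (1 + np.exp(-logit(forecasts) @ w))
```

On simulated data where one forecaster reports the true event probability and another reports noise, the calibrated forecaster ends up with a much larger coefficient, which also illustrates the conditional-predictiveness point above.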
 
=== Skew Adjusted Extremized Mean (Sk-E-Mean) ===