Ideal scoring rules for prediction platforms

Scoring rules for prediction platforms should have a few properties:

 * they should be proper
 * they should reward collaboration
 * they should potentially reward participation
 * they should give you positive as well as negative points
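The first property, properness, means that your expected score is maximized by reporting your true belief, so there is no incentive to shade your forecast. A minimal sketch of this, using the negated Brier score (all names here are illustrative, not from any particular platform):

```python
def brier_score(p, outcome):
    """Score for forecast p (probability of YES) given outcome in {0, 1}.
    Negated squared error, so higher is better."""
    return -((p - outcome) ** 2)

def expected_score(report, belief):
    """Expected score of reporting `report` when your true probability is `belief`."""
    return belief * brier_score(report, 1) + (1 - belief) * brier_score(report, 0)

belief = 0.7
scores = {r: expected_score(r, belief) for r in [0.5, 0.6, 0.7, 0.8, 0.9]}
best = max(scores, key=scores.get)
print(best)  # the truthful report 0.7 maximizes expected score
```

The same check fails for an improper rule such as linear score (`1 - |p - outcome|`), where reporting 0 or 1 dominates reporting your true belief.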

There is a Metaculus article on this that might be useful to cite.

Rewarding participation
This is good because people like it, but it is a problem because leaderboards become meaningless.

One way could be to award 1 + the score if you make a forecast, and 0 otherwise.
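A sketch of that rule, again using a negated Brier score as the base score (an assumption, not specified above):

```python
def points(forecast, outcome):
    """1 + score if a forecast was made, 0 otherwise.
    `forecast` is a probability of YES, or None if the forecaster abstained."""
    if forecast is None:
        return 0.0
    return 1.0 + -((forecast - outcome) ** 2)
```

Since the negated Brier score lies in [-1, 0], this puts points in [0, 1], so participating is never worse than abstaining. One caveat: a maximally wrong forecast scores exactly 0, the same as abstaining, so the worst forecasters are indistinguishable from non-participants.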

Maybe the leaderboard could be done through pairwise comparisons between forecasters?
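One way pairwise comparison could work (a sketch under assumed details: negated Brier scores, and each pair compared only on the questions both answered) is to count head-to-head wins:

```python
from itertools import combinations

def pairwise_wins(forecasts, outcomes):
    """forecasts: {forecaster: {question: prob of YES}}; outcomes: {question: 0 or 1}.
    Each pair is compared on their shared questions via total negated Brier score;
    the head-to-head winner of each pair gets one win."""
    wins = {name: 0 for name in forecasts}
    for a, b in combinations(forecasts, 2):
        shared = forecasts[a].keys() & forecasts[b].keys()
        if not shared:
            continue  # no common questions, no comparison
        def total(f):
            return sum(-(f[q] - outcomes[q]) ** 2 for q in shared)
        sa, sb = total(forecasts[a]), total(forecasts[b])
        if sa > sb:
            wins[a] += 1
        elif sb > sa:
            wins[b] += 1
    return wins

# Hypothetical forecasters and questions, for illustration only.
forecasts = {
    "ann": {"q1": 0.9, "q2": 0.6},
    "bob": {"q1": 0.5, "q2": 0.6},
    "cai": {"q2": 0.2},
}
outcomes = {"q1": 1, "q2": 0}
wins = pairwise_wins(forecasts, outcomes)
print(wins)  # {'ann': 1, 'bob': 0, 'cai': 2}
```

This sidesteps the participation-count problem, since a forecaster is only judged on questions they actually answered, though it can produce non-transitive rankings.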

Maybe the leaderboard could just divide points by the number of questions answered, so you get mean points rather than the sum of points.
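Mean scoring changes who tops the leaderboard. A sketch with two hypothetical forecasters (the numbers are made up for illustration): one prolific but mediocre, one selective but sharp.

```python
def mean_points(points_per_question):
    """Leaderboard score as the mean over answered questions rather than the sum,
    so sheer volume of forecasts doesn't dominate the ranking."""
    if not points_per_question:
        return 0.0
    return sum(points_per_question) / len(points_per_question)

prolific = [0.5] * 100   # 100 questions at 0.5 points each
selective = [0.9] * 10   # 10 questions at 0.9 points each
print(sum(prolific), mean_points(prolific))
print(sum(selective), mean_points(selective))
```

Under summed points the prolific forecaster wins; under mean points the selective one does. The flip side is that mean scoring discourages forecasting on hard questions, since a below-average forecast drags your mean down.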

Incentive problems
https://arxiv.org/abs/2106.11248