One problem bettors face when making probabilistic predictions for the outcome of a single event is that they cannot know if that prediction was correct, even after the result is known.
Nate Silver initially wrote about applying statistical analysis to baseball, but rose to prominence following his near-perfect prediction of the 2008 presidential election, where he correctly forecast the results of 49 of the 50 states.
The feat was impressive, although swings towards one candidate or party do tend to be universal and consistent across a country. So if the trend is correctly identified, constituencies or states can fall as predictably as dominoes. Recognising this, few if any bookmakers would be prepared to accept accumulator bets on a single party to win multiple seats at a general or presidential election.
More recently, Silver was bullish about the chances of hosts Brazil winning the World Cup, much more so than the betting public. The 'layers' also sided with the hosts, but with less confidence than Silver at the start of the competition, and that confidence declined as the tournament progressed as they reacted to injuries (Neymar) and suspensions (Thiago Silva).
Brazil were subsequently humiliated 7-1 in the semi-finals by eventual winners, Germany, while Silver’s optimistic probabilistic prediction, like Brazil’s World Cup campaign, was derided as a failure.
Similarly, Liverpool’s near-title-winning season in the 2013/14 Premier League was widely regarded as a flop for predictive models. As late as mid-February, with the Reds just three points off top spot, they were still regarded as 17.000 outsiders – a chance of just 5.88% – having begun the campaign at 34.000.
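The percentages quoted above follow directly from the decimal odds: the implied probability is simply one divided by the odds. A minimal sketch:

```python
def implied_probability(decimal_odds: float) -> float:
    """Convert European decimal odds to the implied win probability."""
    return 1.0 / decimal_odds

# Liverpool's title prices quoted in the text
print(round(implied_probability(17.0) * 100, 2))  # 5.88 (mid-February)
print(round(implied_probability(34.0) * 100, 2))  # 2.94 (start of season)
```

Note that quoted odds also contain the bookmaker's margin, so implied probabilities across all outcomes of a market typically sum to slightly more than 100%.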
Despite the models' ability to continually revise opinion with the most up-to-date information, Liverpool were still seen as unlikely to win the title, judged on their updated individual match odds.
They were expected to gain an average of 76 points over the 38 games, but instead finished with 84, missing out on the title by a mere two points.
Bettors can note that these three examples – the 2008 presidential election, Brazil at the 2014 World Cup and Liverpool in the 2013/14 EPL – are occasions where predictions have been declared either a resounding success or a fairly abject failure, depending upon the singular outcome.
However, all each prediction has done is attach a probabilistic value to a range of outcomes; it has not stated that any single outcome will certainly happen.
In Silver’s opinion, Brazil had a 65% chance of progressing to the final. Some may have baulked at the strength of this opinion, especially as bookmakers made the semi-final a virtual coin toss. But Silver’s confidence still allowed for a 35% probability that Germany would find a way to Rio.
Predictive models always appear potent and effective when the games fall to the projected favourite and weak and absurd when the outsider prevails.
Case Study: Liverpool 2013/14 Premier League season
The idea that predictions are merely probabilistic interpretations of a wide range of possible outcomes can also be demonstrated by the case of Liverpool in 2013/14. Each of the 38 matches gave them a chance to pick up 0, 1 or 3 points and a weighted calculation of these expected points from their 38 games totalled around 76 points.
With this said, bettors must remember that no single result is a certainty, and each of the three possible match outcomes for all 38 matches had an associated likelihood of occurring, which leads to a range of possible final points totals.
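The weighted calculation described above is simply the sum of each match's expected points. The actual per-match odds are not reproduced in the text, so the probabilities below are a hypothetical profile chosen only to illustrate how a 76-point expectation arises:

```python
def expected_points(matches):
    """Sum expected points over a season.

    Each match is a (p_win, p_draw, p_loss) tuple; a win is worth
    3 points, a draw 1 and a loss 0.
    """
    return sum(3 * p_win + 1 * p_draw for p_win, p_draw, p_loss in matches)

# Hypothetical team that wins 60%, draws 20% and loses 20% of its games
season = [(0.60, 0.20, 0.20)] * 38
print(round(expected_points(season), 1))  # 2.0 points per game -> 76.0
```

In practice each fixture would carry its own probabilities derived from its match odds, but the weighted sum works the same way.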
The overall view – based on the match odds – from the graph above was that Liverpool would typically gain 76 points in 2013/14. That view may not have been correct, but even with the season completed, the model can still only assess Liverpool’s 2013/14 team in probabilistic terms.
Simulating the 2013/14 season many times will inevitably see less likely match outcomes prevailing, either to Liverpool’s benefit or detriment. The model – taking the bookmakers’ view of Liverpool as a 76-point EPL team – saw the Reds accumulate 84 points around 15% of the time.
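A season simulation of this kind can be sketched as a simple Monte Carlo loop: sample each match outcome from its probabilities, tally the points, and repeat. The per-match probabilities below are the same hypothetical ~76-point profile used above, not the actual 2013/14 fixture odds:

```python
import random

def simulate_season(matches, n_sims=100_000, seed=42):
    """Monte Carlo a season: sample every match outcome, return point totals."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        points = 0
        for p_win, p_draw, _p_loss in matches:
            r = rng.random()
            if r < p_win:
                points += 3          # win
            elif r < p_win + p_draw:
                points += 1          # draw
        totals.append(points)
    return totals

# Hypothetical profile of a team expected to gain ~76 points
season = [(0.60, 0.20, 0.20)] * 38
totals = simulate_season(season)

# Share of simulated seasons reaching at least 84 points
print(sum(t >= 84 for t in totals) / len(totals))
```

The distribution of `totals` shows the spread of final tallies around the 76-point expectation, which is exactly the range of outcomes the article describes.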
The layers of uncertainty that surround any predictive process are present not only in the prediction itself, but also in the data that bettors have used to make that prediction.
Therefore it is paramount that bettors do not become overconfident about the ability of predictive models to perform, and instead use them as part of a balanced betting strategy.