For example, imagine a tipster who, over five years, had a 100% track record of calling tennis matches that were even-money propositions - making $50,000 in profit. Impressive, right? Not if you discovered that the star tipster was, in fact, a monkey.
Let's say we run a simulation which sees 10,000 tennis tipsters (or monkeys, it really doesn't matter) each with a 50% chance of either making $10,000 a year or losing $10,000 a year. If any tipster has a losing year, they are eliminated.
The tipsters/monkeys make their predictions by simply pushing one of two buttons. If we run the test for one year, 5,000 of our tipsters would be $10,000 in profit and the same number would be $10,000 in the red and binned. In year two we would have 2,500 monkeys with a perfect record, and if we keep going, by Year 5 we would have around 313 monkeys from that original cohort who, through pure luck, made five successive accurate predictions and $50,000.
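The elimination process above can be sketched in a few lines of Python. This is a minimal simulation of the thought experiment, not a model of any real tipping market; the seed is just there to make a single run reproducible.

```python
import random

random.seed(42)  # reproducible run of one hypothetical tournament

TIPSTERS = 10_000
YEARS = 5

# Each tipster has a 50% chance of a winning year; any losing year
# means elimination, so only unbroken streaks survive.
survivors = TIPSTERS
for year in range(1, YEARS + 1):
    survivors = sum(1 for _ in range(survivors) if random.random() < 0.5)
    print(f"Year {year}: {survivors} tipsters with a perfect record")

# Expected survivors after five years: 10,000 / 2**5 = 312.5
expected = TIPSTERS / 2 ** YEARS
```

Each year halves the field on average, which is why roughly 313 "savants" emerge from 10,000 button-pushers by luck alone.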
Confusing survivors with savants
This phenomenon is called survivorship bias, and it has huge significance in the real world of tipsters, because the successful tipster currently topping the tipster league table on hottips.com may just be a lucky monkey pushing a button.
What are the important factors that influence this process? The size of the original sample is critical. If you just focus on the winners in this process, ignoring all the other billions of monkeys producing gibberish, you're being fooled by randomness. The simple fact is that by starting from a large enough sample, some of the participants will end up looking like a savant by pure luck.
The other critical factor is the probability of the event. Our example used a fair coin toss (50/50 chance of heads or tails), but in the real world a bookmaker will hold an edge. Re-running our test with higher margins produces fewer lucky winners: the lower the margin, the easier it is to achieve long-term success.
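The effect of the bookmaker's edge is easy to quantify: the expected number of lucky long-term survivors is just the starting sample multiplied by the per-year win probability raised to the number of years. The win probabilities below 50% are illustrative stand-ins for betting into a margin, not real market figures.

```python
# Expected number of lucky five-year survivors from 10,000 tipsters,
# for different per-year win probabilities. 0.50 is the fair coin;
# the lower values represent betting into a bookmaker's margin
# (illustrative numbers only).
N, YEARS = 10_000, 5

for p in (0.50, 0.48, 0.45):
    expected = N * p ** YEARS
    print(f"p = {p:.2f}: about {expected:.1f} perfect records")
```

Even a small margin compounds over repeated bets, so higher margins thin out the lucky survivors quickly.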
On a very basic level, a good judge of a tipster would be whether they use Pinnacle. Our odds are proven to be the best, so if they don't use us, they clearly don't know their stuff.
A clever illustration of survivorship bias
There are plenty of great examples of survivorship bias, but one particularly clever stunt by the English illusionist Derren Brown, in a 2008 programme called 'The System', illustrated just how deceiving it can be.
The show was based around the idea that a system could be developed to 'guarantee a winner' of horse races - a claim regular bettors will be accustomed to. The show followed Khadisha, to whom Brown anonymously sent five correct horse race predictions in a row. There was no trickery at work either, the predictions were fair and accurate and the programme built towards a climax focusing on a sixth and final prediction where, confidence boosted by Brown's 100% tipping record, Khadisha invested $4,000 of her own money... and lost.
Of course there was no system; Khadisha was simply the product of survivorship bias.
Brown had actually started by contacting 7,776 people (a sufficient sample size) and split them into six groups, giving each group a different horse from a six-horse race. Note that the number of variables is just as important as the number of predictions in how quickly the original sample shrinks.
After each race, 5/6 of the people had lost and were dropped from the system (like our failing monkeys), and the survivors were split into equal groups and randomly sent another selection. Khadisha happened to be the ultimate survivor, winning five times in a row.
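The arithmetic behind 'The System' is deterministic: with six-horse races, exactly one-sixth of the recipients hold the winning horse after each round, and 7,776 is 6 to the power of 5. A short sketch:

```python
# Derren Brown's 'The System': six-horse races, so after each race
# only 1/6 of recipients got the winner and stayed in the system.
people = 7776  # 6**5, chosen so exactly one person survives five races

for race in range(1, 6):
    people //= 6
    print(f"After race {race}: {people} people with a perfect record")

# people == 1: the ultimate survivor (Khadisha, in the show)
```

The sample shrinks 7,776 → 1,296 → 216 → 36 → 6 → 1, which is why the stunt was guaranteed to produce exactly one person who had witnessed five correct predictions in a row.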
The fundamental lesson for sports betting is that anyone can hit a lucky run, and the more improbable something is, the bigger the role luck plays. If your typewriting monkey produces the Complete Works of Shakespeare from a sample of several billion, don't get too excited. If he repeats the feat, however, take a closer look.
A simple formula to evaluate a tipster's abilities
A simple way to evaluate a tipster's true abilities is to take the square root of the total number of selections and add it to one half of that total:
√(Total Selections) + ½(Total Selections)
For example, if he has 400 tips, the square root would be 20, which added to one half of 400, gives a total of 220 theoretical wins.
If the tipster is 20 wins above the expected 200, he is two standard deviations above average (for 400 even-money selections, one standard deviation is √(400 × 0.5 × 0.5) = 10 wins). There's about a 1 in 40 chance of a 50% handicapper doing that. So a player would need to go 220-180 over 400 selections, or 60-40 over 100 selections, to be this rare.
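The rule of thumb above can be wrapped in a small helper. This is a sketch of the article's formula; the function name is my own invention.

```python
import math

def two_sigma_wins(selections: int) -> float:
    """Wins needed to sit two standard deviations above a 50% hit rate.

    One standard deviation of a fair 50/50 record over n selections is
    sqrt(n * 0.5 * 0.5) = sqrt(n) / 2 wins, so two standard deviations
    above the expected n/2 wins is n/2 + sqrt(n).
    """
    return selections / 2 + math.sqrt(selections)

print(two_sigma_wins(400))  # 220.0 -> a 220-180 record
print(two_sigma_wins(100))  # 60.0  -> a 60-40 record
```

Because √n grows much more slowly than n/2, the required edge over break-even shrinks as the sample grows, which is exactly why a long record at a modest win rate is more convincing than a short hot streak.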
Without being a master statistician, you can quickly see that the more selections you can view, the easier it is to evaluate a player. In many cases, it's safer to follow someone with a lower winning percentage if they have a lot more plays.