False positives: why sometimes 95% = 8.7%

Imagine there has been an outbreak of a rare disease in your hometown. Only 1 out of every 200 people is infected, and there is a test that can detect whether you are infected with an accuracy of 95%. You take the test and the result is positive, indicating that you have the disease.

What is the probability that you are infected?

Intuitively, the answer to this question seems to be 95%. But if you are familiar with Bayes’ theorem (or if you read the title of this blog post), you might know that the probability of having the disease is much lower than that. It’s only 8.7%, to be exact.
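
If you want to check that number yourself, here is a quick sketch of the Bayes’ theorem calculation (assuming the stated 95% accuracy applies both ways, i.e. a 5% false positive rate as well as a 5% false negative rate):

```python
# Bayes' theorem: P(infected | positive test).
# Assumes the 95% accuracy holds for infected and healthy people alike.
prior = 1 / 200          # P(infected): 1 in 200
sensitivity = 0.95       # P(positive | infected)
false_positive = 0.05    # P(positive | healthy)

# Total probability of a positive test (law of total probability).
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior probability of actually being infected.
p_infected = sensitivity * prior / p_positive

print(f"P(infected | positive) = {p_infected:.1%}")  # -> 8.7%
```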

How can this be?

Continue reading

The illusion of skill and the reality of an efficient market

You probably know the saying that the house always wins. This is true for games of chance like roulette or coin-flipping, where the probability of every possible outcome is known beforehand.

If you flip a coin over and over, winning $0.97 every time it lands heads and losing $1 every time it lands tails, you are going to have a bad time over the long run. We know this because we can assign a definite probability to the coin landing either heads or tails (50/50), which puts the expected value of each flip at 0.5 × $0.97 − 0.5 × $1 = −$0.015. Any profit obtained from a series of bets like this is based on chance and nothing more.
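
To see this play out, here is a small simulation of the bet (a minimal sketch; the payouts come straight from the example above):

```python
import random

# Coin-flip bet from the example: win $0.97 on heads, lose $1.00 on tails.
# Expected value per flip: 0.5 * 0.97 - 0.5 * 1.00 = -$0.015
WIN, LOSS = 0.97, -1.00

def simulate(n_flips: int, seed: int = 42) -> float:
    """Total profit after n_flips of the bet with a fair coin."""
    rng = random.Random(seed)
    return sum(WIN if rng.random() < 0.5 else LOSS for _ in range(n_flips))

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} flips: total profit ${simulate(n):+,.2f}")
# The total drifts toward n * -0.015: the longer you play, the worse it gets.
```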

However, in sports betting the true probabilities of each outcome aren’t knowable in the same sense as the probability of a coin landing heads or tails is. This opens the door to exploiting market inefficiencies: mistakes made by the bookmakers trying to assess the true underlying probabilities.
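
To make “assessing the underlying probabilities” concrete: a bookmaker’s decimal odds can be turned back into implied probabilities, minus their built-in margin. A minimal sketch, with made-up odds for illustration:

```python
# Decimal odds for a hypothetical match (not real bookmaker prices).
odds = {"home win": 2.10, "draw": 3.40, "away win": 3.80}

# Inverting the odds gives raw implied probabilities; they sum to more
# than 1 because the bookmaker bakes in a margin (the "overround").
raw = {k: 1 / v for k, v in odds.items()}
overround = sum(raw.values())

# Normalising strips the margin out, leaving the market's estimate.
implied = {k: p / overround for k, p in raw.items()}

print(f"overround: {overround:.3f}")
for outcome, p in implied.items():
    print(f"{outcome}: {p:.1%}")
```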

The same uncertainty also opens the door to something called the illusion of skill.

Continue reading

Statistical Models – what do they know? Do they know things?? Let’s find out! #3

Welcome to the third episode of Statistical Models – what do they know? Do they know things?? Let’s find out!

This time we will be looking at the accuracy of over 30 public and private prediction models and seeing how they compare to the betting markets.

As in the last two parts of the series, the accuracy of every model was measured with the Ranked Probability Score (RPS), a scoring rule that measures how big the gap was between what was predicted and what actually happened.

Continue reading

Should Jürgen Klopp worry about Liverpool’s bipolar Premier League performance 16/17? (No, and kidney cancer rates can help us understand why)

It has been almost two years now since Jürgen Klopp introduced himself as ‘the Normal One’ at Liverpool, and looking back at his first full Premier League season, one thing seems clear: he lied.

“Normal” isn’t the first word that comes to mind when trying to describe Liverpool’s 2016/17 season.

Continue reading

Statistical Models – what do they know? Do they know things?? Let’s find out! #1

It’s a question as old as time, or at least as old as public football analytics. How good are our prediction models?

To try and answer this question, I’ll be using the Ranked Probability Score (RPS). The RPS is a scoring rule that measures the quality of probabilistic predictions. It calculates how far off predictions were from the actual results, which means that a low RPS is better than a high RPS.
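
For reference, the RPS for a single match with the ordered outcomes home win, draw and away win can be computed like this (a minimal sketch of the standard formula, not necessarily the exact code behind this series):

```python
def rps(predicted: list[float], observed: list[float]) -> float:
    """Ranked Probability Score for one match.

    predicted: probabilities for the ordered outcomes (home, draw, away).
    observed:  1.0 for the outcome that happened, 0.0 otherwise.
    Lower is better; a perfect prediction scores 0.
    """
    cum_diff, score = 0.0, 0.0
    for p, o in zip(predicted[:-1], observed[:-1]):
        cum_diff += p - o          # cumulative prediction error
        score += cum_diff ** 2
    return score / (len(predicted) - 1)

# A confident home-win prediction, and the home side does win:
print(rps([0.70, 0.20, 0.10], [1.0, 0.0, 0.0]))  # 0.05
```

Continue reading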

Hindsight bias and how to measure the quality of football predictions

Our mind constantly tries to make sense of the world. Whenever something happens, it either falls into the category of “yes, we live in a world where this is possible” or “the fuck was that?”

If it falls into the latter category, we look for explanations in the events that preceded whatever surprised us and adjust our view of the world to accommodate this new information.

Continue reading