Bayes’ theorem was the subject of a detailed article. The essay is good, but over 15,000 words long — here’s the condensed version for Bayesian newcomers like myself:

- **Tests are not the event.** We have a cancer *test*, separate from the event of actually having cancer. We have a *test* for spam, separate from the event of actually having a spam message.
- **Tests are flawed.** Tests detect things that don’t exist (false positives), and miss things that do exist (false negatives).
- **Tests give us test probabilities, not the real probabilities.** People often consider the test results directly, without considering the errors in the tests.
- **False positives skew results.** Suppose you are searching for something really rare (1 in a million). Even with a good test, it’s likely that a positive result is really a *false positive* on somebody in the other 999,999.
- **People prefer natural numbers.** Saying “100 in 10,000” rather than “1%” helps people work through the numbers with fewer errors, especially with multiple percentages (“Of those 100, 80 will test positive” rather than “80% of the 1% will test positive”).
- **Even science is a test.** At a philosophical level, scientific experiments can be considered “potentially flawed tests” and need to be treated accordingly. There is a *test* for a chemical, or a phenomenon, and there is the *event* of the phenomenon itself. Our tests and measuring equipment have some inherent rate of error.

**Bayes’ theorem converts the results from your test into the real probability of the event.** For example, you can:

- **Correct for measurement errors.** If you know the real probabilities and the chance of a false positive and false negative, you can correct for measurement errors.
- **Relate the actual probability to the measured test probability.** Bayes’ theorem lets you relate Pr(A|X), the chance that an event A happened given the indicator X, and Pr(X|A), the chance the indicator X happened given that event A occurred. Given mammogram test results and known error rates, you can predict the actual chance of having cancer.

## Anatomy of a Test

The article describes a cancer testing scenario:

- 1% of women have breast cancer (and therefore 99% do not).
- 80% of mammograms detect breast cancer when it is there (and therefore 20% miss it).
- 9.6% of mammograms detect breast cancer when it’s **not** there (and therefore 90.4% correctly return a negative result).

Put in a table, the probabilities look like this:

|               | Cancer (1%) | No Cancer (99%) |
|---------------|-------------|-----------------|
| Test Positive | 80%         | 9.6%            |
| Test Negative | 20%         | 90.4%           |

How do we read it?

- 1% of people have cancer.
- If you **already have cancer**, you are in the first column. There’s an 80% chance you will test positive and a 20% chance you will test negative.
- If you **don’t have cancer**, you are in the second column. There’s a 9.6% chance you will test positive and a 90.4% chance you will test negative.

## How Accurate Is The Test?

Now suppose you get a positive test result. What are the chances you have cancer? 80%? 99%? 1%?

Here’s how I think about it:

- Ok, we got a positive result. It means we’re somewhere in the top row of our table. Let’s not assume anything — it could be a true positive or a false positive.
- The chance of a *true positive* = chance you have cancer * chance the test caught it = 1% * 80% = 0.008.
- The chance of a *false positive* = chance you don’t have cancer * chance the test caught it anyway = 99% * 9.6% = 0.09504.

The table looks like this:

|               | Cancer (1%)       | No Cancer (99%)        |
|---------------|-------------------|------------------------|
| Test Positive | 1% * 80% = 0.008  | 99% * 9.6% = 0.09504   |
| Test Negative | 1% * 20% = 0.002  | 99% * 90.4% = 0.89496  |

And what was the question again? Oh yes: what’s the chance we really have cancer if we get a positive result? The chance of an event is the number of ways it could happen, out of all possible outcomes:

`Probability = desired event / all possibilities`

The chance of getting a real, positive result is 0.008. The chance of getting any type of positive result is the chance of a true positive plus the chance of a false positive (0.008 + 0.09504 = 0.10304).

So, our chance of cancer is 0.008/0.10304 = 0.0776, or about 7.8%.
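The arithmetic above is easy to check in a few lines of Python (the variable names are mine; the numbers come straight from the scenario):

```python
# Probabilities from the mammogram scenario
p_cancer = 0.01               # 1% of women have breast cancer
p_pos_given_cancer = 0.80     # test catches cancer 80% of the time
p_pos_given_healthy = 0.096   # false positive rate: 9.6%

true_pos = p_cancer * p_pos_given_cancer           # 0.008
false_pos = (1 - p_cancer) * p_pos_given_healthy   # 0.09504

# Chance of cancer given a positive test: true positives / all positives
p_cancer_given_pos = true_pos / (true_pos + false_pos)
print(round(p_cancer_given_pos, 4))  # 0.0776
```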

Interesting — a positive mammogram only means you have a 7.8% chance of cancer, rather than 80% (the supposed accuracy of the test). It might seem strange at first but it makes sense: the test gives a false positive 9.6% of the time (quite high), so there will be **many** false positives in a given population. For a rare disease, **most** of the positive test results will be wrong.

Let’s test our intuition by drawing a conclusion from simply eyeballing the table. If you take 100 people, only 1 person will have cancer (1%), and they’re most likely going to test positive (80% chance). Of the 99 remaining people, about 10% will test positive, so we’ll get roughly 10 false positives. Considering all the positive tests, just 1 in 11 is correct, so there’s a 1/11 chance of having cancer given a positive test. The real number is 7.8% (closer to 1/13, computed above), but we found a reasonable estimate without a calculator.

## Bayes’ Theorem

We can turn the process above into an equation, which is Bayes’ Theorem. It lets you take the test results and correct for the “skew” introduced by false positives, giving the real chance of having the event. Here’s the equation:

`Pr(A|X) = Pr(X|A) * Pr(A) / [Pr(X|A) * Pr(A) + Pr(X|~A) * Pr(~A)]`

And here’s the decoder key to read it:

- Pr(A|X) = Chance of having cancer (A) given a positive test (X). This is what we want to know: How likely is it to have cancer with a positive result? In our case it was 7.8%.
- Pr(X|A) = Chance of a positive test (X) given that you had cancer (A). This is the chance of a true positive, 80% in our case.
- Pr(A) = Chance of having cancer (1%).
- Pr(not A) = Chance of not having cancer (99%).
- Pr(X|not A) = Chance of a positive test (X) given that you didn’t have cancer (~A). This is a false positive, 9.6% in our case.

Try it with any number:
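A one-function sketch in Python makes it easy to plug in whatever numbers you like (the function name and arguments are my own):

```python
def bayes(p_event, p_pos_given_event, p_pos_given_no_event):
    """Chance the event is real, given a positive test result."""
    true_pos = p_event * p_pos_given_event
    false_pos = (1 - p_event) * p_pos_given_no_event
    return true_pos / (true_pos + false_pos)

# The mammogram scenario: 1% base rate, 80% detection, 9.6% false positives
print(bayes(0.01, 0.80, 0.096))    # ~0.0776

# A much rarer condition: false positives dominate even more
print(bayes(0.0001, 0.80, 0.096))  # well under 0.1%
```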

It all comes down to the chance of a **true positive result** divided by the **chance of any positive result**. We can simplify the equation to:

`Pr(A|X) = Pr(X|A) * Pr(A) / Pr(X)`

Pr(X) is a normalizing constant and helps scale our equation. Without it, we might think that a positive test result gives us an 80% chance of having cancer.

Pr(X) tells us the chance of getting *any* positive result, whether it’s a real positive in the cancer population (1%) or a false positive in the non-cancer population (99%). It’s a bit like a weighted average, and helps us compare against the overall chance of a positive result.

In our case, Pr(X) gets large relative to the numerator because of the potential for false positives. Thank you, normalizing constant, for setting us straight! This is the part many of us neglect, which makes the 7.8% result counter-intuitive.

## Intuitive Understanding: Shine The Light

The article mentions an intuitive understanding about shining a light through your real population and getting a test population. The analogy makes sense, but it takes a few thousand words to get there :).

Consider a real population. You run a test, which “shines light” through that real population and creates a set of test results. If the light is completely accurate, the test probabilities and real probabilities match up: everyone who tests positive is actually “positive”, and everyone who tests negative is actually “negative”.

But this is the real world. Tests go wrong: sometimes people who have cancer don’t show up in the results, and vice versa.

Bayes’ Theorem lets us look at the skewed test results and correct for errors, recreating the original population and finding the real chance of a true positive result.

## Bayesian Spam Filtering

One clever application of Bayes’ Theorem is in spam filtering. We have

- Event A: The message is spam.
- Test X: The message contains certain words (X).

Plugged into a more readable formula (from Wikipedia):

`Pr(spam|words) = Pr(words|spam) * Pr(spam) / [Pr(words|spam) * Pr(spam) + Pr(words|not spam) * Pr(not spam)]`

Bayesian filtering allows us to predict the chance a message is really spam given the “test results” (the presence of certain words). Clearly, words like “viagra” have a higher chance of appearing in spam messages than in normal ones.

Spam filtering based on a blacklist is flawed — it’s too restrictive and false positives are too great. But Bayesian filtering gives us a middle ground — we use *probabilities*. As we analyze the words in a message, we can compute the chance it is spam (rather than making a yes/no decision). If a message has a 99.9% chance of being spam, it probably is. As the filter gets trained with more and more messages, it updates the probabilities that certain words lead to spam messages. Advanced Bayesian filters can examine multiple words in a row, as another data point.
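Here’s a toy sketch of the idea for a single word; the training counts and the 50% prior are made up for illustration, not real spam statistics:

```python
# Made-up training data: how often "viagra" appears in each class
spam_msgs, ham_msgs = 1000, 1000
spam_with_word, ham_with_word = 300, 3

p_spam = 0.5                                      # assume half of mail is spam
p_word_given_spam = spam_with_word / spam_msgs    # 0.30
p_word_given_ham = ham_with_word / ham_msgs       # 0.003

# Bayes' theorem: chance a message is spam, given it contains the word
p_spam_given_word = (p_word_given_spam * p_spam) / (
    p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
)
print(round(p_spam_given_word, 3))  # 0.99
```

A real filter multiplies evidence from many words and keeps updating these counts as it sees more messages, but each word’s contribution is just this calculation.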

## Further Reading

There’s a lot being said about Bayes — the full essay and plenty of follow-up discussions are a search away.

Have fun!
