Rebel Bayes Day 1

Prior beliefs about Bayesian statistics, updated by reading Statistical Rethinking by Richard McElreath.

Duncan Garmonsway
February 18, 2019

Reading week

This week I am reading Statistical Rethinking by Richard McElreath. Each day I post my prior beliefs about Bayesian statistics, read a bit, and update them. See also Day 2, Day 3, Day 4 and Day 5.

Prior beliefs

  1. Bayesian statistics is a way to combine distributions by a kind of averaging.
  2. Andrew Gelman is intimidating.
  3. Bayesian hypothesis testing is a thing Bayesians do in private when they realise they still have to make decisions somehow.
  4. You can choose any prior you want as long as it doesn’t really affect the posterior.
  5. None of this has anything whatsoever to do with Monte Carlo or the prosecutor’s fallacy. It’s actually about baseball.
  6. Bayesian methods are easy to understand unless you’ve been brainwashed by the frequentists.
  7. If computers had been invented before frequentist statistics, nobody would have invented frequentist statistics.
  8. They don’t teach Bayes to undergrads because it’s embarrassingly easy.
  9. Only pedants think there’s a meaningful difference between “confidence intervals” and “credible intervals”.
  10. Bayesian A/B testing is the acceptable face of early stopping, aka ethical and pragmatic experimental design.
  11. The Bayesian revival was masterminded by publishers to double their market by publishing Bayesian variants of everything.
  12. You can use Bayesian methods to account for measurement uncertainty.
  13. You can use Bayesian methods to account for prior beliefs.
  14. Physicists worked out all the useful Bayesian methods ages ago but they have way cooler things to boast about.
  15. Researchers use Bayes as an excuse to play with code.
  16. WinBUGS is a leading indicator of a bad course.
  17. STAN is the man.
  18. Statistics is a science of guesswork; decision-making with uncertainty.
  19. MCMC stands for Monte Casino Molotov Cocktail.
  20. It’s not about Bayes’ theorem.
  21. You can skip the calculus.

New data

Preface

1. The Golem of Prague

1.1 Statistical golems

1.2 Statistical rethinking

1.3 Three tools for golem engineering

2. Small Worlds and Large Worlds

2.1 The garden of forking data

2.2 Building a model

2.3 Components of the model

2.4 Making the model go

3. Sampling the Imaginary

3.2 Sampling to summarize

Updated beliefs

  1. ✓ Bayesian statistics is a way to combine distributions by a kind of averaging (see the sketch after this list).
  2. Andrew Gelman is intimidating → It’s difficult to do the right thing in statistics, but that doesn’t stop individual statisticians being very sure of themselves.
  3. Bayesian hypothesis testing is a thing Bayesians do in private when they realise they still have to make decisions somehow → Everyone knows that hypothesis tests are only one way to inform a decision. Bayesians don’t have anything special to say about it but they do anyway.
  4. You can choose any prior you want as long as it doesn’t really affect the posterior → If your model behaves unexpectedly then it might not be good enough. No mention of the risk that a well-behaved model confirms wrong beliefs.
  5. ✓ None of this has anything whatsoever to do with Monte Carlo or the prosecutor’s fallacy. It’s actually about baseball.
  6. ✓ Bayesian methods are easy to understand unless you’ve been brainwashed by the frequentists.
  7. ? If computers had been invented before frequentist statistics, nobody would have invented frequentist statistics.
  8. ✕ They don’t teach Bayes to undergrads because it’s embarrassingly easy.
  9. ✓ Only pedants think there’s a meaningful difference between “confidence intervals” and “credible intervals”.
  10. ? Bayesian A/B testing is the acceptable face of early stopping, aka ethical and pragmatic experimental design.
  11. ? The Bayesian revival was masterminded by publishers to double their market by publishing Bayesian variants of everything.
  12. ✓ You can use Bayesian methods to account for measurement uncertainty.
  13. ✓ You can use Bayesian methods to account for prior beliefs.
  14. ? Physicists worked out all the useful Bayesian methods ages ago but they have way cooler things to boast about.
  15. ✓ Researchers use Bayes as an excuse to play with code.
  16. ? WinBUGS is a leading indicator of a bad course.
  17. ? STAN is the man.
  18. ✓ Statistics is a science of guesswork; decision-making with uncertainty.
  19. ✕ MCMC stands for Monte Casino Molotov Cocktail.
  20. ✓ It’s not about Bayes’ theorem.
  21. ✓ You can skip the calculus.
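
Since belief 1 survived, here is roughly what “combining distributions by a kind of averaging” looks like: a minimal sketch of the grid approximation from section 2.4 and the sampling from chapter 3, using the book’s globe-tossing example (6 “water” observations in 9 tosses). The book’s own code is in R; this is Python, and the grid size, sample count and seed are arbitrary choices of mine.

```python
import numpy as np
from scipy.stats import binom

grid = np.linspace(0, 1, 1000)       # candidate values for p, the proportion of water
prior = np.ones_like(grid)           # flat prior over p
likelihood = binom.pmf(6, 9, grid)   # probability of 6 waters in 9 tosses at each p
unstd = prior * likelihood
posterior = unstd / unstd.sum()      # re-weight the prior by the likelihood, re-normalise

# Chapter 3: turn the distribution into samples and summarise those instead
rng = np.random.default_rng(2019)
samples = rng.choice(grid, size=10_000, p=posterior)
print(samples.mean())                       # a point estimate for p
print(np.percentile(samples, [5.5, 94.5]))  # e.g. an 89% percentile interval
```

The “averaging” is just the element-wise product of prior and likelihood, re-normalised so it sums to one; everything after that is summarising the samples.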

Critic’s Choice

The analysis of the data as the maximum run length of one value and the number of switches between values. It’s a neat illustration of a model that represents one aspect of reality but not every aspect. I’d like to know how sensitive ‘maximum run length’ and ‘number of switches’ are to the sample size.
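
A rough sketch of that check, in Python rather than the book’s R, with the grid, seed and simulation counts being arbitrary choices of mine: simulate toss sequences from posterior draws of p, summarise each by its longest run of “water” and its number of switches, and see where the book’s observed sequence (W L W W W L W L W) falls.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2019)

# Grid-approximate posterior for p, as in the earlier sketch (flat prior)
grid = np.linspace(0, 1, 1000)
posterior = binom.pmf(6, 9, grid)
posterior /= posterior.sum()
p_samples = rng.choice(grid, size=10_000, p=posterior)

def longest_run(seq):
    """Length of the longest unbroken run of 1s ('water') in a 0/1 sequence."""
    best = run = 0
    for x in seq:
        run = run + 1 if x == 1 else 0
        best = max(best, run)
    return best

def n_switches(seq):
    """How many times consecutive tosses differ."""
    return int(np.abs(np.diff(seq)).sum())

def predictive_check(posterior_samples, n_tosses, n_sims=10_000):
    """Simulate n_sims toss sequences of length n_tosses from posterior draws of p."""
    p = rng.choice(posterior_samples, size=n_sims)
    sims = rng.binomial(1, p[:, None], size=(n_sims, n_tosses))
    runs = np.array([longest_run(s) for s in sims])
    switches = np.array([n_switches(s) for s in sims])
    return runs, switches

# The book's observed sequence: W L W W W L W L W
observed = np.array([1, 0, 1, 1, 1, 0, 1, 0, 1])
runs, switches = predictive_check(p_samples, n_tosses=len(observed))
print("longest run: observed", longest_run(observed), "| simulated median", np.median(runs))
print("switches:    observed", n_switches(observed), "| simulated median", np.median(switches))
# Rerunning predictive_check with different n_tosses values gives a crude feel
# for how these two summaries behave as the sample size changes.
```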

Corrections

If you see mistakes or want to suggest changes, please create an issue on the source repository.

Reuse

Text and figures are licensed under Creative Commons Attribution CC BY 4.0. Source code is available at https://github.com/nacnudus/duncangarmonsway, unless otherwise noted. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".

Citation

For attribution, please cite this work as

Garmonsway (2019, Feb. 18). Duncan Garmonsway: Rebel Bayes Day 1. Retrieved from https://nacnudus.github.io/duncangarmonsway/posts/2019-02-18-rebel-bayes-day-1/

BibTeX citation

@misc{garmonsway2019rebel,
  author = {Garmonsway, Duncan},
  title = {Duncan Garmonsway: Rebel Bayes Day 1},
  url = {https://nacnudus.github.io/duncangarmonsway/posts/2019-02-18-rebel-bayes-day-1/},
  year = {2019}
}