Rebel Bayes Day 4

Prior beliefs about Bayesian statistics, updated by reading Statistical Rethinking by Richard McElreath.

Duncan Garmonsway
February 21, 2019

Reading week

This week I am reading Statistical Rethinking by Richard McElreath. Each day I post my prior beliefs about Bayesian statistics, read a bit, and update them. See also Day 1, Day 2, Day 3 and Day 5.

Prior beliefs

New data

10. Counting and Classification

10.1 Binomial regression

I found the chimpanzee example rather bewildering, the first such moment so far. It has been completely rewritten in the draft 2nd Edition, but this series is about the 1st Edition. Also, the redraft still uses pulled_left as the outcome variable, so my query about that still stands.
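For reference, the model itself is ordinary binomial regression: pulled_left is a 0/1 outcome and the treatment enters through a logit link. Here is a minimal Python sketch with simulated stand-in data (the predictor, sample size, and coefficients are all invented, not the book's):

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n = 504                                    # invented sample size
prosoc_left = rng.integers(0, 2, n)        # stand-in treatment indicator
pulled_left = rng.binomial(1, expit(0.3 + 0.6 * prosoc_left))  # made-up effects

def neg_log_lik(beta):                     # Bernoulli likelihood, logit link
    p = expit(beta[0] + beta[1] * prosoc_left)
    p = np.clip(p, 1e-9, 1 - 1e-9)         # guard against log(0)
    return -np.sum(pulled_left * np.log(p) + (1 - pulled_left) * np.log(1 - p))

fit = minimize(neg_log_lik, np.zeros(2))   # MLE; a flat-prior MAP would match
print(fit.x)                               # roughly recovers (0.3, 0.6)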

10.2 Poisson regression

10.3 Other count regressions

11. Monsters and Mixtures

11.1 Ordered categorical outcomes

11.2 Zero-inflated outcomes

11.3 Over-dispersed outcomes

12. Multilevel Models

12.1 Example: Multilevel tadpoles

12.2 Varying effects and the underfitting/overfitting trade-off

I came totally unstuck here, and it's the same in the draft 2nd Edition, so please chip in if you think you can help. It's the first time I've found the Bayesian method harder to follow than the frequentist one.

In the previous multilevel model in 12.1 we “adaptively learn the prior that is common to all of these intercepts.” The model is:

\[
\begin{align}
s_i & \sim \mathrm{Binomial}(n_i, p_i) \\
\mathrm{logit}(p_i) & = \alpha_{\small{TANK}[i]} \\
\alpha_{\small{TANK}} & \sim \mathrm{Normal}(0, 5)
\end{align}
\]
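To make the structure concrete, here is a minimal Python sketch of this model's log-posterior. It is my own illustration, not the book's R code, and the tank counts are invented. The key feature is that each tank's intercept gets an independent, fixed Normal(0, 5) prior; nothing is shared between tanks.

import numpy as np
from scipy.special import expit
from scipy.stats import binom, norm

rng = np.random.default_rng(1)
n_tanks = 48
n_i = rng.integers(5, 36, n_tanks)              # tadpoles per tank (invented)
s_i = rng.binomial(n_i, expit(rng.normal(1.3, 1.6, n_tanks)))  # survivors

def log_post(alpha_tank):
    lp = norm.logpdf(alpha_tank, 0, 5).sum()    # alpha_TANK ~ Normal(0, 5)
    lp += binom.logpmf(s_i, n_i, expit(alpha_tank)).sum()  # binomial likelihood
    return lp

print(log_post(np.zeros(n_tanks)))              # evaluate at a trial point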

In this section, the varying effects model renames TANK to POND and puts priors on the parameters of the distribution of \(\alpha_{\small{POND}}\):

\[
\begin{align}
s_i & \sim \mathrm{Binomial}(n_i, p_i) \\
\mathrm{logit}(p_i) & = \alpha_{\small{POND}[i]} \\
\alpha_{\small{POND}} & \sim \mathrm{Normal}(\alpha, \sigma) \\
\alpha & \sim \mathrm{Normal}(0, 1) \\
\sigma & \sim \mathrm{HalfCauchy}(0, 1)
\end{align}
\]
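The same sketch for the pond model (again invented data, not the book's) shows the only structural change: \(\alpha\) and \(\sigma\) are now unknowns with their own priors, so they appear as arguments to the log-posterior and are estimated jointly with the pond intercepts.

import numpy as np
from scipy.special import expit
from scipy.stats import binom, halfcauchy, norm

rng = np.random.default_rng(2)
n_ponds = 60
n_i = rng.integers(5, 36, n_ponds)              # tadpoles per pond (invented)
s_i = rng.binomial(n_i, expit(rng.normal(1.4, 1.5, n_ponds)))  # survivors

def log_post(alpha_pond, alpha, sigma):
    lp = norm.logpdf(alpha, 0, 1)               # alpha ~ Normal(0, 1)
    lp += halfcauchy.logpdf(sigma, scale=1)     # sigma ~ HalfCauchy(0, 1)
    lp += norm.logpdf(alpha_pond, alpha, sigma).sum()  # the learned prior
    lp += binom.logpmf(s_i, n_i, expit(alpha_pond)).sum()
    return lp

print(log_post(np.zeros(n_ponds), 0.0, 1.0))    # evaluate at a trial point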

But in the end, isn’t \(\alpha_{\small{POND}}\) still a vector drawn from a single normal prior? What was the point of putting priors on the prior?
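The best answer I can piece together: \(\alpha_{\small{POND}}\) is still a vector of normal draws, but the location and width of that normal are estimated from all ponds jointly, so each pond's intercept gets shrunk toward a learned grand mean by an amount that depends on how noisy its own data are. A fixed Normal(0, 5) can't do that. A crude simulation of the effect (all numbers invented, and the shrinkage rule is an empirical-Bayes shortcut, not the full posterior):

import numpy as np
from scipy.special import expit, logit

rng = np.random.default_rng(3)
n_ponds, n_per = 200, 10               # tiny ponds, so raw estimates are noisy
a_true = rng.normal(0.0, 1.0, n_ponds) # invented true pond log-odds
s = rng.binomial(n_per, expit(a_true))

p_hat = np.clip(s / n_per, 0.05, 0.95) # avoid infinite logits
a_raw = logit(p_hat)                   # no pooling: each pond on its own

v = 1 / (n_per * p_hat * (1 - p_hat))  # approx. sampling variance of a_raw
tau2 = max(a_raw.var() - v.mean(), 0.05)  # crude between-pond variance
w = tau2 / (tau2 + v)                  # noisier ponds get shrunk harder
a_pool = w * a_raw + (1 - w) * a_raw.mean()  # partial pooling

print("no pooling MAE:     ", np.abs(a_raw - a_true).mean())
print("partial pooling MAE:", np.abs(a_pool - a_true).mean())

In the actual model the shrinkage weights aren't plugged in like this; they fall out of the posterior over \(\alpha\) and \(\sigma\). But the effect is the same: the "single normal prior" has its parameters learned from the data, which is the adaptive regularization the chapter is demonstrating.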

Regardless of my lack of understanding, a couple of useful quotes:

12.3 More than one type of cluster

12.4 Multilevel posterior predictions

Updated beliefs

Critic’s choice

Seeing models adaptively regularize and avoid overfitting is magical. Figure 10.1 is a delightful and unexpected drawing of a chimpanzee at a dining table.

Corrections

If you see mistakes or want to suggest changes, please create an issue on the source repository.

Reuse

Text and figures are licensed under Creative Commons Attribution CC BY 4.0. Source code is available at https://github.com/nacnudus/duncangarmonsway, unless otherwise noted. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".

Citation

For attribution, please cite this work as

Garmonsway (2019, Feb. 21). Duncan Garmonsway: Rebel Bayes Day 4. Retrieved from https://nacnudus.github.io/duncangarmonsway/posts/2019-02-21-rebel-bayes-day-4/

BibTeX citation

@misc{garmonsway2019rebel,
  author = {Garmonsway, Duncan},
  title = {Duncan Garmonsway: Rebel Bayes Day 4},
  url = {https://nacnudus.github.io/duncangarmonsway/posts/2019-02-21-rebel-bayes-day-4/},
  year = {2019}
}