Hacker News

This is my favorite book on statistics. Full stop. The author, Andrew Gelman, created a whole new branch of Bayesian statistics through both his theoretical work on hierarchical modeling and his release of Stan, which made hierarchical models practical to apply.

It took me about a year to work through this book on the side (including the exercises), and it provided the foundation for years of fruitful research into hierarchical Bayesian models. It's definitely not an introductory read, but for anyone looking to advance their statistical toolkit, I cannot recommend this book highly enough.

As a starting point, I'd strongly suggest reading the first 5 chapters for an excellent introduction to Gelman's modeling philosophy, then jumping around the table of contents to any topics that look interesting.



"The author Andrew Gelman created a whole new branch of Bayesian statistics ..." Love Gelman, but this is playing fast and loose with the facts.


His book on hierarchical modeling with Hill has 20,398 citations on Google Scholar https://scholar.google.com/scholar?cluster=94492350364273118... and Wikipedia calls him "a major contributor to statistical philosophy and methods especially in Bayesian statistics[6] and hierarchical models.[7]", which sounds like the claim is more true than false.


He co-wrote the reference textbook on the topic and made interesting methodological contributions, but Gelman himself acknowledges other people, including Stein and Donoho, as creators of the theoretical underpinnings of multilevel/hierarchical modeling [1]. The field is quite old; one can find hierarchical models in articles published many decades ago.

Also, IMHO, his best work has been describing how to do statistics. He has written somewhere (I cannot find it now) that he sees himself as a user of mathematics, not a creator of new theories. His book Regression and Other Stories is elementary but exceptionally well written. He describes how great Bayesian statisticians think and work, and this is invaluable.

He is updating Data Analysis Using Regression and Multilevel/Hierarchical Models to the same standard, and I guess BDA will eventually come next. As part of the refresh, I imagine everything will be ported to Stan. Interestingly, Bob Carpenter and others working on Stan are now pursuing ideas on variational inference to scale things further.

[1] https://sites.stat.columbia.edu/gelman/research/unpublished/...


Totally agree, and great point that hierarchical models have been around for a long time; however, these were primarily analytical, leveraging conjugate priors or requiring pretty extensive integration.
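To make the "analytical, leveraging conjugate priors" point concrete, here is a minimal sketch (my own illustration, not from the thread) of why conjugacy made pre-MCMC Bayesian analysis tractable: with a Beta prior on a binomial proportion, the posterior is available in closed form, no sampling or numerical integration needed.

```python
# A Beta(a, b) prior is conjugate to the binomial likelihood: after
# observing k successes in n trials, the posterior is Beta(a+k, b+n-k).
# The whole "computation" is two additions.

def beta_binomial_update(a, b, k, n):
    """Return the posterior (alpha, beta) after k successes in n trials."""
    return a + k, b + (n - k)

# Start from a uniform Beta(1, 1) prior and observe 7 successes in 10 trials.
post_a, post_b = beta_binomial_update(1, 1, 7, 10)
posterior_mean = post_a / (post_a + post_b)  # (1+7) / (2+10)
print(post_a, post_b, round(posterior_mean, 3))  # 8 4 0.667
```

Outside these conjugate families, the posterior integrals generally have no closed form, which is exactly where MCMC-based tools like BUGS, JAGS, and Stan come in.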

I would say his work with Stan and his writings, along with theorists like Radford Neal, really opened the door to a computational approach to hierarchical modeling. And I think this is a meaningfully different field.


I give Gelman a lot of credit for popularizing hierarchical models, but you give him too much.

Before Stan existed we used BUGS [1] and then JAGS [2]. And most of the work on computation (by Neal and others) was entirely independent of Gelman.

[1] https://en.wikipedia.org/wiki/Bayesian_inference_using_Gibbs...

[2] https://en.wikipedia.org/wiki/Just_another_Gibbs_sampler


What is a book / course on statistics that I can go through before this so that I can understand this?


Here is one path to learning Bayesian statistics starting from the basics, assuming a modern R path with the tidyverse (recommended):

First learn some basic probability theory: Peter K. Dunn (2024). The theory of distributions. https://bookdown.org/pkaldunn/DistTheory

Then frequentist statistics:
Chester Ismay, Albert Y. Kim, and Arturo Valdivia - https://moderndive.com/v2/
Mine Çetinkaya-Rundel and Johanna Hardin - https://openintrostat.github.io/ims/

Finally Bayesian: Johnson, Ott, Dogucu - https://www.bayesrulesbook.com/
This is a great book; it will teach you everything from the very basics to advanced hierarchical Bayesian modeling, all using reproducible code and stan/rstanarm.

Once you master this, the next level may be using brms. Solomon Kurz has done a full translation of the Regression and Other Stories book using tidyverse/brms. His knowledge of tidyverse and brms is impressive and demonstrated in his code. https://github.com/ASKurz/Working-through-Regression-and-oth...


I would include Richard McElreath's _Statistical Rethinking_ here, after or in combination with _Bayes Rules!_. A translation of the code parts into the tidyverse is available free online, as are lecture videos based on the book.


I don’t mean for the bar to sound too high. I think working through Khan Academy’s full probability, calculus, and linear algebra courses would give you a strong foundation. I worked through this book having just completed the equivalent courses in college.

It’s just a relatively dense book. There are some other really good suggestions in this thread, most of which I’ve heard good things about. If you have a background in programming, I’d suggest Bayesian Methods for Hackers as a really good starting point. But you can also definitely tackle this book head on, and it will be very rewarding.


Highly recommend Stats 110 from Blitzstein. Lectures and textbook are all online https://stat110.hsites.harvard.edu/


Bayesian Statistics the Fun Way is probably the best place to start if you're coming at this from 0. It covers the basics of most of the foundational math you'll need along the way and assumes basically no prerequisites.

After that, Statistical Rethinking will take you much deeper into more complex experiment design using linear models and beyond, as well as deepening your understanding of other areas of math required.


Regression and Other Stories. It’s also co-authored by Gelman, and it reads like an updated version of his previous book Data Analysis Using Regression and Multilevel/Hierarchical Models.

Statistical Rethinking is a good option too.


Can second Regression and Other Stories, it's freely available here: https://users.aalto.fi/~ave/ROS.pdf, and you can access additional information such as data and code (including Python and Julia ports) here: https://avehtari.github.io/ROS-Examples/index.html


If you are near Columbia, the visiting students post-baccalaureate program (run by the SPS, last I recall) allows you to take for-credit courses in the Social Sciences department. Professor Ben Goodrich has an excellent course on Bayesian statistics in the social sciences, which teaches it using R (now it might be in Stan).

That course is a good balance between theory and practice. It gave me practical intuition for why the posterior distributions of parameters and data are important and how to compute them.

I took the course in 2016 so a lot could have changed.


I found the book from David Mackay on Information Theory, Inference, and Learning Algorithms to be well written and easy to follow. Plus it is freely available from his website: https://www.inference.org.uk/itprnn/book.pdf

It goes through fundamentals of Bayesian ideas in the context of applications in communication and machine learning problems. I find his explanations uncluttered.


Really sad he died of cancer a few years ago.


There is a collection of curated resources here: https://www.pymc.io/projects/docs/en/stable/learn.html


I would really love to have the story of PyMC told, especially its technical evolution: how it was first implemented and how it changed over the years.


For effectively and efficiently learning the calculus, linear algebra, and probability underpinning these fields, Math Academy is going to be your best resource.


Statistical Rethinking by Richard McElreath. He even has a youtube series covering the book if you prefer that modality.


Doing Bayesian Data Analysis by John Kruschke (get the 2nd edition). The name is even an homage to the original.


Can you explain to me in simple terms how your fruitful research benefited you in a concrete way? Is this simply an enlightening hobby, or do you have significant everyday applications? What kind of cool job has you employing Bayesian data analysis day to day, and for what benefit? How do the suits relate to such knowledge and its beneficial application, which may be well beyond their ken?


My applications have focused on noisy, high-dimensional, small datasets where it is either very expensive or impossible to get more data.

One example is rare class prediction on long-form text data, e.g. phone calls, podcasts, transcripts. Other methods, including neural networks and LLMs, are either not flexible enough or require far too much data to achieve the necessary performance. Structured hierarchical modeling strikes a balance between those two extremes.

Another example is in genomic analysis: similarly high dimensional, noisy, and data-poor. Additionally, you don’t actually care about the predictions; you want to understand which genes or sets of genes are driving phenotypic behaviors.

I’d be happy to go into more depth via email or chat if this is something you are interested in (on my profile).

Some useful reads

[1] https://sturdystatistics.com/articles/text-classification

[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC5028368/


Is there a good book that covers statistics as applied to testing, like for medical research, optimization, manufacturing, or whatever?


The key insight to recognize is that within the Bayesian framework, hypothesis testing is parameter estimation. Your certainty in the outcome of the test is your posterior probability over the test-relevant parameters.

Once you realize this, you can easily develop very sophisticated testing models (if necessary) that are also easy to understand and reason about. This dramatically simplifies the analysis.
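A minimal sketch of the "testing is estimation" view, with numbers I've made up for illustration: in an A/B test with Beta priors and binomial data, each arm's posterior is a Beta distribution, and the "test" is simply the posterior probability that one arm's rate exceeds the other's, estimated by Monte Carlo.

```python
import random

random.seed(0)

def posterior_prob_b_beats_a(succ_a, n_a, succ_b, n_b, draws=50_000):
    """P(theta_B > theta_A) under independent Beta(1,1) priors,
    estimated by sampling from each arm's Beta posterior."""
    wins = 0
    for _ in range(draws):
        theta_a = random.betavariate(1 + succ_a, 1 + n_a - succ_a)
        theta_b = random.betavariate(1 + succ_b, 1 + n_b - succ_b)
        wins += theta_b > theta_a
    return wins / draws

# Invented data: 30/200 conversions for A vs 45/200 for B.
print(posterior_prob_b_beats_a(30, 200, 45, 200))
```

No p-values or test statistics appear anywhere: once you have the posterior over the parameters, any testing question is just a probability computed from it.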

If you're looking for a specific book recommendation, Statistical Rethinking does a good job covering this at length, and Bayesian Statistics the Fun Way is a more beginner-friendly book that covers the basics of Bayesian hypothesis testing.


I might check out Statistical Rethinking given how frequently it is being recommended!

Edit: Haha I just found the textbook and I’m remembering now that I actually worked through sections of it back when I was working through BDA several years back.


This book is very relevant to those fields. There is a common choice in statistics to either stratify or aggregate your dataset.

There is an example in his book discussing efficacy trials across seven hospitals. If you stratify the data, you lose a lot of statistical power; if you aggregate the data, you end up mostly modeling the differences between hospitals rather than the effect you care about.

Hierarchical modeling allows you to split your dataset while keeping a single unified model. This is really powerful for extracting signal from noise, because you can split your dataset according to potential confounding variables, e.g. the hospital from which the data was collected.
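A toy sketch of the compromise a hierarchical model makes (all numbers and the between-hospital variance are invented for illustration): each hospital's raw rate is pulled toward the overall rate, with more shrinkage for hospitals whose estimates are noisier because they have fewer patients. This is the precision-weighted combination a simple normal-normal hierarchical model produces.

```python
def partial_pool(events, patients, overall, tau2=0.005):
    """Precision-weighted compromise between a hospital's raw rate
    and the overall rate (tau2 = assumed between-hospital variance)."""
    p_hat = events / patients
    se2 = p_hat * (1 - p_hat) / patients   # within-hospital sampling variance
    w = tau2 / (tau2 + se2)                # shrinkage weight in [0, 1]
    return w * p_hat + (1 - w) * overall

hospitals = {"A": (3, 20), "B": (12, 200), "C": (9, 60)}  # (events, patients)
overall = sum(e for e, _ in hospitals.values()) / sum(n for _, n in hospitals.values())

for name, (e, n) in hospitals.items():
    print(name, round(e / n, 3), round(partial_pool(e, n, overall), 3))
```

Stratifying corresponds to w = 1 (no pooling) and aggregating to w = 0 (complete pooling); the hierarchical estimate sits between the two, and in a real model tau2 is itself estimated from the data rather than fixed.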

I am writing this on my phone, so apologies for the lack of links, but in short, the approach in this book is extremely relevant to medical testing.


It’s unclear which post you’re referring to - can you clarify which book you mean by “this book”?



