M**K
A Gem of a Tome
This book is The Source. It has the optimum balance of completeness of treatment and conciseness. I'm coming from a maths background and I find the book satisfyingly grounded. The boundary of prerequisite knowledge is clear, and it doesn't mention much with a wave of the hand that will leave you Googling concepts for days on end. These qualities make such a book a rare gem in the statistics literature.

What really sets this book apart is its mixture of theory, heuristics, and examples. The authors set up the theoretical foundation, explain when and how a method or framework applies (and, e.g., which parameters to tweak), and then give examples of exactly how to apply it, with post-analysis. This sounds like basic stuff, but it is surprising how many books neglect this holistic treatment.

Finally, each chapter is self-contained for the most part, so you can use the book as a sequential tutorial or, over time, as a periodic reference, dipping in and out of sections as needed. The authors have made this book function well in either mode.
D**L
Great
Excellent book, brilliantly printed
B**N
Better than ever, unless you don't like Stan
This book, in each of its editions, has been the best graduate-level book on the subject at the time of its publication.

Since the 2nd edition came out, there have been substantial improvements in MCMC computation algorithms and convergence monitoring, as well as in Bayesian nonparametric modelling. Substantial new material has been added to cover these topics. The one potential caveat is that the authors have stripped out all the BUGS code that was in the previous two editions and replaced it with code in their new language, Stan. They claim Stan is better (faster, with better convergence in certain situations where BUGS is known to struggle), but BUGS is proven technology whereas Stan is a (very promising) newcomer. There's more than enough new material to justify upgrading to the 3rd edition, in my view.
C**R
Seeing the light
An excellent book at the intermediate level for learning about Bayesian methods. I struggled through a few machine learning texts but didn't get a good handle on fundamental topics such as MCMC methods and hierarchical methods. This book explains these topics thoroughly and doesn't rely solely on mathematical formalism. If you find yourself in a similar boat, I would definitely consider BDA3. One minor observation, though: the later chapters with the new material are less even in terms of lucidity.

If you happen to be looking for a simpler introduction, then perhaps consider the book by John Kruschke. If you are not yet convinced about Bayesian statistics, consider E.T. Jaynes' wonderful book on probability theory, which takes a more philosophical perspective.
G**L
A great book in Bayesian statistics
A great book on Bayesian statistics. Its pragmatic approach is superb, especially its emphasis on predictive model selection and testing, which is similar in spirit to the classical concept of cross-validation. The book contains a myriad of examples. The authors' point of view comes mainly from the social sciences, so readers from the physical sciences might need more time to get used to the terminology. My only criticism is the conscious omission of Bayes factors from the book, so it is better to use other sources for that topic.
A**R
Nice Book
Fast delivery and clear explanation of the concepts. I hope this book helps me become conversant with Bayesian methods.
J**D
The book.
This is the book. Nothing more needs to be said.
M**I
this book would be pretty much pointless
It's an OK book. It's worth having only as a reference, and mainly as a practical guide. Most methods are barely explained, and no theoretical foundation for why they work is given. If you're looking for a recipe book, then this is it. However, even at that, it's not suitable for every subject. In econometrics, this book would be pretty much pointless; many of the topics we're interested in do not show up...

On the good side, some exercises have solutions on the authors' site. But even then, these exercises aren't the most enticing I've ever seen. Example: in chapter 2, there's an exercise on non-informative priors. It says to choose one and apply it to the problem. The solutions on the authors' site say that any of those referenced in the book work. Well, choose Jeffreys. Then you'll get a negative Fisher information scalar. Hmm, this can't be. I posted a question on Cross Validated (Stack Exchange) and discovered that the Jeffreys prior doesn't always work when you have a discrete r.v. The book is silent on this matter. There are more examples like this...

It's only useful for those already proficient in the subject, or those looking for just a rough practical guide for biostatistics.
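As context for the Jeffreys-prior complaint above: in the textbook regular case of a discrete observable with a continuous parameter, the Fisher information is nonnegative and the Jeffreys prior is well defined; the subtleties arise in less regular setups. A minimal Python sketch, using a binomial likelihood as an assumed example (not necessarily the exercise the reviewer means), checks the Fisher information numerically against its closed form:

```python
from math import comb, isclose

def fisher_info_binomial(n, theta):
    """Fisher information I(theta) for X ~ Binomial(n, theta),
    computed directly as E[(d/dtheta log p(x | theta))^2]."""
    total = 0.0
    for x in range(n + 1):
        p = comb(n, x) * theta**x * (1 - theta)**(n - x)
        # Score function: derivative of the log-likelihood in theta.
        score = x / theta - (n - x) / (1 - theta)
        total += p * score**2
    return total

n, theta = 10, 0.3
numeric = fisher_info_binomial(n, theta)
closed_form = n / (theta * (1 - theta))  # known result for the binomial
assert isclose(numeric, closed_form)
# I(theta) > 0, so the Jeffreys prior p(theta) ∝ sqrt(I(theta)),
# here ∝ theta**-0.5 * (1 - theta)**-0.5 (a Beta(1/2, 1/2) density),
# is perfectly well defined for this discrete r.v.
print(round(numeric, 2))  # 47.62
```

So a negative Fisher information "scalar" signals a non-regular model or a computational slip rather than a general failure for discrete data; the reviewer's broader point, that the book doesn't flag these edge cases, stands.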