This is probably the most important of the four parts. The key points to grasp are the role of priors in Bayesian inference and the principles of MCMC.
Note: if you’re feeling overloaded, you could skip the last 10 minutes of this talk, since you don’t need to worry about Metropolis-coupled MCMC (i.e. heated vs. cold chains) or asymmetric proposals.
- In your own words, can you describe each component of Bayes’ rule? Which parts are difficult to understand? (See the first sketch after this list for a worked example.)
- Can you describe the difference between discrete and continuous variables? And between probabilities and probability densities?
- What is the difference between vague and informative priors? Unfortunately Paul’s archery app isn’t available right now, so pay close attention to the demo (and see the second sketch after this list).
- What is the aim of MCMC in Bayesian inference? Visit the MCMC robot app to explore this further.
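A tiny worked example can make the components of Bayes’ rule concrete. The sketch below uses made-up numbers (two coins, one fair and one biased) that are not from the talk; it just shows the prior, the likelihood, the marginal likelihood (the denominator), and the posterior each playing their role in a discrete setting.

```python
# Bayes' rule on a discrete example (invented numbers, for illustration):
# we flip one of two coins, one fair and one biased toward heads,
# observe heads, and ask which coin we most likely flipped.
# posterior = likelihood * prior / marginal likelihood

priors = {"fair": 0.5, "biased": 0.5}        # P(hypothesis)
likelihoods = {"fair": 0.5, "biased": 0.9}   # P(heads | hypothesis)

# Marginal likelihood: sum over all hypotheses (the denominator of Bayes' rule).
marginal = sum(likelihoods[h] * priors[h] for h in priors)

posteriors = {h: likelihoods[h] * priors[h] / marginal for h in priors}
print(posteriors)  # {'fair': 0.357..., 'biased': 0.642...}
```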
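For the prior question, a Beta-Binomial sketch (again with invented numbers, standing in for the archery demo rather than reproducing it) shows how a vague prior lets the data dominate, while an informative prior pulls the posterior toward its own center. With a Beta(a, b) prior on a proportion p and k successes in n trials, the posterior is Beta(a + k, b + n - k).

```python
# Vague vs. informative priors under a Beta-Binomial model
# (illustrative numbers, not from the talk).

k, n = 7, 10  # hypothetical data: 7 successes in 10 trials

for label, (a, b) in {"vague Beta(1, 1)": (1, 1),
                      "informative Beta(50, 50)": (50, 50)}.items():
    post_a, post_b = a + k, b + n - k       # conjugate posterior parameters
    post_mean = post_a / (post_a + post_b)  # posterior mean of p
    print(f"{label}: posterior mean of p = {post_mean:.3f}")

# The vague prior lets the data speak (mean ~ 0.667), while the
# informative prior pulls the estimate toward 0.5 (mean ~ 0.518).
```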
In this talk, Paul says that most of the time you don’t need to worry about the proposal distributions, because available software experiments with the proposals at the beginning of your MCMC run (a process called tuning) and tries to find the best proposal size for your model and data. This is true; in practice, however, as data sets grow larger and models become more complex, we sometimes need to pay attention to the proposals ourselves, because convergence can be challenging. The sketch below illustrates the idea.
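To make “proposal size” and “tuning” concrete, here is a minimal random-walk Metropolis sketch with a crude tuning phase. The target distribution, the tuning rule, and the 0.44 acceptance target are all illustrative choices, not the behavior of any particular phylogenetics program: real software tunes many proposals at once against a far more complex posterior.

```python
# A generic random-walk Metropolis sampler with a simple tuning phase,
# loosely analogous to what phylogenetics software does at the start of a run.
import math
import random

def log_target(x):
    # Unnormalized log density of a standard normal, standing in for a
    # posterior; MCMC only needs the target up to a constant.
    return -0.5 * x * x

def metropolis(n_iter, step, x=0.0):
    accepted = 0
    samples = []
    for _ in range(n_iter):
        proposal = x + random.uniform(-step, step)  # symmetric proposal
        log_ratio = log_target(proposal) - log_target(x)
        # Accept with probability min(1, target(proposal) / target(x)).
        if log_ratio >= 0 or random.random() < math.exp(log_ratio):
            x = proposal
            accepted += 1
        samples.append(x)
    return samples, accepted / n_iter

# Tuning phase: nudge the step size toward a moderate acceptance rate.
# Too-small steps accept almost everything but explore slowly; too-large
# steps are rejected constantly. Both hurt convergence.
step = 1.0
for _ in range(20):
    _, rate = metropolis(500, step)
    step *= 1.1 if rate > 0.44 else 0.9  # 0.44 is a common 1-D target

samples, rate = metropolis(10_000, step)
print(f"tuned step = {step:.2f}, acceptance rate = {rate:.2f}")
print(f"posterior mean estimate = {sum(samples) / len(samples):.3f}")
```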