Applied Linear Modeling

(2018) In this block course we developed Generalized Linear Models and Linear Mixed Models from scratch. The idea is to introduce the linear model and the tests it encompasses as one coherent framework (e.g. t-test, ANOVA, ANCOVA, repeated-measures ANOVA) and then continue with GLMs and LMMs. The mornings consisted of two to three 1-hour lectures, the afternoons of a 2.5-3-hour exercise session (the previous edition was taught together with Basil Wahn).

Lecture Slides

All slides are generated in R, so all plots can easily be rebuilt and customized if necessary.

  • Linear Modeling We introduce linear modeling as an overarching theme: dummy coding, standard errors, confidence intervals, bootstrapping, the concept of “there exists only one test”, interactions, and some philosophical aspects (a minimal R sketch follows this list)
  • Generalized Linear Models We discuss logistic and Poisson regression. The focus is on motivation and interpretation. We reconcile all members of the GLM family, with special attention to the variance/mean assumptions
  • Linear Mixed Models We discuss repeated-measures (within-subject) designs and move from there to LMMs, covering implementation, interpretation, assumption checks, and convergence problems. We then treat random coefficients (intercepts, slopes, correlations) and finally multiple random variables, e.g. subjects and items.
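
To make the “there exists only one test” idea concrete, here is a minimal sketch (not taken from the slides; the data are simulated) showing that a two-group t-test is just a linear model with dummy coding, plus a percentile-bootstrap confidence interval:

```r
# Simulated two-group data; lm() dummy-codes the factor internally, so the
# 'grouptreatment' coefficient is exactly the difference in group means.
set.seed(1)
d <- data.frame(
  group = factor(rep(c("control", "treatment"), each = 30)),
  y     = c(rnorm(30, mean = 0), rnorm(30, mean = 0.5))
)

fit <- lm(y ~ group, data = d)
summary(fit)  # same t and p as t.test(y ~ group, data = d, var.equal = TRUE)
confint(fit)  # parametric confidence intervals

# Nonparametric bootstrap of the group difference
boot_diff <- replicate(2000, {
  idx <- sample(nrow(d), replace = TRUE)
  coef(lm(y ~ group, data = d[idx, ]))["grouptreatment"]
})
quantile(boot_diff, c(0.025, 0.975))  # percentile bootstrap CI
```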

You can find the R presentation source code (including the code to generate 90% of the graphs) for GLM here and for LMM here.

Exercises

Logistic regression. Here we use the Cowles data from John Fox’s book Applied Regression Analysis. We predict whether or not a student will volunteer for a study based on sex, extraversion, and neuroticism. An interaction is modeled and interpreted as well; a minimal sketch of the model follows.
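
A minimal sketch of the kind of model fit in this exercise, assuming the Cowles data as shipped in the carData package (the actual exercise code may differ):

```r
library(carData)  # provides Cowles (sex, extraversion, neuroticism, volunteer)
data(Cowles)

# Logistic regression with an extraversion x neuroticism interaction
fit <- glm(volunteer ~ sex + extraversion * neuroticism,
           family = binomial, data = Cowles)
summary(fit)

exp(coef(fit))  # odds ratios are easier to interpret than log-odds
```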

Linear mixed model. Here we use data from one of my studies to build a simple linear mixed model. We look at parameter transformations, assumption checks (which fail in this case), and log-log interactions. We also encounter models that do not converge and perform model comparisons using likelihood-ratio tests. Finally, we check whether multiple random variables are necessary (see the sketch below).
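
As an illustration of that final step, a hedged lme4 sketch; the names `dat`, `rt`, `duration`, `subject`, and `item` are invented placeholders, not the study’s actual variables:

```r
library(lme4)

# Full model: random intercepts and log-duration slopes for subjects,
# plus a second random variable (items)
m_full <- lmer(log(rt) ~ log(duration) +
                 (1 + log(duration) | subject) + (1 | item),
               data = dat)

# Reduced model without the item random effect
m_red <- lmer(log(rt) ~ log(duration) +
                (1 + log(duration) | subject),
              data = dat)

# Likelihood-ratio test (anova refits with ML); note that testing a variance
# component this way is conservative due to the boundary problem
anova(m_red, m_full)
```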

Bayesian Inference

(2016) Together with Peter König. In this course we watched the mini-lectures by Jarad Niemi (available on YouTube) and discussed their content. This lets students rewatch the videos, introduce topics themselves, and see the same topic from multiple angles. On the downside, the videos do not always motivate the content very well. Every week we had homework in R, which we discussed intensively in class. Some of the exercises are taken from “Doing Bayesian Data Analysis” and “Statistical Rethinking”, both books I highly recommend; most I made myself. Feel free to use them under CC-BY.

  • Week 1 Probabilities & Bayes Rule
  • Week 2 Bernoulli, Iterative Updating
  • Week 3 Monte Carlo Integration implementation
  • Week 4 Inverse Cumulative Function and Accept-Reject Method implementation
  • Week 5 Metropolis Implementation and Introduction to STAN (a minimal Metropolis sketch follows this list)
  • Week 6 Multi-Parameter Metropolis and Metropolis-within-Gibbs implementation. Bonus: Banana-Shaped Posterior
  • Week 7 Hierarchical Models in STAN
  • Week 7 Poisson distribution in STAN and posterior predictive model checks (data file .csv)
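
As a flavor of the Metropolis exercises (Weeks 5-6), here is a minimal random-walk Metropolis sampler, not taken from the original homework: it samples the posterior of a Bernoulli probability theta under a flat prior, given 7 successes in 10 trials.

```r
set.seed(1)

# Log-posterior: flat prior on (0, 1), so only the binomial likelihood remains
log_post <- function(theta) {
  if (theta <= 0 || theta >= 1) return(-Inf)
  dbinom(7, size = 10, prob = theta, log = TRUE)
}

n_iter   <- 10000
theta    <- numeric(n_iter)
theta[1] <- 0.5
for (i in 2:n_iter) {
  proposal  <- theta[i - 1] + rnorm(1, sd = 0.2)  # random-walk proposal
  log_alpha <- log_post(proposal) - log_post(theta[i - 1])
  theta[i]  <- if (log(runif(1)) < log_alpha) proposal else theta[i - 1]
}

# Discard burn-in; the histogram should match the analytic Beta(8, 4) posterior
hist(theta[-(1:1000)], breaks = 40, freq = FALSE)
curve(dbeta(x, 8, 4), add = TRUE)
```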