Generalized linear models
March 24, 2016 — August 5, 2021
Using the machinery of linear regression to fit somewhat more general regression models, via least-squares or quasi-likelihood approaches. You are still doing something like familiar linear regression, but outside the setting of a linear response with (possibly homoskedastic) Gaussian noise.
1 TODO
Discover the magical powers of log-concavity and what they enable.
2 Classic linear models
See linear models.
3 Generalised linear models
The original extension. Kenneth Tay’s explanation is simple and efficient.
To learn:
- When can we do this? e.g. Must the response be from an exponential family for really real? What happens if not?
- Does anything funky happen with regularisation? What?
- Model selection theory
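Before the details, a minimal sketch of what fitting one actually looks like: iteratively reweighted least squares (IRLS) for a Poisson regression with log link, on synthetic data. This assumes numpy; the variable names are mine, not from any particular library.

```python
import numpy as np

# Synthetic Poisson-regression data: E[y] = exp(X @ beta_true).
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.3])
y = rng.poisson(np.exp(X @ beta_true))

# IRLS: repeatedly solve a weighted least-squares problem on a
# "working response" z. For the canonical log link, the weights
# are W = mu (since Var(y) = mu and dmu/deta = mu).
beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta
    mu = np.exp(eta)                 # inverse log link
    W = mu
    z = eta + (y - mu) / mu          # working response
    WX = X * W[:, None]
    beta_new = np.linalg.solve(X.T @ WX, WX.T @ z)
    if np.max(np.abs(beta_new - beta)) < 1e-10:
        beta = beta_new
        break
    beta = beta_new
```

At convergence, with an intercept and the canonical link, the score equation \(X^\top(y-\mu)=0\) forces the fitted means to sum to the observed total, which is a cheap sanity check on any implementation.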
3.1 Response distribution
🏗 What constraints do we have here?
3.2 Linear Predictor
🏗
3.3 Link function
An invertible (typically monotonic) function \(g\) relating the mean of the response distribution to the linear predictor, \(g(\operatorname{E}(Y))=\eta\).
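A tiny illustration of the invertibility requirement, using the canonical links for two common response families (the dictionary layout here is just for exposition):

```python
import math

# The link g maps the response mean mu into linear-predictor space;
# its inverse maps eta back to a valid mean.
links = {
    "binomial": (lambda mu: math.log(mu / (1 - mu)),   # logit link
                 lambda eta: 1 / (1 + math.exp(-eta))),
    "poisson":  (lambda mu: math.log(mu),              # log link
                 lambda eta: math.exp(eta)),
}

g, g_inv = links["binomial"]
eta = g(0.75)       # a probability mapped onto the whole real line
mu = g_inv(eta)     # and back again, recovering 0.75
```

The point of the canonical choices is that the inverse link always lands in the right range: \((0,1)\) for a binomial mean, \((0,\infty)\) for a Poisson mean, no matter what \(\eta\) the linear predictor produces.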
3.4 Quasi-likelihood
A generalisation of likelihood used in some tricky corners of GLMs. Wedderburn (1976) used it to provide a unified rationale for GLM and maximum-likelihood estimation. I don’t yet understand it. Heyde says (Heyde 1997):
Historically there are two principal themes in statistical parameter estimation theory […]
It is now possible to unify these approaches under the general description of quasi-likelihood and to develop the theory of parameter estimation in a very general setting. […]
It turns out that the theory needs to be developed in terms of estimating functions (functions of both the data and the parameter) rather than the estimators themselves. Thus, our focus will be on functions that have the value of the parameter as a root rather than the parameter itself.
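The canonical example of such an estimating function is, I believe, Wedderburn’s quasi-score, which needs only a mean model \(\mu_i(\beta)\) and a mean–variance relationship \(\operatorname{Var}(Y_i)=\phi V(\mu_i)\) rather than a full likelihood: \[ U(\beta) = \sum_{i=1}^{n} \frac{\partial \mu_i}{\partial \beta}\, \frac{y_i - \mu_i}{\phi V(\mu_i)} = 0. \] The data and the parameter both appear, the parameter estimate is defined as a root of \(U\), and when the response really does come from an exponential family this reduces to the ordinary score equation.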
4 Hierarchical generalised linear models
GLM + hierarchical model = HGLM.
5 Generalised additive models
Generalised generalised linear models. Semiparametric simultaneous discovery of some non-linear predictors and their response curve under the assumption that the interaction is additive in the transformed predictors \[ g(\operatorname{E}(Y))=\beta_0 + f_1(x_1) + f_2(x_2)+ \cdots + f_m(x_m). \]
These have now also been generalised in the obvious way.
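A crude sketch of the classic backfitting idea behind additive models, in the Gaussian/identity-link special case: cycle through the predictors, smoothing the partial residuals against each one in turn. The running-mean smoother here is a stand-in for the penalised splines a real GAM library would use; everything below is synthetic.

```python
import numpy as np

def running_mean_smooth(x, r, width=31):
    # Smooth residuals r against predictor x with a boxcar running
    # mean in x-sorted order (reflected padding at the edges).
    order = np.argsort(x)
    r_sorted = r[order]
    pad = width // 2
    padded = np.r_[r_sorted[:pad][::-1], r_sorted, r_sorted[-pad:][::-1]]
    smoothed = np.convolve(padded, np.ones(width) / width, mode="valid")
    out = np.empty_like(r)
    out[order] = smoothed
    return out

rng = np.random.default_rng(1)
n = 400
x1, x2 = rng.uniform(-2, 2, size=(2, n))
y = np.sin(x1) + 0.5 * x2**2 + rng.normal(scale=0.1, size=n)

# Backfitting: each f_j is re-estimated by smoothing the residual
# left over after subtracting the other components.
beta0 = y.mean()
f1 = np.zeros(n)
f2 = np.zeros(n)
for _ in range(20):
    f1 = running_mean_smooth(x1, y - beta0 - f2)
    f1 -= f1.mean()          # centre each f_j for identifiability
    f2 = running_mean_smooth(x2, y - beta0 - f1)
    f2 -= f2.mean()
```

The centring step matters: without it the intercept can be shuffled freely between \(\beta_0\) and the \(f_j\), which is the usual identifiability wrinkle in the additive decomposition above.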
6 Generalised additive models for location, scale and shape
Folding GARCH and other regression models into GAMs.
GAMLSS is a modern distribution-based approach to (semiparametric) regression models, where all the parameters of the assumed distribution for the response can be modelled as additive functions of the explanatory variables.
7 Vector generalised additive models
See Yee (2015).
8 Vector generalised hierarchical additive models for location, scale and shape
Exercise for the student.
9 Generalised estimating equations
🏗
But see Johnny Hong and Kellie Ottoboni. Is this just the quasi-likelihood thing again?
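As far as I can tell, largely yes: for clustered responses \(y_i\) with mean model \(\mu_i(\beta)\), the generalised estimating equations take the same root-of-an-estimating-function form as the quasi-score, \[ \sum_{i=1}^{K} D_i^{\top} V_i^{-1}\bigl(y_i - \mu_i(\beta)\bigr) = 0, \qquad D_i = \frac{\partial \mu_i}{\partial \beta}, \] where \(V_i\) is a “working” covariance built from a guessed within-cluster correlation structure. The selling point is that the estimate of \(\beta\) remains consistent even when that working correlation is misspecified.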
10 GGLLM
Generalized² Linear² models (Gordon 2002) unify GLMs with non-linear matrix factorisations.