Causal inference in highly parameterized ML
September 18, 2020 — March 7, 2024
Applying causal graph structure in the challenging environment of a no-holds-barred nonparametric machine learning algorithm such as a neural net or its ilk. I am interested in this because it seems necessary, and kind of obvious, for handling things like dataset shift, yet it is often ignored. What is that about?
I do not know at the moment. This is a link salad for now.
See also the brain salads on graphical models and supervised models.
1 Invariance approaches
Léon Bottou, From Causal Graphs to Causal Invariance:
For many problems, it’s difficult to even attempt drawing a causal graph. While structural causal models provide a complete framework for causal inference, it is often hard to encode known physical laws (such as Newton’s gravitation, or the ideal gas law) as causal graphs. In familiar machine learning territory, how does one model the causal relationships between individual pixels and a target prediction? This is one of the motivating questions behind the paper Invariant Risk Minimization (IRM). In place of structured graphs, the authors elevate invariance to the defining feature of causality.
He commends the Cloudera Fast Forward tutorial Causality for Machine Learning, which is a nice bit of applied work.
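To make the invariance idea concrete, here is a minimal sketch of the IRMv1 penalty from the IRM paper, in PyTorch, assuming a binary classifier and a list of per-environment minibatches (this is not the authors’ reference code; the function names are mine). The penalty is the squared gradient of each environment’s risk with respect to a dummy classifier scale fixed at 1, so the learned representation is rewarded when one and the same classifier is simultaneously optimal in every environment.

```python
import torch
import torch.nn.functional as F


def irm_penalty(logits, y):
    """IRMv1 penalty: squared gradient of the environment risk with respect
    to a dummy classifier scale w, evaluated at w = 1."""
    w = torch.tensor(1.0, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * w, y)
    (grad,) = torch.autograd.grad(loss, [w], create_graph=True)
    return grad.pow(2)


def irm_objective(model, env_batches, lam=100.0):
    """Average empirical risk over environments plus lam times the invariance
    penalty. env_batches is a list of (x, y) minibatches, one per training
    environment; y is a float tensor of 0/1 labels."""
    risks, penalties = [], []
    for x, y in env_batches:
        logits = model(x).squeeze(-1)
        risks.append(F.binary_cross_entropy_with_logits(logits, y))
        penalties.append(irm_penalty(logits, y))
    return torch.stack(risks).mean() + lam * torch.stack(penalties).mean()
```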
2 Causality for feedback and continuous fields
3 Double machine learning
See Double machine learning.
4 As “Deep Causality”
Not sure what this is yet (Berrevoets et al. 2024; Deng et al. 2022; Lagemann et al. 2023).
5 Benchmarking
Detecting causal associations in time series datasets is a key challenge for novel insights into complex dynamical systems such as the Earth system or the human brain. Interactions in such systems present a number of major challenges for causal discovery techniques and it is largely unknown which methods perform best for which challenge.
The CauseMe platform provides ground truth benchmark datasets featuring different real data challenges to assess and compare the performance of causal discovery methods. The available benchmark datasets are either generated from synthetic models mimicking real challenges, or are real world data sets where the causal structure is known with high confidence. The datasets vary in dimensionality, complexity and sophistication.
6 Tooling
6.1 DoWhy
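A typical DoWhy analysis runs in four steps: model the problem as a causal graph, identify the target estimand, estimate it, and then try to refute the estimate. A minimal sketch on simulated data; the variable names, the true effect of 2, and the particular estimator and refuter chosen here are all mine, for illustration only.

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

# Toy simulated data with a single confounder; everything here is illustrative.
rng = np.random.default_rng(0)
n = 2000
confounder = rng.normal(size=n)
treatment = (confounder + rng.normal(size=n) > 0).astype(int)
outcome = 2.0 * treatment + confounder + rng.normal(size=n)
df = pd.DataFrame(
    {"confounder": confounder, "treatment": treatment, "outcome": outcome}
)

model = CausalModel(
    data=df,
    treatment="treatment",
    outcome="outcome",
    common_causes=["confounder"],  # alternatively, pass an explicit graph= string
)
estimand = model.identify_effect()  # backdoor adjustment on the confounder
estimate = model.estimate_effect(
    estimand, method_name="backdoor.linear_regression"
)
print(estimate.value)  # should land near the true effect of 2
refutation = model.refute_estimate(
    estimand, estimate, method_name="placebo_treatment_refuter"
)
print(refutation)
```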
6.2 CausalNex
CausalNex is a Python library that uses Bayesian Networks to combine machine learning and domain expertise for causal reasoning. You can use CausalNex to uncover structural relationships in your data, learn complex distributions, and observe the effect of potential interventions.
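A rough sketch of that workflow under my own toy assumptions (a linear chain x → y → z, arbitrary edge threshold and bins): NOTEARS learns a structure model, a discretized BayesianNetwork is fitted on top of it, and an InferenceEngine queries the model under a do-intervention.

```python
import numpy as np
import pandas as pd
from causalnex.structure.notears import from_pandas
from causalnex.network import BayesianNetwork
from causalnex.inference import InferenceEngine

# Toy linear data x -> y -> z; names, coefficients and thresholds are illustrative.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.8 * x + 0.3 * rng.normal(size=500)
z = 0.8 * y + 0.3 * rng.normal(size=500)
df = pd.DataFrame({"x": x, "y": y, "z": z})

sm = from_pandas(df, w_threshold=0.3)  # NOTEARS structure learning, pruning weak edges
print(sm.edges)

# CausalNex Bayesian networks are discrete, so bin the continuous variables first.
df_disc = df.apply(lambda s: pd.cut(s, 3, labels=["low", "mid", "high"]).astype(str))
bn = BayesianNetwork(sm)
bn = bn.fit_node_states(df_disc).fit_cpds(
    df_disc, method="BayesianEstimator", bayes_prior="K2"
)

ie = InferenceEngine(bn)
ie.do_intervention("x", {"low": 0.0, "mid": 0.0, "high": 1.0})
print(ie.query()["z"])  # distribution of z after forcing x to "high"
```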
6.3 cause2e
The main contribution of cause2e is the integration of two established causal packages that have so far been separate and cumbersome to combine:
- Causal discovery methods from the py-causal package, which is a Python wrapper around parts of the Java TETRAD software. It provides many algorithms for learning the causal graph from data and domain knowledge.
- Causal reasoning methods from the DoWhy package, which is the current standard for the steps of a causal analysis starting from a known causal graph and data.
6.4 TETRAD
TETRAD (source, tutorial) is a tool for discovering, visualizing, and estimating large empirical DAGs, including general graphical inference and causal discovery. It’s written by eminent causal-inference people.
Tetrad is a program which creates, simulates data from, estimates, tests, predicts with, and searches for causal and statistical models. The aim of the program is to provide sophisticated methods in a friendly interface requiring very little statistical sophistication of the user and no programming knowledge. It is not intended to replace flexible statistical programming systems such as Matlab, Splus or R. Tetrad is freeware that performs many of the functions in commercial programs such as Netica, Hugin, LISREL, EQS and other programs, and many discovery functions these commercial programs do not perform. …
The Tetrad programs describe causal models in three distinct parts or stages: a picture, representing a directed graph specifying hypothetical causal relations among the variables; a specification of the family of probability distributions and kinds of parameters associated with the graphical model; and a specification of the numerical values of those parameters.
py-causal is a Python wrapper around TETRAD; R-causal is the equivalent wrapper for R.
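For flavour, a sketch of driving TETRAD’s search from Python via py-causal, following the pattern in its README; the data file, algorithm ID and score ID below are assumptions, so check tetrad.listAlgorithms() and tetrad.listScores() against your installed version.

```python
import pandas as pd
from pycausal.pycausal import pycausal
from pycausal import search

df = pd.read_csv("data.csv")  # placeholder: a continuous-valued dataset

pc = pycausal()
pc.start_vm()  # py-causal drives the Java TETRAD library through a JVM

tetrad = search.tetradrunner()
# 'fges' with 'sem-bic' follows the py-causal examples; confirm the exact
# identifiers with tetrad.listAlgorithms() and tetrad.listScores().
tetrad.run(algoId="fges", dfs=df, scoreId="sem-bic", maxDegree=3, verbose=False)

print(tetrad.getNodes())
print(tetrad.getEdges())  # learned edges, e.g. 'x --> y'

pc.stop_vm()
```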
7 Incoming
- Nisha Muktewar and Chris Wallace, Causality for Machine Learning is the book Bottou recommends on this theme.
- For coders, Ben Dickson writes on Why machine learning struggles with causality.
- Cheng Soon Ong recommends Finn Lattimore’s work to me as an important perspective.
- biomedia-mira/deepscm: Repository for Deep Structural Causal Models for Tractable Counterfactual Inference (Pawlowski, Coelho de Castro, and Glocker 2020).
- ICML 2022 Tutorial on causality and deep learning
- Causality and Deep Learning: Synergies, Challenges and the Future Tutorial