Observability and sensitivity in learning dynamical systems
Parameter identifiability in dynamical models
November 9, 2020
In linear systems theory, observability asks whether we can recover a latent state from the observations; identifiability asks the analogous question about a parameter. I will conflate the two for present purposes.
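As a concrete anchor (a standard textbook criterion, not from the original note): for a linear system $\dot{x} = A x,\ y = C x$ with $x \in \mathbb{R}^{n}$, the state is observable from the output precisely when the observability matrix has full rank,

$$
\operatorname{rank}\begin{bmatrix} C \\ C A \\ \vdots \\ C A^{n-1} \end{bmatrix} = n.
$$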
Sometimes learning a parameter as such is a red herring; we actually wish to learn an object that is a function of the parameters, such as a transfer function, and many different parameter combinations will approximate that object similarly well. If we know what the actual object of interest is, we might hope to integrate out the nuisance parameters and measure sensitivity to the object itself; but perhaps we do not even know what that object is. Then what do we do?
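To see how this non-identifiability arises (a standard linear-systems example, added here for illustration): any invertible change of state coordinates $T$ sends a realization $(A, B, C)$ to $(T A T^{-1}, T B, C T^{-1})$, yet the transfer function is unchanged,

$$
C T^{-1} \left( sI - T A T^{-1} \right)^{-1} T B = C \left( sI - A \right)^{-1} B,
$$

so infinitely many parameter matrices explain the same input-output behaviour, while the transfer function itself remains identifiable.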
Very closely related, perhaps identical: uncertainty quantification.
1 Dynamical systems
How precisely can I learn a given parameter of a dynamical system from observation? In ODE theory, a useful concept is sensitivity analysis, which tells us how much gradient information our observations carry about a parameter. This comes in local (at my current parameter estimate) and global (over the whole parameter range) flavours.
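A minimal sketch of the local flavour, using SciPy. The logistic-growth model, parameter values, noise level, and observation grid below are illustrative assumptions, not from the original note; the point is that the forward sensitivity $s = \partial x / \partial r$ obeys its own ODE, and the resulting sensitivities feed directly into a Fisher-information summary of how well the data pin down $r$.

```python
# Local sensitivity analysis for a scalar ODE via forward sensitivity equations.
# Model and numbers are illustrative assumptions, not from the original post.
import numpy as np
from scipy.integrate import solve_ivp

def augmented_rhs(t, z, r, K):
    """Logistic growth dx/dt = r x (1 - x/K), augmented with the forward
    sensitivity s = dx/dr, which obeys ds/dt = (df/dx) s + df/dr."""
    x, s = z
    dfdx = r * (1 - 2 * x / K)   # partial of the RHS w.r.t. the state x
    dfdr = x * (1 - x / K)       # partial of the RHS w.r.t. the parameter r
    return [r * x * (1 - x / K), dfdx * s + dfdr]

r, K = 0.5, 10.0
t_obs = np.linspace(0.0, 20.0, 50)
sol = solve_ivp(
    augmented_rhs, (t_obs[0], t_obs[-1]), [0.1, 0.0],
    t_eval=t_obs, args=(r, K), rtol=1e-8,
)
x, s = sol.y

# Under additive Gaussian observation noise with variance sigma^2, the Fisher
# information for r from these observations is sum(s**2) / sigma**2;
# near-zero sensitivities mean the data say little about r near this estimate.
sigma = 0.1
fisher_info = np.sum(s**2) / sigma**2
print(f"Fisher information for r at (r={r}, K={K}): {fisher_info:.1f}")
```

A global analysis would repeat this calculation (or a variance-based analogue such as Sobol indices) across the plausible parameter range rather than at a single point.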