Function space versus weight space in NNs
October 15, 2024
On the tension between representing functions in function space versus in weight space in neural networks. We “see” the outputs of neural networks as functions, but they are generated by some inscrutable parameterization in terms of weights, a representation that is more abstruse yet more tractable to learn in practice. Why might that be?
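To make the weight-space side concrete, here is a minimal sketch in NumPy (the tiny `mlp` helper, its width, and the weight scaling are illustrative assumptions, not anything canonical): fixing a weight vector picks out one function from a parametric family, so a distribution over weights pushes forward to a distribution over functions.

```python
import numpy as np

WIDTH = 64  # hidden width of the toy network (illustrative choice)

def mlp(weights, x, width=WIDTH):
    """One-hidden-layer tanh network evaluated at inputs x, given a flat
    weight vector. Fixing `weights` selects a single function of x."""
    w1 = weights[:width].reshape(1, width)                # input -> hidden
    b1 = weights[width:2 * width]                         # hidden biases
    w2 = weights[2 * width:3 * width].reshape(width, 1)   # hidden -> output
    h = np.tanh(x[:, None] @ w1 + b1)
    return (h @ w2).ravel()

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)

# Each random weight vector induces a different function of x, so the
# weight-space distribution pushes forward to a distribution over functions.
prior_draws = [
    mlp(rng.normal(scale=1.0 / np.sqrt(WIDTH), size=3 * WIDTH), x)
    for _ in range(5)
]
```

In the infinite-width limit that pushforward becomes a Gaussian process, which is one bridge between the two pictures (see the wide-limits and NTK links below).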
When we can learn in function space, many things work better (in GP regression, for instance, the posterior over functions and its uncertainty come in closed form), but such methods rarely dominate in messy practice. Why might that be? And when can we operate in function space at all? Sometimes we really want to, e.g. in operator learning.
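A minimal sketch of the contrast on a toy 1-D regression problem (the RBF kernel, lengthscale, noise level, network width, and learning rate are all arbitrary illustrative choices, and the backprop is hand-rolled only to stay self-contained): the function-space route is a single linear solve, the weight-space route is an iterative search over parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
x_train = rng.uniform(-3, 3, size=20)
y_train = np.sin(x_train) + 0.1 * rng.normal(size=20)
x_test = np.linspace(-3, 3, 100)

# --- Function space: GP regression with an RBF kernel.
# The posterior mean is a single linear solve; there are no weights to
# train, and predictive uncertainty is available in closed form too.
def rbf(a, b, lengthscale=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)

K = rbf(x_train, x_train) + 0.1 ** 2 * np.eye(len(x_train))
gp_mean = rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

# --- Weight space: a small tanh network fit by gradient descent.
# The same kind of fit is reached indirectly, by iterating on a parameter
# vector whose relationship to the resulting function is opaque.
width = 32
w1 = rng.normal(size=(1, width))
b1 = np.zeros(width)
w2 = rng.normal(scale=1.0 / np.sqrt(width), size=(width, 1))

def forward(x):
    h = np.tanh(x[:, None] @ w1 + b1)
    return (h @ w2).ravel(), h

lr, n_steps = 0.05, 2000
for _ in range(n_steps):
    pred, h = forward(x_train)
    err = (pred - y_train) / len(y_train)        # d(half-MSE)/d(pred)
    grad_w2 = h.T @ err[:, None]                 # backprop, by hand
    dh = (err[:, None] @ w2.T) * (1 - h ** 2)
    grad_w1 = x_train[:, None].T @ dh
    grad_b1 = dh.sum(axis=0)
    w1 -= lr * grad_w1
    b1 -= lr * grad_b1
    w2 -= lr * grad_w2

nn_mean, _ = forward(x_test)
```

Both routes produce a curve over `x_test`; the GP also hands back a posterior covariance for free, while the network's uncertainty stays tangled up in its weights, which is roughly where the partially Bayes NN and neural tangent kernel links below pick up.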
See also low rank GPs, partially Bayes NNs, neural tangent kernels, functional regression, functional inverse problems, overparameterization, wide limits of NNs…