Model-based NN
Approximating unrolled algorithms
December 8, 2020 — June 28, 2023
Yonina Eldar on Model-Based Deep Learning
In our lab, we are working on model-based deep learning, where the design of learning-based algorithms is based on prior domain knowledge. This approach allows us to integrate models and other knowledge about the problem into both the architecture and training process of deep networks. This leads to efficient, high-performance, and yet interpretable neural networks which can be employed in a variety of tasks in signal and image processing. Model-based networks require far fewer parameters than their black-box counterparts, generalize better, and can be trained from much less data. In some cases, our networks are trained on a single image, or only on the input itself so that effectively they are unsupervised.
1 Unrolling algorithms
Turning iterations of an optimization algorithm into layers of a network. Connection to Implicit NNs, which can be read as taking the number of unrolled iterations to an infinite limit.
The classic reference is Gregor and LeCun (2010); related ideas appear intermittently (Adler and Öktem 2018; Borgerding and Schniter 2016; Sulam et al. 2020). Inevitable summary paper in Monga, Li, and Eldar (2021).
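A minimal sketch of the idea, using the sparse-coding example from Gregor and LeCun (2010): each layer of the network is one ISTA iteration. Here the layer weights `W`, `S` and threshold `theta` are derived analytically from the dictionary `A` (so this is just ISTA written as a feed-forward pass); in LISTA proper those same quantities would instead be learned from data, with the layer structure unchanged. The problem sizes and the helper names are illustrative, not from any particular paper.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: shrink entries toward zero by theta.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(y, A, n_layers=10, lam=0.1):
    """Unrolled ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.

    Each 'layer' applies x <- soft_threshold(W y + S x, theta).
    In LISTA, W, S, theta would be trained; here they are the
    classical ISTA choices, so depth == number of iterations.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    W = A.T / L                              # 'encoder' weight
    S = np.eye(A.shape[1]) - A.T @ A / L     # 'lateral inhibition' weight
    theta = lam / L                          # shrinkage threshold
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):                # one iteration == one layer
        x = soft_threshold(W @ y + S @ x, theta)
    return x

# Toy sparse recovery problem: 3-sparse signal, 20 measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = unrolled_ista(y, A, n_layers=200, lam=0.05)
```

The point of learning the weights rather than fixing them as above is that a trained network of, say, 10 layers can match the reconstruction quality of hundreds of plain ISTA iterations on the data distribution it was trained on.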
2 Incoming
- Jonas Adler, Learning to reconstruct
- Jonas Adler, Accelerated Forward-Backward Optimization using Deep Learning