Why does deep learning work?

Are we in the pocket of Big VRAM?

May 30, 2017 — December 14, 2020

machine learning
neural nets
optimization
statmech

No time to frame this well, but there are a lot of versions of the question, so… pick one. The essential idea is that we say: Oh my, that deep learning model I just trained had terribly good performance compared with some simpler thing I tried. Can I make my model simpler and still get good results? Or is the overparameterization essential? Can I know a decent error bound? Can I learn anything about the underlying system by looking at the parameters I learned?

And the answer is not “yes” in any satisfying general sense. Pfft.

1 Synthetic tutorials

2 Magic of (stochastic) gradient descent

Figure 2: Going deep stochastically

The SGD fitting process looks like processes from statistical mechanics.

Proceed with caution, since there is a lot of messy thinking here. Here are some things I’d like to read, but their inclusion should not be taken as a recommendation. The common theme is using ideas from physics to understand deep learning and other directed graph learning methods.

There are also arguments that SGD is doing some kind of MCMC-like sampling from the problem posterior (Mandt, Hoffman, and Blei 2017), as in NN ensembles, or that every model it learns is approximately a kernel machine (Domingos 2020).
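To make the constant-step-size-SGD-as-sampling picture concrete, here is a minimal sketch of my own (a toy, not the actual Mandt, Hoffman, and Blei procedure), assuming a one-dimensional Gaussian mean-estimation problem with a flat prior; the step size is hand-tuned for this quadratic case so that the stationary distribution of the iterates roughly matches the exact posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y_i ~ N(theta, sigma^2) with known sigma^2 and a flat prior on theta.
N, S, sigma2 = 10_000, 32, 1.0          # data size, minibatch size, noise variance
theta_true = 2.0
y = rng.normal(theta_true, np.sqrt(sigma2), size=N)

# Exact posterior under the flat prior: N(mean(y), sigma2 / N).
post_mean, post_var = y.mean(), sigma2 / N

# Constant-step-size SGD on the minibatch negative log-likelihood.
# For this quadratic problem, eps ~ 2 * S * sigma2 / N makes the stationary
# variance of the iterates roughly the posterior variance.
eps = 2 * S * sigma2 / N
theta, samples = 0.0, []
for t in range(50_000):
    batch = rng.choice(y, size=S)              # minibatch, sampled with replacement
    grad = (theta - batch.mean()) / sigma2     # gradient of the minibatch NLL
    theta -= eps * grad
    if t > 5_000:                              # discard burn-in
        samples.append(theta)

samples = np.array(samples)
print("posterior mean/var  :", post_mean, post_var)
print("SGD iterate mean/var:", samples.mean(), samples.var())
```

With a different step size the iterates still hover around the optimum but with the wrong spread, which is roughly what makes the "tune SGD so it samples" story interesting.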

2.1 … with saddle points

tl;dr it looks like you need to worry about saddle points but you probably do not (Lee et al. 2017, 2016).
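As a cartoon of that claim: \(f(x, y) = x^2 - y^2\) has a single stationary point at the origin, a strict saddle, and plain gradient descent only converges to it from the measure-zero set \(\{y = 0\}\). A throwaway numpy sketch (mine, not from the papers; note \(f\) is unbounded below, so "escaping" here means running off along the \(-y^2\) direction):

```python
import numpy as np

# f(x, y) = x^2 - y^2: the only stationary point, (0, 0), is a strict saddle.
def grad(p):
    x, y = p
    return np.array([2 * x, -2 * y])

def run_gd(p0, lr=0.1, steps=200):
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        p = p - lr * grad(p)
    return p

rng = np.random.default_rng(1)
print("start on the stable manifold:", run_gd([1.0, 0.0]))       # converges to the saddle
print("generic random start:        ", run_gd(rng.normal(size=2)))  # |y| grows: escapes the saddle
```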

3 Magic of SGD+overparameterization

Looking at a different part of the problem, the combination of overparameterization and SGD is argued to be the secret (Allen-Zhu, Li, and Song 2018b); there is a toy sketch after the quote below:

Our main finding demonstrates that, for state-of-the-art network architectures such as fully-connected neural networks, convolutional networks (CNN), or residual networks (Resnet), assuming there are n training samples without duplication, as long as the number of parameters is polynomial in \(n\), first-order methods such as SGD can find global optima of the training objective efficiently, that is, with running time only polynomially dependent on the total number of parameters of the network.
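A toy numerical illustration of the flavour of that claim (my own construction with ad hoc width, learning rate, and step count, not the construction from the paper): a sufficiently wide two-layer ReLU network trained by plain full-batch gradient descent drives the training loss on a handful of random points to near zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny random regression problem: n points in d dimensions, random targets.
n, d, m = 10, 5, 2000                     # m hidden units >> n: overparameterized
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = rng.normal(size=n)

# Two-layer ReLU net f(x) = a^T relu(W x) / sqrt(m); both layers are trained.
W = rng.normal(size=(m, d))
a = rng.normal(size=m)

lr = 0.5
for t in range(3001):
    H = np.maximum(X @ W.T, 0.0)          # (n, m) hidden activations
    f = H @ a / np.sqrt(m)                # network outputs
    r = (f - y) / n                       # dL/df for L = 0.5 * mean squared error
    grad_a = H.T @ r / np.sqrt(m)
    dH = np.outer(r, a) / np.sqrt(m) * (H > 0)   # backprop through the ReLU
    grad_W = dH.T @ X
    a -= lr * grad_a
    W -= lr * grad_W
    if t % 500 == 0:
        print(t, 0.5 * np.mean((f - y) ** 2))    # training loss heads towards zero
```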

4 Function approximation theory

Ignoring learnability, the pure function-approximation results are an interesting literature. If you can ignore that troublesome optimisation step, how general a function can your neural network approximate as its depth, width, and sparsity change? The most recent thing I looked at is (Elbrächter et al. 2021), which also surveys that literature. See also (Bölcskei et al. 2019; Wiatowski and Bölcskei 2015). They derive some suggestive results, for example that, for a fixed weight budget, approximation error scales far more favourably with network depth than with width.
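The classic cartoon of that depth-versus-width gap is the sawtooth construction (in the spirit of this literature; the code below is my own paraphrase): composing a two-ReLU "tent" block \(k\) times produces a piecewise-linear function with \(2^k\) pieces from only \(2k\) units, whereas a single hidden layer of \(m\) ReLUs can produce at most \(m + 1\) pieces in one dimension, so it needs exponentially many units to match.

```python
import numpy as np

def tent(x):
    # A two-ReLU block computing the tent map on [0, 1]:
    # tent(x) = 2x on [0, 1/2] and 2(1 - x) on [1/2, 1].
    return 2 * np.maximum(x, 0.0) - 4 * np.maximum(x - 0.5, 0.0)

def deep_sawtooth(x, depth):
    # Composing the block `depth` times gives a sawtooth with 2^(depth-1) teeth,
    # i.e. 2^depth linear pieces, from only 2 * depth ReLU units.
    for _ in range(depth):
        x = tent(x)
    return x

x = np.linspace(0.0, 1.0, 2**13 + 1)     # dyadic grid, so all breakpoints land on grid points
for k in (1, 3, 6, 10):
    y = deep_sawtooth(x, k)
    slopes = np.diff(y) / np.diff(x)
    pieces = 1 + np.count_nonzero(np.diff(slopes) != 0)   # count slope changes
    print(f"depth {k:2d}: {pieces} linear pieces from {2 * k} ReLU units")
```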

5 Crazy physics stuff I have not read

Wiatowski et al. argue that looking at neural networks as random fields with energy propagation dynamics provides some insight into how they work (Wiatowski, Grohs, and Bölcskei 2018; Shwartz-Ziv and Tishby 2017). Haber and Ruthotto leverage some similar insights to argue that you can improve NNs by looking at them as discretized ODEs (Haber and Ruthotto 2018; Ruthotto and Haber 2020).
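A minimal sketch of the ODE viewpoint (my own toy, with an arbitrary fixed vector field standing in for a trained layer): a residual block \(x_{l+1} = x_l + h\, f(x_l)\) is exactly a forward-Euler step of \(\dot{x} = f(x)\), so stacking more blocks with a smaller step \(h\) converges to the continuous flow.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed "layer" vector field f(x) = tanh(A x + b), shared across steps purely
# to illustrate the discretization; this is not a trained network.
d = 4
A = rng.normal(size=(d, d)) / np.sqrt(d)
b = rng.normal(size=d)

def f(x):
    return np.tanh(A @ x + b)

def resnet_forward(x0, n_layers, T=1.0):
    # Residual network: x_{l+1} = x_l + h * f(x_l), i.e. forward Euler with h = T / n_layers.
    h = T / n_layers
    x = x0.copy()
    for _ in range(n_layers):
        x = x + h * f(x)
    return x

x0 = rng.normal(size=d)
reference = resnet_forward(x0, 100_000)   # very fine discretization ≈ the ODE solution at t = T
for L in (2, 8, 32, 128):
    err = np.linalg.norm(resnet_forward(x0, L) - reference)
    print(f"{L:4d} residual blocks: distance to ODE solution ≈ {err:.2e}")
```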

Lin and Tegmark argue that statistical mechanics provides insight into deep learning and neuroscience (Lin and Tegmark 2016b, 2016a). Maybe on a similar tip, Natalie Wolchover summarises (Mehta and Schwab 2014). See also Charles H. Martin, Why Deep Learning Works II: the Renormalization Group.

There is also a bunch more filed under statistical mechanics of statistics.

6 There is nothing to see here

There is another school again, which argues that much of deep learning is not so interesting after all once you blur out the more hyperbolic claims with a publication-bias filter; see, e.g., Piekniewski, Autopsy of a deep learning paper:

Machine learning sits somewhere in between [science and engineering]. There are examples of clear scientific papers (such as e.g. the paper that introduced the backprop itself) and there are examples of clearly engineering papers where a solution to a very particular practical problem is described. But the majority of them appear to be engineering, only they engineer for a synthetic measure on a more or less academic dataset. In order to show superiority some ad-hoc trick is being pulled out of nowhere (typically of extremely limited universality) and after some statistically non significant testing a victory is announced.

There is also the fourth kind of papers, which indeed contain an idea. The idea may even be useful, but it happens to be trivial. In order to cover up that embarrassing fact a heavy artillery of “academic engineering” is loaded again, such that overall the paper looks impressive.

7 Incoming

  • Simon J.D. Prince’s new book Understanding Deep Learning (Prince 2023)
  • Gradient Dissent, a list of reasons that large backpropagation-trained networks might be worrisome. There are some interesting points in there, and some hyperbole. Also: if it were true that there are externalities from backprop networks (i.e. that they are a kind of methodological pollution that produces private benefits but public costs), then what kind of mechanisms should be applied to disincentivise them?
  • C&C Against Predictive Optimization.

8 References

Allen-Zhu, Li, and Song. 2018a. “On the Convergence Rate of Training Recurrent Neural Networks.”
———. 2018b. “A Convergence Theory for Deep Learning via Over-Parameterization.”
Anderson, and Berg. 2017. “The High-Dimensional Geometry of Binary Neural Networks.” arXiv:1705.07199 [Cs].
Arora, Cohen, and Hazan. 2018. “On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization.” arXiv:1802.06509 [Cs].
Baldassi, Borgs, Chayes, et al. 2016. “Unreasonable Effectiveness of Learning Neural Networks: From Accessible States and Robust Ensembles to Basic Algorithmic Schemes.” Proceedings of the National Academy of Sciences.
Barron. 1993. “Universal Approximation Bounds for Superpositions of a Sigmoidal Function.” IEEE Transactions on Information Theory.
Bartlett, Montanari, and Rakhlin. 2021. “Deep Learning: A Statistical Viewpoint.” Acta Numerica.
Belilovsky, Eickenberg, and Oyallon. 2019. “Greedy Layerwise Learning Can Scale To ImageNet.” In International Conference on Machine Learning.
Belkin. 2021. “Fit Without Fear: Remarkable Mathematical Phenomena of Deep Learning Through the Prism of Interpolation.” Acta Numerica.
Berner, Grohs, Kutyniok, et al. 2021. “The Modern Mathematics of Deep Learning.”
Bölcskei, Grohs, Kutyniok, et al. 2019. “Optimal Approximation with Sparsely Connected Deep Neural Networks.” SIAM Journal on Mathematics of Data Science.
Chang, Meng, Haber, et al. 2018. “Reversible Architectures for Arbitrarily Deep Residual Neural Networks.” In arXiv:1709.03698 [Cs, Stat].
Choromanska, Henaff, Mathieu, et al. 2015. “The Loss Surfaces of Multilayer Networks.” In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics.
Chou, Rauhut, and Ward. 2023. “Robust Implicit Regularization via Weight Normalization.”
Dalalyan. 2017. “Further and Stronger Analogy Between Sampling and Optimization: Langevin Monte Carlo and Gradient Descent.” arXiv:1704.04752 [Math, Stat].
Dauphin, Pascanu, Gulcehre, et al. 2014. “Identifying and Attacking the Saddle Point Problem in High-Dimensional Non-Convex Optimization.” In Advances in Neural Information Processing Systems 27.
Domingos. 2020. “Every Model Learned by Gradient Descent Is Approximately a Kernel Machine.” arXiv:2012.00152 [Cs, Stat].
Elbrächter, Perekrestenko, Grohs, et al. 2021. “Deep Neural Network Approximation Theory.” IEEE Transactions on Information Theory.
Gilbert, Zhang, Lee, et al. 2017. “Towards Understanding the Invertibility of Convolutional Neural Networks.” arXiv:1705.08664 [Cs, Stat].
Golowich, Rakhlin, and Shamir. 2017. “Size-Independent Sample Complexity of Neural Networks.” arXiv:1712.06541 [Cs, Stat].
Haber, and Ruthotto. 2018. “Stable Architectures for Deep Neural Networks.” Inverse Problems.
Haber, Ruthotto, Holtham, et al. 2017. “Learning Across Scales - A Multiscale Method for Convolution Neural Networks.” arXiv:1703.02009 [Cs].
Halko, Martinsson, and Tropp. 2010. “Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions.”
He, and Tao. 2021. “Recent Advances in Deep Learning Theory.”
Hu, Song, Weinstein, et al. 2022. “Training Overparametrized Neural Networks in Sublinear Time.”
Im, Tao, and Branson. 2016. “An Empirical Analysis of the Optimization of Deep Network Loss Surfaces.” arXiv:1612.04010 [Cs].
Jentzen, Kuckuck, and von Wurstemberger. 2023. “Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory.”
Jin, Ge, Netrapalli, et al. 2017. “How to Escape Saddle Points Efficiently.” In PMLR.
Kawaguchi, Kaelbling, and Bengio. 2017. “Generalization in Deep Learning.” arXiv:1710.05468 [Cs, Stat].
Khan, and Rue. 2024. “The Bayesian Learning Rule.”
Lee, Panageas, Piliouras, et al. 2017. “First-Order Methods Almost Always Avoid Saddle Points.” arXiv:1710.07406 [Cs, Math, Stat].
Lee, Simchowitz, Jordan, et al. 2016. “Gradient Descent Converges to Minimizers.” arXiv:1602.04915 [Cs, Math, Stat].
Levy. 2016. “The Power of Normalization: Faster Evasion of Saddle Points.” arXiv:1611.04831 [Cs, Math, Stat].
Lin, and Tegmark. 2016a. “Critical Behavior from Deep Dynamics: A Hidden Dimension in Natural Language.” arXiv:1606.06737 [Cond-Mat].
———. 2016b. “Why Does Deep and Cheap Learning Work so Well?” arXiv:1608.08225 [Cond-Mat, Stat].
Lipton. 2016. “Stuck in a What? Adventures in Weight Space.” arXiv:1602.07320 [Cs].
Mandt, Hoffman, and Blei. 2017. “Stochastic Gradient Descent as Approximate Bayesian Inference.” JMLR.
Mehta, and Schwab. 2014. “An Exact Mapping Between the Variational Renormalization Group and Deep Learning.”
Neu, Dziugaite, Haghifam, et al. 2021. “Information-Theoretic Generalization Bounds for Stochastic Gradient Descent.” arXiv:2102.00931 [Cs, Stat].
Olah, Mordvintsev, and Schubert. 2017. “Feature Visualization.” Distill.
Pascanu, Dauphin, Ganguli, et al. 2014. “On the Saddle Point Problem for Non-Convex Optimization.” arXiv:1405.4604 [Cs].
Perez. 2016. “Deep Learning: The Unreasonable Effectiveness of Randomness.” Medium (blog).
Philipp, Song, and Carbonell. 2017. “Gradients Explode - Deep Networks Are Shallow - ResNet Explained.” arXiv:1712.05577 [Cs].
Prince. 2023. Understanding Deep Learning.
Roberts. 2021a. “Why Is AI Hard and Physics Simple?”
———. 2021b. “SGD Implicitly Regularizes Generalization Error.”
Roberts, Yaida, and Hanin. 2021. “The Principles of Deep Learning Theory.” arXiv:2106.10165 [Hep-Th, Stat].
Rolnick, and Tegmark. 2017. “The Power of Deeper Networks for Expressing Natural Functions.” arXiv:1705.05502 [Cs, Stat].
Rosenfeld, and Tsotsos. 2018. “Intriguing Properties of Randomly Weighted Networks: Generalizing While Learning Next to Nothing.”
Rumelhart, Hinton, and Williams. 1986. “Learning Representations by Back-Propagating Errors.” Nature.
Ruthotto, and Haber. 2020. “Deep Neural Networks Motivated by Partial Differential Equations.” Journal of Mathematical Imaging and Vision.
Sagun, Guney, Arous, et al. 2014. “Explorations on High Dimensional Landscapes.” arXiv:1412.6615 [Cs, Stat].
Shwartz-Ziv, and Tishby. 2017. “Opening the Black Box of Deep Neural Networks via Information.” arXiv:1703.00810 [Cs].
Song, Vempala, Wilmes, et al. 2017. “On the Complexity of Learning Neural Networks.” arXiv:1707.04615 [Cs].
Unser. 2019. “A Representer Theorem for Deep Neural Networks.” Journal of Machine Learning Research.
Wiatowski, and Bölcskei. 2015. “A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction.” In Proceedings of IEEE International Symposium on Information Theory.
Wiatowski, Grohs, and Bölcskei. 2018. “Energy Propagation in Deep Convolutional Neural Networks.” IEEE Transactions on Information Theory.
Xie, Liang, and Song. 2016. “Diversity Leads to Generalization in Neural Networks.” arXiv:1611.03131 [Cs, Stat].
Zhang, Bengio, Hardt, et al. 2017. “Understanding Deep Learning Requires Rethinking Generalization.” In Proceedings of ICLR.
———. 2021. “Understanding Deep Learning (Still) Requires Rethinking Generalization.” Communications of the ACM.