Function approximation and interpolation

June 9, 2016

convolution
functional analysis
Hilbert space
nonparametric
signal processing
sparser than thou

On constructing an approximation of some arbitrary function, and measuring the badness thereof.

THIS IS CHAOS RIGHT NOW. I need to break out the sampling/interpolation problem for regular data, for one thing.

0.1 Choosing the best approximation

Best in what sense? Most compact? Easiest to code?

If we are not interpolating, how much smoothing do we do?

We can use cross-validation, especially so-called “generalized” cross-validation (GCV), to choose the smoothing parameter efficiently, in the sense that it approximates leave-one-out error using only the trace of the smoother matrix, with no refitting per held-out point.
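
For concreteness, here is a minimal GCV sketch for a generic linear smoother. The smoother itself — ridge regression on a Gaussian RBF basis — plus the bandwidth and the toy data are all my own illustrative choices, not anything canonical:

```python
import numpy as np

# Toy data: a smooth signal plus noise.
rng = np.random.default_rng(0)
n = 100
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

# A linear smoother: ridge regression on a Gaussian RBF basis,
# so that y_hat = S(lam) @ y for a hat matrix S(lam).
centers = np.linspace(0, 1, 20)
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * 0.05**2))

def gcv_score(lam):
    # Hat matrix of the ridge smoother: S = Phi (Phi'Phi + lam I)^{-1} Phi'
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    S = Phi @ np.linalg.solve(A, Phi.T)
    resid = y - S @ y
    # GCV(lam) = n * ||(I - S) y||^2 / (n - tr S)^2
    return n * (resid @ resid) / (n - np.trace(S)) ** 2

lams = np.logspace(-8, 2, 50)
best = min(lams, key=gcv_score)
print(f"GCV-chosen smoothing parameter: {best:.3g}")
```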

Or you might have noisy data, in which case you now have a joint function approximation and inference problem, with error due both to the approximation itself and to the noisy, finite sample. Compressive sensing can provide finite-sample guarantees for some of these settings.

To discuss: loss functions.

An interesting problem is how you align the curves that are your objects of study; that is a problem of warping.

0.2 Spline smoothing of observations

The classic.

Special superpowers: easy to differentiate and integrate.

Special weaknesses: many free parameters, and not so easy to use in high dimensions.

See splines.
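
A quick sketch of those superpowers using scipy’s `UnivariateSpline`; the data and smoothing level are invented. The fitted spline hands back exact derivatives and antiderivatives of itself as new splines:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Noisy samples of sin(x); s sets the smoothing/interpolation trade-off.
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x) + 0.1 * np.random.default_rng(1).standard_normal(50)

spl = UnivariateSpline(x, y, k=3, s=0.5)  # cubic smoothing spline
dspl = spl.derivative()                   # another spline: the exact derivative
ispl = spl.antiderivative()               # likewise, the exact antiderivative

print(dspl(np.pi))              # ~ cos(pi) = -1
print(ispl(np.pi) - ispl(0.0))  # ~ integral of sin over [0, pi] = 2
```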

0.3 Polynomial bases

See polynomial bases.
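
As a placeholder example, a least-squares fit in the Chebyshev basis via `numpy.polynomial`; the target function and degree are arbitrary choices of mine:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Least-squares fit in the Chebyshev basis, which is far better
# conditioned than raw monomials at moderate-to-high degree.
x = np.linspace(-1, 1, 200)
y = np.exp(x) * np.cos(4 * x)

p = Chebyshev.fit(x, y, deg=15)
print(np.max(np.abs(p(x) - y)))  # uniform error of the degree-15 fit
```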

0.4 Fourier bases

🏗️
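
Pending a proper write-up, a minimal sketch of truncated-Fourier-series approximation via the FFT; the square-wave target is my own choice, picked to show the Gibbs phenomenon:

```python
import numpy as np

# Keep the K lowest frequencies of a periodic signal and invert the FFT.
n, K = 256, 8
t = np.linspace(0, 1, n, endpoint=False)
f = np.sign(np.sin(2 * np.pi * t))  # square wave on [0, 1)

F = np.fft.rfft(f)
F[K + 1:] = 0                       # zero all harmonics above K
f_hat = np.fft.irfft(F, n)

# The worst error concentrates near the jumps (Gibbs phenomenon).
print(np.max(np.abs(f - f_hat)))
```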

0.5 Radial basis function approximation

I actually care about this mostly for densities, so see mixture models for what information I do have.
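
In the meantime, a minimal scattered-data sketch using scipy’s `RBFInterpolator`; the Gaussian-bump target and the kernel choice are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Scattered samples of a Gaussian bump in 2D.
rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, (100, 2))
vals = np.exp(-(pts**2).sum(axis=1))

# Kernel choice and the smoothing penalty are the main knobs;
# smoothing=0 interpolates exactly.
rbf = RBFInterpolator(pts, vals, kernel="thin_plate_spline", smoothing=0.0)
print(rbf(np.array([[0.0, 0.0], [0.5, -0.5]])))  # roughly [1.0, 0.61]
```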

0.6 Rational approximation

Padé approximation is the method I’ve heard of. Are there others? Easy to differentiate; OK to integrate if you cheat using a computer algebra package.
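
scipy ships a Padé routine that works from Taylor coefficients; a sketch for exp(x), whose [2/2] approximant is a classical example:

```python
import math
import numpy as np
from scipy.interpolate import pade

# [2/2] Padé approximant of exp(x) from its degree-4 Taylor coefficients.
taylor = [1 / math.factorial(k) for k in range(5)]
p, q = pade(taylor, 2)               # np.poly1d numerator and denominator

print(p(1.0) / q(1.0), np.exp(1.0))  # 19/7 = 2.7142... vs e = 2.71828...
```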
