Fast multipole methods
August 18, 2016 — September 20, 2016
“Efficiently approximating fields made up of many decaying sources.”
Barnes-Hut algorithms, fast Gauss transforms, generalized multipole methods.
Not something I intend to worry about right now, but I needed to clear these refs out of my overcrowded Mercer kernel approximation notebook. Fast multipole methods relate to kernel approximation from a different angle: instead of replacing the kernel itself with something simpler, they rapidly and approximately evaluate the field that many kernel-weighted sources induce at given points.
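As a toy illustration of that field-evaluation view (everything below, data and parameters alike, is made up for illustration): a Barnes-Hut-style code replaces a well-separated cluster of decaying sources by a single monopole at its charge-weighted centroid; the real algorithms just apply this admissibility trick recursively over a tree.

```python
import numpy as np

rng = np.random.default_rng(0)

# N sources with charges, M targets, 2D; potential K(x, y) = 1 / |x - y|.
N, M = 2000, 500
sources = rng.uniform(0.0, 1.0, size=(N, 2))
charges = rng.uniform(0.5, 1.5, size=N)
targets = rng.uniform(0.0, 1.0, size=(M, 2)) + np.array([3.0, 0.0])  # well separated

# Exact field: O(N * M) pairwise evaluations.
diff = targets[:, None, :] - sources[None, :, :]
exact = (charges / np.linalg.norm(diff, axis=-1)).sum(axis=1)

# Barnes-Hut-flavoured far-field approximation: because the whole source cluster
# is far from every target, replace it by a single monopole (total charge at the
# charge-weighted centroid).  A real Barnes-Hut code does this recursively per
# tree cell, using an opening-angle criterion to decide when it is admissible.
total_charge = charges.sum()
centroid = (charges[:, None] * sources).sum(axis=0) / total_charge
approx = total_charge / np.linalg.norm(targets - centroid, axis=1)

rel_err = np.abs(approx - exact) / np.abs(exact)
print("max relative error of the monopole approximation:", rel_err.max())
```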
How do these methods compare to/relate to H-matrices?
Overview on Vikas Raykar’s thesis page, FAST SUMMATION ALGORITHMS:
The art of getting ‘good enough’ solutions ‘as fast as possible’.
Huge data sets containing millions of training examples with a large number of attributes (tall fat data) are relatively easy to gather. However, one of the bottlenecks for successful inference of useful information from the data is the computational complexity of machine learning algorithms. Most state-of-the-art nonparametric machine learning algorithms have a computational complexity of either \(O(N^2)\) or \(O(N^3)\), where N is the number of training examples. This has seriously restricted the use of massive data sets. The bottleneck computational primitive at the heart of various algorithms is the multiplication of a structured matrix with a vector, which we refer to as matrix-vector product (MVP) primitive. The goal of my thesis is to speed up these MVP primitives by fast approximate algorithms that scale as \(O(N)\) and also provide high accuracy guarantees. I use ideas from computational physics, scientific computing, and computational geometry to design these algorithms. Currently, the proposed algorithms have been applied in kernel density estimation, optimal bandwidth estimation, projection pursuit, Gaussian process regression, implicit surface fitting, and ranking.
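To make the quoted bottleneck concrete, here is the kernel matrix-vector product primitive for a Gaussian kernel in plain NumPy (data and parameters are arbitrary placeholders): forming the Gram matrix and multiplying costs \(O(N^2)\), which is what the fast summation algorithms replace with an \(O(N)\) approximation.

```python
import numpy as np

rng = np.random.default_rng(1)

# The bottleneck primitive: multiply the N x N Gaussian kernel (Gram) matrix
# by a weight vector.  Forming K costs O(N^2) memory and K @ q costs O(N^2)
# time; fast summation schemes approximate K @ q without ever forming K.
N, d, h = 2000, 3, 0.5
X = rng.standard_normal((N, d))
q = rng.standard_normal(N)

sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists / h**2)   # Gaussian kernel matrix
Kq = K @ q                     # the O(N^2) matrix-vector product primitive
print(Kq[:5])
```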
1 Implementation
figtree (C++, with MATLAB and Python bindings) does Gaussian fields in the inexact case, via the improved fast Gauss transform.
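I won’t guess at figtree’s calling convention here (the improved fast Gauss transform it implements uses truncated Taylor expansions around cluster centres plus a tree of clusters). For a feel of why such expansions are fast, here is a minimal 1D sketch of the classical Greengard-Strang Hermite expansion, with made-up parameters: a cluster of sources near a centre x_star is summarised by p moments, after which each target costs O(p) instead of O(N).

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# 1D discrete Gauss transform: G(y_j) = sum_i q_i exp(-((y_j - x_i)/h)^2).
N, M, h, p = 500, 400, 0.4, 15
x_star = 0.0                                      # expansion centre
x = x_star + 0.5 * h * rng.uniform(-1.0, 1.0, N)  # sources clustered near the centre
q = rng.uniform(0.0, 1.0, N)
y = rng.uniform(-4.0, 4.0, M)                     # targets anywhere

exact = (q[None, :] * np.exp(-((y[:, None] - x[None, :]) / h) ** 2)).sum(axis=1)

# Truncated Hermite expansion about x_star (classical Greengard-Strang FGT):
#   exp(-(t - s)^2) = sum_n s^n / n! * h_n(t),  with h_n(t) = H_n(t) exp(-t^2),
# where s = (x_i - x_star)/h and t = (y_j - x_star)/h.  The p moments are
# accumulated once over all sources (O(pN)) and reused for every target (O(pM)).
s = (x - x_star) / h
t = (y - x_star) / h
moments = np.array([(q * s**n).sum() / math.factorial(n) for n in range(p)])

# Hermite functions via the recurrence h_{n+1}(t) = 2 t h_n(t) - 2 n h_{n-1}(t).
H = np.empty((p, M))
H[0] = np.exp(-t**2)
H[1] = 2.0 * t * H[0]
for n in range(1, p - 1):
    H[n + 1] = 2.0 * t * H[n] - 2.0 * n * H[n - 1]

approx = moments @ H
print("max abs error with", p, "terms:", np.abs(approx - exact).max())
```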