## Preliminary list of abstracts



*Keywords:* Fast Gauss Transform; non-local operator

- Tausch, Johannes (Mathematics, Southern Methodist University, United States)

Many problems in statistical learning depend on computing the discrete Gauss transform efficiently. The task is to evaluate the potentials $$ \Phi(x_i) = \sum_{j=1}^N \exp\left( -{|x_i-y_j|^2\over \delta} \right) w_j $$ for given sources $y_j \in [0,1]^d$, targets $x_i\in [0,1]^d$, weights $w_j$ and variance $\delta$.
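As a reference point, the direct summation above can be written in a few lines of NumPy (a minimal sketch; the function name and test sizes are mine):

```python
import numpy as np

def gauss_transform_direct(x, y, w, delta):
    """Direct O(N^2) evaluation: phi_i = sum_j exp(-|x_i - y_j|^2 / delta) w_j,
    with targets x of shape (M, d), sources y of shape (N, d), weights w of shape (N,)."""
    diff = x[:, None, :] - y[None, :, :]   # (M, N, d) pairwise differences
    sq = np.sum(diff * diff, axis=2)       # squared distances |x_i - y_j|^2
    return np.exp(-sq / delta) @ w         # (M,) potentials

rng = np.random.default_rng(0)
y = rng.random((200, 2))                   # sources in [0,1]^2
x = rng.random((150, 2))                   # targets in [0,1]^2
w = rng.random(200)                        # weights
phi = gauss_transform_direct(x, y, w, delta=0.1)
```

Since the Gaussian factor lies in $(0,1]$, each potential is positive and bounded by the sum of the weights.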

The direct evaluation of the Gauss transform is an $O(N^2)$ algorithm, which is impractical for large data sets. However, a variant of the fast multipole method, the fast Gauss transform (FGT) introduced by Greengard and Strain [1], evaluates the potentials in optimal $O(N)$ complexity. The algorithm subdivides the dataset into smaller clusters and approximates interactions between two clusters using Hermite expansions. Although the complexity is $O(N)$, the constant grows exponentially with the dimension because both the number of interacting clusters and the number of terms in the Hermite expansion increase.
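The Hermite expansion underlying the FGT uses the identity $\exp(-(t_x-t_y)^2) = \sum_n (t_y^n/n!)\, h_n(t_x)$ with scaled variables $t_x=(x-c)/\sqrt{\delta}$, $t_y=(y-c)/\sqrt{\delta}$ and Hermite functions $h_n(t)=H_n(t)e^{-t^2}$. A one-dimensional sketch of the far-field evaluation for a single source cluster (my own minimal illustration, not the paper's code) looks like this:

```python
import math
import numpy as np

def hermite_funcs(t, p):
    """Hermite functions h_n(t) = H_n(t) exp(-t^2) for n = 0..p-1,
    via the recurrence h_{n+1} = 2 t h_n - 2 n h_{n-1}."""
    h = np.empty(p)
    h[0] = np.exp(-t * t)
    if p > 1:
        h[1] = 2 * t * h[0]
    for n in range(1, p - 1):
        h[n + 1] = 2 * t * h[n] - 2 * n * h[n - 1]
    return h

def gauss_far_field(x, ys, ws, c, delta, p=12):
    """Approximate sum_j exp(-(x - y_j)^2 / delta) w_j by a p-term Hermite
    expansion about the source-cluster center c."""
    s = math.sqrt(delta)
    ty = (ys - c) / s
    # cluster moments M_n = sum_j ty_j^n w_j / n!, computed once per cluster
    M = np.empty(p)
    powers = np.ones_like(ys)
    for n in range(p):
        M[n] = np.dot(powers, ws) / math.factorial(n)
        powers = powers * ty
    return float(np.dot(M, hermite_funcs((x - c) / s, p)))

ys = np.array([0.45, 0.50, 0.55, 0.60])   # sources near the center c = 0.5
ws = np.array([1.0, 2.0, 0.5, 1.5])
phi_fast = gauss_far_field(0.9, ys, ws, c=0.5, delta=1.0)
phi_exact = float(np.sum(np.exp(-(0.9 - ys) ** 2) * ws))
```

The moments depend only on the sources, so they are formed once per cluster and reused for every target; the exponential growth of $p^d$ terms in $d$ dimensions is exactly the constant mentioned above.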

Since the Gauss kernel is an entire function, it can be well approximated in $[0,1]^{2d}$ by a multivariate Chebyshev series [2]. The resulting degenerate kernel readily leads to a fast algorithm. Unlike previous FGT algorithms, this approach does not cluster interactions and works well if the sources and targets are fairly uniformly distributed and the variance is not too small.
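The degenerate (separable) kernel idea can be illustrated in one dimension: expanding $\exp(-(x-y)^2/\delta) \approx \sum_{k,l} c_{kl}\, T_k(2x-1)\, T_l(2y-1)$ on $[0,1]^2$ lets all potentials flow through the small coefficient matrix instead of the $M\times N$ kernel matrix. The sketch below (my own construction of the idea; parameters are illustrative) interpolates the kernel at a tensor grid of Chebyshev points:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

p, delta = 16, 0.5
t = np.cos(np.pi * (2 * np.arange(p) + 1) / (2 * p))   # Chebyshev points in (-1, 1)
nodes = 0.5 * (t + 1)                                  # mapped to [0, 1]
K = np.exp(-(nodes[:, None] - nodes[None, :]) ** 2 / delta)  # kernel on the grid
V = C.chebvander(t, p - 1)                             # Chebyshev Vandermonde at nodes
coef = np.linalg.solve(V, np.linalg.solve(V, K.T).T)   # coef solves V @ coef @ V.T = K

rng = np.random.default_rng(0)
x, y, w = rng.random(2000), rng.random(3000), rng.random(3000)
Tx = C.chebvander(2 * x - 1, p - 1)                    # T_k at targets, (M, p)
Ty = C.chebvander(2 * y - 1, p - 1)                    # T_l at sources, (N, p)
phi_fast = Tx @ (coef @ (Ty.T @ w))                    # O((M + N) p + p^2) work
phi_direct = np.exp(-(x[:, None] - y[None, :]) ** 2 / delta) @ w
```

Because the kernel is entire, the Chebyshev coefficients decay super-geometrically and a modest degree already matches the direct sum to near machine precision; in $2d$ dimensions the coefficient tensor has $p^{2d}$ entries, which is where the low-dimensional structure discussed next becomes important.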

On the other hand, many realistic datasets have low-dimensional features which the current implementation of the Chebyshev-based method cannot exploit. In this case we cluster the data and represent the points of each cluster in its principal axes. We now have to approximate the Gauss kernel in a $2d$-dimensional rectangular box with widely varying side lengths. This anisotropy leads to new theoretical and algorithmic issues that will be discussed in the talk.
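A toy illustration of the clustering step (synthetic data and variable names are mine, not the talk's implementation): rotating a nearly flat cluster into its principal axes, obtained from an SVD of the centered points, exposes the anisotropic bounding box with widely varying side lengths.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic cluster in 3D that is essentially flat in one direction
pts = rng.normal(size=(500, 3)) * np.array([1.0, 0.3, 0.01])

center = pts.mean(axis=0)
# principal axes = right singular vectors of the centered data
_, sing, Vt = np.linalg.svd(pts - center, full_matrices=False)
coords = (pts - center) @ Vt.T                      # coordinates in the principal-axis frame
extents = coords.max(axis=0) - coords.min(axis=0)   # side lengths of the aligned box
```

The side lengths come out sorted from largest to smallest, with the last one nearly zero, which is exactly the anisotropy the kernel approximation must then cope with.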

[1] L. Greengard and J. Strain. The fast Gauss transform. *SIAM J. Sci. Comput.*, 12:79–94, 1991.

[2] J. Tausch and A. Weckiewicz. Multidimensional fast Gauss transforms by Chebyshev expansions. *SIAM J. Sci. Comput.*, 31(5):3547–3565, 2009.