## Preliminary list of abstracts

**Note:** We are using the fantastic MathJax JavaScript library to typeset the mathematics on this web page.
You can right-click on a formula to zoom in or to select the rendering engine you like (all under 'Settings').
Under Firefox the default renderer turns out to be MathML (which is quick), but you might prefer the HTML-CSS renderer for more faithful LaTeX rendering.
If you encounter any problems with this setup, please email us!


*Keywords:* TT format; cross algorithm; Krylov subspaces

- Savostyanov, Dmitry (Institute of Numerical Mathematics, Russian Academy of Sciences, Russia)

The tensor train (TT) format was recently proposed in [7] for the data-sparse representation of high-dimensional tensors based on separation of variables. The TT format combines the advantages of the canonical and Tucker formats: the number of parameters does not grow exponentially with dimension, and the decomposition/approximation problem is stable and can be solved by SVD-based methods.
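As an illustration of the SVD-based computation, a full tensor can be decomposed into TT cores by a sweep of truncated SVDs over its unfoldings. The following is a minimal NumPy sketch of this idea, not the exact algorithm of [7]; the threshold `eps` is an assumption:

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a full tensor into TT cores by a sweep of truncated SVDs."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    r_prev = 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(d - 1):
        mat = mat.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))   # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = s[:r, None] * Vt[:r]                # carry the remainder forward
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into a full tensor."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([-1], [0]))
    return res.reshape([c.shape[1] for c in cores])
```

For a tensor with low TT ranks the number of stored parameters is linear in the dimension, which is the point of the format.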

To break the curse of dimensionality, approximation methods that do not require all elements of the initial tensor are crucial. For matrices such an algorithm is well known and is used, for example, in the mosaic/$\mathcal{H}$-matrix formats [9]. It interpolates the given matrix at the positions of several columns and rows and then checks the approximation accuracy over a random set of elements. The choice of proper positions for the interpolation depends on the matrix and is very important. In [2] it is shown that good positions can be selected using the maximum-volume principle; in [3] the properties of maximum-volume and closely related dominating submatrices are studied in more detail. For three-dimensional tensors, cross interpolation algorithms in the Tucker format were proposed in [6] and are also based on the maximum-volume principle.
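A minimal sketch of the matrix cross idea, assuming the matrix is accessible only through an element callback. The greedy residual pivoting below is a cheap stand-in for the maximum-volume pivot selection studied in [2,3], not the algorithm of those works:

```python
import numpy as np

def cross_approx(entry, shape, rank):
    """Greedy cross (skeleton) approximation built from a few rows and columns.

    `entry(i, j)` returns one matrix element; pivots are picked greedily by
    the largest residual entry (a stand-in for maximum-volume selection).
    """
    m, n = shape
    approx = np.zeros((m, n))
    used_rows = [0]                       # heuristic: start from row 0
    i = 0
    for _ in range(rank):
        row = np.array([entry(i, j) for j in range(n)]) - approx[i, :]
        j = int(np.argmax(np.abs(row)))
        if abs(row[j]) < 1e-12:           # residual is (numerically) zero
            break
        col = np.array([entry(k, j) for k in range(m)]) - approx[:, j]
        approx += np.outer(col / row[j], row)   # cross elimination step
        score = np.abs(col)
        score[used_rows] = 0.0            # pick the next pivot row greedily
        i = int(np.argmax(score))
        used_rows.append(i)
    return approx
```

Each step interpolates the matrix exactly on the chosen row and column and reduces the rank of the residual by one, so a matrix of exact rank $r$ is recovered from $r$ crosses.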

For arbitrary dimension, an interpolation method for the TT format is proposed in [8]. However, its convergence is sometimes unsatisfactory even for data with "good" structure. To improve this, we propose a new method that combines the maximum-volume interpolation idea with the DMRG approximation scheme, which is popular in the quantum chemistry community. We apply the resulting algorithm to the fast compression of high-dimensional and large-scale data using the quantization idea proposed in [5,4].
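The quantization idea can be illustrated on a vector of length $2^d$: reshaping it into a $d$-dimensional tensor with all mode sizes equal to 2 can expose low-rank structure. A small NumPy example (the choice $q = 0.95$, $d = 8$ is ours) with an exponential vector, whose quantized tensor has all TT ranks equal to one:

```python
import numpy as np
from functools import reduce

d, q = 8, 0.95
v = q ** np.arange(2 ** d)        # samples of q^k, k = 0 .. 2^d - 1
T = v.reshape((2,) * d)           # quantized tensor: one binary index per bit of k

# Since q^k = prod_j q^{b_j 2^j} over the binary digits b_j of k,
# the quantized tensor factorizes exactly into a rank-1 product:
factors = [np.array([1.0, q ** (2 ** (d - 1 - j))]) for j in range(d)]
rank_one = reduce(np.multiply.outer, factors)
```

So the $2^d$ vector elements compress to $2d$ parameters, which is why quantization pays off for very large-scale data.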

The DMRG scheme can also be improved using the Krylov/Wedderburn framework. Krylov methods are the methods of choice for approximating large matrices that allow fast matrix-by-vector multiplication. In [1] we generalized them to three-dimensional tensors; in this talk we discuss the generalization to higher dimensions.
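To illustrate why fast matrix-by-vector products suffice, here is a sketch of low-rank approximation from block matrix-vector products only (a block-Krylov/subspace-iteration sketch under our own assumptions; the function name and interface are hypothetical, and this is not the method of [1]):

```python
import numpy as np

def krylov_lowrank(matvec, rmatvec, n, rank, iters=2, seed=0):
    """Rank-`rank` approximation A ~ Q @ B of an operator accessed only via
    block products matvec(X) = A @ X and rmatvec(Y) = A.T @ Y."""
    rng = np.random.default_rng(seed)
    # Multiply a random block to sample the range of A.
    Q = np.linalg.qr(matvec(rng.standard_normal((n, rank))))[0]
    for _ in range(iters):        # extra sweeps sharpen the subspace
        Q = np.linalg.qr(matvec(np.linalg.qr(rmatvec(Q))[0]))[0]
    B = rmatvec(Q).T              # B = Q.T @ A, again via block products
    return Q, B
```

The operator itself is never formed; only `2 * iters + 2` block products are used, which is the regime where Krylov-type methods shine.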

This work is supported by RFBR grants 09-01-91332 (with DFG), 09-01-00565, 10-01-00757.

*Contemporary Mathematics*, 208:47–51, 2001.

*Matrix Methods: Theory, Algorithms, Applications*, pages 247–256. World Scientific Publishing, 2010.

*J. Numer. Math.*, (2011).

*SIAM J. Matrix Anal. Appl.*, 31(4):2130–2145, 2010.

*SIAM J. Matrix Anal. Appl.*, 30(3):939–956, 2008.

*SIAM J. Sci. Comp.*, 31(5):3744–3759, 2009.

*Linear Algebra Appl.*, 432(1):70–88, 2010.

*Computing*, 64(4):367–380, 2000.