## Preliminary list of abstracts

**Note:** We are using the fantastic MathJax JavaScript library to typeset the mathematics on this web page.
You can right-click on a formula to zoom in or to select the rendering engine you like (all under 'Settings').
Under Firefox the default renderer turns out to be MathML (which is quick), but you might prefer the HTML-CSS renderer for more faithful LaTeX rendering.
If you encounter any problems with this setup, please email us!


*Keywords:* structured matrices; tensor representations; fast algorithms

- Kazeev, Vladimir (Institute of Numerical Mathematics, Russian Academy of Sciences, Russia)

Problems in high dimensions are difficult to deal with because of the so-called "curse of dimensionality": the exponential growth of storage costs and computational complexity with respect to the number of axes. Various low-parametric nonlinear approximations have been applied to make representation and computation in high dimensions tractable. Some tensor decompositions, e.g. the canonical and Tucker decompositions, are already considered classical in mathematics, though they do not break the "curse of dimensionality".
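The exponential growth mentioned above is easy to make concrete: a full array with n points along each of d axes has n**d entries. A minimal sketch (the values of n and d are illustrative assumptions, not from the abstract):

```python
# Storage for a full d-dimensional tensor with n points per axis grows as n**d.
n = 32  # points per axis (assumed value for illustration)
for d in (2, 4, 8, 16):
    entries = n ** d
    # 8 bytes per double-precision entry
    print(f"d = {d:2d}: {entries:.3e} entries, about {entries * 8 / 2**30:.3e} GiB")
```

Already at d = 16 the full array would need on the order of 10**24 entries, far beyond any storage, which is why low-parametric representations are needed.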

Meanwhile, the physical chemistry and quantum information communities have been exploiting the Matrix Product State (MPS) representation for almost two decades. This advantageous concept was rediscovered in mathematics as the Tensor Train (TT) format and equipped with robust truncation and efficient adaptive cross-approximation algorithms by Oseledets and Tyrtyshnikov in 2009. The TT format combines robustness with storage costs and complexity that are linear in both the number of modes (i.e. the dimensionality) and the mode length (i.e. the number of points along a dimension). Moreover, coupling the TT format with the idea of introducing as many fictitious dimensions as possible, which is referred to as "tensorization" or "quantization", leads to the QTT format. This approach yields storage costs and complexity logarithmic in the mode length of the initial tensor, which allows us to embrace and make the best use of high dimensionality.
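The quantization idea above can be sketched numerically: reshape a vector of length 2**d into a d-dimensional tensor with mode size 2 (d fictitious dimensions), then read off the QTT ranks with a TT-SVD-style sweep of truncated SVDs. This is a hedged illustration, not the authors' algorithm; the sample functions, grid, and tolerance are assumptions chosen so that the exact ranks are known (exp is separable across any bit split, so all its ranks are 1; the sin addition formula gives rank 2):

```python
import numpy as np

def qtt_ranks(vec, tol=1e-10):
    """QTT ranks of a length-2**d vector via a left-to-right sweep of SVDs."""
    d = int(np.log2(vec.size))
    c = vec.reshape([2] * d)            # tensorization: d fictitious dimensions
    r, ranks = 1, []
    for _ in range(d - 1):
        mat = c.reshape(r * 2, -1)      # unfold: (previous rank * mode) x rest
        _, s, vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(s > tol * s[0])))
        ranks.append(keep)
        c = s[:keep, None] * vt[:keep]  # carry the remainder to the next mode
        r = keep
    return ranks

d = 12
x = np.linspace(0.0, 1.0, 2 ** d)
ranks_exp = qtt_ranks(np.exp(x))  # exp(a + b) = exp(a) * exp(b): all ranks 1
ranks_sin = qtt_ranks(np.sin(x))  # sin(a + b) addition formula: all ranks 2
print(ranks_exp, ranks_sin)
```

With all ranks equal to r, the QTT cores hold on the order of 2 * d * r**2 numbers instead of 2**d, which is the logarithmic dependence on mode length mentioned above.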

An investigation of the QTT structure of certain special matrices parametrized in the QTT format shows that these matrices are themselves well represented in the format, which leads to efficient computational algorithms in the Quantics Tensor Train format.