Preliminary list of abstracts

Conditional sampling for option pricing under the LT method
Keywords: Linear transformations; Quasi-Monte Carlo simulations; Variance reduction
Sun, 16:10--16:35
  • Achtsis, Nico (Department of Computer Science, K.U.Leuven, Belgium)
  • Nuyens, Dirk (Department of Computer Science, K.U.Leuven, Belgium)

Monte Carlo (MC) and quasi-Monte Carlo (QMC) methods are often used in pricing complex derivatives. The merit of QMC is that, theoretically at least, higher convergence rates can be obtained than with regular MC. The payoff function is usually high-dimensional and non-smooth, however, which eliminates the advantage of using QMC. Imai & Tan [2] introduced the LT method, which minimizes the effective dimension of the problem by transforming the normal variates using an orthogonal transformation, thereby improving the QMC method. We will present an extension of their method for valuing options that have a barrier feature on an underlying asset, incorporating and extending an idea from Glasserman & Staum [1]. These options have a payoff that depends on whether the asset does or does not cross a certain level during the life of the option. If the probability of (not) hitting the barrier is large, many more paths have to be sampled for accurate results. Our extension aims to reduce the required number of paths.

[1] Paul Glasserman and Jeremy Staum. Conditioning on one-step survival for barrier option simulations. Operations Research, 49(6):923–937, 2001.
[2] Junichi Imai and Ken Seng Tan. A general dimension reduction technique for derivative pricing. Journal of Computational Finance, 10(2):129–155, 2006.
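
To make the setting concrete, here is a minimal plain Monte Carlo pricer for an up-and-out barrier call under geometric Brownian motion (a Python sketch; the model, payoff and all parameter values are illustrative assumptions, not taken from the talk). When the knock-out probability is high, most paths contribute a zero payoff, which is exactly the waste of samples the conditional-sampling extension targets.

```python
import numpy as np

# Plain MC baseline for an up-and-out barrier call under GBM.
# All parameter values below are illustrative assumptions.
rng = np.random.default_rng(0)
S0, K, B, r, sigma, T = 100.0, 100.0, 130.0, 0.05, 0.2, 1.0
n_steps, n_paths = 64, 100_000
dt = T / n_steps

Z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1)
S = S0 * np.exp(log_paths)

alive = S.max(axis=1) < B                 # knocked out if the barrier is hit
payoff = np.where(alive, np.maximum(S[:, -1] - K, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"price ~ {price:.4f} +/- {stderr:.4f}")
```
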
Adaptation to anisotropy and inhomogeneity via dyadic piecewise polynomial selection
Keywords: Anisotropic Besov spaces; Multivariate estimation
Thu, 09:25--09:50
  • Akakpo, Nathalie (Laboratoire de Probabilités et Modèles Aléatoires, Université Pierre et Marie Curie, France)

When estimating a multivariate function, it seems natural to consider that its smoothness is likely to vary either spatially, or with the direction, or both. We will refer to the first feature as (spatial) inhomogeneity and to the second one as anisotropy. In particular, we can thus take into account multivariate functions that may in fact be constant in some of the variables. Yet, statistical procedures that adapt to both possible inhomogeneity and anisotropy remain rather scarce. Results of this type rely as much on Statistics as on Approximation Theory.

So we propose an approximation result devoted to piecewise polynomial approximation based on partitions into dyadic rectangles, inspired by DeVore and Yu [1], over possibly inhomogeneous and anisotropic smoothness classes that contain, for instance, the more traditional Besov classes. Besides, we take into account a possible restriction on the minimal size of the dyadic rectangles, which may arise in statistical applications. For estimating a multivariate function (in the density estimation or regression framework, for instance), we then introduce a selection procedure that chooses from the data the best partition into dyadic rectangles and the best piecewise polynomial built on that partition, thanks to a penalized least-squares type criterion. The degree of the polynomial may vary from one rectangle to another, and also according to the coordinate directions. Such a procedure is able to reach the optimal estimation rate even though the exact degree of smoothness is unknown. This results not only from the good approximation properties of dyadic piecewise polynomials, but also from the moderate number of dyadic partitions of the same size. Moreover, our statistical procedure can be implemented with a computational complexity only linear in the sample size.

[1] R. A. DeVore and X. M. Yu. Degree of adaptive approximation. Math. Comp., 55(192):625–635, 1990.
Adaptive cross approximation in higher dimensions
Mon, 16:10--16:35
  • Bebendorf, Mario (Institute for Numerical Simulation, University of Bonn, Germany)

We present and analyze several new schemes for the approximation of functions of three, four and more variables by sums of products of univariate functions. The presented methods are generalizations (each in its own sense) of the Adaptive Cross Approximation (ACA), initially designed to approximate bivariate functions. A characteristic property of the presented techniques is that functions are approximated by their restrictions to lower-dimensional domains.

On low-rank solutions of large-scale generalized Lyapunov equations arising in parametric model reduction
Keywords: Parametric model order reduction
Mon, 15:45--16:10
  • Benner, Peter (Computational Methods in Systems and Control Theory, Max Planck Institute for Dynamics of Complex Technical Systems, Germany)
  • Breiten, Tobias (Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg, Germany)

Parametric model order reduction (PMOR) has become an important tool in computing surrogate models of parameter-dependent dynamical systems to be used in design optimization and control. PMOR can be considered as an approximation problem for a special multi-parametric function. For this purpose, we employ a connection between linear parametric and bilinear control systems which allows for several efficient approximation techniques. Here, we will focus on a generalization of the method of balanced truncation for linear systems which leads to large-scale Lyapunov equations of the form $$AX+XA^T+\sum_{j=1}^mN_jXN_j^T+BB^T=0,$$ with $A,N_j \in \mathbb R^{n\times n}$ and $B \in \mathbb R^{n\times m}.$ Imposing some assumptions on $N_j$ and $B,$ we will make use of the corresponding tensorized version $$\left(I\otimes A +A\otimes I + \sum_{j=1}^mN_j\otimes N_j\right)\operatorname{vec}{(X)}=-\operatorname{vec}{(BB^T)}$$ to show the existence of low-rank approximations to the solution matrix $X.$ Finally, this will justify extending low-rank methods such as the ADI iteration or rational Krylov subspace methods, which are known to be effective tools in the linear case. By means of a numerical example, we will evaluate the performance of these methods.
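
For small dimensions the tensorized equation can be written down directly; the sketch below (with small random stand-in matrices, an assumption for illustration only) assembles the Kronecker-product operator, solves for $\operatorname{vec}(X)$, and prints the fast singular value decay that the low-rank existence result predicts.

```python
import numpy as np

# Tensorized generalized Lyapunov equation from the abstract:
# (I (x) A + A (x) I + N (x) N) vec(X) = -vec(B B^T), here with m=1.
# A is made stable by construction; sizes/values are assumptions.
rng = np.random.default_rng(1)
n, m = 20, 2
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
N = 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

I = np.eye(n)
L = np.kron(I, A) + np.kron(A, I) + np.kron(N, N)
x = np.linalg.solve(L, -(B @ B.T).reshape(-1))
X = x.reshape(n, n)     # row/column vec orderings coincide: X is symmetric

# Rapid singular value decay indicates a good low-rank approximation.
print(np.linalg.svd(X, compute_uv=False)[:8])
```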

On numerical integration of isotropic singularities in $\mathbb R^d$
Keywords: numerical integration; algebraic singularity; approximation on the semiaxis
Thu, 11:10--11:35
  • Chernov, Alexey (Hausdorff Center for Mathematics; Institute for Numerical Simulation, University of Bonn, Germany)

We consider integrals of the type \begin{equation*} I_1(R) = \int_{\mathbb R^d} \frac{F(x)}{\|x-R\|^\alpha} \, dx,\qquad I_2 = \int_{\mathbb R^d}\int_{\mathbb R^d} \frac{G(x,y)}{\|x-y\|^\alpha} \, dy \, dx \end{equation*} where $F$ and $G$ are smooth and exponentially decaying at infinity and $\alpha<d$, and introduce a family of quadrature rules for the approximate computation of $I_1(R)$, $I_2$. Important special cases of integrals of this type are, e.g., one- and two-electron-like integrals with applications in quantum chemistry \begin{equation*} J_1(R) = \int_{\mathbb R^3} \frac{e^{-a\|x-R_a\|^2}}{\|x-R\|}f(x) \, dx ,\quad J_2 = \int_{\mathbb R^3}\int_{\mathbb R^3} \frac{e^{-a\|x-R_a\|^2}e^{-b\|x-R_b\|^2}}{\|x-y\|} f(x)g(y) \, dy \, dx. \end{equation*} In this case, analytic integration is possible if $f=1=g$; if $f$, $g$ are polynomials, the evaluation can be performed via recursive formulae [2]. Similar integrals appear in the pricing of options on Lévy driven assets: \begin{equation*} \mathcal A_J[\varphi](x) = -\int_{\mathbb R^d}(\varphi(x+z) - \varphi(x)-z\cdot\nabla \varphi(x)) k(z) \, dz, \quad k(z) = \frac{e^{-M\|z\|}}{\|z\|^\alpha}. \end{equation*} Here the integral kernel $k(z)$ corresponds to a tempered process, which can be viewed as an isotropic generalization of the CGMY process to $d \geq 2$ [3], and $\alpha$ is not necessarily an integer.

For numerical computation of $I_1(R), I_2$ we introduce a family of product-type quadrature rules and study its convergence properties. The construction is based on a change of coordinates moving the singularity to the origin and involves suitable tensor-product rules in the radial and angular directions. This approach is related to [1], where the numerical integration over two regular simplices has been studied, in contrast to the present case of unbounded domains.

We illustrate the performance of the suggested quadrature rules on several numerical examples and give preliminary convergence estimates for the quadrature error under suitable smoothness and decay assumptions on the integrands.

[1] Alexey Chernov, Tobias von Petersdorff, and Christoph Schwab. Exponential convergence of $hp$ quadrature for integral operators with Gevrey kernels. M2AN Math. Model. Numer. Anal., 45:387–422, 2011.
[2] Larry E. McMurchie and Ernest R. Davidson. One- and two-electron integrals over cartesian gaussian functions. J. Computational Phys., 26:218–231, 1978.
[3] Ricardo H. Nochetto, Tobias von Petersdorff, and Chen-Song Zhang. A posteriori error analysis for a class of integral equations and variational inequalities. Numer. Math., 116(3):519–552, 2010.
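
As a concrete illustration of the product-type construction, the sketch below computes $I_1$ for a Gaussian $F$ in $d=3$ with $\alpha=1$, using Gauss-Laguerre quadrature in the radial direction after moving the singularity to the origin; the crude 6-point octahedral angular rule and the specific $F$ are simplifying assumptions (the exact value for this case is $2\pi$).

```python
import numpy as np

# Product quadrature for I_1 = int_{R^3} F(x) / ||x - R||^alpha dx.
# After y = x - R and polar coordinates:
#   I_1 = int_0^inf r^{2-alpha} ( int_{S^2} F(R + r*theta) dS ) dr,
# so r^{2-alpha} absorbs the singularity. F and alpha are assumptions.
alpha = 1.0
F = lambda y: np.exp(-np.sum(y**2, axis=-1))        # smooth, decaying
R = np.zeros(3)

r_nodes, r_weights = np.polynomial.laguerre.laggauss(40)   # radial rule
thetas = np.vstack([np.eye(3), -np.eye(3)])   # octahedral angular rule,
w_ang = 4.0 * np.pi / 6.0                     # exact up to degree 3 on S^2

I = 0.0
for r, w in zip(r_nodes, r_weights):
    # exp(+r) undoes the Laguerre weight e^{-r}
    I += w * np.exp(r) * r**(2.0 - alpha) * w_ang * F(R + r * thetas).sum()

print(I, 2.0 * np.pi)   # exact value for this F and alpha=1 is 2*pi
```
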
Efficient solution of highly multidimensional models by using the proper generalized decomposition: Towards a change of paradigm in computational mechanics?
Keywords: Multidimensional models; Curse of dimensionality; Proper Generalized Decomposition
Tue, 14:25--14:50
  • Chinesta, Francisco (EADS Chair - GEM Institute CNRS - Ecole Centrale Nantes, Ecole Centrale de Nantes, France)
  • Leygue, Adrien (EADS Chair at GEM Institute, Ecole Centrale de Nantes, France)
  • Cueto, Elias (I3A, University of Zaragoza, Spain)
  • Keunings, Roland (ICTEAM, Université Catholique de Louvain, Belgium)

Numerous models encountered in science and engineering remain nowadays, despite the impressive progress attained recently in computational simulation techniques, intractable when the usual, well-established discretization techniques are applied for their numerical simulation. A first challenging issue concerns the treatment of highly multidimensional models arising from quantum mechanics or kinetic theory models of solids or fluids, including micro- and nano-structured complex fluids, stochastic problems and parametric models. The main challenge in the treatment of this kind of model is related to the curse of dimensionality: when one applies a standard mesh-based discretization, the number of degrees of freedom involved scales exponentially with the dimension of the space concerned. Moreover, in the context of optimization problems or inverse identification, many direct problems must be solved. Again, alternative advanced computational techniques are urgently needed. By introducing all the model or process parameters, initial or boundary conditions, parameters defining the shape, applied loads, etc. as extra-coordinates in the model, optimization or inverse analysis requires the resulting multidimensional model, which sometimes involves hundreds of dimensions, to be solved only once. The curse of dimensionality related to the solution of such models can be efficiently circumvented by applying the proper generalized decomposition, a separated representation constructor that allows one to solve efficiently models involving hundreds of dimensions. In this work we present numerous applications in science and engineering where highly multidimensional models, never treated until now, were successfully solved [1-4].

[1] A. Ammar, B. Mokdad, F. Chinesta, R. Keunings. A New Family of Solvers for Some Classes of Multidimensional Partial Differential Equations Encountered in Kinetic Theory Modeling of Complex Fluids. Journal of Non-Newtonian Fluid Mechanics, 139, 153-176, (2006).
[2] A. Ammar, B. Mokdad, F. Chinesta, R. Keunings. A New Family of Solvers for Some Classes of Multidimensional Partial Differential Equations Encountered in Kinetic Theory Modeling of Complex Fluids. Part II: Transient Simulation Using Space-Time Separated Representation. Journal of Non-Newtonian Fluid Mechanics, 144, 98-121, (2007).
[3] F. Chinesta, A. Ammar, E. Cueto. Recent Advances and New Challenges in the Use of the Proper Generalized Decomposition for Solving Multidimensional Models. Archives of Computational Methods in Engineering – State of the Art Reviews, 17/4, 327-350 (2010).
[4] F. Chinesta, A. Ammar, A. Leygue, R. Keunings. An Overview of the Proper Generalized Decomposition with Applications in Computational Rheology. Journal of Non-Newtonian Fluid Mechanics, doi:10.1016/j.jnnfm.2010.12.012.
Solving fuzzy elliptic PDEs by means of HT tensors
Tue, 11:35--12:00
  • Corveleyn, Samuel (Department of Computer Science, K.U.Leuven, Belgium)
  • Vandewalle, Stefan (K.U.Leuven, Belgium)

We consider the solution of elliptic partial differential equations (PDEs) with a high-dimensional fuzzy diffusion coefficient. Uncertainties in mathematical models are often modeled by random variables or random processes. However, for the epistemic type of uncertainty, i.e., uncertainty due to incomplete knowledge, an alternative based on interval or fuzzy uncertainties may be more appropriate. These equations can be solved by first constructing a polynomial approximation to the solution of the parameterized equation, by a Galerkin or collocation approach. The fuzzy quantities of interest are then calculated in a post-processing step. The specific tensor structure of the linear system of equations that is obtained by discretization of the parameterized equation allows for an approximate solution in some tensor format of low rank. Following [1], we solve the system by means of tensors in the hierarchical Tucker (HT) format using a generalized low-rank GMRES method. We discuss how this approach performs in terms of computational work and accuracy in the fuzzy sense.

[1] Jonas Ballani and Lars Grasedyck. A projection method to solve linear systems in tensor format. Technical Report 309, Institut für Geometrie und Praktische Mathematik, RWTH Aachen, Germany, April 2010.
Higher order scrambled digital nets for smooth integrands
Keywords: randomized quasi-Monte Carlo; Numerical integration; digital nets and sequences
Thu, 14:00--14:25
  • Dick, Josef (School of Mathematics and Statistics, The University of New South Wales, Australia)

In this talk we present a random sampling technique to approximate integrals $\int_{[0,1]^s} f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x}$ by averaging the function at some sampling points. We focus on cases where the integrand is smooth. We introduce a new RQMC algorithm, for which we prove that it achieves a root mean square error (RMSE) of order $N^{-\alpha-1/2+\varepsilon}$ provided the integrand has square integrable partial mixed derivatives up to order $\alpha > 1$ in each variable. Known lower bounds on the RMSE show that this rate of convergence cannot be improved in general for integrands with this smoothness.
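
For orientation, the following sketch estimates the empirical RMSE of a first-order scrambled net (scipy's scrambled Sobol' points, an Owen-type scrambling) on a smooth test integrand; the integrand, dimensions and sample sizes are assumptions. The talk's higher-order scrambled digital nets improve the exponent beyond the roughly $N^{-3/2}$ behaviour of this baseline.

```python
import numpy as np
from scipy.stats import qmc

# Empirical RMSE of randomized QMC over independent scramblings.
# Integrand is smooth with known integral 1 (an assumption).
s = 4
f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)
reps, m = 32, 12                        # 32 randomizations, N = 2^12

errs = []
for r in range(reps):
    sob = qmc.Sobol(d=s, scramble=True, seed=r)
    x = sob.random_base2(m=m)
    errs.append(f(x).mean() - 1.0)
rmse = np.sqrt(np.mean(np.square(errs)))
print(f"N = {2**m}, RMSE ~ {rmse:.2e}")
```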

B-spline quasi-interpolant representations and sampling recovery of functions with mixed smoothness
Keywords: Linear sampling algorithm; Quasi-interpolant representation; Besov space of mixed smoothness
Sun, 14:50--15:15
  • Dinh, Dũng (Information Technology Institute, Vietnam National University, Hanoi, Vietnam)

Let $\xi = \{x^j\}_{j=1}^n$ be a set of $n$ sample points in the $d$-cube $[0,1]^d$, and $\Phi = \{\varphi_j\}_{j =1}^n$ a family of $n$ functions on $[0,1]^d$. We define the linear sampling algorithm $L_n(\Phi,\xi,\cdot)$ for an approximate recovery of a continuous function $f$ on $[0,1]^d$ from the sampled values $f(x^1), ..., f(x^n)$, by \begin{equation*} L_n(\Phi,\xi,f) := \sum_{j=1}^n f(x^j)\varphi_j. \end{equation*} For the Besov class $B^\alpha_{p,\theta}$ of mixed smoothness $\alpha$, to study optimality of $L_n(\Phi,\xi,\cdot)$ in $L_q([0,1]^d)$ we use the quantity \begin{equation*} r_n(B^\alpha_{p,\theta})_q := \inf_{\xi, \Phi} \sup_{f \in B^\alpha_{p,\theta}} \, \|f - L_n(\Phi,\xi,f)\|_q, \end{equation*} where the infimum is taken over all sets of $n$ sample points $\xi = \{x^j\}_{j=1}^n$ and all families $\Phi = \{\varphi_j\}_{j=1}^n$ in $L_q([0,1]^d)$. We explicitly constructed linear sampling algorithms $L_n(\Phi,\xi,\cdot)$ on the set of sample points $\xi = G^d(m):= \{(2^{-k_1}s_1,...,2^{-k_d}s_d) \in [0,1]^d : k_1 + ... + k_d \le m\}$, with $\Phi$ a family of linear combinations of mixed B-splines which are mixed tensor products of either integer or half integer translated dilations of the centered B-spline of order $r$. For various $0<p,q,\theta \le \infty$ and $1/p < \alpha < r$, we proved upper bounds for the worst case error $ \sup_{f \in B^\alpha_{p,\theta}} \, \|f - L_n(\Phi,\xi,f)\|_q$ which coincide with the asymptotic order of $r_n(B^\alpha_{p,\theta})_q$ in some cases. A key role in constructing these linear sampling algorithms is played by a quasi-interpolant representation of functions $f \in B^\alpha_{p,\theta}$ by mixed B-spline series, whose coefficient functionals are explicitly constructed as linear combinations of an absolute constant number of function values. Moreover, we proved that the quasi-norm of the Besov space is equivalent to a discrete quasi-norm in terms of the coefficient functionals.

On a solution to a parabolic equation in the QTT format using the DMRG approach
Mon, 14:50--15:15
  • Dolgov, Sergey (Institute of Numerical Mathematics, Russian Academy of Sciences, Russia)
  • Oseledets, Ivan V. (Institute of Numerical Mathematics, Russian Academy of Sciences)

The Matrix Product State (MPS) representation and the Density Matrix Renormalization Group (DMRG) algorithms are widely used in the quantum physics community to describe and compute quantum states of one-dimensional spin chains. Independently, the so-called TT and QTT formats were discovered by E. Tyrtyshnikov and I. Oseledets as tensor approximation techniques. These formats exploit a similar MPS-like structure, which suggested applying the DMRG approach to quite general high-dimensional problems in the QTT format. In this work we consider the solution of a parabolic equation with implicit time-propagation schemes. The main point is the use of the DMRG approach only for the approximate matrix-by-vector operation and the solution of a linear system, without special time-dependent DMRG methods. We present and compare two approaches: the solution of the linear system arising in the implicit scheme at each time step, and precomputation of the inverse matrix so that only a matrix-by-vector product is needed at each time step.

Tensor decompositions in convex optimization problems
Mon, 16:35--17:00
  • Falcó, Antonio (Department of Physics, Mathematics and Computer Science, Universidad CEU Cardenal Herrera, Spain)
  • Nouy, Anthony (GeM - Institut de Recherche en Génie Civil et Mécanique, UMR CNRS 6183, Ecole Centrale Nantes, Université de Nantes, France)

Model reduction techniques based on the construction of separated representations are receiving growing interest in scientific computing. A family of methods, recently called Proper Generalized Decomposition (PGD) methods, has been introduced for the a priori construction of separated representations (or tensor product approximations) of the solution $u$ of problems defined in tensor product spaces: \begin{align} u\in V=V_1\otimes \ldots \otimes V_d,\quad A(u) = l\qquad \qquad \qquad \qquad (1) \end{align} PGD methods can be interpreted as generalizations of Proper Orthogonal Decomposition (or Singular Value Decomposition, or Karhunen-Loève Decomposition) for the a priori construction of a separated representation $u_m$ of the solution $u$ (i.e. without knowing $u$ beforehand): \begin{align} u\approx u_m = \sum_{i=1}^m w_i^1\otimes \ldots \otimes w_i^d,\quad w_i^k\in V_k \end{align} A priori construction means that we only know the equation solved by the function $u$ and not the function $u$ itself. Several definitions of PGDs have been proposed. Basic PGDs are based on a progressive construction of the sequence $u_m$: at each step, an additional rank-one element $\otimes_{k=1}^d w_{m}^k$ is added to the previously computed decomposition $u_{m-1}.$ These progressive definitions of PGDs can then be considered as greedy algorithms for constructing separated representations. A possible improvement of these progressive decompositions consists in introducing some updating steps in order to capture an approximation of the optimal decomposition, which would be obtained by defining the whole set of functions simultaneously (and not progressively). For many applications, this allows one to recover good convergence properties of separated representations.

In this work, we propose a theoretical analysis of progressive and updated Proper Generalized Decompositions for a general class of problems where (1) is equivalent to the minimization of an elliptic and differentiable functional $J$, $$ J(u) = \min_{v\in V} J(v) $$ where $V$ is a tensor product of reflexive Banach spaces.
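
A minimal numerical sketch of the progressive construction, for the Poisson-like model problem $(A_1\otimes I + I\otimes A_2)u = b_1\otimes b_2$: each greedy step computes one rank-one term by a few alternating updates. The model problem and the fixed numbers of terms and sweeps are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Progressive PGD / greedy rank-one construction in matrix form:
# A1 U + U A2^T = b1 b2^T, with A1, A2 SPD 1D Laplacians (assumption).
def lap1d(n):
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2

n = 50
A1 = A2 = lap1d(n)
F = np.outer(np.ones(n), np.ones(n))

U = np.zeros((n, n))
for term in range(10):                    # greedy enrichment steps
    R = F - A1 @ U - U @ A2.T             # current residual
    w1, w2 = np.random.default_rng(term).standard_normal((2, n))
    for _ in range(5):                    # ALS sweeps for this rank-one term
        w1 = np.linalg.solve((w2 @ w2) * A1 + (w2 @ A2 @ w2) * np.eye(n), R @ w2)
        w2 = np.linalg.solve((w1 @ w1) * A2 + (w1 @ A1 @ w1) * np.eye(n), R.T @ w1)
    U += np.outer(w1, w2)
    print(term, np.linalg.norm(F - A1 @ U - U @ A2.T))   # residual decays
```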

Sparse Grids and Multi-Level Monte Carlo
Keywords: Sparse Grids; Multi-Level Monte Carlo; Computational Finance
Wed, 11:10--11:35
  • Gerstner, Thomas (Institut für Mathematik, Universität Frankfurt, Germany)

The Multilevel Monte Carlo method introduced by Michael Giles is a technique to reduce the computational complexity of estimating an expected value arising from a stochastic differential equation using Monte Carlo path simulations. In this method, path simulations with different timesteps are combined in such a way that the ratio of computational cost and error (variance) is minimized. This can reduce the complexity, up to a logarithmic factor, by one order of magnitude. It has many applications, particularly in computational finance.

In this talk, we will show that the Multilevel Monte Carlo method can be interpreted as a sparse grid method in dimension-samples space. We extend the method to deal with adaptivity and apply this adaptive Multilevel Monte Carlo method to the pricing of special financial products such as barrier options.
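
A compact sketch of the multilevel estimator for a European call under geometric Brownian motion, with Euler steps and coupled coarse/fine paths driven by the same Brownian increments; the payoff, parameters and the fixed geometric sample allocation are illustrative assumptions standing in for Giles' optimal allocation.

```python
import numpy as np

# Multilevel Monte Carlo in the spirit of Giles: level l uses 2^l Euler
# steps; the telescoping sum of E[P_l - P_{l-1}] gives the estimate.
rng = np.random.default_rng(0)
S0, r, sigma, T, K = 100.0, 0.05, 0.2, 1.0, 100.0
payoff = lambda S: np.exp(-r * T) * np.maximum(S - K, 0.0)

def level_estimator(l, n_paths):
    """Mean of P_fine - P_coarse on level l (P_{-1} := 0)."""
    nf = 2 ** l
    dt = T / nf
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, nf))
    Sf = np.full(n_paths, S0)
    for k in range(nf):                       # fine Euler path
        Sf = Sf + r * Sf * dt + sigma * Sf * dW[:, k]
    if l == 0:
        return payoff(Sf).mean()
    Sc = np.full(n_paths, S0)
    dWc = dW[:, 0::2] + dW[:, 1::2]           # coarse increments, same noise
    for k in range(nf // 2):                  # coarse Euler path
        Sc = Sc + r * Sc * (2 * dt) + sigma * Sc * dWc[:, k]
    return (payoff(Sf) - payoff(Sc)).mean()

L, N0 = 6, 200_000
estimate = sum(level_estimator(l, N0 // 2 ** l) for l in range(L + 1))
print(f"MLMC estimate: {estimate:.4f}")   # Black-Scholes value is ~10.45
```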

Infinite-dimensional numerical integration
Keywords: numerical integration; quasi-Monte Carlo; multilevel algorithm
Mon, 11:10--11:35
  • Gnewuch, Michael (Department of Computer Science, Christian-Albrechts-University Kiel, Germany)

In many important applications one has to deal with integration problems for functions of a finite (but a priori unknown) number of variables or of infinitely many variables.

In this talk we want to discuss this type of numerical integration problem for functions in weighted reproducing kernel Hilbert spaces. We present upper and lower bounds for the convergence rates of linear algorithms. The lower bounds are constructive and rely on multilevel schemes and quasi-Monte Carlo integration points. Our upper bounds show that these algorithms achieve in many cases (depending on the considered weights and the cost function for function evaluation) the optimal convergence rate.

Hierarchical tensor methods for PDEs with stochastic parameters
Keywords: Hierarchical Tucker; Stochastic PDEs; Low Rank Tensors
Tue, 10:45--11:10
  • Grasedyck, Lars (IGPM, RWTH Aachen, Germany)
  • Ballani, Jonas (Max Planck Institut für Mathematik in den Naturwissenschaften, Leipzig)
  • Kluge, Melanie (IGPM, RWTH Aachen)

We consider the problem of solving a (stochastic) parameter-dependent equation \[A(\omega)u(\omega) = b(\omega),\qquad \omega\in\Omega \] for systems $A$ governed by partial differential operators that depend on $\omega$. Our aim is to calculate quantities of interest (mean, variance, maximum etc.) of the set of solutions. One way to solve such a problem is to expand the system, the right-hand side as well as the solution in independent stochastic variables $\omega_1,\ldots,\omega_p$, and then solve the arising large-scale deterministic problem \[A(\omega_1,\ldots,\omega_p)u(\omega_1,\ldots,\omega_p) = b(\omega_1,\ldots,\omega_p).\] An alternative approach is to use (quasi or multilevel) Monte Carlo (MC) methods, which require just simple sampling ($M$ simulations), but these are only useful for certain quantities of interest (e.g. the mean). We will present a new approach based on hierarchical Tucker (HT) representations of tensors. This method is based on standard PDE solvers for deterministic systems. The set of solutions is approximated in a low-rank (HT) tensor format that allows for many parameters (thousands).

An efficient sparse grid Galerkin-approach for the numerical valuation of basket options under Kou's jump-diffusion model
Keywords: Sparse grids; Preconditioner; Generating system; Finance
Sun, 15:45--16:10
  • Hullmann, Alexander (Institute for Numerical Simulation, University of Bonn, Germany)

The deterministic numerical valuation of basket options on underlyings with jumps results in a multi-dimensional partial integro-differential equation. In order to efficiently discretise the equation in variational form, we use a sparse grid generating system based on linear splines with only $O(N (\log N)^{d-1})$ degrees of freedom, as opposed to $O(N^d)$ in the full-grid case. Here, $N=h^{-1}$ is the finest resolution in one direction and $d$ the space dimension.

To obtain a fast and efficient overall solution method, several numerical issues need to be addressed. In contrast to the local part of the PIDE-operator, the integral-operator leads to a dense matrix after discretisation. We will generalise a finite difference recurrence formula for the Kou jump-diffusion model to the multi-dimensional Galerkin case, which enables us to evaluate the integral in linear time.

Secondly, the local part of the PIDE-operator also does not straightforwardly lead to a sparse operator matrix, due to the non-local nature of sparse grid basis functions. But applying the unidirectional principle to sparse grid generating systems, we can evaluate the operator in linear time, using only restrictions, prolongations and non-hierarchical finite-element operators.

The corresponding systems of linear equations must then be solved in a fast way by, e.g. additive multilevel methods. As generating system-based preconditioners for sparse grids exhibit a suboptimal condition number of the order $O((\log N)^{d-1})$ only, we discuss new optimal preconditioners that result in $O(1)$ condition numbers. Here, multiresolution norm equivalences are used without specifically discretising the detail spaces, while the practical advantages of the generating system approach are preserved. Altogether, we thus obtain a solution method for the sparse grid discretisation of the PIDE in optimal complexity.

A nonlinear approximation method for solving high-dimensional partial differential equations: application in finance
Keywords: Greedy algorithms, high-dimensional partial differential equations
Thu, 09:50--10:15
  • Infante Acevedo, José Arturo (CERMICS, Université Paris-Est, France)
  • Lelièvre, Tony (CERMICS, Université Paris Est, Ecole des Ponts - Paristech, FRANCE)

We study an algorithm which has been proposed in [1,4] and analyzed in [2,3] to solve high-dimensional partial differential equations. The idea is to represent the solution as a sum of tensor products and to compute the terms of this sum iteratively. This algorithm is closely related to so-called greedy algorithms, see [5]. Convergence results for this approach obtained in [2,3] will be presented.

Besides, we will also show the application of this nonlinear approximation method to the option pricing problem, an important subject in mathematical finance. This leads us to consider two extensions of the standard algorithm, which applies to symmetric linear partial differential equations: (i) nonsymmetric linear problems to value European options, (ii) nonlinear variational problems to price American options. We will present theoretical and numerical difficulties arising in this context.

[1] A. Ammar, B. Mokdad, F. Chinesta, and R. Keunings. A new family of solvers for some classes of multidimensional partial differential equations encountered in kinetic theory modeling of complex fluids. J. Non-Newtonian Fluid Mech., 139:153–176, 2006.
[2] E. Cancès, V. Ehrlacher, and T. Lelièvre. Convergence of a greedy algorithm for high-dimensional convex nonlinear problems, 2010. To appear in Mathematical Models and Methods in Applied Sciences.
[3] C. Le Bris, T. Lelièvre, and Y. Maday. Results and questions on a nonlinear approximation approach for solving high-dimensional partial differential equations. Constructive Approximation, 30(3):621–651, 2009.
[4] A. Nouy. A generalized spectral decomposition technique to solve a class of linear stochastic partial differential equations. Comput. Methods Appl. Mech. Engrg., 196:4521–4537, 2007.
[5] V.N. Temlyakov. Greedy approximation. Acta Numerica, 17:235–409, 2008.
Spectral kernel methods for simulation data and the eigenvector switching problem
Tue, 14:50--15:15
  • Iza Teran, Victor Rodrigo (Numerical Software, Fraunhofer Institute for Scientific Computing, Germany)

Spectral methods are a family of machine learning methods that have been applied to data analysis, visualization, clustering and classification. They are subclasses of more general, so-called manifold methods, which assume that the observed data reside at or near an intrinsically low-dimensional sub-manifold of a higher-dimensional space. One does not have access to this underlying manifold but instead tries to approximate it from a point cloud using point correlations. One of the most common strategies is to construct an adjacency graph with specific kernels and exploit the structure obtained through the eigenvalues and eigenvectors of the corresponding matrix. The graph is considered a proxy for the manifold, and the analysis based on this structure corresponds to the desired analysis based on the geometric structure of the manifold. The theoretical justification of this intuition has been developed over the last few years; its application has nevertheless already shown very interesting results in many areas.

For the development of new products in industry, especially in the car industry, computer-simulated models of physical processes are used; an example is the model of a car impacting a wall at a certain velocity. The high deformation produced can be modelled on a computer, and each model is very fine (several million data points) and detailed (several hundred car components are in the model); that is, we have a high extrinsic dimensionality. A car company generates several hundred simulations each day, with small geometrical or constructive parameter changes, in order to achieve a desired design configuration with lower cost while maintaining the security standards. There is a strong need for the engineers to organize the information about all changes globally, including their effect on output quantities of interest such as the Head Injury Index.

The application of spectral kernel methods to the analysis of this type of data shows very interesting results, and it is demonstrated that a global organization of the datasets is possible, based on the fact that the intrinsic dimensionality of such datasets is actually very low. Nevertheless, the usefulness depends on the type of kernel and the corresponding spectral gap: if eigenvalues get close, the eigenvector switching problem appears and the use of eigenvectors is no longer justified. In this talk we present the results of our work on the use of spectral kernel methods for data from engineering simulations, describe several applications, and give some theoretical insight that justifies the results. We then describe the eigenvector switching problem, including a practical example. Considering the underlying theory, it is possible to analyze the wobbly behaviour of the eigenvectors, and we contribute a possible control of this behaviour. It is shown that, at least for perturbations caused by geometrical changes, the suggested control effectively eliminates the switching.
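
A bare-bones version of such a kernel eigenvector analysis on synthetic data (the data model, bandwidth heuristic and embedding choice are assumptions): points near a closed curve in a higher-dimensional space are embedded using the leading non-trivial eigenvectors of a normalized Gaussian kernel matrix. When the printed eigenvalues nearly coincide, the corresponding eigenvectors can switch under small perturbations, which is the problem discussed above.

```python
import numpy as np

# Spectral embedding of a point cloud near a 1D curve in 10 dimensions.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 2 * np.pi, 300))
X = np.c_[np.cos(t), np.sin(t), 0.01 * rng.standard_normal((300, 8))]

D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
eps = np.median(D2)                      # kernel bandwidth (heuristic)
K = np.exp(-D2 / eps)
d = K.sum(axis=1)
M = K / np.sqrt(np.outer(d, d))          # symmetrically normalized kernel

vals, vecs = np.linalg.eigh(M)
print("top eigenvalues:", vals[-4:][::-1])   # a small gap => switching risk
embedding = vecs[:, -3:-1]               # skip the trivial top eigenvector
```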

On hybrid models for discrete stochastic reaction systems
Keywords: chemical master equation; model reduction; error estimates
Wed, 10:45--11:10
  • Jahnke, Tobias (Department of Mathematics, Karlsruhe Institute of Technology, Germany)

Many processes in biology, chemistry and physics can be regarded as reaction systems in which different species interact via a number of reaction channels. In the approach of deterministic reaction kinetics, such systems are modelled by ordinary differential equations which describe how the concentrations of the species change in time. This model is simple and computationally cheap, but fails if the influence of stochastic noise cannot be ignored, and if certain species have to be represented in terms of integer particle numbers instead of real-valued concentrations.

In stochastic reaction kinetics, the system is considered as a Markov jump process on a high-dimensional, discrete state space. The goal is to compute the associated time-dependent probability distribution which evolves according to the chemical master equation. Due to the curse of dimensionality, however, solving this equation is computationally expensive or even impossible in most applications.

This dilemma has motivated the derivation of hybrid models which combine the accurate but computationally costly stochastic description and the simple but coarse deterministic description for different parts of the system. A particularly appealing hybrid model has recently been proposed by Andreas Hellander and Per Lötstedt. In this talk, we discuss the properties of the Hellander-Lötstedt model, present bounds for the modelling error, and sketch an extension which allows one to overcome certain limitations.
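
For readers unfamiliar with the stochastic description, the sketch below simulates one realization of the simplest such Markov jump process, a birth-death system, with a Gillespie-type algorithm; the reaction rates are illustrative assumptions. This is the exact jump process whose probability distribution the chemical master equation evolves, not the hybrid method of the talk.

```python
import numpy as np

# Gillespie-type simulation of a birth-death system:
# 0 -> S at rate k1, S -> 0 at rate k2 * n. Rates are assumptions.
rng = np.random.default_rng(0)
k1, k2 = 10.0, 0.1
t, t_end, n = 0.0, 100.0, 0
times, counts = [0.0], [0]
while t < t_end:
    rates = np.array([k1, k2 * n])
    total = rates.sum()
    t += rng.exponential(1.0 / total)        # time to next reaction
    n += 1 if rng.uniform() * total < rates[0] else -1
    times.append(t); counts.append(n)
print("final copy number:", counts[-1], "; stationary mean k1/k2 =", k1 / k2)
```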

Interpolation lattices for hyperbolic cross trigonometric polynomials
Thu, 11:35--12:00
  • Kämmerer, Lutz (Faculty of Mathematics, Chemnitz University of Technology, Germany)
  • Kunis, Stefan (Osnabrück University, Germany)
  • Potts, Daniel (Chemnitz University of Technology, Germany)

A straightforward discretisation of high-dimensional problems often leads to an exponential growth in the number of degrees of freedom. Thus, the computational costs of even efficient algorithms like the fast Fourier transform grow correspondingly.

Trigonometric polynomials with frequencies supported only on hyperbolic crosses allow for a good approximation of functions of appropriate smoothness and strongly decrease the number of Fourier coefficients used. Naturally, an important issue is the adaptation of efficient algorithms to these thinner discretisations.

By using rank-1 lattices as spatial discretisations, one can easily reduce the corresponding multidimensional trigonometric polynomials to one-dimensional ones, which can be evaluated stably and fast with the well-known FFT. We generalize this concept to generating vectors with real-valued entries. In doing so, we can give new strategies to search for stable spatial discretisations of trigonometric polynomials.
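
A small sketch of the classical integer-vector case: $n$ lattice points $x_j = \{jz/n\}$ reduce a multivariate trigonometric polynomial to a univariate one in $j$, and the lattice rule integrates it exactly whenever no frequency of the integrand falls in the dual lattice. The generating vector below is an illustrative, unoptimized choice.

```python
import numpy as np

# Rank-1 lattice rule: x_j = frac(j * z / n), j = 0..n-1, in [0,1)^d.
d, n = 3, 1021                         # n prime (assumption)
z = np.array([1, 306, 498])            # generating vector (illustrative)
j = np.arange(n)
x = (np.outer(j, z) % n) / n           # lattice points, shape (n, d)

# Trig polynomial with integral 1; its frequencies lie in {-1,0,1}^3,
# none of which hits the dual lattice here, so the rule is exact.
f = lambda x: np.prod(1.0 + np.cos(2 * np.pi * x), axis=1)
print("lattice rule error:", abs(f(x).mean() - 1.0))   # ~ machine precision
```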

Tensor structure of some structured matrices with applications to efficient operations in the QTT format
Keywords: structured matrices; tensor representations; fast algorithms
Wed, 14:00--14:25
  • Kazeev, Vladimir (Institute of Numerical Mathematics, Russian Academy of Sciences, Russia)

Problems in high dimensions are difficult to deal with because of the so-called "curse of dimensionality", that is, the exponential growth of storage costs and complexity with respect to the number of axes. Various low-parametric non-linear approximations have been applied to alleviate representation and computations in high dimensions. Some tensor decompositions, e.g. the canonical and Tucker ones, are already considered classical in mathematics, though they do not break the "curse of dimensionality".

Meanwhile, the communities of physical chemistry and quantum information theory have been exploiting the Matrix Product State (MPS) representation for almost two decades now. This advantageous concept was rediscovered in mathematics as the Tensor Train (TT) format and equipped with robust truncation and efficient adaptive cross approximation algorithms by Oseledets and Tyrtyshnikov in 2009. The TT format blends robustness with linear dependence of storage costs and complexity on the number of modes (i.e. the dimensionality) and the mode length (i.e. the number of points along a dimension). Moreover, coupling the TT format with the idea of introducing as many fictitious dimensions as possible, referred to as "tensorization" or "quantization", leads to the QTT format. This approach yields storage costs and complexity logarithmic with respect to the mode length of the initial tensor, which allows us to embrace and make the best use of high dimensionality.

Investigation of the QTT structure of some special matrices, parametrized in the QTT format, shows that they themselves are well fitted by the format, which leads to efficient computational algorithms in the Quantics Tensor Train format.

Some approaches to dimension reduction for optimization
Keywords: POD; GTM; Kriging
Tue, 15:45--16:30
  • Keane, Andy J. (School of Engineering Sciences, University of Southampton, United Kingdom)

This talk illustrates how Proper Orthogonal Decomposition (POD), Generative Topographic Mapping (GTM) and Kriging can be used during design optimization to reduce the dimensionality of the problem being tackled. Although these methods do not completely solve all issues in this area, they are powerful tools for use alongside more traditional approaches such as screening and principal component analysis. Examples from airframe and aeroengine design are used to illustrate the proposed approaches.
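
As an illustration of the POD ingredient, the sketch below extracts a low-dimensional basis from a snapshot matrix by truncated SVD; the synthetic snapshot model and the energy threshold are assumptions.

```python
import numpy as np

# POD as a truncated SVD of a snapshot matrix: each column is one
# (synthetic) design evaluation; the leading left singular vectors span
# a reduced basis for subsequent optimization.
rng = np.random.default_rng(0)
xs = np.linspace(0, 1, 200)
snapshots = np.column_stack(
    [np.sin(np.pi * xs * a) + 0.01 * rng.standard_normal(200)
     for a in rng.uniform(1, 3, 40)])

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.999)) + 1   # 99.9% energy (assumption)
basis = U[:, :k]                              # POD basis of dimension k
print("retained modes:", k, "of", snapshots.shape[1])
```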

Grid-based solution of the 3D Hartree-Fock equation with 1D complexity
Keywords: tensor-structured numerical methods; Hartree-Fock equation; 3D tensor rank reduction
Wed, 16:35--17:00
  • Khoromskaia, Venera (Scientific Computing Group, Max-Planck-Institute for Mathematics in the Sciences, Germany)

Numerical solution of the Hartree-Fock equation using grid-based multilevel tensor-structured methods is discussed. The accurate tensor-structured computation of the nonlinear Hartree and the (nonlocal) exchange operators in $\mathbb{R}^3$, discretized on a sequence of $n\times n\times n$ Cartesian grids, is considered. The arising three- and six-dimensional integrations are replaced by rank-structured Hadamard and scalar products and the corresponding 3D convolution, implemented with $O(n\log n)$-complexity. The robust multigrid canonical-to-Tucker rank reduction algorithm with controllable accuracy enables the usage of fine Cartesian grids up to $n^3 \approx 10^{12}$. That yields high resolution of the involved computational entities and allows arbitrary placement of the atoms in a molecule, as in the conventional mesh-free, analytically based solution of the Hartree-Fock equation. The new QTT format further allows the complexity of the algorithms to be reduced to $O(\log n)$ at some steps of the "black box" solver. Numerical results for the all-electron case of H$_2$O, C$_2$H$_6$, CH$_4$, and the pseudopotential case of CH$_3$OH, C$_2$H$_5$OH demonstrate the efficiency of the tensor-structured methods using traditional and recent tensor formats in electronic structure calculations.

Quantics-TT approximation on a class of multivariate functions
Keywords: High dimensional problems; Multivariate functions; Quantics tensor train format
Wed, 14:25--14:50
  • Khoromskij, Boris (Scientific Computing, Max-Planck Institute for Mathematics in the Sciences, Germany)

Modern methods of rank-structured tensor decomposition allow an efficient separable approximation of a class of multivariate functions and operators, providing linear complexity scaling in the dimension. In particular, the recent quantics-TT (QTT) matrix product states technique has been proved to provide a super-compressed representation of high-dimensional data with log-volume complexity. We discuss the asymptotically optimal QTT-rank bounds for a class of multivariate functions, substantiating the computational background of the idea of quantics folding to higher dimensions. Explicit QTT expansions for a family of function-generated vectors/tensors will be presented. The theory is supported by numerical illustrations in electronic structure calculations, quantum molecular dynamics and stochastic PDEs.

http://personal-homepages.mis.mpg.de/bokh

An Adaptive Sparse Grid scheme for HJB equations
Keywords: HJB equations; Sparse Grids; Optimal Control
Mon, 09:50--10:15
  • Klompmaker, Irene (Institut für Mathematik, TU Berlin, Germany)
  • Bokanowski, Olivier (Université Paris-Diderot, France)
  • Garcke, Jochen (TU Berlin, Germany)
  • Griebel, Michael (Universität Bonn, Germany)

We consider an adaptive sparse grid approach for the numerical solution of Hamilton-Jacobi-Bellman (HJB) equations in high dimensions. HJB equations arise, for example, in the solution of optimal control problems. We concentrate on equations that describe problems in continuous time and state space with finite time horizon. We use spatially adaptive sparse grids for the discretization of the continuous state spaces and apply a semi-Lagrangian scheme based on the dynamic programming approach in order to approximate the HJB solution. We show numerical results where sparse grids enable us to treat problems in up to 6 dimensions.

The smoothing effect of integration in $\mathbb{R}^d$ and the ANOVA decomposition
Keywords: smoothing; ANOVA decomposition
Sun, 16:35--17:00
  • Kuo, Frances Y. (School of Mathematics and Statistics, University of New South Wales, Australia)
  • Griebel, Michael (Institut für Numerische Simulation, University of Bonn, Germany)
  • Sloan, Ian H. (School of Mathematics and Statistics, University of New South Wales, Australia)

We study the ANOVA decomposition of a $d$-variate function $f$ defined on the whole of $\mathbb{R}^d$, where $f$ is the maximum of a smooth function and zero (or $f$ could be the absolute value of a smooth function). Our study is motivated by option pricing problems. We show that under suitable conditions all terms of the ANOVA decomposition, except the one of highest order, can have unlimited smoothness. In particular, this is the case for arithmetic Asian options with both the standard and Brownian bridge constructions of the Brownian motion.

Exact MCMC inference for Markov switching diffusion models
Keywords: exact algorithms for diffusions; retrospective simulation; MCMC
Wed, 09:25--09:50
  • Latuszynski, Krzysztof (Department of Statistics, University of Warwick, United Kingdom)
  • Palczewski, Jan (University of Leeds)
  • Roberts, Gareth O (University of Warwick)

Consider a stochastic differential equation for $X= X_t$, $t \in [0,T]$, \begin{equation} dX_t = b(X_t, Y_t, \theta) dt + \sigma(X_t, \theta) \gamma (Y_t, \theta) dB_t. \end{equation} where $B = B_t$ is a Brownian motion, $\theta \in \Theta \subset \mathbb{R}^n$ is a multi-dimensional parameter and the switching process $Y = Y_t,$ $t \in [0,T]$ is a continuous-time Markov process on the state space $\mathcal{Y} = \{1, \dots, m\}$ with the intensity matrix $\Lambda = \{\lambda_{i,j}\}$. Models of this type naturally arise in finance and economics, where $Y$ represents unobservable factors that describe the state of the economy (e.g., a business cycle).

We assume that $X$ is observed at discrete time instances and propose an MCMC algorithm that samples from the exact full posterior \begin{equation} \pi(X_M, Y, \theta, \Lambda \;| \; X_D) \;\; \propto \;\; \pi_{\theta}(\theta) \pi_{\Lambda}(\Lambda) \pi( Y \, | \, \Lambda) \pi(X_M , X_D \, |\, Y, \theta) \end{equation} including the infinite-dimensional sample path of $X$ and avoiding any discretization error. The approach builds on a generalization of the Exact Algorithm of [1,2] and relies on retrospective sampling ideas.

[1] A. Beskos, O. Papaspiliopoulos, G.O. Roberts (2008). A factorisation of diffusion measure and finite sample path constructions. Methodol. Comput. Appl. Probab. 10(1), 85–104.
[2] A. Beskos, O. Papaspiliopoulos, G.O. Roberts, and P. Fearnhead (2006). Exact and computationally efficient likelihood-based estimation for discretely observed diffusion processes (with discussion). Journal of the Royal Statistical Society B, 68(3), 333-382.
Sparse grid quadrature as a knapsack problem
Thu, 14:25--14:50
  • Leopardi, Paul (Mathematical Sciences Institute, Australian National University, Australia)
  • Hegland, Markus (Australian National University)

As pointed out by (e.g.) Griebel and Knapek [1], some sparse grid problems can be formulated and solved as continuous knapsack problems. The resulting solution is optimal in terms of estimated profit. In the case of sparse grid quadrature on weighted tensor products of reproducing kernel Hilbert spaces, the profit for each item is just the squared norm of a product difference rule, and this can be calculated precisely. This talk describes a particular sparse grid quadrature algorithm and shows that it is optimal in this more precise sense.

[1] M. Griebel, and S. Knapek, Optimized general sparse grid approximation spaces for operator equations, Mathematics of Computation, vol. 78, no. 268, pp. 2223-2257, 2009.
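
The greedy solution of the knapsack relaxation takes only a few lines; in the sketch below the profit and cost models follow the usual sparse grid heuristics (geometric decay of squared-norm profits, geometric growth of point counts) and are assumptions for illustration. The selected index set comes out triangular, i.e. sparse-grid shaped.

```python
import numpy as np

# Continuous-knapsack view of sparse grid selection: each subspace
# (l1, l2) is an item with a cost (points) and a modeled profit
# (squared-norm error reduction); greedy fill by benefit-cost ratio.
d, level_max, budget = 2, 8, 500.0
items = []
for l1 in range(level_max):
    for l2 in range(level_max):
        cost = 2.0 ** (l1 + l2)                 # points in subspace (l1, l2)
        profit = 4.0 ** (-(l1 + l2))            # modeled profit (assumption)
        items.append((profit / cost, cost, profit, (l1, l2)))

items.sort(reverse=True)                        # best ratio first
total_cost = total_profit = 0.0
chosen = []
for ratio, cost, profit, idx in items:
    if total_cost + cost > budget:
        break                  # the continuous version would take a fraction
    total_cost += cost; total_profit += profit; chosen.append(idx)
print(f"{len(chosen)} subspaces, cost {total_cost:.0f}, profit {total_profit:.4f}")
```
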
Compactness properties of weighted summation operators on trees
Keywords: entropy numbers; summation operators; trees
Thu, 10:45--11:10
  • Lifshits, Mikhail (Dept Math.Mech., St.Petersburg State University; HIM (semester long term participant), Russia)
  • Linde, Werner (Friedrich-Schiller-Universität Jena)

We investigate compactness properties of weighted summation operators $V_{\alpha,\sigma}$ as mappings from $\ell_1(T)$ into $\ell_q(T)$ for some $q\in (1,\infty)$. Those operators are defined by $$ (V_{\alpha,\sigma} x)(t) :=\alpha(t)\sum_{s \succeq t}\sigma(s) x(s)\,,\quad t\in T\;, $$ where $T$ is a tree and $\alpha$ and $\sigma$ are given weights on $T$. Such operators are natural discrete analogues of Volterra operators.

We introduce a metric $d$ on $T$ such that compactness properties of $(T,d)$ imply two-sided estimates for $e_n(V_{\alpha,\sigma})$, the (dyadic) entropy numbers of $V_{\alpha,\sigma}$. The results are applied to concrete trees, e.g. moderately increasing, biased or binary trees, and to weights with $\alpha(t)\sigma(t)$ decreasing either polynomially or exponentially. We also give some probabilistic applications to Gaussian summation schemes on trees.

See more details at http://www.arxiv.org/abs/1006.3867.

Observations on alternating least squares
Mon, 14:25--14:50
  • Mohlenkamp, Martin (Mathematics, Ohio University, United States)

Approximating a multidimensional object by a sum of separable objects is an effective way to bypass the curse of dimensionality. The simplest, most robust, and most common algorithm to do so is Alternating Least Squares (ALS). We observe that ALS can be viewed as alternately setting partial gradients to zero or as alternately performing orthogonal projections. These dual perspectives allow us to analyze the convergence of ALS. For example, we can show that the $L^2$ norms of the increments form a sequence in $l^2$, but may not be in $l^1$. When regularization is included, we can show for example that the gradient is zero at all accumulation points.
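
A minimal rank-one ALS sweep makes the two viewpoints concrete: each update below solves the least-squares problem for one factor in closed form, which is the same as setting that factor's partial gradient to zero. The near-rank-one test tensor is an assumption.

```python
import numpy as np

# Rank-one ALS for a 3-way tensor: approximate T by a (x) b (x) c,
# cycling through the factors and solving each subproblem exactly.
rng = np.random.default_rng(0)
T = np.einsum('i,j,k->ijk', *rng.standard_normal((3, 15))) \
    + 0.01 * rng.standard_normal((15, 15, 15))   # near-rank-1 test tensor

a, b, c = rng.standard_normal((3, 15))
for sweep in range(20):
    # each line zeroes the partial gradient w.r.t. one factor
    a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
    b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
    c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))

err = np.linalg.norm(T - np.einsum('i,j,k->ijk', a, b, c))
print("residual norm:", err)
```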

Multi-level Monte Carlo for path-dependent Options in the Heston Model
Mon, 10:45--11:10
  • Neuenkirch, Andreas (FB Mathematik, TU Kaiserslautern, Germany)
  • Dereich, Steffen (Fachbereich Mathematik und Informatik, Philipps Universität Marburg)

The Heston model is a popular stochastic volatility model in mathematical finance. Although numerous discretization schemes have been proposed for the approximation of the corresponding SDE, no strong convergence rates are known so far, since the Heston SDE contains non-Lipschitz square-root coefficients.

In this talk, we will study an Euler-type method for the approximation of the Heston Model and derive its strong convergence rate. Based on this, we will build Multi-level Monte-Carlo estimators for the quadrature of path-dependent functionals in the Heston Model.

Fast QMC integration for lognormal random fields
Keywords: Quasi-Monte Carlo; Stochastic PDE
Tue, 09:50--10:15
  • Nichols, James (School of Mathematics and Statistics, University of New South Wales, Australia)
  • Sloan, Ian H. (University of New South Wales, Australia)
  • Kuo, Frances Y. (University of New South Wales, Australia)
  • Graham, Ivan G. (University of Bath, UK)
  • Scheichl, Rob (University of Bath, UK)
  • Schwab, Christoph (ETH Zurich, Switzerland)

Partial differential equations with lognormal random fields as coefficients are notoriously high-dimensional problems, for which integrals or expected values are consequently difficult to calculate. Here we present recent results on the convergence of QMC quadrature, in particular results relating to the "smoothness" of the random field, and examine computational results for the porous-media Darcy flow problem.

Strategies for optimal polynomial approximation of elliptic PDEs with random coefficients
Keywords: high dimensional polynomial approximation; sparse grids; PDEs with random coefficients
Tue, 11:10--11:35
  • Nobile, Fabio (Mathematics, MOX Laboratory, Politecnico di Milano, Italy)
  • Bäck, Joakim (Applied Mathematics and Computational Science, KAUST, Saudi Arabia)
  • Tamellini, Lorenzo (MOX, Department of Mathematics, Politecnico di Milano, Italy)
  • Tempone, Raul (Applied Mathematics and Computational Science, KAUST, Saudi Arabia)

Partial differential equations with stochastic coefficients are a suitable tool to describe systems whose parameters are not completely determined, either because of measurement errors or intrinsic lack of knowledge on the system. In this context, the goal is usually to compute an approximation of statistical moments of the state variables.

In the case of elliptic PDEs, this can be conveniently achieved using high order polynomial approximations of the state variables with respect to the random coefficients. We consider two different methods to build such approximations, namely the Stochastic Galerkin and Stochastic Collocation methods. Global polynomial approximations are sound since, for elliptic PDEs, the state variables usually exhibit high regularity in their dependence on the random parameters.

When the number of parameters is moderate, these methods can be remarkably more effective than classical sampling methods. However, contrary to the latter, the performance of polynomial approximations deteriorates as the number of random variables increases (curse of dimensionality); to prevent this, care has to be taken in the construction of the approximating polynomial space.

We present theoretical analyses of the structure of the problem that lead to practical strategies to construct optimal polynomial spaces in the case of Stochastic Galerkin approximations, as well as optimal sparse grids in the case of Stochastic Collocation approximations. We apply the strategy to some practical examples and show its effectiveness.

[1] I. Babuska, F. Nobile, and R. Tempone, "A stochastic collocation method for elliptic partial differential equations with random input data," SIAM Review, vol. 52, pp. 317–355, June 2010.
[2] J. Bäck, F. Nobile, L. Tamellini, and R. Tempone. Implementation of optimal Galerkin and collocation approximations of PDEs with random coefficients. Proceedings of CANUM 2010. To appear in ESAIM Proceedings.
[3] J. Bäck, F. Nobile, L. Tamellini, and R. Tempone. Stochastic spectral Galerkin and collocation methods for PDEs with random coefficients: a numerical comparison. In Spectral and High Order Methods for Partial Differential Equations, J.S. Hesthaven and E.M. Ronquist editors, volume 76 of Lecture Notes in Computational Science and Engineering, pages 43–62. Springer, 2011.
TT-approach to the solution of high-dimensional problems
Keywords: Tensor trains; High-dimensional approximations
Wed, 15:45--16:10
  • Oseledets, Ivan (Institute of Numerical Mathematics, Russian Academy of Sciences, Russia)

Solution of high-dimensional problems requires low-parametric representations of multivariate functions and their discrete representations: multidimensional arrays, or tensors. By a tensor we mean an array with $d$ indices, $A(i_1,\ldots,i_d)$. Such tensors appear naturally from the discretization of multivariate functions on tensor grids. I will discuss how to approximate such tensors by tensors in the Tensor Train format, or simply TT-format. A tensor $\mathbf{A}$ is said to be in the TT-format if $$A(i_1,\ldots,i_d) = G_1(i_1) G_2(i_2) \ldots G_d(i_d),$$ where $G_k(i_k)$ is an $r_{k-1} \times r_k$ matrix for each fixed $i_k$, and $r_0 = r_d = 1$ to make the matrix-by-matrix product a number. The numbers $r_k$ are called TT-ranks. The TT-format comes with fast linear algebra (addition, matrix-by-vector multiplication, etc.), thus it looks promising for the approximation of high-dimensional tensors. The result of such operations is also in the TT-format, but the ranks may grow, so a certain rounding is needed, i.e. the approximation of a given tensor by another one with smaller TT-ranks but within a prescribed accuracy. Such a procedure can be implemented by a sequence of QR and SVD decompositions and is fast and robust. With its help one can already implement standard iterative methods for the solution of high-dimensional equations and eigenvalue problems.

The TT-format is based on the matrix SVD, which is connected with low-rank approximation. Low-rank approximation, in turn, is connected to the so-called skeleton decomposition, which is able to reconstruct a low-rank matrix from $r$ of its rows and columns, where $r$ is the rank. Effective algorithms for the computation of the skeleton decomposition are based on cross algorithms. The TT-format has a direct generalization of this beautiful formula: a rank-$r$ tensor can be recovered exactly from $dnr^2$ of its entries. Obvious applications are black-box approximations of "difficult" multivariate functions, since TT-interpolation is a nonlinear interpolation scheme based on the assumption that a separable approximation to the function exists. However, such an approximation, also known as the canonical one, is not computable numerically, while the TT-approximant is.

There are several important applications for tensor methods: the approximation of multiparametric functions and the solution of stochastic PDEs (here a comparison with QMC methods is surely very important), the approximation of potential energy surfaces (PES) in quantum chemistry, the solution of the high-dimensional eigenvalue problem coming from the second quantization approach to the FCI method, and the solution of the molecular Schrödinger equation in quantum molecular dynamics. These are "truly" high-dimensional problems. A major breakthrough of recent years is that high-dimensional approximation can be applied to formally low-dimensional objects through quantization (tensorization). The simplest way to illustrate this approach is to consider a univariate function, $f(x)$, on a uniform grid with $2^d$ points. The vector of values on such a grid can be considered as a $d$-dimensional tensor with mode sizes equal to $2$. Thanks to the stability of the TT-format, the TT-decomposition of such a tensor can be computed fast for $d \leq 24$ even on a laptop, and numerical experiments show that for many functions the QTT-ranks (QTT stands for Quantized TT format) are really small. The storage of the QTT-representation is logarithmic in the number of grid points. This technique can be generalized to higher dimensions. The QTT-format has a deep connection with wavelets and non-stationary subdivision schemes, and I will illustrate this with examples.
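
The TT-SVD and the quantization idea fit in a few lines; the sketch below reshapes the values of $\sin(4\pi x)$ on a grid with $2^d$ points into a $2\times\cdots\times 2$ tensor and computes its QTT cores by successive truncated SVDs (the test function and the tolerance are illustrative assumptions; a sampled sine has exact QTT rank 2, so all printed ranks are 2, independent of $d$).

```python
import numpy as np

# TT-SVD: sweep over the modes, reshape, truncate an SVD, keep the left
# factor as a core G_k of shape (r_{k-1}, n_k, r_k).
def tt_svd(tensor, eps=1e-10):
    dims = tensor.shape
    cores, r = [], 1
    mat = tensor.reshape(r * dims[0], -1)
    for k, nk in enumerate(dims[:-1]):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))
        cores.append(U[:, :rank].reshape(r, nk, rank))
        r = rank
        mat = (s[:rank, None] * Vt[:rank]).reshape(r * dims[k + 1], -1)
    cores.append(mat.reshape(r, dims[-1], 1))
    return cores

# Quantization: 2^d function values viewed as a 2 x 2 x ... x 2 tensor.
d = 20
x = np.linspace(0, 1, 2 ** d)
cores = tt_svd(np.sin(4 * np.pi * x).reshape((2,) * d))
print("QTT ranks:", [c.shape[2] for c in cores[:-1]])
```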

A MATLAB toolbox implementing several of the discussed algorithms, the TT-Toolbox (current version 2.0), can be obtained from the author or at the website http://spring.inm.ras.ru/osel.

A multigrid method for PDEs on spatially adaptive sparse grids
Keywords: sparse grids; PDE; multigrid
Mon, 09:25--09:50
  • Peherstorfer, Benjamin (Department of Informatics, Technische Universität München, Germany)
  • Bungartz, Hans-Joachim (Department of Informatics, Technische Universität München, Germany)
  • Zenger, Christoph (Department of Informatics, Technische Universität München, Germany)

To solve a PDE on spatially adaptive sparse grids with the FE method, we need a matrix-vector product, i.e. a way to apply the system matrix to a vector. Numerous works have been published on how to best compute this product in order to obtain the solution of the system of linear equations efficiently, see e.g. [1,4]. However, only very limited or specialized multigrid approaches have been presented.

We present a multigrid method which relies on a dimension-wise (i.e. ANOVA-like) decomposition of the hierarchically lower parts of the function. Note that the natural relationship between sparse grids and such decompositions has been shown in e.g. [3,2]. Whereas the algorithm in [1] needs $\mathcal{O}(2^d)$ recursive calls to compute the bilinear form corresponding to the hierarchically lower and higher parts of the function, we simply store these hierarchically lower parts. Representing the function in the nodal point basis on the current grid, no hierarchically higher parts of the function exist. An extra benefit besides the method's multigrid performance is that its dimension-wise decomposition of the hierarchically lower parts gives us some knowledge concerning the importance of the different dimensions.

We present first numerical experiments for elliptic PDEs in moderately high dimensions which confirm the expected multigrid performance. Both regular and spatially adaptive sparse grids are used. These examples further demonstrate that the additional amount of storage required for the hierarchically lower parts can be kept under control if only non-zero values are stored. Finally, we show that the components of the dimension-wise decomposition do indeed reflect the characteristics of the solution to some extent.

[1] Robert Balder and Christoph Zenger. The solution of multidimensional real Helmholtz equations on sparse grids. SIAM Journal on Scientific Computing, 17(3):631–646, 1996.
[2] Christian Feuersänger. Sparse Grid Methods for Higher Dimensional Approximation. PhD thesis, Universität Bonn, September 2010.
[3] M. Griebel. Sparse grids and related approximation schemes for higher dimensional problems. In L. Pardo, A. Pinkus, E. Suli, and M.J. Todd, editors, Foundations of Computational Mathematics (FoCM05), Santander, pages 106–161. Cambridge University Press, 2006.
[4] Christoph Schwab and Radu-Alexandru Todor. Sparse finite elements for elliptic problems with stochastic loading. Numerische Mathematik, 95(4):707–734, October 2003.
Spatially adaptive sparse grid regression
Keywords: Sparse Grids; Spatial Adaptivity; Regression
Mon, 09:00--09:25
  • Pflüger, Dirk (Department of Informatics, Technische Universität München, Germany)

Regression, i.e. (high-dimensional) function reconstruction from scattered data, is a common problem in data-driven tasks. Typically, meshfree methods are employed to circumvent the curse of dimensionality. To deal with regression by discretizing the feature space, the sparse grid combination technique has been successfully employed [1]. Due to their primarily data-independent approach, sparse grids enable one to deal with massive amounts of data.
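
For reference, in its standard form (not specific to this talk) the combination technique combines solutions $f_{\mathbf{l}}$ computed on anisotropic full grids of multi-level $\mathbf{l}$ as \[ f_n^{c} \;=\; \sum_{q=0}^{d-1} (-1)^{q} \binom{d-1}{q} \sum_{\vert\mathbf{l}\vert_1 = n - q} f_{\mathbf{l}}, \] so that only a moderate number of small full-grid problems has to be solved instead of one huge one.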

Meanwhile, the direct sparse grid approach has also been applied to a range of data-driven problems such as classification [2]. It allows one to adapt to the problem at hand in a spatially adaptive way, which is crucial for many problems. We now present spatially adaptive sparse grids for regression problems. Using a suitable (problem-aware) choice of basis functions, they provide straightforward criteria for adaptive refinement, built-in dimensional adaptivity, and an ANOVA-style decomposition of the problem at hand. Where the effective dimensionality of the problem is moderate, high nominal dimensionalities as well as large data sets can be dealt with. This is confirmed by results for both artificial datasets and real-world ones (e.g., from astrophysics and finance).

[1] Jochen Garcke and Markus Hegland. Fitting multidimensional data using gradient penalties and the sparse grid combination technique. Computing, 84(1-2):1–25, April 2009.
[2] Dirk Pflüger. Spatially Adaptive Sparse Grids for High-Dimensional Problems. Dr. Hut, München, 2010.
Recovery of functions in high dimensions via compressive sensing
Keywords: Function Recovery; Compressive Sensing; high dimensions
Thu, 09:00--09:25
  • Rauhut, Holger (Hausdorff Center for Mathematics, University of Bonn, Germany)
  • Cohen, Albert (University of Paris VI)
  • DeVore, Ronald A. (Texas A&M University)
  • Foucart, Simon (Drexel University)

Compressive sensing predicts that sparse vectors can be recovered efficiently from highly undersampled measurements. It is known in particular that multivariate sparse trigonometric polynomials can be recovered from a small number of random samples. Classical methods for recovering functions in high spatial dimensions usually suffer from the curse of dimensionality: the number of samples scales exponentially in the dimension (the number of variables of the function). We introduce a new model of functions in high dimensions that uses "sparsity with respect to dimensions". More precisely, we assume that the function is very smooth in most of the variables and is allowed to be rather rough in only a small but unknown set of variables. This translates into a certain sparsity model on the Fourier coefficients. Using techniques from compressive sensing, we are able to recover functions in this model class efficiently from a small number of samples. In particular, this number scales only logarithmically in the spatial dimension, in contrast to the exponential scaling of classical methods.
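
As a generic illustration of such a recovery step (plain orthogonal matching pursuit with Gaussian measurements; the setting of the talk uses sampled trigonometric polynomials instead), a sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, s = 400, 60, 5
    x = np.zeros(n)
    x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)   # s-sparse vector
    A = rng.standard_normal((m, n)) / np.sqrt(m)                       # m << n measurements
    y = A @ x

    S, r = [], y.copy()
    for _ in range(s):
        S.append(int(np.argmax(np.abs(A.T @ r))))        # column most correlated with residual
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ coef                           # new residual
    x_hat = np.zeros(n)
    x_hat[S] = coef
    print(np.linalg.norm(x - x_hat))                     # small with high probability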

Multi-level algorithms for infinite-dimensional integration
Keywords: integration on the sequence space; variable subspace sampling; QMC-rules
Mon, 11:35--12:00
  • Ritter, Klaus (Department of Mathematics, TU Kaiserslautern, Germany)

Stochastic multi-level algorithms have turned out to be a powerful tool for variance reduction in different settings. In this talk, emphasis will be given to the study of integration on (subsets of) the product space ${\mathbb R}^{\mathbb N}$, which is motivated by, e.g., series expansions of stochastic processes. We show how to combine the multi-level technique and variable subspace sampling with (randomized) quasi-Monte Carlo rules for integration on high-dimensional spaces ${\mathbb R}^d$. The analysis is based on known tractability results, which provide uniform error and cost bounds for high-dimensional integration as $d \to \infty$. Our analysis is complemented by numerical experiments for path-dependent options.
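
The multi-level construction rests on the standard telescoping identity (stated here in its generic form) \[ \mathbb{E}[P_L] \;=\; \mathbb{E}[P_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}[P_\ell - P_{\ell-1}], \] where $P_\ell$ denotes the integrand restricted to the $\ell$-th subspace; each difference is estimated independently, with fewer samples spent on the more expensive finer levels.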

Joint work with Fred Hickernell (IIT, Chicago), Sebastian Mayer (TU Darmstadt), Thomas Müller-Gronbach (Universität Passau), and Ben Niu (IIT, Chicago). Supported by the DFG within Priority Program 1324.

Error estimation for the integration with respect to log-concave densities
Keywords: Markov chain Monte Carlo; Metropolis algorithm; log-concave
Wed, 09:50--10:15
  • Rudolf, Daniel (Mathematical Institute, University Jena, Germany)

Let a function $f$ be given and let $\rho>0$ be an unnormalized, log-concave density. Suppose that $D\subset \mathbb{R}^d$ is a convex body with additional properties. The goal is to compute the integral of $f$ with respect to the distribution determined by $\rho$, i.e. \[ S(f,\rho)=\frac{\int_D f(y) \rho(y) {\rm d}y}{\int_D \rho(x) {\rm d}x} \] is the desired quantity. Direct sampling from the desired distribution, say $\mu_\rho$, is in general not possible. Thus, it is reasonable to compute the average over a suitable Markov chain $(X_n)_{n\in \mathbb{N}}$ which approximates $\mu_\rho$. Hence \begin{equation*} S_{n,n_0}(f,\rho):=\frac{1}{n} \sum_{i=1}^n f(X_{i+n_0}) \end{equation*} is the suggested approximation of $S(f,\rho)$, where $n_0$ is called the burn-in. The Markov chains considered, e.g. a suitable Metropolis algorithm or hit-and-run algorithm, fulfill certain convergence conditions. For example, the Metropolis algorithm based on a ball walk has an absolute $L_2$-spectral gap, see [1]. We use these convergence properties to apply error bounds for Markov chain Monte Carlo methods and obtain an error estimate for the computation of $S(f,\rho)$, see [2,3].
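
A minimal sketch of the estimator $S_{n,n_0}(f,\rho)$ with illustrative choices (the body $D=[-1,1]^d$, the log-concave density $\rho(x)=e^{-|x|^2}$, the integrand $f(x)=x_1^2$ and a fixed step size are assumptions for the demonstration, not those of the talk):

    import numpy as np

    rng = np.random.default_rng(1)
    d, delta, n, n0 = 5, 0.5, 20000, 2000
    inside = lambda x: bool(np.all(np.abs(x) <= 1.0))   # D = [-1,1]^d
    log_rho = lambda x: -np.sum(x**2)                   # rho(x) = exp(-|x|^2), log-concave
    f = lambda x: x[0]**2

    x, total = np.zeros(d), 0.0
    for i in range(n + n0):
        u = rng.standard_normal(d)
        y = x + delta * rng.random() ** (1.0 / d) * u / np.linalg.norm(u)  # uniform step in a delta-ball
        if inside(y) and np.log(rng.random()) < log_rho(y) - log_rho(x):
            x = y                                       # Metropolis acceptance step
        if i >= n0:                                     # discard the burn-in
            total += f(x)
    print(total / n)                                    # approximates S(f, rho)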

[1] P. Mathé and E. Novak. Simple Monte Carlo and the Metropolis algorithm. Journal of Complexity, 23(4-6):673–696, 2007.
[2] D. Rudolf. Explicit error bounds for lazy reversible Markov chain Monte Carlo. J. Complexity, 25(1):11–24, 2009.
[3] D. Rudolf. Explicit error bounds for Markov chain Monte Carlo. PhD thesis, University Jena, 2010.
New cross interpolation algorithm and Krylov/Wedderburn scheme for TT format
Keywords: TT format; cross algorithm; Krylov subspaces
Wed, 14:50--15:15
  • Savostyanov, Dmitry (Institute of Numerical Mathematics, Russian Academy of Sciences, Russia)
  • Oseledets, Ivan (Institute of Numerical Mathematics, Russian Academy of Sciences, Russia)

The tensor train (TT) format was recently proposed in [7] for the data-sparse representation of high-dimensional tensors based on separation of variables. The TT format combines the advantages of both the canonical and the Tucker format: the number of parameters does not grow exponentially with dimension, and the decomposition/approximation problem is stable and can be solved by SVD-based methods.

To break the curse of dimensionality, approximation methods that do not require all elements of the initial tensor are crucial. For matrices such an algorithm is well known and used, for example, in the mosaic/$\mathcal{H}$-matrix formats [9]. It interpolates the given matrix on the positions of several columns and rows and then checks the approximation accuracy over a random set of elements. The choice of the proper positions for the interpolation depends on the matrix and is very important. In [2] it is shown that good positions can be selected using the maximum-volume principle; in [3] the properties of maximum-volume and closely related dominating submatrices are studied in more detail. For three-dimensional tensors, cross interpolation algorithms in the Tucker format [6] are also based on the maximum-volume principle.
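
A compact sketch of one common greedy variant of maximum-volume row selection for a tall $n\times r$ matrix (it assumes the initial $r\times r$ block is nonsingular):

    import numpy as np

    def maxvol(A, tol=1.01, max_iter=100):
        n, r = A.shape
        I = np.arange(r)                      # start from the first r rows
        for _ in range(max_iter):
            B = A @ np.linalg.inv(A[I])       # coefficients of all rows in the basis A[I]
            i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
            if abs(B[i, j]) <= tol:           # selected block is dominant: stop
                break
            I[j] = i                          # swapping rows multiplies the volume by |B[i, j]|
        return I

    print(maxvol(np.random.rand(50, 3)))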

For arbitrary dimension an interpolation method for the TT format was proposed in [8]. However, sometimes its convergence is not satisfactory even for data with "good" structure. To improve this, we propose a new method combining the maximum-volume interpolation idea with the DMRG approximation scheme, popular in the quantum chemistry community. We apply the resulting algorithm to the fast compression of high-dimensional and large-scale data using the quantization idea proposed in [5,4].

Also, the DMRG scheme can be improved using the Krylov/Wedderburn framework. Krylov methods are the methods of choice for approximating large matrices that allow fast matrix-by-vector multiplication. In [1] we generalized them to three-dimensional tensors. In this talk we discuss the generalization to larger dimensions.

This work is supported by RFBR grants 09-01-91332 (with DFG), 09-01-00565, 10-01-00757.

[1] S. A. Goreinov, I. V. Oseledets, and D. V. Savostyanov. Wedderburn rank reduction and Krylov subspace method for tensor approximation. Part 1: Tucker case. Preprint 2010-01, INM RAS, Moscow, 2010.
[2] S. A. Goreinov and E. E. Tyrtyshnikov. The maximal-volume concept in approximation by low-rank matrices. Contemporary Mathematics, 208:47–51, 2001.
[3] S.A. Goreinov, I.V. Oseledets, D.V. Savostyanov, E.E. Tyrtyshnikov, and N.L. Zamarashkin. How to find a good submatrix. In V. Olshevsky and E. Tyrtyshnikov, editors, Matrix Methods: Theory, Algorithms, Applications, pages 247–256. World Scientific Publishing, 2010.
[4] B. N. Khoromskij and I. V. Oseledets. Quantics-TT approximation of elliptic solution operators in high dimensions. J. Numer. Math., (2011).
[5] I. V. Oseledets. Approximation of $2^d \times 2^d$ matrices using tensor decomposition. SIAM J. Matrix Anal. Appl., 31(4):2130–2145, 2010.
[6] I. V. Oseledets, D. V. Savostianov, and E. E. Tyrtyshnikov. Tucker dimensionality reduction of three-dimensional arrays in linear time. SIAM J. Matrix Anal. Appl., 30(3):939–956, 2008.
[7] I. V. Oseledets and E. E. Tyrtyshnikov. Breaking the curse of dimensionality, or how to use SVD in many dimensions. SIAM J. Sci. Comp., 31(5):3744–3759, 2009.
[8] I. V. Oseledets and E. E. Tyrtyshnikov. TT-cross algorithm for the approximation of multidimensional arrays. Linear Algebra Appl., 432(1):70–88, 2010.
[9] E. E. Tyrtyshnikov. Incomplete cross approximation in the mosaic–skeleton method. Computing, 64(4):367–380, 2000.
Optimization in novel tensor formats and the DMRG and related algorithms
Keywords: HT and TT Tensors; direct optimization; Alternating least squares and DMRG
Mon, 14:00--14:25
  • Schneider, Reinhold (Mathematical Institute, TU Berlin, Germany)
  • Holtz, Sebastian
  • Rohwedder, Thorsten

We will discuss recent progress in tensor product approximation concerning the hierarchical Tucker representation introduced by Hackbusch. We will focus mainly on TT-tensors (Oseledets & Tyrtyshnikov), which can be written in a matrix product representation (matrix product states (MPS) in quantum information theory). We consider numerical methods solving an optimization problem within a prescribed format, focusing on (i) $L_2$-approximation, (ii) linear equations and (iii) eigenvalue problems. We consider the alternating linear scheme, a generalization of the alternating least squares (ALS) approach, for optimization in the TT format. A modification (MALS) applied to $N$-body fermionic systems resembles the density matrix renormalization group (DMRG) algorithm. Furthermore, we propose an iterative linearization scheme based on projection onto the tangent space of the manifold of TT tensors. Finally, the convergence behaviour will be investigated: local linear convergence, and under certain assumptions even quadratic convergence, can be shown.

Convergence rates of sparse tensor QMC-FEM discretizations for a class of PDEs with infinite-dimensional noise
Keywords: Stochastic PDEs; Multilevel QMC
Tue, 09:25--09:50
  • Schwab, Christoph (Mathematics, ETH Zurich, Switzerland)
  • Kuo, Frances Y. (Department of Mathematics, Univ. New South Wales, Australia)
  • Sloan, Ian H. (Department of Mathematics, Univ. New South Wales, Australia)

We consider elliptic partial differential equations with infinite-dimensional noise in the coefficients.

We present an analysis of the PDE to establish new sufficient conditions on the random inputs' fluctuations in order for several families of QMC rules to converge at rates independent of the dimension of the noise.

We present convergence rates of sparse tensor discretizations by QMC methods in random space and by Finite Element Methods in physical space and time.

Extensions to parabolic and hyperbolic PDEs will be indicated.

The results presented here are joint work with Frances Kuo and Ian Sloan. They build on a new unification of several error bounds for QMC quadratures, which is presented in a related talk by I. Sloan.

QMC for noisy PDE
Keywords: QMC
Tue, 09:00--09:25
  • Sloan, Ian H. (Mathematics and Statistics, University of New South Wales, Australia)
  • Kuo, Frances Y. (University of New South Wales, Australia)
  • Schwab, Christoph (ETH Zurich, Switzerland)

Partial differential equations with randomness in the coefficients are now attracting serious attention from computational scientists, often under the heading of "uncertainty quantification". In ongoing joint work with Christoph Schwab (ETH) and Frances Kuo (UNSW) we have found that QMC combined with a finite element solver is an attractive alternative to Monte Carlo for the computation of expected values, but that to get the best results for a model problem the standard QMC theory needs some tweaking. In this lecture QMC theory will be presented with a refreshing twist, guided by the present application to noisy PDEs.

Anisotropic fast Gauss transform
Keywords: Fast Gauss Transform; non-local operator
Wed, 11:35--12:00
  • Tausch, Johannes (Mathematics, Southern Methodist University, United States)

Many problems in statistical learning depend on computing the discrete Gauss transform efficiently. The task is to evaluate the potentials $$ \Phi(x_i) = \sum_{j=1}^N \exp\left( -{|x_i-y_j|^2\over \delta} \right) w_j $$ for given sources $y_j \in [0,1]^d$, targets $x_i\in [0,1]^d$, weights $w_j$ and variance $\delta$.

The direct evaluation of the Gauss transform is an $O(N^2)$ computation, which is impractical for large data sets. However, a variant of the fast multipole method can be used to evaluate the potentials in optimal $O(N)$ complexity. This is the fast Gauss transform (FGT), introduced by Greengard and Strain [1]. The algorithm subdivides the dataset into smaller clusters and approximates interactions between two clusters using Hermite expansions. The algorithm has $O(N)$ complexity, but the constant grows exponentially with the dimension because of the increasing number of interacting clusters and the growing number of terms in the Hermite expansion.
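
For concreteness, the direct $O(N^2)$ evaluation that the FGT accelerates can be written as a baseline sketch:

    import numpy as np

    def gauss_transform(x, y, w, delta):
        # pairwise squared distances |x_i - y_j|^2, shape (targets, sources)
        d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / delta) @ w

    rng = np.random.default_rng(2)
    N, d = 1000, 3
    x, y, w = rng.random((N, d)), rng.random((N, d)), rng.standard_normal(N)
    phi = gauss_transform(x, y, w, delta=0.1)   # all potentials Phi(x_i) at once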

Since the Gauss kernel is an entire function, it can be well approximated in $[0,1]^{2d}$ by a multivariate Chebyshev series [2]. The resulting degenerate kernel readily leads to a fast algorithm. Unlike previous FGT algorithms, this approach does not cluster interactions and works well if the sources and targets are fairly uniformly distributed and the variance is not too small.

On the other hand, many realistic datasets have low-dimensional features which the current implementation of the Chebyshev-based method cannot exploit. In this case we cluster the data and represent the data points in the principal axes of the cluster. We now have to approximate the Gauss kernel in a $2d$-dimensional rectangular box with widely varying side lengths. This anisotropy leads to new theoretical and algorithmic issues that will be discussed in the talk.

[1] L. Greengard and J. Strain. The fast Gauss transform. SIAM J. Sci. Comput., 12:79–94, 1991.
[2] J. Tausch and A. Weckiewicz. Multidimensional fast Gauss transforms by Chebyshev expansions. SIAM J. Sci. Comput., 31(5):3547–3565, 2009.
Difference principal component analysis for crash simulation results
Tue, 14:00--14:25
  • Thole, Clemens-August (Numerische Software, Fraunhofer Institut SCAI, Germany)

Potential scatter of simulation results, caused for example by buckling, is still a challenging issue for the predictability of crash simulation results. Principal component analysis (PCA) is a well-known mathematical method for data analysis. In order to characterize scatter, PCA was applied to the simulation results from a number of runs, using all node positions at all time steps. For industrially relevant problems the size of the database is larger than 100 GBytes (even if compressed by FEMzip). As a result, the major components dominating the differences between the simulation results become available. Since PCA is a mathematically based method, the selected modes do not separate different physical effects like buckling at different parts of the model. Rather, PCA tries to maximize the captured variation by combining several physical effects into one mode.
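
Generically, such a PCA of an ensemble of runs amounts to an SVD of the centered snapshot matrix; a sketch with random placeholder data (not crash data):

    import numpy as np

    rng = np.random.default_rng(3)
    runs, dofs = 20, 5000
    X = rng.standard_normal((runs, dofs))      # one row per run, columns = stacked node positions
    Xc = X - X.mean(axis=0)                    # center over the runs
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    print(s**2 / np.sum(s**2))                 # share of the total scatter per mode
    modes = Vt[:5]                             # dominant scatter modes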

Difference PCA (DPCA) applies PCA to the results for each part and time step. By analyzing the related covariance matrices, the local dimension of the scatter subspace can be identified and correlations between the scatter at different places can be analyzed. Using DPCA, different origins of scatter can be identified and physically meaningful components can be determined. The talk introduces the approach and shows results for an industrial model.

Swendsen-Wang beats heat-bath
Keywords: Swendsen-Wang; Rapid mixing; Ising model
Wed, 09:00--09:25
  • Ullrich, Mario (Mathematical Institute, FSU Jena, Germany)

Let $G=(V,E)$ be a graph with finite vertex set $V$ and edge set $E\subseteq\binom{V}{2}$, where $\binom{V}{2}$ is the set of all 2-element subsets of $V$, and set $N:=\vert V\vert$. The Ising model is defined as the set of possible configurations $\Omega_{\rm IS}=\{-1,1\}^V$, together with the probability measure \[ \pi_\beta(\sigma) \;:=\; \pi^{G}_\beta(\sigma) \;=\; \frac1{Z_\beta}\, \exp\biggl\{\beta\sum_{\{u,v\}\in E} \boldsymbol{1}_{\{\sigma(u)=\sigma(v)\}}\biggr\}, \] where $Z_\beta$ is the normalization constant and $\beta\ge0$ is called the inverse temperature.

The goal is to generate an (approximate) sample with respect to $\pi_\beta$ in time that is polynomial in the number of vertices $N$. This is possible with "local" Markov chains (like the Metropolis algorithm) if $\beta$ is at most some critical value $\beta_c$, but impossible for larger values.

We prove that the Swendsen-Wang algorithm cannot mix much more slowly than any local Markov chain if the maximum degree of the underlying graph is bounded. More precisely, if the maximum degree of $G$ is $\Delta$, then \[ \lambda(P_{\rm SW}) \;\ge\; c^\Delta\,\lambda(P_{\rm L}), \] where $P_{\rm SW}$ (resp. $P_{\rm L}$) is the transition matrix of the Swendsen-Wang (resp. any local) Markov chain, $\lambda(\cdot)$ denotes the spectral gap and $c$ is a constant that depends only on the temperature. As a consequence we obtain rapid mixing of the Swendsen-Wang algorithm for the two-dimensional Ising model at high temperatures.
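
For the measure above, one Swendsen-Wang update opens each edge joining equal spins with probability $1-e^{-\beta}$ and then assigns every resulting cluster a fresh uniform spin; a minimal sketch:

    import math
    import random

    def swendsen_wang_step(spins, edges, beta):
        n, parent = len(spins), list(range(len(spins)))
        def find(v):                            # union-find with path halving
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        p = 1.0 - math.exp(-beta)
        for u, v in edges:
            if spins[u] == spins[v] and random.random() < p:
                parent[find(u)] = find(v)       # bond percolation on equal-spin edges
        new_spin = {}
        for v in range(n):
            root = find(v)
            if root not in new_spin:
                new_spin[root] = random.choice((-1, 1))
            spins[v] = new_spin[root]           # flip clusters independently
        return spins

    # example: 10 x 10 grid graph
    L = 10
    idx = lambda i, j: i * L + j
    edges = [(idx(i, j), idx(i, j + 1)) for i in range(L) for j in range(L - 1)]
    edges += [(idx(i, j), idx(i + 1, j)) for i in range(L - 1) for j in range(L)]
    spins = [random.choice((-1, 1)) for _ in range(L * L)]
    for _ in range(100):
        spins = swendsen_wang_step(spins, edges, beta=0.4)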

Furthermore, we present a modified version of the Swendsen-Wang chain that is applicable to planar graphs $G$ and prove that it is rapidly mixing at all temperatures. This seems to be the first Markov chain for the Ising model that is proven to mix in polynomial time at all temperatures.

[1] C. Borgs, J. T. Chayes, and P. Tetali. Tight Bounds for Mixing of the Swendsen-Wang Algorithm at the Potts Transition Point. ArXiv e-prints, November 2010.
[2] M. Ullrich. Rapid Mixing of a Modified Swendsen-Wang Algorithm for the Ising Model. in preparation.
Spline interpolation on sparse grids
Keywords: spline; sparse grids; Besov spaces of mixed smoothness
Sun, 14:00--14:25
  • Ullrich, Tino (Hausdorff-Center for Mathematics, Institute for Numerical Simulation, University of Bonn, Germany)
  • Sickel, Winfried (Mathematical Institute, Department of Mathematics and Computer Science, University of Jena, Germany)

We investigate the rate of convergence of interpolating splines with respect to sparse grids for Sobolev-Besov spaces with mixed derivative on the cube $[0,1]^d$. In particular, we study interpolation by tensor products of piecewise linear functions. The approximation is based on the Smolyak algorithm and uses only function samples taken from a sparse grid. Such a grid consists of considerably fewer grid points than the full Cartesian product grid. The topic has been studied intensively in the last few decades, and we were able to improve existing results significantly.
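
For orientation (standard counts, not specific to the results of this talk): a dyadic sparse grid of level $n$ in $d$ dimensions has \[ \mathcal{O}(2^{n}\, n^{d-1}) \] points, compared to $\mathcal{O}(2^{nd})$ points for the full Cartesian product grid of the same resolution.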

Optimal asymptotic bounds for spherical designs
Keywords: Spherical designs; Brouwer degree; Marcinkiewicz–Zygmund inequalities
Thu, 14:50--15:15
  • Viazovska, Maryna (MPIM Bonn, Bonn University, Germany)
  • Bondarenko, Andriy (Centre de Recerca Matemàtica, Campus de Bellaterra, 08193 Bellaterra (Barcelona), Spain, and Department of Mathematical Analysis, National Taras Shevchenko University, str. Volodymyrska, 64, Kyiv, 01033, Ukraine)
  • Radchenko, Danylo (Department of Mathematical Analysis, National Taras Shevchenko University, str. Volodymyrska, 64, Kyiv, 01033, Ukraine)

A spherical $t$-design is a finite set of $N$ points on the $d$-dimensional unit sphere $S^d$ such that the average value of any polynomial $P$ of degree $t$ or less over this set is equal to the average value of $P$ over the whole sphere. For each $N\ge c_dt^d$ we prove the existence of a spherical $t$-design on $S^d$ consisting of $N$ points, where $c_d$ is a constant depending only on $d$. This result proves the conjecture of Korevaar and Meyers concerning the optimal order of the minimal number of points in a spherical $t$-design on $S^d$ for fixed $d$.
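
In symbols, $\{x_1,\ldots,x_N\}\subset S^d$ is a spherical $t$-design if \[ \frac1N \sum_{i=1}^{N} P(x_i) \;=\; \frac{1}{\mu(S^d)} \int_{S^d} P(x)\, d\mu(x) \] for every polynomial $P$ of degree at most $t$, where $\mu$ denotes the surface measure on $S^d$.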

Approximation techniques for special eigenvalue problems
Keywords: Tensor decomposition schemes; Matrix Product States; Truncation
Wed, 16:10--16:35
  • Waldherr, Konrad (Institute of Computer Science, TU München, Germany)
  • Huckle, Thomas (Institute of Computer Science, TU München)
  • Schulte-Herbrüggen, Thomas (Institute of Chemistry, TU München)

The computation of the ground state (i.e. the eigenvector related to the smallest eigenvalue) is an important task when dealing with quantum systems. As the dimension of the underlying vector space grows exponentially in the number of physical sites, one has to consider appropriate subsets promising both convenient approximation properties and efficient computations.

For $1$D systems, Matrix Product States (MPS) are in use. Algorithms based on MPS scale only polynomially in the size of the physical system, yet still approximate ground states faithfully. As the set of matrix product states is not closed under addition, we need techniques that allow us to project a sum of MPS back onto the MPS space, which turns out to be a highly nonlinear approximation problem.

We will present two methods for truncating a given large MPS representation to a smaller one. Furthermore, we will present applications of such compression techniques to numerical methods for solving the above eigenvalue problems.
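
The basic building block of such truncations is a plain SVD cutoff of an unfolding matrix (a generic sketch, not the authors' full scheme):

    import numpy as np

    def truncate(M, rank):
        # keep the leading singular triplets: M ~ (U_r diag(s_r)) V_r^T
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U[:, :rank] * s[:rank], Vt[:rank]

    M = np.random.rand(64, 64)
    A, B = truncate(M, 8)
    print(np.linalg.norm(M - A @ B))   # equals the l2-norm of the discarded singular values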

Tractability of $L_\infty$-approximation on weighted Banach spaces of smooth functions
Keywords: Tractability; $L_\infty$-approximation
Sun, 14:25--14:50
  • Weimar, Markus (Mathematical Institute, Friedrich-Schiller-University Jena, Germany)

In this talk we study the worst case error of $L_\infty$-approximation for multivariate smooth functions $f \colon [0,1]^d \rightarrow \mathbb{R}$ from certain weighted Banach spaces $F_d^\gamma$. Since it is well known that we can reach an excellent rate of convergence with respect to $n$, the amount of information on $f$ used by an approximation algorithm, we especially consider the dependence of the error bounds on the dimension $d$. It turns out that this dependence, and therefore also the complexity of the problem, is strongly connected to the weight $\gamma$ which characterizes the function space.

We will present necessary and sufficient conditions for tractability in the case of product weights. The talk is based on [1].

[1] Markus Weimar. Tractability results for weighted Banach spaces of smooth functions. Submitted to J. Complexity.
Approximation of linear multivariate problems in the randomized setting
Keywords: Multivariate problems; Tractability; Lower Bounds
Thu, 15:45--16:10
  • Wozniakowski, Henryk (Computer Science Department, Columbia University, United States; Institute of Applied Mathematics, University of Warsaw, Poland)

We study the approximation of linear operators defined between Hilbert spaces in the randomized setting. We consider algorithms that use $n$ function values or $n$ arbitrary linear functionals. It is known that when arbitrary linear functionals may be used, randomization does not really help: it has the same power as the worst case setting. We want to verify when the use of function values in the randomized setting is of the same power as the use of arbitrary linear functionals in the randomized and worst case settings. We provide a number of practically important examples of linear operators and Hilbert spaces for which this is indeed the case, as well as examples for which it is not.

Uncertainty quantification in random heterogeneous media
Keywords: Stochastic PDEs; Uncertainty quantification
Tue, 16:30--17:15
  • Zabaras, Nicholas J. (Sibley School of Mechanical and Aerospace Engineering, Cornell University, United States)

Predictive modeling of physical processes in heterogeneous materials requires innovations in mathematical and computational thinking. While recent multiscale approaches have been successful in modeling the effects of microstructure on macroscopic response, a significant grand challenge remains in understanding the effects of microstructural and other uncertainties on the characterization of properties and on the design of heterogeneous materials. To address these problems, we are developing a mathematical framework that provides a paradigm shift in the predictive modeling of complex systems in the presence of uncertainties, in order to address two major limitations in modeling stochastic PDEs: (1) the stochastic inputs are mostly based on ad hoc models, and (2) the number of independent stochastic parameters is typically very high. To address the former, we are developing non-linear data-driven model reduction strategies to utilize experimentally available information based on low-order realistic models of input uncertainties. To address the latter, we are developing a low-complexity surrogate model of the high-dimensional stochastic multiscale system under consideration. We will discuss ideas based on manifold learning, kernel PCA, locally weighted projection regression methods in high dimensions, sparse Bayesian kernel techniques, and others. A number of examples will be discussed in the data-driven representation of random heterogeneous media and in modeling physical processes (deformation, thermal/hydrodynamic transport, etc.) in such media. We will emphasize both the underlying mathematical challenges and the significant technological impact of predictive modeling in random heterogeneous media.