Session abstracts

Session III.5 - Information-Based Complexity


 

Talks


Monday, June 19, 14:00 ~ 14:30

Kernel method for parametric PDE with doubled convergence rate

Ian Sloan

UNSW (Sydney), Australia

This talk, based on joint work with Frances Kuo and Vesa Kaarnioja, describes a periodic kernel method based on lattice points. It has a doubled rate of convergence compared to previously known estimates. With the use of “serendipitous” weight parameters the cost grows linearly with respect to both dimension and number of lattice points.
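The abstract does not spell out the construction, but the general shape of a kernel method at lattice points can be sketched. Below is a generic illustration (not the authors' method): interpolation with a product Korobov-type periodic kernel at rank-1 lattice points. The Fibonacci lattice, the kernel, and the weights `gamma` are all illustrative assumptions; the Gram matrix at lattice points is circulant, which is what makes fast lattice-aware implementations possible.

```python
import numpy as np

def korobov_kernel(x, y, gamma):
    # Product-form periodic kernel; omega is the Bernoulli-polynomial kernel
    # sum_{h != 0} e^{2 pi i h t} / h^2 = 2 pi^2 ({t}^2 - {t} + 1/6).
    t = np.mod(x[:, None, :] - y[None, :, :], 1.0)
    omega = 2.0 * np.pi**2 * (t**2 - t + 1.0 / 6.0)
    return np.prod(1.0 + gamma * omega, axis=2)

# rank-1 lattice (Fibonacci) points in d = 2: x_i = i * z / N mod 1
N, z = 89, np.array([1, 34])
pts = np.mod(np.outer(np.arange(N), z) / N, 1.0)

def f(x):  # smooth periodic test function
    return np.prod(1.0 + 0.5 * np.sin(2.0 * np.pi * x), axis=1)

gamma = np.array([1.0, 1.0])         # illustrative product weights
K = korobov_kernel(pts, pts, gamma)  # circulant: entry (i, k) depends on (i - k) mod N
a = np.linalg.solve(K, f(pts))       # an FFT-based solve in O(N log N) is also possible

def interpolant(x):
    return korobov_kernel(x, pts, gamma) @ a
```

By construction the interpolant reproduces `f` at the lattice nodes; the cost savings discussed in the talk come from exploiting the circulant structure rather than a dense solve.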



Monday, June 19, 14:30 ~ 15:00

On probabilistic stability for selected randomized schemes for ODEs

Tomasz Bochacik

AGH University of Science and Technology, Poland

We study the stability of certain randomized algorithms approximating solutions of ordinary differential equations. We adapt notions of mean-square stability and asymptotic stability, considered in [4] in the context of numerical methods for SDEs, to the case of randomized schemes for ODEs. Moreover, we introduce the notion of stability in probability. We investigate relations between these three types of probabilistic stability, describe probabilistic stability regions, and compare them with absolute stability regions for deterministic schemes, cf. [1,2,3]. We focus on randomized Taylor and Euler schemes.

[1] T. Bochacik. A note on the probabilistic stability of randomized Taylor schemes. Electron. Trans. Numer. Anal. 58, 101-114, 2023.

[2] T. Bochacik, P. Przybyłowicz. On the randomized Euler schemes for ODEs under inexact information. Numer. Algorithms 91, 1205–1229, 2022.

[3] T. Bochacik, M. Goćwin, P. M. Morkisz, P. Przybyłowicz. Randomized Runge-Kutta method - Stability and convergence under inexact information. J. Complex. 65, 101554, 2021.

[4] D. J. Higham. Mean-square and asymptotic stability of the stochastic theta method. SIAM J. Numer. Anal. 38, 753-769, 2000.

Joint work with Maciej Goćwin (AGH University of Science and Technology), Paweł M. Morkisz (AGH University of Science and Technology) and Paweł Przybyłowicz (AGH University of Science and Technology).



Monday, June 19, 15:00 ~ 15:30

Estimation of time-dependent parameters in SDEs-based models using neural networks

Andrzej Kałuża

AGH University of Science and Technology in Krakow, Poland

Estimation of time-dependent parameters in SDE-based models using neural networks is a complex problem with important applications in many fields, for example finance or energy consumption. Estimating time-dependent parameters is challenging due to the multitude of estimated values, and the most common approach is a simplification using piecewise-constant functions. In the presentation, we focus on estimating parameters in multivariate regression and stochastic differential equations (SDEs), our most practically relevant cases. In the more general setting provided in our paper, we introduce an estimation technique for unknown parameters based on discrete sampling of Markov processes. Specifically, we propose a novel approach based on neural networks to estimate time-dependent parameters in SDEs, which allows us to model the temporal evolution of the system more accurately and efficiently. The crucial part is formulating the loss function based on the maximum likelihood approach, which enables us to translate our approximation task into an optimization problem and use deep learning techniques and the implementation framework. We demonstrate the effectiveness of our approach through a series of experiments. Overall, our work contributes to the growing body of research on parameter estimation in SDE-based models and provides a valuable tool for researchers and practitioners in many fields.
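A minimal illustration of the maximum-likelihood loss mentioned above, under the assumption of an Euler-type Gaussian transition density (a sketch, not the authors' exact formulation):

```python
import numpy as np

def euler_nll(dx, mu, sigma, dt):
    # Negative log-likelihood of increments dx under the Euler scheme
    # X_{k+1} = X_k + mu_k * dt + sigma_k * sqrt(dt) * Z_k with Z_k ~ N(0, 1);
    # mu and sigma may be arrays (time-dependent parameters) or scalars.
    var = sigma**2 * dt
    return np.sum(0.5 * np.log(2.0 * np.pi * var) + (dx - mu * dt)**2 / (2.0 * var))

# simulate increments of a path with constant true parameters mu = 0, sigma = 1
rng = np.random.default_rng(1)
dt, n = 0.01, 2000
dx = rng.normal(0.0, np.sqrt(dt), size=n)

# the loss is smaller at the true parameter value than at a wrong one;
# replacing (mu, sigma) by a network evaluated at the grid times turns
# minimizing this quantity into a standard deep-learning training task
nll_true = euler_nll(dx, mu=0.0, sigma=1.0, dt=dt)
nll_bad = euler_nll(dx, mu=0.0, sigma=2.0, dt=dt)
```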

Joint work with Paweł M. Morkisz (AGH University of Science and Technology, Poland and NVIDIA, USA), Bartłomiej Mulewicz (Aigorithmics Sp. z o. o. Poland and AGH University of Science and Technology, Poland), Paweł Przybyłowicz (AGH University of Science and Technology, Poland) and Martyna Wiącek (AGH University of Science and Technology, Poland).



Monday, June 19, 15:30 ~ 16:00

On quadratures with optimal weights for spaces with bounded mixed derivatives

Michael Griebel

INS, University of Bonn, Germany

We discuss quadrature rules in $d$ dimensions with optimal weights for given quasi-Monte Carlo (QMC) point sets. Here we focus on spaces $H^r_{mix}$ with bounded $r$-th mixed derivatives. It turns out that we obtain a convergence rate of the order $O(N^{-r}\log(N)^{\alpha(d,r)})$ for some $\alpha$ which depends on the dimension $d$ and the smoothness $r$. The main order term $N^{-r}$ is thus a substantial improvement over conventional plain QMC, which achieves just a main order term of $N^{-1}$.

Moreover, we consider optimal weights for spaces $H^s_{mix}$ with bounded $s$-th mixed derivatives and use these weights in a quadrature rule for functions from $H^r_{mix}$. There are now two situations: $ s \gt r $ and $s \lt r$. We observe that we obtain an error of the order $O(N^{-\min(r,s)} \log(N)^{\beta(d,r,s)})$ for some $\beta$ which depends on the dimension $d$ and the smoothness $r$ and $s$.

This way, if we do not know the smoothness of our integrand class a priori, it makes sense to choose the weights optimal for some large smoothness $s$. Then we always obtain, uniformly, the best possible main error rate.
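The principle of optimal weights for a given point set can be made concrete in any reproducing kernel Hilbert space: the worst-case-optimal weights solve a linear system with the kernel Gram matrix. Below is a sketch in which a simple anchored kernel of mixed smoothness one stands in for $H^r_{mix}$; the kernel and the Fibonacci-lattice point set are illustrative assumptions, not the talk's setting.

```python
import numpy as np

def kernel(x, y):
    # tensor-product kernel of an anchored space of mixed smoothness 1 on [0,1]^d,
    # standing in for H^r_mix: k(x, y) = prod_j (1 + min(x_j, y_j))
    return np.prod(1.0 + np.minimum(x[:, None, :], y[None, :, :]), axis=2)

def kernel_mean(x):
    # b_i = int_{[0,1]^d} k(x_i, y) dy = prod_j (1 + x_ij - x_ij^2 / 2)
    return np.prod(1.0 + x - 0.5 * x**2, axis=1)

# a given QMC point set: a small Fibonacci lattice in d = 2
N, d = 55, 2
pts = np.mod(np.outer(np.arange(N), np.array([1, 34])) / N, 1.0)

K = kernel(pts, pts)
b = kernel_mean(pts)
w_opt = np.linalg.solve(K, b)   # optimal weights: minimize the worst-case error
w_qmc = np.full(N, 1.0 / N)     # plain QMC: equal weights

# squared worst-case error e(w)^2 = c - 2 w.b + w.K.w with c = int int k = (4/3)^d
c = (4.0 / 3.0)**d
e2_opt = c - 2.0 * w_opt @ b + w_opt @ K @ w_opt
e2_qmc = c - 2.0 * w_qmc @ b + w_qmc @ K @ w_qmc
```

By construction, the solved-for weights can never do worse than the equal QMC weights on the same points.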

Joint work with Uta Seidler (INS, University of Bonn, Germany).



Monday, June 19, 16:30 ~ 17:00

Weighted least-squares approximation in expected $L^2$ norm

Matthieu Dolbeault

Sorbonne Université, France

We investigate the problem of approximating a function $u$ in $L^2$ by a linear space of functions of dimension $n$, using only evaluations of $u$ at $m$ chosen points. A first approach, based on weighted least-squares at i.i.d. random points, provides a near-best approximation of $u$, but requires $m$ of order $n \log(n)$. A reduction to a linear sample size, while preserving the quality of approximation, can be obtained based on the solution to the Kadison-Singer problem. We improve on this result by using a randomized greedy strategy, which allows us to reduce the oversampling ratio $m/n$ and provides an algorithm of polynomial complexity.
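A minimal sketch of the first approach (weighted least squares at i.i.d. points drawn from the optimal density), here for univariate Legendre polynomials; the space, the sample sizes, and the test function are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
n = 5  # dimension of the approximation space (polynomials of degree < 5)

def basis(x):
    # Legendre polynomials, orthonormal w.r.t. the uniform measure dx/2 on [-1, 1]
    V = np.stack([legendre.legval(x, [0.0] * j + [1.0]) for j in range(n)], axis=-1)
    return V * np.sqrt(2.0 * np.arange(n) + 1.0)

def sample_optimal(m):
    # rejection sampling from the density k_n(x)/n w.r.t. dx/2, where
    # k_n = sum_j L_j^2 is the inverse Christoffel function (here k_n <= n^2)
    out = []
    while len(out) < m:
        x = rng.uniform(-1.0, 1.0)
        if rng.uniform() * n**2 < np.sum(basis(np.array([x]))**2):
            out.append(x)
    return np.array(out)

def weighted_least_squares(u, m):
    x = sample_optimal(m)
    V = basis(x)
    w = np.sqrt(n / np.sum(V**2, axis=1))  # least-squares weights (n / k_n)^{1/2}
    coef, *_ = np.linalg.lstsq(w[:, None] * V, w * u(x), rcond=None)
    return coef

u = lambda t: t**3 - t                  # lies in the space, so it is recovered exactly
coef = weighted_least_squares(u, m=40)  # m of order n log n
grid = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(basis(grid) @ coef - u(grid)))
```

Since the target lies in the approximation space, the weighted least-squares fit recovers it up to rounding; the talk's contribution concerns reducing the oversampling $m/n$ below the $\log n$ factor this naive scheme needs.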

Joint work with Albert Cohen (Sorbonne Université, France) and Abdellah Chkifa (Mohammed VI Polytechnic University, Morocco).



Monday, June 19, 17:00 ~ 17:30

On Least Squares Approximation Based on Random or Optimal Data

Mario Ullrich

JKU Linz, Austria

We study the $L_p$-approximation, $2\le p\le\infty$, of functions with the help of (unregularized) least squares methods based on "random" information, like function evaluations, and we compare this with the power of arbitrary algorithms based on arbitrary linear information, i.e., the best we can do theoretically.

Here, we survey results of the past five years that eventually led to a sharp comparison, showing that function evaluations are often enough for optimal results (in a worst-case sense).

Joint work with Matthieu Dolbeault (Sorbonne University, France), David Krieg (JKU Linz, Austria), Kateryna Pozharska (TU Chemnitz) and Tino Ullrich (TU Chemnitz).



Monday, June 19, 17:30 ~ 18:30

The power of random information: recent results

Mathias Sonnleitner

University of Passau, Germany

We survey recent developments in the study of random standard/linear information for numerical integration and approximation from an Information-based complexity point of view. In particular, we focus on asymptotic worst-case upper bounds as the number of measurements tends to infinity and compare random to optimal information. Regarding asymptotic optimality, linear algorithms based on weighted least squares have proven to be quite effective in different contexts which we try to present in a unified way.

This talk will be accessible also to non-experts interested in i.i.d. random measurements. Based on joint work with A. Hinrichs, D. Krieg, E. Novak and J. Prochno.



Tuesday, June 20, 14:00 ~ 14:30

Homogeneous algorithms for linear (?) problems

Peter Kritzer

RICAM, Austrian Academy of Sciences, Austria

In this talk, we consider linear problems in the worst case setting. It is known that, in general, linear algorithms are not optimal for such problems. However, as we will outline in this talk, homogeneous algorithms are. Furthermore, we discuss to what extent homogeneous algorithms can be helpful for problems that deviate from the assumptions usually made for linear problems.

Joint work with David Krieg (JKU Linz, Austria).



Tuesday, June 20, 14:30 ~ 15:00

On the Role of Adaption in Linear Problems

Stefan Heinrich

RPTU Kaiserslautern-Landau, Germany

The relation between $n$-th minimal errors of linear problems in the randomized non-adaptive and adaptive setting is studied. In a recent paper [1] the author solved a long-standing problem of Information-Based Complexity: Is there a constant $c \gt 0$ such that for all linear problems $\mathcal{P}$ the randomized non-adaptive and adaptive $n$-th minimal errors can deviate at most by the factor $c$? That is, does the following hold for all linear $\mathcal{P}$ and $n\in \mathbb{N}$ \[ e_n^{\rm ran-non} (\mathcal{P})\le ce_n^{\rm ran} (\mathcal{P}) \,? \] The analysis of vector-valued mean computation showed that the answer is negative. In this talk we discuss further aspects of this problem: large gaps between adaptive and non-adaptive minimal errors [2], infinite-dimensional problems with deviation tending to infinity with $n$, as well as scalar-valued problems: mean computation and integration.

References

[1] S. Heinrich, Randomized Complexity of Parametric Integration and the Role of Adaption I. Finite Dimensional Case, preprint, see https://www.uni-kl.de/AG-Heinrich/Publications.html

[2] S. Heinrich, Randomized Complexity of Vector-Valued Approximation, to appear in: A. Hinrichs, P. Kritzer, F. Pillichshammer (eds.). Monte Carlo and Quasi-Monte Carlo Methods 2022. Springer Verlag. Preprint see https://www.uni-kl.de/AG-Heinrich/Publications.html



Tuesday, June 20, 15:00 ~ 15:30

Problems in the worst case approximation of linear operators in the presence of noise

Leszek Plaskota

University of Warsaw, Poland

Information-based complexity (IBC) deals with the computational complexity of solving problems for which information is partial, noisy, and priced. Most IBC research is, however, devoted to problems for which information is partial and exact, despite the fact that noise is a frequent guest in real-world computational problems. Moreover, the presence of noise enriches the computational model and leads to interesting theoretical questions. In this talk we present some solved and open problems related to the worst case complexity of approximating linear operators from information contaminated by bounded or Gaussian noise.



Tuesday, June 20, 15:30 ~ 16:00

Worst case tractability of linear problems in the presence of noise: linear information

Pawel Siedlecki

University of Warsaw, Poland

We study the worst case tractability of multivariate linear problems defined on separable Hilbert spaces. Information about a problem instance consists of noisy evaluations of arbitrary bounded linear functionals, where the noise is either deterministic or random. The cost of a single evaluation depends on its precision and is controlled by a cost function. We establish mutual interactions between tractability of a problem with noisy information, the cost function, and tractability of the same problem, but with exact information.

Joint work with Leszek Plaskota (University of Warsaw, Poland).



Tuesday, June 20, 16:30 ~ 17:00

The curse of dimensionality for the $L_p$-discrepancy with finite $p$

Erich Novak

FSU Jena, Germany

The $L_p$-discrepancy is closely related to the worst-case error of algorithms for numerical integration. Its inverse for dimension $d$ and error threshold $\varepsilon \in (0,1)$ is the number of points in $[0,1]^d$ required such that the minimal normalized $L_p$-discrepancy is less than or equal to $\varepsilon$.

The inverse of the $L_2$-discrepancy grows exponentially fast with the dimension $d$, i.e., we have the curse of dimensionality, whereas the inverse of the $L_{\infty}$-discrepancy depends linearly on $d$; both results are from 2001. The behavior of the inverse of the $L_p$-discrepancy for $p \not\in \{2,\infty\}$ was open. We show that the $L_p$-discrepancy suffers from the curse of dimensionality for all $p$ of the form $p=2 \ell/(2 \ell -1)$ with $\ell \in \mathbb{N}$. We prove lower bounds for the worst-case error of numerical integration in an anchored Sobolev space of functions that are once differentiable in each variable and whose first derivative has finite $L_q$-norm, where $q$ is an even positive integer with $1/p+1/q=1$.
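For $p=2$ the discrepancy referred to above can be computed exactly by Warnock's formula; the following short sketch compares a low-discrepancy point set with random points (both point sets are illustrative choices, not from the talk):

```python
import numpy as np

def l2_star_discrepancy(x):
    # Warnock's formula: the squared L2 star discrepancy of x in [0,1)^d equals
    # 3^{-d} - (2/N) sum_i prod_j (1 - x_ij^2)/2
    #        + (1/N^2) sum_{i,k} prod_j (1 - max(x_ij, x_kj))
    N, d = x.shape
    t1 = 3.0**(-d)
    t2 = (2.0 / N) * np.sum(np.prod((1.0 - x**2) / 2.0, axis=1))
    t3 = np.sum(np.prod(1.0 - np.maximum(x[:, None, :], x[None, :, :]), axis=2)) / N**2
    return np.sqrt(t1 - t2 + t3)

def hammersley(N):
    # 2D Hammersley set: (i/N, van der Corput point of i in base 2)
    vdc = np.zeros(N)
    for i in range(N):
        f, b, k = 0.0, 0.5, i
        while k:
            f += b * (k & 1)
            k >>= 1
            b *= 0.5
        vdc[i] = f
    return np.column_stack([np.arange(N) / N, vdc])

rng = np.random.default_rng(0)
N = 256
d2_ham = l2_star_discrepancy(hammersley(N))
d2_rnd = l2_star_discrepancy(rng.random((N, 2)))
```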

Conjecture: The curse holds for all $p$ with $1 \lt p \lt \infty$.

Joint work with Friedrich Pillichshammer (JKU Linz, Austria).



Tuesday, June 20, 17:00 ~ 17:30

Approximation and tractability of isotropic Sobolev embeddings with increasing smoothness

Thomas Kühn

University of Leipzig, Germany

Let $H^s(\mathbb{T}^d)$ be the isotropic Sobolev space of smoothness $s \gt 0$ on the $d$-dimensional torus. In the talk the influence of increasing smoothness on the approximation problem for the embeddings $H^{s(d)}(\mathbb{T}^d)\hookrightarrow L_2(\mathbb{T}^d)$, $d\in\mathbb{N}$, is studied. More precisely, I will give necessary and sufficient conditions for strong polynomial, polynomial, quasi-polynomial, weak and uniformly weak tractability in terms of growth conditions on the smoothness parameters $s(d)$ as $d\to\infty$.



Tuesday, June 20, 17:30 ~ 18:00

$L^2$-approximation and numerical integration on Hermite spaces

Michael Gnewuch

Universität Osnabrück, Germany

We consider spaces of functions of infinitely many variables that are defined with the help of all univariate Hermite polynomials, which (properly normalized) form an orthonormal basis of the space of all square-integrable functions over the real line endowed with the standard normal distribution. These spaces belong to the class of reproducing kernel Hilbert spaces of increasing smoothness, and their elements are defined on proper subsets of the sequence space (i.e., the space of all real-valued sequences). Their norms are induced by an underlying function space decomposition, namely the infinite-dimensional ANOVA decomposition. We discuss further properties of these spaces and present optimal results on $L^2$-approximation and on numerical integration.

The talk is based on the two recent papers

1) M. Gnewuch, M. Hefter, A. Hinrichs, K. Ritter, Countable tensor products of Hermite spaces and spaces of Gaussian kernels, Journal of Complexity 71 (2022), 101654. (arXiv:2110.05778v2)

2) M. Gnewuch, A. Hinrichs, K. Ritter, R. Rüssmann, Infinite-dimensional integration and $L^2$-approximation on Hermite spaces, preprint 2023, arXiv:2304.01754.



Tuesday, June 20, 18:00 ~ 18:30

Multilevel Function Approximation

Mike Giles

University of Oxford, United Kingdom

Building on prior research by others on classical and sparse grid interpolation and Multilevel Monte Carlo methods, this seminar will present new ideas on the approximation of parametric functions of the form $f(\theta)$ where $\theta$ is multi-dimensional and each function evaluation corresponds to either a functional of the solution of a PDE, with parametric dependence on $\theta$, or the expected value of a functional of the solution of an SDE, again with a parametric dependence on $\theta$. In both cases, exact sampling of $f(\theta)$ is not possible, and greater accuracy comes at a higher computational cost.

The key idea to improve the computational cost for a given accuracy is a multilevel representation of the function $f(\theta)$ with coarse levels using very accurate approximations of $f(\theta)$ at a very limited number of points, whereas fine levels use inaccurate approximations at a large number of points.

The talk will outline the underpinning literature, and the central idea, together with meta-theorems which determine the computational complexity if certain assumptions are satisfied. On-going research is aimed at the numerical analysis required to determine whether those assumptions are indeed satisfied, as well as computational work to demonstrate the benefits for both PDEs and SDEs, and verify the predicted complexity.

References:

H.-J. Bungartz, M. Griebel. 'Sparse grids'. pp.147-269, Acta Numerica, 2004.

A.-L. Haji-Ali, F. Nobile, R. Tempone. 'Multi-index Monte Carlo: when sparsity meets sampling'. Numerische Mathematik, 132:767-806, 2016.

S. Heinrich. 'Multilevel Monte Carlo methods'. Lecture Notes in Computer Science, 2179:58-67, 2001.

R. Tempone, S. Wolfers. 'Smolyak’s algorithm: a powerful black box for the acceleration of scientific computations'. pp.201-228 in Sparse Grids and Applications, Springer, 2018.

Joint work with Filippo De Angelis (University of Oxford, U.K.) and Christoph Reisinger (University of Oxford, U.K.).



Wednesday, June 21, 14:00 ~ 14:30

A universal numerical integration by digital nets

Takashi Goda

University of Tokyo, Japan

This talk concerns numerical integration over high-dimensional unit cubes, for which quasi-Monte Carlo (QMC) methods using (higher order) digital nets have been well studied. Our focus is to show that randomly chosen linearly scrambled nets or polynomial lattice point sets can achieve almost optimal rates of convergence (in the sense of worst-case error) for weighted Sobolev spaces with dominating mixed smoothness or even attain super-polynomial convergence for a class of infinitely differentiable functions with probability $1/2$. By applying a median trick, we establish a universal QMC-based integration rule that does not require any information on smoothness or weights when constructing point sets, yet still achieves nearly optimal worst-case error convergence in various weighted function spaces with high probability. To support our theoretical claim, we present a series of numerical experiments that demonstrate the effectiveness of our approach.
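The median trick itself is simple to state: run the same rule under several independent randomizations and report the median of the estimates. A sketch using randomly shifted lattice rules as the randomization (the abstract's construction uses scrambled nets and polynomial lattice point sets; the lattice, shift count, and integrand below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def shifted_lattice_estimate(f, N, z, shift):
    # randomly shifted rank-1 lattice rule on [0,1]^d
    x = np.mod(np.outer(np.arange(N), z) / N + shift, 1.0)
    return np.mean(f(x))

def median_qmc(f, N, z, d, r=15):
    # median of r independent randomizations: a bad randomization only spoils
    # the result if more than half of the r draws are bad simultaneously
    ests = [shifted_lattice_estimate(f, N, z, rng.random(d)) for _ in range(r)]
    return np.median(ests)

# smooth periodic test integrand with exact integral 1 over [0,1]^2
f = lambda x: np.prod(1.0 + 0.5 * np.sin(2.0 * np.pi * x), axis=1)
est = median_qmc(f, N=610, z=np.array([1, 233]), d=2)
```

The point of the universality result is that the same median-based rule needs no knowledge of smoothness or weights at construction time.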



Wednesday, June 21, 14:30 ~ 15:00

Randomized lattice rules

Dirk Nuyens

KU Leuven, Belgium

It was shown in Kritzer, Kuo, Nuyens and Ullrich (2019) that randomized lattice rules can achieve the near optimal randomized error for integration, which is half an order better than the deterministic one, in the Korobov space. This randomized lattice rule algorithm picks a random prime in the range $(n/2, n]$ and then randomly selects a “good generating vector” for that prime. In Dick, Goda and Suzuki (2022) it was later shown that one can construct this generating vector by a randomized component-by-component construction algorithm.

In joint work, Kuo, Nuyens and Wilkes (2023+), we show that the generating vector can be pre-determined and hence only require randomness for selecting the random prime from $(n/2, n]$. We show a component-by-component algorithm which can construct a vector in time $O(d \, n^4 / \log(n))$ which is only $\log(n)$ more expensive than calculating the actual randomized error for the final rule. In a modification, Wilkes and Nuyens (2023+), we show that also for $0 \lt \alpha \le 1/2$ a pre-determined vector is able to achieve the near optimal randomized error where the algorithm randomly picks the prime and a shift.
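A naive $O(d\,N^2)$ component-by-component construction for a deterministic Korobov-space criterion conveys the flavor of such algorithms; the criterion (with smoothness $\alpha=2$), the weights, and $N$ below are illustrative assumptions, and the papers above construct vectors for the randomized error with faster algorithms:

```python
import numpy as np

def wce_sq(z, N, gamma):
    # squared worst-case error of the rank-1 lattice rule with generating vector z
    # in a weighted Korobov space of smoothness alpha = 2:
    # e^2 = -1 + (1/N) sum_i prod_j (1 + gamma_j * omega({i z_j / N}))
    # with omega(t) = 2 pi^2 (t^2 - t + 1/6)
    t = np.mod(np.outer(np.arange(N), z) / N, 1.0)
    omega = 2.0 * np.pi**2 * (t**2 - t + 1.0 / 6.0)
    return -1.0 + np.mean(np.prod(1.0 + gamma * omega, axis=1))

def cbc(N, d, gamma):
    # greedy construction: fix the components one at a time, each time keeping
    # the candidate that minimizes the error criterion of the extended vector
    z = []
    for j in range(d):
        errs = [wce_sq(np.array(z + [c]), N, gamma[:j + 1]) for c in range(1, N)]
        z.append(1 + int(np.argmin(errs)))
    return np.array(z)

N, d = 127, 3
gamma = np.array([1.0, 0.5, 0.25])
z = cbc(N, d, gamma)
e2_cbc = wce_sq(z, N, gamma)
e2_bad = wce_sq(np.array([1, 1, 1]), N, gamma)  # all points on the diagonal
```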

Joint work with Laurence Wilkes (KU Leuven, Belgium) and Frances Y. Kuo (UNSW Sydney, Australia).



Wednesday, June 21, 15:00 ~ 16:00

Lower bounds for numerical integration and approximation

Jan Vybiral

Czech Technical University, Czech Republic

We review recent methods which allow one to prove lower bounds for numerical integration and approximation of functions from a reproducing kernel Hilbert space $H$. Quite naturally, the technique of a bump function fails if $H$ contains only very smooth (i.e., analytic) functions. We show that it is also not optimal if the functions from $H$ are barely continuous. In this case, optimal lower bounds can be obtained by a completely different method, which is based on a certain variant of the Schur multiplication theorem on the coordinate-wise product of positive semi-definite matrices. This approach allowed us to settle an old problem of E. Novak on the intractability of numerical integration of multivariate trigonometric polynomials with degree at most one in each variable. Furthermore, in a series of papers jointly with Aicke Hinrichs and David Krieg (both JKU Linz, Austria) and Erich Novak (FSU Jena, Germany), we developed this tool into a universal instrument for lower bounds on numerical integration.

Joint work with Aicke Hinrichs (JKU Linz, Austria), David Krieg (JKU Linz, Austria) and Erich Novak (FSU Jena, Germany).



Wednesday, June 21, 16:30 ~ 17:00

Quasi-random sampling with black box or acceptance-rejection inputs

Christiane Lemieux

University of Waterloo, Canada

We propose randomized quasi-Monte Carlo (RQMC) methods to estimate expectations $\mu = E(g(Y, W))$ where $Y$ is a vector of random variables independent of $W$ and can be sampled by inversion, whereas $W$ cannot. Various practical problems are of this form, such as estimating expected shortfall for mixture models where $W$ is stable or generalized inverse Gaussian and $Y$ is multivariate normal. We consider two settings: In the first, we assume that there is a non-uniform random variate generation method to sample $W$ in the form of a non-modifiable “black box”. The methods we propose for this setting are based on approximations of the quantile function of $W$. In the second setting, we assume that there is an acceptance-rejection (AR) algorithm to sample from $W$ and explore different ways to feed it with quasi-random numbers. This has been studied previously, typically by rejecting points of constant dimension from a low-discrepancy sequence and moving along the sequence. We also investigate the use of a point set of constant (target) size where the dimension of each point is increased until acceptance. In addition, we show how to combine the methods from the two settings in such a way that the non-monotonicity inherent to AR is removed.
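A minimal sketch of the second setting, feeding an acceptance-rejection sampler with consecutive points of a two-dimensional low-discrepancy sequence (the Beta(2,2) target and the Halton-type sequence are illustrative assumptions, not the paper's examples):

```python
import numpy as np

def van_der_corput(i, base):
    # i-th point of the van der Corput sequence in the given base
    f, b = 0.0, 1.0 / base
    while i:
        f += b * (i % base)
        i //= base
        b /= base
    return f

def beta22_by_rejection(n_target):
    # acceptance-rejection for the Beta(2,2) density p(x) = 6x(1-x) <= 1.5,
    # fed with consecutive points of a 2D Halton-type sequence: the first
    # coordinate proposes, the second decides acceptance (move along the
    # sequence until n_target points have been accepted)
    samples, i = [], 0
    while len(samples) < n_target:
        u1 = van_der_corput(i, 2)
        u2 = van_der_corput(i, 3)
        if 1.5 * u2 < 6.0 * u1 * (1.0 - u1):
            samples.append(u1)
        i += 1
    return np.array(samples)

s = beta22_by_rejection(1000)
mean = s.mean()  # Beta(2,2) has mean 1/2
```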

Joint work with Erik Hintz (University of Waterloo) and Marius Hofert (University of Hong Kong).



Wednesday, June 21, 17:00 ~ 17:30

Consistency of randomized integration methods

Daniel Rudolf

Universität Passau, Germany

We present a characterization of the consistent estimation of expectations of integrable functions with respect to a class of Monte Carlo methods. Consistency here refers to convergence in mean and/or convergence in probability of the estimator to the integral of interest. Moreover, the aforementioned class of methods contains averages based on randomized digital nets, Latin hypercube sampling, randomized Frolov point sets, as well as Cranley-Patterson rotated point sets.

Joint work with Julian Hofstadler (Universität Passau, Germany).



Wednesday, June 21, 17:30 ~ 18:00

$L^2$-approximation and numerical integration on Gaussian Spaces

Robin Rüßmann

RPTU in Kaiserslautern, Germany

We study $L^2$-approximation and integration on reproducing kernel Hilbert spaces $H(L_\sigma)$ of $d$ variables, where $d\in\mathbb{N}$ or $d=\infty$. Here, $L_\sigma$ is given as the tensor product of univariate Gaussian kernels, i.e., $L_\sigma(x,y) := \prod_{j=1}^d \exp(-\sigma_j^2 \cdot (x_j-y_j)^2)$. These spaces are closely related to Hermite spaces $H(K_\beta)$, where $K_\beta$ is again of tensor product form, but based on univariate Hermite kernels, i.e., $K_\beta(x,y):= \prod_{j=1}^{d}\sum_{\nu=0}^\infty \beta_j^{\nu}\cdot h_\nu(x_j)\cdot h_\nu(y_j)$, where $h_\nu$ is the Hermite polynomial of degree $\nu$. More precisely, for each space $H(L_\sigma)$ there exists a corresponding space $H(K_\beta)$ and an isometric isomorphism $Q$ between both spaces such that one function evaluation of $Qf$ needs only one function evaluation of $f$ and vice versa.

Via this correspondence, we are able to constructively transform any algorithm for $L^2$-approximation or integration on $H(K_\beta)$ into an algorithm for the same problem on $H(L_\sigma)$ and vice versa, preserving error and cost. In the case $d=\infty$, this allows us to investigate both problems on $H(L_\sigma)$ for the first time. In the case $d\in\mathbb{N}$, we are able to transfer some known results between the two function space settings.

Joint work with Michael Gnewuch (University of Osnabrück, Germany) and Klaus Ritter (RPTU in Kaiserslautern, Germany).



Wednesday, June 21, 18:00 ~ 18:30

Tractability for additive random fields

Marguerite Zani

Université d'Orléans, France

We investigate additive fields having Gaussian kernels as marginal covariances. We study tractability via upper and lower bounds.

Joint work with Alexey Khartov (Smolensk State University).



 

Posters


Data Compression using Lattice Rules for Machine Learning

Kumar Harsha

Osnabrück University, Germany

The mean squared error is one of the standard loss functions in supervised machine learning. However, computing this loss for enormous data sets can be computationally demanding. Modifying the approach of J. Dick and M. Feischl [Journal of Complexity 67 (Dec 2021) p. 101587], we present algorithms to reduce large data sets to a smaller size using rank-1 lattice rules. With this compression strategy in the pre-processing step, every lattice point gets a pair of weights representing the original data and response. As a result, the compressed data makes iterative loss calculations in optimization steps much faster. Furthermore, we provide error analysis for functions represented as exponential Fourier series lying in Wiener algebras and Korobov spaces.

Joint work with Michael Gnewuch (Osnabrück University, Germany) and Marcin Wnuk (Osnabrück University, Germany).



Efficient recovery of non-periodic functions via samples

Nicolas Nagel

TU Chemnitz, Germany

We combine periodization and subsampling techniques to obtain an efficient reconstruction algorithm for multivariate, non-periodic functions $f: [-1, 1]^d \rightarrow \mathbb{C}$ using samples. Firstly, transforming $f$ to a periodic function $Tf: \mathbb{T}^d \rightarrow \mathbb{C}$ via a cosine composition allows us to use well-known and optimal algorithms for the periodic case and pull them back to the non-periodic one. This leads to approximating $f$ via Chebyshev polynomials using function values on Chebyshev distributed sample points. Secondly, we reduce the sampling budget via frame subsampling from $O(m \log m)$ to $O(m)$ sample points ($m$ being the number of basis functions used, derived from a hyperbolic cross) while still keeping good approximation properties. This set of sample points will be used for the whole function class and needs to be calculated only once. In total, we obtain an implementable algorithm to approximate $s$-smooth, non-periodic functions $f: [-1, 1]^d \rightarrow \mathbb{C}$ via $n$ samples with $L_2$-error of near-optimal order $n^{-s} (\log n)^{(d-1)s+1/2}$. We also apply this algorithm to a test function constructed from a quadratic B-spline of smoothness $s=5/2$ and observe the predicted decay rate. With this work we aim to give a theoretical connection between the approximation of periodic and non-periodic functions without loss in the main decay rate, as well as to demonstrate the versatility of subsampling.
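The periodization step can be sketched in one dimension: $g(\theta) = f(\cos\theta)$ is even and periodic, so one FFT of its values at equispaced angles yields the Chebyshev coefficients of the interpolant of $f$ at the Chebyshev extreme points (the test function and degree below are illustrative, not the poster's B-spline example):

```python
import numpy as np
from numpy.polynomial import chebyshev

def cheb_coeffs(f, n):
    # periodization: g(theta) = f(cos(theta)) is even and 2 pi periodic, and its
    # Fourier-cosine coefficients (one FFT of length 2n) are the Chebyshev
    # coefficients of the interpolant of f at the n + 1 Chebyshev extreme points
    x = np.cos(np.pi * np.arange(n + 1) / n)
    vals = f(x)
    ext = np.concatenate([vals, vals[-2:0:-1]])  # even extension, length 2n
    c = np.real(np.fft.fft(ext))[:n + 1] / n
    c[0] *= 0.5
    c[n] *= 0.5
    return c

f = np.exp
c = cheb_coeffs(f, 20)
grid = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(chebyshev.chebval(grid, c) - f(grid)))
```

For an analytic function such as $e^x$ the coefficients decay geometrically, so the interpolation error is already at machine precision for degree 20.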

Joint work with Felix Bartel (TU Chemnitz), Kai Lüttgen (TU Chemnitz) and Tino Ullrich (TU Chemnitz).



Sampling recovery in the uniform norm

Kateryna Pozharska

University of Technology, Chemnitz (IM NAS Ukraine), Germany (Ukraine)

We consider the problem of function recovery from $n$ function values in the uniform norm using a weighted least squares algorithm. The values are taken from a subsampled set of independent and identically distributed random samples. It has been shown that the discrete information on functions is as powerful as arbitrary linear information. In addition to the reproducing kernel Hilbert space setting, we obtain results for general non-Hilbert classes in terms of the inverse of Christoffel function and the best $L_2$-approximation according to specific finite-dimensional spaces.

Joint work with David Krieg (JKU Linz, Austria), Mario Ullrich (JKU Linz, Austria) and Tino Ullrich (TU Chemnitz, Germany).



Integration and function approximation on $\mathbb{R}^d$ using equispaced points and lattice points

Yuya Suzuki

Aalto University, Finland

In this work, I will discuss integrating and approximating functions over $\mathbb{R}^d$ by equispaced points for $d=1$ and lattice points for $d\ge2$. In [1], together with D. Nuyens, we derived explicit conditions under which lattice points can obtain error convergence of almost $n^{-\alpha}$ for integrating functions with smoothness $\alpha\in\mathbb{N}$ over the unbounded domain $\mathbb{R}^d$, where $n$ is the number of quadrature points. When $d=1$ and integration for $\alpha$-smooth Gaussian Sobolev spaces is considered, in [2], together with Y. Kazashi and T. Goda, we proved that equispaced points achieve the optimal rate $n^{-\alpha}$ up to a logarithmic factor. In contrast, therein, the well-known Gauss–Hermite quadrature was shown to achieve merely the order $n^{-\alpha/2}$. Based on these results, I further consider the function approximation problem and the possible use of lattice points on $\mathbb{R}^d$.

[1] D. Nuyens and Y. Suzuki. Scaled lattice rules for integration on $\mathbb{R}^d$ achieving higher-order convergence with error analysis in terms of orthogonal projections onto periodic spaces. Mathematics of Computation, 92 (2023), pp. 307–347.

[2] Y. Kazashi, Y. Suzuki and T. Goda. Sub-optimality of Gauss-Hermite quadrature and optimality of the trapezoidal rule for functions with finite smoothness. Accepted for publication in SIAM Journal on Numerical Analysis.
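The $d=1$ setting of [2] can be illustrated with a toy computation: a Gaussian-weighted integral evaluated both by a truncated equispaced (trapezoidal) rule and by Gauss–Hermite quadrature (truncation point $L$, point counts, and the integrand are illustrative assumptions):

```python
import numpy as np
from numpy.polynomial import hermite_e

def gauss_hermite(f, n):
    # Gauss-Hermite rule for int f(x) phi(x) dx, phi the standard normal density
    x, w = hermite_e.hermegauss(n)  # weight function exp(-x^2 / 2)
    return (w @ f(x)) / np.sqrt(2.0 * np.pi)

def trapezoid(f, n, L=8.0):
    # equispaced points on a truncated domain [-L, L]: for rapidly decaying
    # smooth integrands the trapezoidal rule converges extremely fast
    x = np.linspace(-L, L, n)
    y = f(x) * np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
    h = x[1] - x[0]
    return h * (np.sum(y) - 0.5 * (y[0] + y[-1]))

g = lambda x: x**2  # int x^2 phi(x) dx = 1 (second Gaussian moment)
gh = gauss_hermite(g, 20)
tr = trapezoid(g, 100)
```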
