Session III.2 - Approximation Theory
Talks
Monday, June 19, 14:00 ~ 15:00
Compositional Sparsity and Model Classes for High-Dimensional Approximation
Wolfgang Dahmen
University of South Carolina, USA
The need to recover or approximate functions of many variables is ubiquitous in numerous application contexts such as machine learning, uncertainty quantification, or data assimilation. In all these scenarios the so-called Curse of Dimensionality is an intrinsic obstruction that has been a long-standing theme in approximation theory. It roughly expresses an exponential dependence of the “recovery cost” on the spatial dimension. It is well-known that being able to avoid the Curse depends both on the structure of the particular “model class” of functions one wishes to approximate and on the approximation system that is to be used. In this talk we highlight recent results concerning the interplay between these two constituents. For small spatial dimensions, approximation complexity is in essence determined by the smoothness of the approximand, e.g. in terms of Sobolev or Besov regularity, which is effectively exploited by approximation systems that rely on spatial localization. By contrast, in high dimensions, more global structural sparsity properties determine the ability to avoid the Curse, unless one imposes excessively high smoothness degrees. Inspired by the highly nonlinear structure of Deep Neural Networks (DNNs) and the Kolmogorov-Arnold Superposition Theorem, we focus in particular on a new notion of “tamed compositional sparsity” that leads to new types of model classes for high-dimensional approximation. The relevance of such classes is perhaps best illustrated in the context of solution manifolds of parameter-dependent families of partial differential equations (PDEs). Specifically, the framework accommodates “inheritance theorems”: compositional sparsity of problem data (like parameter-dependent coefficient fields) is inherited by solutions. In particular, we briefly discuss transport equations. In fact, it is well-known that common model reduction concepts for an effective approximation of the corresponding parameter-to-solution maps fail for this type of PDE. Nevertheless, given compositionally sparse data, the corresponding solution manifolds can be shown to belong to compositional approximation classes whose manifold widths defy the Curse of Dimensionality. Corresponding concrete approximation rates, realized by DNNs, exhibit only a low algebraic dependence on the (large) parametric dimension. We conclude by briefly discussing the bearing of these findings on other problem types and ensuing research directions.
Monday, June 19, 15:00 ~ 15:30
Complex approximation on rational curves
Mihai Putinar
University of California, United States
A foundational theorem of J. Thomson states that the density of complex polynomials on Lebesgue space $L^2$ associated to a compactly supported measure on the complex plane is decided by the existence of analytic bounded point evaluations. We owe to Brennan a final result of this type for complex rational approximation. By means of standard base change techniques, we show that the same characterization of complex polynomial approximation holds true on complex, affine rational curves. Beyond compact supports, for measures supported by a real, affine algebraic curve which admits a polynomial parametrization we relate the existence of analytic bounded point evaluations on the complexification of the original curve to moment indeterminateness.
Joint work with Shibananda Biswas (Indian Institute of Science Education and Research, Kolkata, India) and David Kimsey (Newcastle University, Newcastle upon Tyne, UK).
Monday, June 19, 15:30 ~ 16:00
Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev Spaces
Jonathan Siegel
Texas A&M University, United States
Deep ReLU neural networks are among the most widely used classes of neural networks in practical applications. We consider the problem of determining optimal $L_p$-approximation rates for deep ReLU neural networks on the Sobolev class $W^s(L_q)$ for all $1\leq p,q\leq \infty$ and $s > 0$. Existing sharp results are only available when $q=\infty$, i.e. when the derivatives are measured in $L_\infty$. In our work, we extend these results and determine the best possible rates for all $p,q$ and $s$ for which a compact Sobolev embedding holds, i.e. when $s/d > 1/q - 1/p$. This settles in particular the classical non-linear regime where $p > q$. Our techniques can also be used to obtain optimal rates for Besov spaces. We will discuss some of the technical details of the proof and conclude by giving a few open research directions.
Monday, June 19, 16:30 ~ 17:00
Metric Approximation of Set-Valued Functions
Nira Dyn
School of Mathematical Sciences, Tel Aviv University, Israel
This talk is a review of ideas in the approximation of set-valued functions (SVFs), mapping a closed interval to the space of all non-empty, compact subsets of ${\mathbb R}^d$. These ideas have evolved in a series of papers, written jointly, first with Elza Farkhi and Alona Mokhov, and later also with Elena Berdysheva. We were inspired by the first paper on the approximation of these set-valued functions, authored by Z. Artstein in the late eighties. Most papers on that subject have studied the class of set-valued functions with values restricted to convex sets, based on operations such as the Minkowski sum of sets and the Aumann integral. The new tool in Artstein's paper is the "metric average", with which a "piecewise linear" interpolant is constructed. In our papers we introduced several more metric tools, such as metric chains, metric divided differences and the weighted metric integral, with which we adapt to set-valued functions classical sample-based linear approximation operators, Fourier partial sums, and several positive integral operators. The metric adaptation yields approximants with approximation orders similar to those in the real-valued case, for continuous SVFs and for SVFs of bounded variation.
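For orientation, the metric average used there can be recalled as follows (a schematic reminder in our notation; the weighting convention may differ from the papers): for compact sets $A, B \subset \mathbb{R}^d$ and $t \in [0,1]$, \[ A \oplus_t B \;=\; \big\{\, t a + (1-t) b \;:\; (a,b) \in \Pi(A,B) \,\big\}, \qquad \Pi(A,B) \;=\; \big\{\, (a,b) \in A \times B \;:\; a \in \Pi_A(b) \ \text{or} \ b \in \Pi_B(a) \,\big\}, \] where $\Pi_A(b)$ denotes the set of points of $A$ closest to $b$; in particular $A \oplus_1 B = A$ and $A \oplus_0 B = B$.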
Joint work with Elena Berdysheva (University of Cape Town, South Africa), Elza Farkhi (Tel Aviv University, Israel) and Alona Mokhov (Afeka Tel Aviv College of Engineering, Israel).
Monday, June 19, 17:00 ~ 17:30
Approximation of smooth functions by short exponential sums
Gerlind Plonka
University of Goettingen, Germany
We consider function approximation using exponential sums of the form $$ f(t) = \sum_{j=1}^{M} \gamma_{j} \, {\mathrm e}^{\phi_{j}t} = \sum_{j=1}^{M} \gamma_{j} \, z_{j}^{t}, $$ where $M \in {\mathbb N}$, $\gamma_{j} \in {\mathbb C}\setminus \{ 0\}$, and $z_{j}= {\mathrm e}^{\phi_{j}} \in {\mathbb C}\setminus \{ 0\}$ with $\phi_{j} \in {\mathbb C}$ are pairwise distinct.
The exponential sum model applies well to periodic as well as non-periodic function approximation. However, exponential decay of the approximation error on finite intervals and on $[0, \infty)$ has been shown only for special completely monotone functions such as $f(t) = (1+t)^{-1}$, while numerical experiments show very good approximation results also for other functions such as Bessel functions or the Gaussian function.
Several algorithms are available to obtain exponential sum approximations. Besides the very expensive generalized Remez algorithm, there exist various suboptimal algorithms based on Prony's method, which employ function values, derivative values or moments of the function to be approximated. Further, there is a close connection between rational approximation in the frequency domain and approximation by exponential sums in the spatial domain.
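To make the Prony-type route concrete, here is a textbook sketch (our own minimal Python version, not one of the talk's algorithms): given $2M$ equispaced samples $f(k) = \sum_{j=1}^{M} \gamma_j z_j^k$, it recovers the nodes $z_j$ as roots of the Prony polynomial and then the coefficients $\gamma_j$ from a Vandermonde least-squares problem.

import numpy as np

def prony(samples, M):
    """Classical Prony method: from f(k) = sum_j gamma_j * z_j**k, k = 0, ..., 2M-1,
    recover the nodes z_j and the coefficients gamma_j."""
    f = np.asarray(samples, dtype=complex)
    # Hankel system for the coefficients p_0, ..., p_{M-1} of the Prony polynomial
    H = np.array([f[k:k + M] for k in range(M)])
    p = np.linalg.solve(H, -f[M:2 * M])
    # the nodes are the roots of z^M + p_{M-1} z^{M-1} + ... + p_0
    z = np.roots(np.concatenate(([1.0], p[::-1])))
    # coefficients from the (least-squares) Vandermonde system
    V = np.vander(z, N=2 * M, increasing=True).T
    gamma = np.linalg.lstsq(V, f, rcond=None)[0]
    return z, gamma

# toy check with M = 2 terms
z_true, g_true = np.array([0.9, 0.4 + 0.3j]), np.array([2.0, -0.5])
samples = [np.sum(g_true * z_true**k) for k in range(4)]
print(prony(samples, M=2))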
In the talk, we give a survey of recent results on function approximation by exponential sums and corresponding numerical algorithms.
Joint work with Nadiia Derevianko, Markus Petz (University of Goettingen).
Monday, June 19, 17:30 ~ 18:30
Efficient Neural Group Invariant Embeddings
Nadav Dym
Technion, Israel
In many machine learning tasks, the goal is to learn an unknown function which has some known group symmetries. Equivariant machine learning algorithms exploit this by devising architectures (=function spaces) which have these symmetries by construction. Examples include convolutional neural networks which respect translation symmetries, neural networks for graphs or sets which respect their permutation symmetries, or neural networks for 3D point sets which additionally respect Euclidean symmetries.
A common theoretical requirement for symmetry-based architectures is that they be able to approximate any continuous function with the same symmetries. Using Stone-Weierstrass, this boils down to the ability of a function class to separate any two objects which are not related by a group symmetry. We will review results showing that under very general assumptions such a symmetry-preserving separating mapping $f$ exists, and the embedding dimension $m$ can be taken to be roughly twice the dimension of the data. We will then propose a general methodology for efficient computation of such $f$ using random invariants. This methodology is a generalization of the algebraic-geometry argument used in the well-known proof of phase retrieval injectivity. Finally, we will show how this result can be used to achieve efficient separation for point sets with respect to permutation and/or rotation actions.
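As a toy illustration of the objects in play (not the construction of the papers listed below; all names and choices here are ours), the following sketch maps a point set to a permutation-invariant vector by sorting random linear projections; invariance under reordering of the points is immediate, and separation of generic point sets is exactly the kind of property that the random-invariant framework is designed to certify with few invariants.

import numpy as np

def sorted_projection_embedding(X, num_projections=8, seed=0):
    """Map a point set X (shape n x d) to a permutation-invariant vector by
    projecting the points onto random directions and sorting each projection."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], num_projections))  # random directions
    P = X @ W                                                # shape n x num_projections
    return np.sort(P, axis=0).ravel()                        # sorting removes the point ordering

X = np.random.default_rng(1).standard_normal((5, 3))
perm = np.random.default_rng(2).permutation(5)
print(np.allclose(sorted_projection_embedding(X), sorted_projection_embedding(X[perm])))  # True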
Based on the papers:
[1] Low dimensional Invariant Embeddings for Universal Geometric Learning, by Nadav Dym and Steven J. Gortler.
[2] Complete Neural Networks for Euclidean Graphs, by Snir Hordan, Tal Amir, Steven J. Gortler and Nadav Dym.
Joint work with Steven J. Gortler (Harvard University, USA), Snir Hordan (Technion, Israel) and Tal Amir (Technion, Israel).
Tuesday, June 20, 14:00 ~ 15:00
TBA
Awardee of the 10th Popov Prize
TBA, TBA
TBA
Tuesday, June 20, 15:00 ~ 15:30
Polynomial approximation on spaces of homogeneous type
Yuan Xu
University of Oregon, United States
We discuss several results on the polynomial approximation for conic surfaces and solid cones in Euclidean space equipped with the Jacobi weight function. The results will be presented as an example of a general framework for a family of regular domains. The main tools are highly localized kernels constructed via orthogonal polynomials and a spectral operator.
Tuesday, June 20, 15:30 ~ 16:00
Space-time adaptive low-rank approximation for high-dimensional parabolic PDEs
Markus Bachmayr
RWTH Aachen, Germany
In this talk we present the construction and analysis of a space-time adaptive method for parabolic partial differential equations that combines sparse wavelet expansions in time with adaptive low-rank approximations in the spatial variables. Similar to the existing adaptive low-rank method for elliptic problems, the method is based on a perturbed Richardson iteration, in the present case applied to a standard space-time variational formulation of the parabolic initial-boundary value problem. The analysis of the method requires a new approximation class for the temporal operator, taking into account the interaction between hierarchical tensor formats of different time indices. Since the parabolic operator is an isomorphism with respect to spaces not endowed with a cross norm, we devise a new low-rank preconditioning scheme based on exponential sum approximations that is adapted to the parabolic case. The method is shown to converge and to satisfy complexity bounds similar to those of the existing adaptive low-rank methods for elliptic problems, establishing its suitability for parabolic problems on high-dimensional spatial domains. The construction also yields computable rigorous a posteriori error bounds for the total error, depending on the activated basis functions and ranks in the approximation.
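One classical ingredient behind exponential-sum preconditioners (recalled here as generic background, with ranges and constants chosen for illustration) is the approximation \[ \frac{1}{x} \;\approx\; \sum_{k=1}^{K} \omega_k\, e^{-\alpha_k x}, \qquad x \in [1, R], \] with positive weights $\omega_k$ and exponents $\alpha_k$, whose error decays rapidly in $K$; when $x$ is replaced by a sum of commuting one-dimensional operators, each term $e^{-\alpha_k(\lambda_1 + \cdots + \lambda_d)} = e^{-\alpha_k \lambda_1} \cdots e^{-\alpha_k \lambda_d}$ factorizes and is therefore of low rank. The point of the talk is that the parabolic setting, where the relevant norms are not cross norms, requires a new variant of this construction.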
Joint work with Manfred Faldum (RWTH Aachen, Germany).
Tuesday, June 20, 17:00 ~ 17:30
Nonlinear algorithms for state estimation of Hamiltonian systems
Olga Mula
TU Eindhoven, Netherlands
This talk is devoted to the problem of recovering an unknown function $u$ in a Hilbert space from finitely many measurements possibly affected by noise. In recent years, inversion methods based on linear and piecewise linear approximation spaces with certified recovery bounds have been developed. It is, however, known that such spaces become ineffective for approximating simple and relevant families of functions, such as the piecewise smooth functions that typically occur in physical problems with strong advection effects. In this talk, we present a nonlinear inversion method to address this obstruction in the framework of Hamiltonian problems. The method relies on dynamical low-rank approximations and dynamical optimal sensor placement.
Joint work with Cecilia Pagliantini (University of Pisa, Italy) and Federico Vismara (TU Eindhoven, Netherlands).
Tuesday, June 20, 17:30 ~ 18:00
Universal Lower Bounds for Discrete Potentials of Spherical Codes and Designs
Edward Saff
Vanderbilt University, USA
We obtain universal lower bounds for $N$-point polarization (Chebyshev) problems for spherical codes and designs of fixed dimension, strength, and cardinality. The universality means that the main parameters of the bounds are independent of the potentials. Our bounds are valid for a large class of potentials that includes absolutely monotone functions of inner products and are the best possible that can be obtained via linear programming methods for a lower bounding class of polynomial potentials.
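As a point of orientation (a schematic formulation in generic notation, not quoted from the abstract): for a potential $h$ and a spherical code $\omega_N = \{x_1, \ldots, x_N\} \subset \mathbb{S}^d$, the polarization (Chebyshev) quantity under consideration is \[ P_h(\omega_N) \;=\; \min_{y \in \mathbb{S}^d} \sum_{j=1}^{N} h(\langle x_j, y \rangle), \] and the universal bounds of the talk estimate $P_h(\omega_N)$ from below, simultaneously for all admissible potentials $h$, in terms of the dimension, the design strength, and the cardinality $N$ alone.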
Joint work with P. Boyvalenkov (IMI, Bulgarian Academy of Sciences), P. Dragnev (Purdue Fort Wayne, USA), D. Hardin (Vanderbilt University, USA) and M. Stoyanova (University of Sofia, Bulgaria).
Tuesday, June 20, 18:00 ~ 18:30
A varifold perspective on discrete surfaces
Blanche Buet
Université Paris Saclay, France
We propose a natural framework for the study of surfaces and their different discretizations based on varifolds. Varifolds were introduced by Almgren to carry out the study of minimal surfaces. Though mainly used in the context of rectifiable sets, they turn out to be well suited to the study of discrete-type objects as well.
The varifold structure is flexible enough to adapt to both regular and discrete objects, and it allows one to define variational notions of mean curvature and second fundamental form based on the divergence theorem. Thanks to a regularization of these weak formulations, we propose a notion of discrete curvature (actually a family of discrete curvatures associated with a regularization scale) relying only on the varifold structure. We prove convergence properties under a natural growth assumption: the scale of regularization must be large with respect to the accuracy of the discretization.
However, such accuracy of the discretization involves the so-called bounded Lipschitz distance between the varifold associated with the discrete object and the underlying regular varifold, while usual data are not directly provided with a varifold structure (we think of point cloud data without weights or tangential directions). We hence raise the issue of inferring the varifold structure from data points. More precisely, we assume that a continuous object $S$ is given through a probability measure $\mu$ supported in $S$ and our data are obtained by sampling $\mu$ with $N$ points: $(X_1, \ldots, X_N) \sim \mu$ is an i.i.d. sample and our data is an instance of the empirical measure \[ \mu_N = \frac{1}{N} \sum_{i = 1}^N \delta_{X_i} \: . \] We then investigate the definition and convergence of an estimator of the varifold associated with $S$.
Joint work with Gian Paolo Leonardi (University of Trento, Italy), Simon Masnou (University of Lyon, France) and Charly Boricaud (University Paris Saclay, France).
Wednesday, June 21, 14:00 ~ 15:00
A Bombieri type inequality and equidistribution of points
Ujué Etayo
Universidad de Cantabria, Spain
We present a new inequality for norms of polynomials related to the classical Bombieri inequality, and we propose a reinterpretation of this new inequality as an integral inequality written in terms of the Green function for the sphere. This more geometric reinterpretation allows us to generalize the inequality, taking multiplicity into account, and to extend it to any compact Riemannian surface.
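As background (our recollection of the classical statement, not part of the abstract): for homogeneous polynomials $P = \sum_{|\alpha| = m} a_\alpha x^\alpha$ and $Q$ of degrees $m$ and $n$, with Bombieri norm $[P]_2^2 = \sum_{|\alpha| = m} \binom{m}{\alpha}^{-1} |a_\alpha|^2$, Bombieri's inequality reads \[ [P\,Q]_2^2 \;\ge\; \frac{m!\, n!}{(m+n)!}\, [P]_2^2\, [Q]_2^2, \] and it is an inequality of this type that the talk refines and reinterprets through the Green function of the sphere.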
Joint work with Haakan Hedenmalm (KTH Royal Institute of Technology) and Joaquim Ortega-Cerdà (Universitat de Barcelona).
Wednesday, June 21, 15:00 ~ 15:30
Bounded commuting projections for multipatch spaces
Martin Campos Pinto
Max Planck Institute for Plasma Physics, Germany
We present stable commuting projection operators on de Rham sequences of two-dimensional multipatch spaces with local tensor-product parametrization and non-matching interfaces. Our construction covers the case of shape-regular patches with different mappings and locally refined patches, under the assumption that neighbouring patches have nested resolutions and that interior vertices are shared by exactly four patches. Following a broken-FEEC approach we first apply a tensor-product construction on the single-patch de Rham sequences and modify the resulting patch-wise commuting projections to enforce their conformity while preserving their commuting, projection, and $L^2$ stability properties. The resulting operators are local and stable in $L^2$, with constants independent of both the size and the inner resolution of the individual patches.
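In generic notation (a schematic reminder, not the precise setting of the talk), writing the two-dimensional discrete de Rham sequence as $V^0_h \xrightarrow{\;d^0\;} V^1_h \xrightarrow{\;d^1\;} V^2_h$ with $d^0 = \mathbf{grad}$ and $d^1 = \mathrm{curl}$, commuting projections $\Pi^\ell$ onto $V^\ell_h$ are required to satisfy \[ d^\ell\, \Pi^\ell u \;=\; \Pi^{\ell+1} d^\ell u, \qquad \ell = 0, 1, \] together with uniform $L^2$ bounds $\|\Pi^\ell u\|_{L^2} \le C\, \|u\|_{L^2}$, with $C$ independent of the patch sizes and resolutions.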
Joint work with Frederik Schnack (Max Planck Institute for Plasma Physics).
Wednesday, June 21, 15:30 ~ 16:00
Gaussian bounds for the heat kernel associated to prolate spheroidal wave functions with applications
Pencho Petrushev
University of South Carolina, USA
Gaussian upper and lower bounds are established for the heat kernel associated to the prolate spheroidal wave functions (PSWFs) of order zero. This result follows from a general principle that relates semigroups associated to a self-adjoint operator and its perturbation and their kernels. As an application of this general result we also establish Gaussian bounds for the heat kernels associated to generalized univariate PSWFs and PSWFs on the unit ball in $\mathbb{R}^d$. Further, we develop the smooth functional calculus related to the PSWFs of order zero, which in turn provides the necessary groundwork for developing the theory of Besov and Triebel-Lizorkin spaces associated with the PSWFs. One of our main results on Besov and Triebel-Lizorkin spaces associated to the PSWFs asserts that they are the same as the Besov and Triebel-Lizorkin spaces generated by the Legendre operator.
Joint work with Aline Bonami (Institut Denis Poisson, Université d'Orléans, France) and Gerard Kerkyacharian (University Paris Diderot - Paris 7, LPMA, France).
Wednesday, June 21, 16:30 ~ 17:00
Optimal sampling for $L^2$ approximation: from linear to nonlinear models
Anthony Nouy
Nantes Université - Centrale Nantes, France
We consider the approximation of functions in $L^2$ from point evaluations, using linear or nonlinear approximation tools. For linear approximation, recent results show that weighted least-squares projections allow one to obtain quasi-optimal approximations with a near-optimal sampling budget. This can be achieved by drawing i.i.d. samples from suitable distributions (depending on the linear approximation tool), combined with subsampling methods. In the first part of this talk, we review different strategies based on i.i.d. sampling and present alternative strategies based on repulsive point processes that achieve the same task with a reduced sampling complexity. In the second part, we show how these methods can be used to approximate functions with nonlinear approximation tools, in an active learning setting, by coupling iterative algorithms and optimal sampling methods for the projection onto successive linear spaces. We particularly focus on approximation using tree tensor networks, whose architectures allow for an efficient implementation of optimal sampling procedures within coordinate descent algorithms.
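As a concrete instance of the weighted least-squares step (the standard optimal-sampling recipe, recalled here for orientation rather than quoted from the talk): for an $m$-dimensional space $V_m$ with $L^2(\mu)$-orthonormal basis $(\varphi_1, \ldots, \varphi_m)$, one draws i.i.d. points $x^1, \ldots, x^n$ from the density \[ \frac{d\rho}{d\mu}(x) \;=\; \frac{1}{m} \sum_{k=1}^{m} |\varphi_k(x)|^2 \] and computes the least-squares projection onto $V_m$ with weights $w(x^i) = \frac{d\mu}{d\rho}(x^i)$; a budget $n$ of the order of $m \log m$ then yields a quasi-optimal approximation with high probability, and the subsampling and repulsive-point-process strategies of the talk aim at bringing $n$ down towards $m$.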
Joint work with Bertrand Michel (Nantes Université - Centrale Nantes, France), Charles Miranda (Nantes Université - Centrale Nantes, France, and WIAS Berlin, Germany) and Philipp Trunschke (Nantes Université - Centrale Nantes, France).
Wednesday, June 21, 17:00 ~ 17:30
Nonlinear wavelet and spline approximation in BMO
Kamen Ivanov
Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Bulgaria
We present results on two types of nonlinear $n$-term approximation processes: with wavelets in $\operatorname{BMO}({\mathbb R}^d)$ and with splines in $\operatorname{BMO}({\mathbb R})$. Despite the different nature of the wavelets and the splines we are able to show that the similarity of their approximation properties in $L^p$ can be extended to $\operatorname{BMO}$.
Certain Besov-type spaces are naturally involved in these approximation processes. Sharp Jackson and Bernstein estimates are established that allow for a complete characterization of the rates of approximation (approximation spaces).
Wednesday, June 21, 17:30 ~ 18:30
Convergence and error analysis of PINNs
Claire Boyer
Sorbonne Université
Physics-informed neural networks (PINNs) are a promising approach that combines the power of neural networks with the interpretability of physical modeling. PINNs have shown good practical performance in solving partial differential equations (PDEs) and in hybrid modeling scenarios, where physical models enhance data-driven approaches. However, it is essential to establish their theoretical properties in order to fully understand their capabilities and limitations. In this study, we highlight that classical training of PINNs can suffer from systematic overfitting. This problem can be addressed by adding a ridge regularization to the empirical risk, which ensures that the resulting estimator is risk-consistent for both linear and nonlinear PDE systems. However, the strong convergence of PINNs to a solution satisfying the physical constraints requires a more involved analysis using tools from functional analysis and the calculus of variations. In particular, for linear PDE systems, an implementable Sobolev-type regularization makes it possible to reconstruct a solution that not only achieves statistical accuracy but also maintains consistency with the underlying physics.
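A schematic form of the regularized objective discussed above (generic notation; the collocation points, weights and the particular penalty are illustrative choices of ours, not the exact estimator of the talk) is \[ \widehat{R}_\lambda(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} \big(u_\theta(x_i) - y_i\big)^2 \;+\; \frac{1}{m}\sum_{j=1}^{m} \big|\mathcal{F}(u_\theta)(z_j)\big|^2 \;+\; \lambda\,\|\theta\|_2^2, \] where $\mathcal{F}(u)=0$ encodes the PDE and boundary conditions, the first term fits the observations, the second enforces the physics at collocation points $z_j$, and the ridge term $\lambda\|\theta\|_2^2$ is the regularization that yields risk consistency; the Sobolev-type variant penalizes derivatives of $u_\theta$ instead of (or in addition to) the parameters.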
Joint work with Nathan Doumèche (Sorbonne Université and EDF) and Gérard Biau (Sorbonne Université).
Posters
A general approximation lower bound in $L^p$ norm, with applications to feed-forward neural networks
El Mehdi Achour
RWTH Aachen University, Germany
We study the fundamental limits to the expressive power of neural networks. Given two sets $F$, $G$ of real-valued functions, we first prove a general lower bound on how well functions in $F$ can be approximated in $L^p(\mu)$ norm by functions in $G$, for any $p \geq 1$ and any probability measure $\mu$. The lower bound depends on the packing number of $F$, the range of $F$, and the fat-shattering dimension of $G$. We then instantiate this bound to the case where $G$ corresponds to a piecewise-polynomial feed-forward neural network, and describe in detail the application to two sets $F$: Hölder balls and multivariate monotonic functions. Besides matching (known or new) upper bounds up to log factors, our lower bounds shed some light on the similarities and differences between approximation in $L^p$ norm and in sup norm, settling an open question by DeVore et al. (2021). Our proof strategy differs from the sup norm case and uses a key probability result of Mendelson (2002).
Joint work with Armand Foucault (Institut de Mathématiques de Toulouse), Sébastien Gerchinovitz (Institut de Recherche Technologique Saint Exupéry) and François Malgouyres (Institut de Mathématiques de Toulouse).
Function reconstruction using determinantal sampling
Ayoub Belhadji
ENS de Lyon, France
The problem of reconstructing a continuous function from discrete samples has stimulated a rich literature. We propose a universal approach to function reconstruction based on repulsive nodes that comes with strong theoretical guarantees and empirical performance. More precisely, we study reconstructions based on nodes that follow the distributions of determinantal point processes adapted to a given reproducing kernel Hilbert space. We prove fast convergence rates that depend on the eigenvalues of the kernel. This unified analysis provides new insights into approximation problems based on determinantal point processes.
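For orientation (standard background in generic notation, not a quotation of the estimator in the abstract): if the reproducing kernel has Mercer decomposition $k(x,y) = \sum_{m \ge 1} \sigma_m\, e_m(x)\, e_m(y)$ with eigenvalues $\sigma_1 \ge \sigma_2 \ge \cdots$, a determinantal point process adapted to the RKHS is, for instance, the projection DPP with correlation kernel \[ K_N(x,y) \;=\; \sum_{m=1}^{N} e_m(x)\, e_m(y), \] whose samples are $N$ repulsive nodes; convergence rates of the associated reconstructions are then naturally governed by the decay of the eigenvalues $(\sigma_m)_{m > N}$.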
Joint work with Rémi Bardenet and Pierre Chainais (Université de Lille, CNRS, Centrale Lille).
Approximation spaces and greedy-like bases
Pablo Manuel Berná
CUNEF Universidad, Spain
Approximation spaces are among the most prominent spaces studied in the field of approximation theory. Here, our purpose is to study the equivalence (under a quasi-norm) between approximation spaces and some greedy-like classes using weights. Thanks to our study, we obtain sufficient conditions under which certain continuous embeddings imply different greedy-type properties. Along the way, we generalize a result by P. Wojtaszczyk as well as characterize semi-greedy Schauder bases in quasi-Banach spaces.
Joint work with Eugenio Hernández (Universidad Autónoma de Madrid) and Hung Viet Chu (University of Illinois at Urbana-Champaign).
Polynomial Approximation of Symmetric Functions
Geneviève Dusson
CNRS & Université Franche-Comté, France
In this poster, I will present a recent work [1] on the polynomial approximation of symmetric multivariate functions and of multiset functions. More precisely, I will consider functions that are invariant with respect to permutations of their $N$ variables, each of which is $d$-dimensional, and I will show how these symmetries can be exploited to improve the cost versus error ratio when the function is approximated by polynomials. I will then mention how these results can be used for deriving approximation rates for functions defined on multisets, that is, where $N$ becomes a parameter of the input.
[1] Bachmayr, M., Dusson, G., Ortner, C., Thomas, J.: Polynomial Approximation of Symmetric Functions, http://arxiv.org/abs/2109.14771, (2021)
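A simple illustration of how such symmetries reduce complexity (the classical univariate picture, recalled as background and not taken from [1]): for $d = 1$, every symmetric polynomial in $x_1, \ldots, x_N$ of degree at most $k$ is a polynomial in the power sums \[ p_j(x_1, \ldots, x_N) \;=\; \sum_{i=1}^{N} x_i^{\,j}, \qquad j = 1, \ldots, \min(k, N), \] so that, once the $p_j$ are evaluated at a cost linear in $N$, the remaining approximation problem involves only $\min(k, N)$ variables rather than $N$.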
Joint work with Markus Bachmayr (RWTH Aachen, Germany), Christoph Ortner (UBC, Canada) and Jack Thomas (University of Warwick, UK).
Non-linear approximation by greedy bases
David González
CEU Universities, Spain
In 1999, S. V. Konyagin and V. N. Temlyakov introduced the Thresholding Greedy Algorithm (TGA). The main idea is the following: given a basis in a Banach space $\mathbb X$ and an element $f\in\mathbb X$, the algorithm selects the coefficients of $f$ that are largest in modulus with respect to the given basis. The notion of greediness plays a central role in this theory: a basis is greedy if the TGA produces, up to a constant, the best possible approximation.
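In the standard notation (recalled here for the reader): writing $f \sim \sum_k \mathbf{e}_k^*(f)\, \mathbf{e}_k$ with respect to the basis $(\mathbf{e}_k)$, the TGA produces \[ \mathcal{G}_m f \;=\; \sum_{k \in \Lambda_m(f)} \mathbf{e}_k^*(f)\, \mathbf{e}_k, \qquad \min_{k \in \Lambda_m(f)} |\mathbf{e}_k^*(f)| \;\ge\; \max_{k \notin \Lambda_m(f)} |\mathbf{e}_k^*(f)|, \quad |\Lambda_m(f)| = m, \] and the basis is greedy when $\|f - \mathcal{G}_m f\| \le C\, \sigma_m(f)$ for all $f$ and $m$, where $\sigma_m(f)$ denotes the error of best $m$-term approximation.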
In this poster, we provide an overview of the main characterizations of greedy bases with a focus on two lines of study: the isometric case, which was initiated by F. Albiac and P. Wojtaszczyk in 2006, and the characterization using polynomials of constant coefficients, which was started by P. M. Berná and O. Blasco in 2017.
Joint work with Pablo M. Berná (CUNEF Universidad, Spain) and Miguel Berasategui (Universidad de Buenos Aires, Argentina).
Low-energy points on the sphere and the real projective plane
Pedro R. López-Gómez
Universidad de Cantabria, Spain
In recent decades, the problem of evenly distributing points on manifolds like spheres and projective spaces has attracted the attention of the mathematical community due to its theoretical interest and its numerous practical applications, and it nowadays constitutes a very active field of research.
In this poster, I will tackle the problem of distributing points on the usual two-dimensional sphere and on the real projective plane. More precisely, I will present a generalization of a family of points on $\mathbb{S}^2$, the Diamond ensemble, containing collections of $N$ points on $\mathbb{S}^2$ with very low logarithmic energy for all $N\in\mathbb{N}$. In addition, I will also show how the ideas for distributing points on $\mathbb{S}^2$ can be extended to the real projective plane, thereby obtaining lower and upper bounds for the Green and logarithmic energies which constitute the best results in that regard thus far.
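For reference (standard definitions, not quoted from the poster): the logarithmic energy of a configuration $\omega_N = \{x_1, \ldots, x_N\} \subset \mathbb{S}^2$ is \[ \mathcal{E}_{\log}(\omega_N) \;=\; \sum_{i \ne j} \log \frac{1}{\|x_i - x_j\|}, \] and the Green energy is defined analogously, with the Green function of the underlying surface in place of the logarithmic kernel.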
Joint work with Carlos Beltrán (Universidad de Cantabria, Spain) and Ujué Etayo (Universidad de Cantabria, Spain).
Potential Theory with Multivariate Kernels on the Sphere
Ryan Matzke
Vanderbilt University, United States of America
While a wealth of theory has been developed for classical energies on the sphere with two-input kernels (such as Riesz kernels), which model pairwise interactions between particles, there has been little to no systematic study of multivariate energies, defined by interactions of triples, quadruples, or even higher numbers of particles, i.e. functionals of the form \[E_K(\omega_N) = \sum_{z_{j_1} \in \omega_N} \sum_{z_{j_2} \in \omega_N \setminus \{z_{j_1}\}} \cdots \sum_{z_{j_n} \in \omega_N \setminus \{z_{j_1}, \ldots, z_{j_{n-1}}\}} K(z_{j_1}, \ldots, z_{j_n})\] for finite point sets $\omega_N = \{ z_1, \ldots, z_N\} \subset \mathbb{S}^d$ and \[I_K(\mu) = \int_{\mathbb{S}^d} \cdots \int_{\mathbb{S}^d} K(x_1, \ldots, x_n)\, d\mu(x_1) \cdots d\mu(x_n),\] for Borel probability measures $\mu$, with $n \geq 3$ and $K: (\mathbb{S}^d)^n \rightarrow \mathbb{R} \cup \{\infty\}$. While less common than energies involving a two-input kernel, multivariate energies have appeared in various settings, including geometric measure theory, materials science, and discrete geometry. Generally, one aims to determine minimizers of these energies, or various properties of minimizers.
For two-input rotationally-invariant kernels, one can decompose $K$ into a sum of Gegenbauer polynomials and study the coefficients to determine if the uniform measure minimizes $I_K$. Likewise, one can use properties of Gegenbauer polynomials to find bounds for the energy via linear programming. We shall discuss recent work to develop analogues of these results for kernels with three or more inputs, and some consequences.
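The classical two-input statement being generalized can be sketched as follows (standard background, in generic notation): writing a rotationally invariant kernel as \[ K(x,y) \;=\; F(\langle x, y \rangle) \;=\; \sum_{n \ge 0} \widehat{F}_n\, C_n^{\lambda}(\langle x, y \rangle), \qquad \lambda = \tfrac{d-1}{2}, \] in Gegenbauer polynomials, non-negativity of the coefficients $\widehat{F}_n$ for $n \ge 1$ implies that the uniform measure minimizes $I_K$, and sign conditions on truncated expansions yield linear programming bounds for $E_K$.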
Joint work with Dmitriy Bilyk (University of Minnesota, USA), Damir Ferizović (Katholieke Universiteit Leuven, Belgium), Alexey Glazyrin (University of Texas Rio Grande Valley, USA), Josiah Park (Texas A & M University, USA) and Oleksandr Vlasiuk (Vanderbilt University, USA).
Chebyshev polynomials in the complex plane
Olof Rubin
Lund University, Sweden
In my poster I will present work on Chebyshev polynomials associated with compact subsets of the complex plane. In their general form, these are polynomials associated with a compact set $\mathsf{E}\subset \mathbb{C}$ in the sense that they are the polynomials \[T_n^{\mathsf{E}}(z) =z^n+\text{lower order terms}\] which have the smallest value of $\max_{z\in \mathsf{E}}|T_n^{\mathsf{E}}(z)|$ among all monic polynomials of degree $n$.
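The model example (classical, included for orientation) is $\mathsf{E} = [-1,1]$, for which \[ T_n^{[-1,1]}(x) \;=\; 2^{1-n} \cos\!\big(n \arccos x\big), \qquad \max_{x \in [-1,1]} \big|T_n^{[-1,1]}(x)\big| = 2^{1-n} \quad (n \ge 1), \] i.e. the suitably normalized classical Chebyshev polynomial; for a general compact set $\mathsf{E}$ of positive capacity one has $\max_{z\in\mathsf{E}}|T_n^{\mathsf{E}}(z)|^{1/n} \to \mathrm{cap}(\mathsf{E})$, and it is finer asymptotics of this kind that the numerics described next address.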
The results we have derived were first conjectured on the basis of numerical data obtained using a complex generalisation of the Remez algorithm due to P. Tang. By using this algorithm to compute Chebyshev polynomials, we have been able to form good conjectures on the corresponding asymptotics for a wide variety of sets, and for certain classes of sets we have been able to prove these results.
I will specifically focus on Chebyshev polynomials associated with sets given as polynomial preimages. I will focus on the process of relating the numerical evidence we have gathered to theoretical concepts and display various conjectures together with the numerical results which suggest that they should be true.
The numerical implementation of the algorithm was carried out jointly with J. Manriquez at Lund University, and the theoretical results were obtained in collaboration with J. Christiansen at Lund University and B. Eichinger at TU Wien.
Joint work with Jacob S. Christiansen (Lund University, Sweden), Benjamin Eichinger (TU Wien, Austria) and Jaime Manriquez (Lund University, Sweden).
Sampling theorems with derivatives in shift-invariant spaces generated by exponential B-splines
Irina Shafkulovska
University of Vienna, Austria
We present sufficient conditions for sampling with derivatives in shift-invariant spaces generated by an exponential B-spline. The sufficient conditions are expressed by a new notion of measuring the gap between consecutive points. As a consequence, we can construct sampling sets arbitrarily close to necessary conditions.
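For the reader's convenience (a standard definition, in our notation): an exponential B-spline of order $m$ with parameters $a_1, \ldots, a_m \in \mathbb{C}$ is the convolution \[ B_{a_1, \ldots, a_m} \;=\; \big(e^{a_1 \cdot}\,\chi_{[0,1]}\big) * \cdots * \big(e^{a_m \cdot}\,\chi_{[0,1]}\big), \] a compactly supported piecewise-exponential function, and the shift-invariant space in question is $V(B) = \{\sum_{k\in\mathbb{Z}} c_k\, B(\cdot-k) : (c_k) \in \ell^2(\mathbb{Z})\}$.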
Joint work with Karlheinz Gröchenig (University of Vienna, Austria).
Nonlinear function approximation using different dimension-incremental strategies
Fabian Taubert
University of Technology Chemnitz, Germany
We present a dimension-incremental algorithm for the nonlinear approximation of high-dimensional functions in an arbitrary bounded orthonormal product basis. Our goal is to detect a suitable truncation of the basis expansion of the function, where the corresponding basis support is assumed to be unknown. Our method is based on point evaluations of the considered function and adaptively builds an index set of a suitable basis support, such that the approximately largest basis coefficients are still included. We also investigate various minor modifications of the algorithm, which may yield additional benefits in several situations. For the first time, we provide a proof of a detection guarantee for such an index set in the function approximation case under certain assumptions on the sub-methods used within our algorithm, which can be used as a foundation for similar statements in various other situations as well. Numerical examples in different settings underline the effectiveness and accuracy of our method.
Joint work with Lutz Kämmerer (University of Technology Chemnitz, Germany) and Daniel Potts (University of Technology Chemnitz, Germany).
A Bernstein-type Result for Stable Approximation with RNNs
Shida Wang
National University of Singapore, Singapore
We prove an inverse approximation theorem for the approximation of nonlinear sequence-to-sequence relationships using RNNs. This is a so-called Bernstein-type result in approximation theory, which deduces properties of a target function under the assumption that it can be effectively approximated by a hypothesis space. In particular, we show that nonlinear sequence relationships, viewed as functional sequences, that can be stably approximated by RNNs with hardtanh/tanh activations must have an exponentially decaying memory structure - a notion that can be made precise. This extends the previously identified curse of memory in linear RNNs into the general nonlinear setting and quantifies the essential limitations of the RNN architecture for learning sequential relationships with long-term memory. Our theoretical results are confirmed by numerical experiments.
Joint work with Qianxiao Li (National University of Singapore) and Zhong Li (MSRA).
On sharp $L^2$ convergence rates for kernel based interpolation
Tizian Wenzel
University of Stuttgart, Germany
We consider the interpolation of functions in reproducing kernel Hilbert spaces (RKHS) based on function values. It is well known that the distribution of the interpolation points has a crucial influence on the accuracy of the interpolant.
First, we deal with direct estimates, where we recall standard $L^2$ convergence rates and show improved $L^\infty$ convergence rates for adaptively selected interpolation points. This convergence analysis relies on the analysis of the pointwise worst-case error over all functions from the RKHS.
Second, we deal with inverse estimates and show that a sufficiently large $L^2$ convergence rate implies that a function is contained in the RKHS or in intermediate Sobolev spaces. In contrast to the previous analysis, we consider here the worst-case error over any set of well-distributed interpolation points.
Altogether this shows a one-to-one correspondence between the smoothness and the approximation rate of a function. The results are illustrated by numerical examples.
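As a minimal concrete instance of the interpolation problem studied here (a generic Gaussian-kernel sketch with parameters of our choosing, not the authors' code):

import numpy as np

def gaussian_kernel(x, y, shape=10.0):
    """Gaussian (RBF) kernel k(x, y) = exp(-shape^2 * (x - y)^2) on the real line."""
    return np.exp(-shape**2 * (x[:, None] - y[None, :])**2)

def kernel_interpolant(nodes, values, shape=10.0):
    """Kernel interpolant s(x) = sum_j c_j k(x, x_j) satisfying s(x_j) = f(x_j)."""
    K = gaussian_kernel(nodes, nodes, shape)
    coeffs = np.linalg.solve(K, values)          # interpolation conditions K c = f
    return lambda x: gaussian_kernel(np.atleast_1d(x), nodes, shape) @ coeffs

# interpolate f(x) = sin(2 pi x) on 15 equispaced nodes and report an L2-type error
f = lambda x: np.sin(2 * np.pi * x)
nodes = np.linspace(0.0, 1.0, 15)
s = kernel_interpolant(nodes, f(nodes))
grid = np.linspace(0.0, 1.0, 1000)
print(np.sqrt(np.mean((f(grid) - s(grid))**2)))  # root-mean-square error on a fine grid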
Joint work with Gabriele Santin (Fondazione Bruno Kessler, Trento, Italy) and Bernard Haasdonk (University of Stuttgart, Stuttgart, Germany).