Session II.5 - Random Matrices
Talks
Thursday, June 15, 14:00 ~ 15:00
Random perturbation of low-rank matrices
Ke Wang
The Hong Kong University of Science and Technology, Hong Kong
The analysis of large matrices is a key aspect of high-dimensional data analysis, with computing the singular values and vectors of a matrix being a central task. However, real-world data is often disturbed by noise, which affects the essential spectral parameters of the matrix. While classical deterministic theorems can provide accurate estimates for the worst-case scenario, this talk will focus on the case when the perturbation is random. By assuming that the data matrix has low rank, optimal subspace perturbation bounds can be achieved under mild assumptions.
Joint work with Sean O'Rourke (the University of Colorado Boulder, USA), Van Vu (Yale University).
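As a toy numerical illustration of the phenomenon in the abstract above (a sketch, not taken from the talk; the dimensions, signal strength, and Gaussian noise model are illustrative choices), one can check that the leading singular vector of a strong rank-one signal survives an i.i.d. random perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Rank-one signal with a large leading singular value.
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
A = 200.0 * np.outer(u, v)

# i.i.d. Gaussian noise; its operator norm is about 2*sqrt(n) ~ 28.
E = rng.standard_normal((n, n))

# Leading left singular vector of the perturbed matrix.
u_hat = np.linalg.svd(A + E)[0][:, 0]
alignment = abs(u_hat @ u)
print(alignment)  # stays close to 1
```

Deterministic worst-case bounds (e.g. Wedin's theorem) already guarantee alignment in this easy regime; the talk's point is that randomness yields sharper subspace bounds under much milder assumptions.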
Thursday, June 15, 15:00 ~ 15:30
Spectral stability under random perturbations
Jorge Garza-Vargas
Caltech, United States of America
In this talk I will discuss the following random matrix phenomenon (relevant in numerical diagonalization): if one adds independent (tiny) random variables to the entries of an arbitrary deterministic matrix A, with high probability, the resulting matrix A′ will have (relatively) stable eigenvalues and eigenvectors. More concretely, I will explain the key ideas behind obtaining tail bounds for the eigenvector condition number and minimum eigenvalue gap of a deterministic matrix that has been perturbed by a (tiny) random matrix with independent entries, each having an absolutely continuous distribution.
Joint work with Jess Banks (UC Berkeley), Archit Kulkarni (UC Berkeley) and Nikhil Srivastava (UC Berkeley).
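A minimal experiment (illustrative only, not from the talk) showing this regularization effect: a Jordan block is maximally non-normal, with a single defective eigenvalue and no eigenvalue gap at all, yet a tiny random perturbation splits it into well-separated simple eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20

# Nilpotent Jordan block: one defective eigenvalue 0, gap 0.
J = np.diag(np.ones(n - 1), k=1)

# Tiny perturbation with independent Gaussian entries.
delta = 1e-8
A = J + delta * rng.standard_normal((n, n))

# The eigenvalues spread out over a circle of radius roughly
# delta**(1/n), so the minimum gap becomes macroscopic.
eigvals = np.linalg.eigvals(A)
gap = min(abs(a - b) for i, a in enumerate(eigvals)
          for b in eigvals[i + 1:])
print(gap)
```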
Thursday, June 15, 15:30 ~ 16:00
Predictability and universality in numerical computation via orthogonal polynomials and local laws
Thomas Trogdon
University of Washington, United States
Many iterative algorithms from numerical linear algebra (Krylov subspace methods, in particular) connect directly to polynomials orthogonal with respect to a measure generated by a matrix and a vector, the so-called eigenvector empirical spectral distribution (VESD). The analysis of these algorithms can often be reduced to understanding properties of these orthogonal polynomials. The broad goal of this talk is to use this connection to describe the behavior of these algorithms when applied to random matrices.
For many random matrix distributions, the limiting VESD is known (or at least characterized) via a local law, yet the ill-conditioned nature of the mapping from measure to orthogonal polynomials appears to limit the conclusions one can make. To partially overcome this difficulty, we develop a new Riemann--Hilbert-based perturbation theory for orthogonal polynomials that takes the local law as input. This allows one to make precise statements about the random polynomials as their degree diverges, and therefore, about algorithms applied to random matrices as the number of iterations diverges. Similar technology allows one to understand the behavior of some of these algorithms in finite-precision arithmetic.
Joint work with Percy Deift (NYU), Xiucai Ding (UC Davis), Tyler Chen (NYU) and Elliot Paquette (McGill University).
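The connection mentioned in the abstract can be made concrete with the Lanczos iteration: its recurrence coefficients form the Jacobi (tridiagonal) matrix of the polynomials orthogonal with respect to the VESD of a matrix-vector pair. A sketch (the Wigner test matrix and all sizes are illustrative choices):

```python
import numpy as np

def lanczos(A, b, k):
    """k Lanczos steps (with full reorthogonalization); returns the
    k x k Jacobi matrix encoding the three-term recurrence of the
    orthogonal polynomials of the VESD of the pair (A, b)."""
    n = len(b)
    Q = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # orthogonalize
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)

rng = np.random.default_rng(2)
n = 500
G = rng.standard_normal((n, n))
A = (G + G.T) / np.sqrt(2 * n)   # Wigner matrix, spectrum ~ [-2, 2]
b = rng.standard_normal(n)

ritz = np.linalg.eigvalsh(lanczos(A, b, 30))
print(ritz[0], ritz[-1])         # near the spectral edges -2 and 2
```

The Ritz values (eigenvalues of the Jacobi matrix) are the roots of the VESD's orthogonal polynomials, and for a Wigner matrix a few dozen iterations already locate the spectral edges.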
Thursday, June 15, 16:30 ~ 17:30
Random sparse Hamiltonians and quantum advantage
Joel Tropp
Caltech, USA
This talk introduces a simple and elegant random matrix ensemble, called a sparse random Hamiltonian. This model contains very little randomness, yet the spectrum is close to the semicircular distribution. A quantum computer can reliably produce a low-energy state of this type of random Hamiltonian, while it is unlikely that the same task is tractable for a classical computer. As a consequence, this ensemble provides evidence of a potential "quantum advantage".
The proof of these claims involves new tools from nonasymptotic random matrix theory. In particular, a novel application of the Lindeberg principle yields a comparison between the sparse random Hamiltonian and a GUE matrix. The talk focuses on the probabilistic aspects and proof ideas, and no prior knowledge of quantum computation is assumed.
Joint work with Chi-Fang (Anthony) Chen (Caltech, AWS), Alexander M. Dalzell (AWS, Caltech), Mario Berta (Aachen) and Fernando G. S. L. Brandao (Caltech).
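A hedged sketch of this kind of ensemble, assuming a normalized sum of uniformly random Pauli strings with independent sign coefficients (the precise model and normalization in the talk may differ): even modest parameters give low spectral moments close to those of the semicircle law on [-2, 2] (second moment 1, fourth moment 2).

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(3)

# Single-qubit Pauli matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

q, m = 7, 40                     # 7 qubits (dimension 128), 40 terms

def random_pauli_string():
    # Tensor product of q independently chosen Pauli matrices.
    return reduce(np.kron, [paulis[i] for i in rng.integers(0, 4, q)])

# Sparse random Hamiltonian: normalized sum of random Pauli strings
# with independent random signs.
H = sum(rng.choice([-1, 1]) * random_pauli_string() for _ in range(m))
H /= np.sqrt(m)

eigs = np.linalg.eigvalsh(H)
print(np.mean(eigs**2), np.mean(eigs**4))   # semicircle: 1 and ~2
```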
Thursday, June 15, 17:30 ~ 18:30
The Conditional DPP Approach to Random Matrix Distributions
Alan Edelman
MIT, United States
We present the conditional determinantal point process (DPP) approach to obtain new (mostly Fredholm determinantal) expressions for various eigenvalue statistics in random matrix theory. Our formulae are highly amenable to numerical computations. Several numerical values that required hours of computing time can now be computed in seconds with our expressions. We also demonstrate that our technique can be applied to sampling of DR paths of the Aztec diamond domino tiling.
Joint work with Sungwoo Jeong.
Friday, June 16, 14:00 ~ 14:30
TBA
Konstantin Tikhomirov
Georgia Tech, USA
TBA
Friday, June 16, 14:30 ~ 15:00
TBA
Elizaveta Rebrova
Princeton, USA
TBA
Friday, June 16, 15:00 ~ 16:00
Randomization in computational linear algebra
Gunnar Martinsson
University of Texas at Austin, United States of America
The talk will describe how ideas from random matrix theory can be leveraged to effectively, accurately, and reliably solve important problems that arise in data analytics and large scale matrix computations. We will focus in particular on accelerated techniques for computing low rank approximations to matrices. These techniques rely on randomized embeddings that reduce the effective dimensionality of intermediate steps in the computation. The resulting algorithms are particularly well suited for processing very large data sets.
The talk will also survey how randomization can be used to accelerate the solution of certain linear systems, and to minimize the movement of data in standard linear algebraic computations.
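A minimal sketch of the randomized embedding idea for low-rank approximation (the classical Gaussian range-finder pattern; the function name and parameters below are illustrative, not a specific library API):

```python
import numpy as np

def randomized_low_rank(A, k, p=10, seed=0):
    """Gaussian range finder + SVD: a rank-k approximation of A
    using a random embedding with oversampling p (sketch)."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + p))  # random embedding
    Q, _ = np.linalg.qr(A @ Omega)                    # range basis
    B = Q.T @ A                                       # small (k+p) x n
    Us, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Us[:, :k], s[:k], Vt[:k]

# Test matrix with rapidly decaying singular values 2**-j.
rng = np.random.default_rng(1)
m, n, k = 300, 200, 10
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
A = (U * s) @ V.T

Uk, sk, Vtk = randomized_low_rank(A, k)
err = np.linalg.norm(A - (Uk * sk) @ Vtk, 2)
print(err)   # comparable to the best rank-10 error, 2**-10 ~ 1e-3
```

The sketch A @ Omega reduces the effective dimension from n to k + p, so all subsequent dense factorizations act on small matrices, while the approximation error stays comparable to the (k+1)-st singular value.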
Friday, June 16, 16:30 ~ 17:30
Asymptotic expansions relating to longest increasing subsequences in random permutations and analytic de-Poissonization
Folkmar Bornemann
Technische Universität München, Germany
In a seminal work, Baik/Deift/Johansson proved the limit distribution of the length of longest increasing subsequences in random permutations to be the GUE Tracy-Widom distribution. Since the rate of approximation is rather slow, we improve upon this limit by establishing an asymptotic expansion. This is done in two steps: expanding the limit of the Poissonized distribution as the hard-to-soft edge transition limit of LUE, followed by analytic de-Poissonization (replacing Johansson’s monotonicity-based de-Poissonization which would not allow us to go beyond the leading order). The proof is subject to a hypothesis on the zeros of the Poissonized distribution for complex intensities. Unexpectedly, all the concretely calculated expansion terms (up to the tenth order correction so far) take the form of a linear combination of higher-order derivatives of the Tracy-Widom distribution with certain rational polynomials as coefficients.
Friday, June 16, 17:30 ~ 18:00
The Ginibre random matrix ensemble with point insertions: droplets and mother bodies
Arno Kuijlaars
Katholieke Universiteit Leuven, Belgium
A basic result of random matrix theory is the circular law: the eigenvalues of complex matrices with i.i.d. entries fill out a disk as the size of the matrices increases. In the case of the Ginibre ensemble with point insertions there is a repulsion of eigenvalues away from the inserted points. In the large-size limit the eigenvalues fill out a region known as the droplet, which is different from a disk.
The zeros of the average characteristic polynomial tend to a certain contour inside the droplet, and their limiting zero counting measure is called the mother body. We study the behavior of the droplet and the mother body in some simple cases. The aim is to characterize the droplet and its changes in topology.
Joint work with Sampad Lahiry (KU Leuven, Belgium).
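For context, the circular law itself (without point insertions, where the droplet is just the unit disk) is easy to observe numerically; a sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000

# Complex Ginibre matrix, scaled so the limiting support is the unit disk.
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
G /= np.sqrt(2 * n)

radii = np.abs(np.linalg.eigvals(G))
print(radii.max())            # ~1: the spectral edge of the disk
print(np.mean(radii <= 0.5))  # ~0.25: density is uniform on the disk
```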
Friday, June 16, 18:00 ~ 18:30
Extreme singular values of inhomogeneous, sparse, rectangular random matrices
Ioana Dumitriu
University of California, San Diego, USA
We develop a unified approach to bounding the largest and smallest singular values of an inhomogeneous random rectangular matrix, based on the non-backtracking operator and the Ihara-Bass formula for general Hermitian matrices with a bipartite block structure. Our main results are probabilistic upper/lower bounds for the largest/smallest singular values of a large rectangular random matrix $X$. These bounds are given in terms of the maximal and minimal $\ell_2$-norms of the rows and columns of the variance profile of $X$. The proofs involve finding probabilistic upper bounds on the spectral radius of an associated non-backtracking matrix $B$.
The two-sided bounds can be applied to the centered adjacency matrix of sparse inhomogeneous Erdős-Rényi bipartite graphs for a wide range of sparsity. In particular, for Erdős-Rényi bipartite graphs $G(n,m,p)$ with $p = \omega(\log n)/n$ and $m/n \rightarrow y \in (0,1)$, our sharp bounds imply that there are no outliers outside the support of the Marchenko-Pastur law almost surely.
Joint work with Yizhe Zhu (University of California, Irvine).
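The no-outliers statement can be illustrated in the simplest dense Gaussian case (a sketch; the talk's sparse inhomogeneous setting is far more delicate):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 2000, 500              # aspect ratio y = m/n = 0.25
y = m / n

# Normalized dense Gaussian matrix: singular values follow the
# Marchenko-Pastur law with edges 1 -+ sqrt(y) and no outliers.
X = rng.standard_normal((n, m)) / np.sqrt(n)
s = np.linalg.svd(X, compute_uv=False)

print(s.max(), 1 + np.sqrt(y))   # both ~1.5
print(s.min(), 1 - np.sqrt(y))   # both ~0.5
```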
Saturday, June 17, 14:00 ~ 15:00
A new weak Szemeredi regularity lemma, and the covariance loss
Roman Vershynin
University of California, Irvine, U.S.A.
We will prove a new type of weak Szemeredi regularity lemma. It states that a positive semidefinite matrix with bounded diagonal can be decomposed into a small number of constant blocks, up to a small error in the Frobenius norm. The proof utilizes the probabilistic method, specifically a randomized rounding mechanism based on Grothendieck's identity. The new regularity lemma implies a nearly tight bound on the covariance loss: the amount of covariance that is lost by taking conditional expectations of random vectors. I will try to present the proof of the regularity lemma, which is simple and educational.
Joint work with March Boedihardjo (ETH Zurich) and Thomas Strohmer (UC Davis).
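Grothendieck's identity, the engine of the randomized rounding step, states that for unit vectors $u, v$ and a standard Gaussian vector $g$, $\mathbb{E}[\mathrm{sign}\langle g,u\rangle \, \mathrm{sign}\langle g,v\rangle] = \frac{2}{\pi}\arcsin\langle u,v\rangle$. A quick Monte Carlo check (illustrative vectors and sample size):

```python
import numpy as np

rng = np.random.default_rng(6)
d, trials = 5, 200_000

# Two unit vectors with inner product 0.6.
u = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
v = np.array([0.6, 0.8, 0.0, 0.0, 0.0])

# Randomized rounding: project on a random Gaussian direction, keep the sign.
g = rng.standard_normal((trials, d))
empirical = np.mean(np.sign(g @ u) * np.sign(g @ v))

print(empirical, 2 / np.pi * np.arcsin(u @ v))  # the two agree
```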
Saturday, June 17, 15:00 ~ 15:30
TBA
Pierre Youssef
NYU Abu Dhabi, UAE
TBA
Saturday, June 17, 15:30 ~ 16:00
Equilibrium measures for attractive-repulsive power law interactions
Sheehan Olver
Imperial, UK
We discuss numerical methods for computing equilibrium measures, which are of significant importance in random matrix theory, and extensions to the case of attractive-repulsive interactions, in particular where the particles interact according to a power law. This includes new recurrence relationships for power-law kernels applied to weighted orthogonal polynomials.
Joint work with José Carrillo (Oxford) and Timon Gutleb (Oxford).
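A classical special case of such equilibrium problems (Stieltjes' electrostatic interpretation, included here as an illustrative check, not from the talk): the roots of the degree-$N$ Hermite polynomial are the equilibrium positions of $N$ unit charges with logarithmic repulsion in the external field $V(x) = x^2$.

```python
import numpy as np
from numpy.polynomial.hermite import hermroots

# Roots of the degree-N (physicists') Hermite polynomial.
N = 12
x = hermroots([0.0] * N + [1.0])

# Stieltjes: the equilibrium condition is
# sum_{j != i} 1/(x_i - x_j) = x_i for every charge i.
residual = max(abs(sum(1.0 / (x[i] - x[j]) for j in range(N) if j != i) - x[i])
               for i in range(N))
print(residual)  # numerically zero
```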
Posters
A random matrix perspective on random tensors
Henrique Goulart
Toulouse INP / IRIT, France
Recent years saw substantial progress on the statistical limits of the so-called tensor PCA problem, where one wants to recover a planted signal vector $x$ which gives rise to a symmetric rank-one tensor $x^{\otimes d}$ from observations corrupted by Gaussian noise. However, some of the most significant results are largely based on rather technical tools coming from spin glass theory, thus limiting their accessibility. Moreover, relevant extensions to more general and/or structured tensor models are likely difficult with this approach. Our work proposes a sharply distinct and somewhat more elementary approach, relying on tools from random matrix theory. The key idea is to study random matrices arising from contractions of a large random tensor, which give access to its spectral properties. In particular, for a symmetric $d$th-order rank-one model with Gaussian noise, this yields a novel characterization of the asymptotic performance of maximum likelihood (ML) estimation in terms of a fixed-point equation valid in the regime where weak recovery is possible. For $d=3$, the solution to this equation matches the existing results. We conjecture that the same holds for any order $d$, based on numerical evidence for $d \in \{4,5\}$. Our approach can be (and has been) extended to other models, including asymmetric and non-Gaussian ones.
Joint work with Romain Couillet (Université Grenoble Alpes, France) and Pierre Comon (Université Grenoble Alpes, CNRS, France).
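The key contraction idea can be sketched numerically for $d = 3$ (illustrative parameters; for simplicity the contraction direction below is the planted vector itself, whereas the actual analysis works with directions obtained from the data):

```python
import numpy as np

rng = np.random.default_rng(7)
n, beta = 100, 8.0            # dimension and signal-to-noise ratio

# Planted rank-one tensor plus Gaussian noise.
x = rng.standard_normal(n); x /= np.linalg.norm(x)
T = beta * np.einsum('i,j,k->ijk', x, x, x) \
    + rng.standard_normal((n, n, n)) / np.sqrt(n)

# Contract along a direction correlated with the signal (here x itself,
# purely for illustration): the result is a spiked random matrix.
M = np.einsum('ijk,k->ij', T, x)
M = (M + M.T) / 2

# Its top eigenvector aligns with the planted vector.
V = np.linalg.eigh(M)[1]
overlap = abs(V[:, -1] @ x)
print(overlap)  # close to 1
```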
Modified Multivariate Hawkes Process for Brain Temporal Networks
Jaehee Kim
Duksung Women's University, South Korea
Hawkes first proposed the linear Hawkes process (Hawkes, 1971) to model earthquakes and their aftershocks. We review linear, nonlinear, high-dimensional, and spatial Hawkes processes and develop a modified Hawkes process for the brain's temporal networks. Funding: This work was supported by the National Research Foundation of Korea (NRF) grant (BRL No. 2021R1A4A5028907) and Basic Research (No. 2021R1F1A1054968).
Pseudospectral Shattering for Generalized Eigenvalues and Inverse-Free Matrix Pencil Diagonalization
Ryan Schneider
University of California San Diego, United States
We present a randomized, inverse-free algorithm for producing an approximate diagonalization of any $n \times n$ matrix pencil $(A,B)$. The bulk of the algorithm rests on a randomized divide-and-conquer eigensolver for the generalized eigenvalue problem originally proposed by Ballard, Demmel, and Dumitriu [Technical Report 2010]. We demonstrate that this divide-and-conquer approach can be formulated to succeed with high probability as long as the input pencil is sufficiently well-behaved. This is accomplished by generalizing the recent pseudospectral shattering work of Banks, Garza-Vargas, Kulkarni, and Srivastava [Foundations of Computational Mathematics 2022]. The resulting randomized algorithm (with high probability and in exact arithmetic) produces invertible $S,T$ and diagonal $D$ such that $||A - SDT^{-1}||_2 \leq \varepsilon$ and $||B - SI_nT^{-1}||_2 \leq \varepsilon$ in at most $O \left( \log(n) \log^2 \left( \frac{n}{\varepsilon} \right) T_{\text{MM}}(n) \right)$ operations, where $T_{\text{MM}}(n)$ is the complexity of $n \times n$ matrix multiplication. This not only provides a new set of guarantees for highly parallel generalized eigenvalue solvers but also establishes nearly matrix multiplication time as an upper bound on the complexity of exact arithmetic matrix pencil diagonalization.
Joint work with James Demmel (University of California Berkeley) and Ioana Dumitriu (University of California San Diego).