Speakers
Semi-plenary speakers
- Bach, Francis - INRIA, France
- Loh, Po-Ling - University of Cambridge, UK
Invited speakers
- Ablin, Pierre - Apple Research, France
- Altschuler, Jason - NYU, United States
- Austern, Morgane - Harvard, United States
- Cucuringu, Mihai - Oxford, UK
- Cuturi, Marco - Apple Research, France
- Ling, Shuyang - NYU Shanghai, China
- Loureiro, Bruno - ENS Paris, France
- Mixon, Dustin - Ohio State, United States
- Qu, Qing - University of Michigan, United States
- Rebrova, Liza - Princeton, United States
- Sulam, Jeremias - Johns Hopkins, United States
Preliminary program
This schedule is preliminary and may be updated.
Thursday, June 15
14:30 ~ 15:00 | On structured linear measurements for tensor data recovery | Liza Rebrova - Princeton University, USA
15:00 ~ 15:30 | Dimension-free limits of stochastic gradient descent for two-layer neural networks | Bruno Loureiro - École Normale Supérieure, France
15:30 ~ 16:00 | To split or not to split, that is the question: From cross-validation to debiased machine learning | Morgane Austern - Harvard University, United States
16:30 ~ 17:00 | Spectral methods for clustering signed and directed networks and heterogeneous group synchronization | Mihai Cucuringu - University of Oxford, United Kingdom
17:00 ~ 17:30 | Bilipschitz invariants | Dustin Mixon - The Ohio State University, USA
17:30 ~ 18:00 | Near-Optimal Bounds for Generalized Orthogonal Procrustes Problem via Generalized Power Method | Shuyang Ling - New York University Shanghai, China
Friday, June 16
14:30 ~ 15:00 | The Monge Gap: A Regularizer to Learn Optimal Transport Maps | Marco Cuturi - Apple / CREST-ENSAE, France
15:00 ~ 15:30 | Shifted divergences for sampling, privacy, and beyond | Jason Altschuler - NYU / UPenn, USA
15:30 ~ 16:00 | Bilevel optimization for machine learning | Pierre Ablin - Apple, France
16:30 ~ 17:30 | Information theory through kernel methods | Francis Bach - Inria / École Normale Supérieure, France
Saturday, June 17
15:00 ~ 15:30 | Neural Networks as Sparse Local Lipschitz Functions: Robustness and Generalization | Jeremias Sulam - Johns Hopkins University, United States
15:30 ~ 16:00 | Understanding Deep Representation Learning via Neural Collapse | Qing Qu - University of Michigan, United States
16:30 ~ 17:30 | Robust regression revisited | Po-Ling Loh - University of Cambridge, United Kingdom
Posters
- Naive imputation implicitly regularizes high-dimensional linear models | Alexis Ayme - LPSM, Sorbonne Université, France
- Exact Generalization Guarantees for (Regularized) Wasserstein Distributionally Robust Optimization | Waïss Azizian - LJK, Université Grenoble Alpes, France
- Some theoretical properties of physics-informed neural networks | Nathan Doumèche - Sorbonne Université, France
- Accelerating Stochastic Gradient Descent | Kanan Gupta - Texas A&M University, United States of America
- Langevin Quasi-Monte Carlo | Sifan Liu - Stanford University, USA
- Two-timescale regime for global convergence of neural networks | Pierre Marion - Sorbonne Université, France
- Near-optimal learning of Banach-valued, high-dimensional functions via deep neural networks | Sebastian Moraga Scheuermann - Simon Fraser University, Canada
- Trimmed sample means for robust uniform mean estimation and regression | Lucas Resende - IMPA (Instituto de Matemática Pura e Aplicada), Brazil
- Convergence rates for Lipschitz learning on very sparse graphs | Tim Roith - Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
- Regularization properties of dropout | Anna Shalova - Eindhoven University of Technology, Netherlands
- Identifiability of Gaussian Mixtures from their sixth moments | Alexander Taveira Blomenhofer - CWI, Amsterdam, The Netherlands
- Conformal Prediction with Missing Values | Margaux Zaffran - INRIA and Ecole Polytechnique, France