Session III.5 - Information-Based Complexity

Poster

Data Compression using Lattice Rules for Machine Learning

Kumar Harsha

Osnabrück University, Germany

The mean squared error is one of the standard loss functions in supervised machine learning. However, evaluating this loss on enormous data sets can be computationally demanding. Modifying the approach of J. Dick and M. Feischl [Journal of Complexity 67 (2021), article 101587], we present algorithms to reduce large data sets to a smaller size using rank-1 lattice rules. In this compression strategy, applied as a pre-processing step, each lattice point is assigned a pair of weights representing the original data and responses. As a result, the compressed data makes the repeated loss evaluations in optimization steps much faster. Furthermore, we provide an error analysis for functions represented as exponential Fourier series lying in Wiener algebras and Korobov spaces.
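
The sketch below illustrates the general idea, not the construction of the paper: each node of a rank-1 lattice receives a pair of weights summarizing the data and the responses, after which the MSE of any smooth periodic model can be approximated from the lattice nodes alone. The truncated-Fourier (Dirichlet) kernel, the Fibonacci lattice parameters, and all names are illustrative assumptions.

```python
# Minimal sketch: compress (X_j, y_j), j = 1..N, into weight pairs (w_i, w~_i)
# on the n nodes x_i of a rank-1 lattice, so that
#   (1/N) sum_j (f(X_j) - y_j)^2
#     ~  sum_i w_i f(x_i)^2 - 2 sum_i w~_i f(x_i) + (1/N) sum_j y_j^2.
# Assumes data scaled to [0,1)^d and toy parameters n, z, m.
import numpy as np

def rank1_lattice(n, z):
    """Nodes x_i = frac(i * z / n), i = 0..n-1, of a rank-1 lattice rule."""
    return (np.arange(n)[:, None] * np.asarray(z)[None, :] / n) % 1.0

def dirichlet_kernel(x, m):
    """Product over coordinates of sum_{|k| <= m} e^{2 pi i k x_t}."""
    num, den = np.sin((2 * m + 1) * np.pi * x), np.sin(np.pi * x)
    safe = np.where(np.abs(den) < 1e-12, 1.0, den)
    k1d = np.where(np.abs(den) < 1e-12, 2.0 * m + 1.0, num / safe)
    return np.prod(k1d, axis=-1)

def compress(X, y, nodes, m):
    """One O(nN) pass (batched over X in practice) assigning the weights."""
    n, N = len(nodes), len(X)
    K = dirichlet_kernel((nodes[:, None, :] - X[None, :, :]) % 1.0, m)  # (n, N)
    w = K.sum(axis=1) / (n * N)              # summarizes the data distribution
    w_tilde = (K * y).sum(axis=1) / (n * N)  # summarizes the responses
    return w, w_tilde

def compressed_mse(f_at_nodes, w, w_tilde, y_sq_mean):
    """Approximate MSE of a model from its n values at the lattice nodes."""
    return w @ f_at_nodes**2 - 2.0 * (w_tilde @ f_at_nodes) + y_sq_mean

rng = np.random.default_rng(0)
X = rng.random((10_000, 2))                  # N = 10000 samples in [0,1)^2
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(len(X))
nodes = rank1_lattice(233, [1, 144])         # toy 2-D Fibonacci lattice
w, wt = compress(X, y, nodes, m=4)           # pre-processing, done once
# Each loss evaluation inside an optimizer now costs O(n), not O(N):
mse = compressed_mse(np.sin(2 * np.pi * nodes[:, 0]), w, wt, np.mean(y**2))
```

With this kernel choice the weighted sums reproduce the empirical averages exactly for trigonometric polynomials whose frequencies avoid aliasing on the lattice, which is where the error analysis in Wiener algebras and Korobov spaces enters for general smooth functions.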

Joint work with Michael Gnewuch (Osnabrück University, Germany) and Marcin Wnuk (Osnabrück University, Germany).