"The quest for optimal sampling strategies for learning sparse approximations in high dimensions"Cardenas, JuanLearning an accurate approximation to an unknown function from data is a fundamental problem at the heart of many key tasks in computational science and engineering. It presents various challenges, including: the \textit{curse of dimensionality}, which renders classical approaches poorly suited; the fact that obtaining samples is expensive; the potential for the domain to be irregular; and the fact that the target function may take values in an infinite-dimensional Hilbert space. A highly fruitful way to strive to overcome these challenges is by exploiting the fact that functions arising in such applications often admit approximately \textit{sparse} representations in a given dictionary. Accordingly, the purpose of this work is to examine the following question: \textit{supposing multivariate function has an approximately sparse representation, how many samples suffice to learn such an approximation from data, and how can it be computed?} We focus on two scenarios. First, when the representing dictionary elements are known, with the problem being solved via least squares, and second the substantially more challenging scenario where such elements are unknown. We address this using $\ell^1$-minimization strategies. Our results apply to scalar- and Hilbert-valued functions. We also introduce a novel $\ell^1$-minimization strategy for sparse approximation on irregular domains. |
http://univie.ac.at/projektservice-mathematik/e/talks/Cardenas_2021-06_Submission_ICCHA_JMC.pdf