Deep Learning Seminar

"Approximation theory of neural networks - from the concrete to the abstract"

Voigtlaender, Felix

To understand the performance of a machine learning system, one usually considers three phenomena: generalization, optimization, and approximation (or expressiveness). In this talk, we focus solely on the last of these, in the case of neural networks.

In the concrete part, we present recent results on the approximation-theoretic properties of deep ReLU neural networks that help to explain some of the characteristics of such networks. In particular, we will see that deeper networks can approximate certain classification functions much more efficiently than shallow networks; notably, this is not the case for most smooth activation functions.
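
To convey the flavor of why depth matters here (a heuristic sketch under simplifying assumptions, not the precise statement from the talk): a ReLU network with a single hidden layer of width $N$ computes

$$\Phi(x) = \sum_{i=1}^{N} c_i \, \varrho(\langle a_i, x \rangle + b_i), \qquad \varrho(t) = \max\{0, t\},$$

which is affine on each cell of the hyperplane arrangement $\{x : \langle a_i, x \rangle + b_i = 0\}$, $i = 1, \dots, N$; hence every decision boundary such a network realizes is piecewise flat. Approximating a classifier with a curved decision boundary, such as the indicator function of a Euclidean ball, therefore forces $N$ to grow rapidly with the desired accuracy, whereas additional layers can compose affine pieces and trace the curvature far more economically.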

Specifically, we consider the $L^p(\lambda)$-approximation (with respect to the Lebesgue measure $\lambda$) of piecewise smooth functions, and the $L^p(\mu)$-approximation of smooth functions, where $\mu$ is an arbitrary probability measure. For both classes, we determine the optimal approximation rate in terms of the number of nonzero weights of the network, subject to natural complexity conditions on the individual weights. It turns out that these optimal rates are already attained by networks of fixed depth. More precisely, the smoother the functions to be approximated, the better the achievable approximation rate; to benefit from this improved rate, however, one has to use deeper networks.
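
As a rough indication of the shape such results take (constants and logarithmic factors suppressed; an illustrative template, not a verbatim quote of the talk's theorems): for functions $f$ in the unit ball of $C^\beta([0,1]^d)$, one expects

$$\inf \big\{ \| f - \Phi \|_{L^p} \,:\, \Phi \text{ a ReLU network with at most } W \text{ nonzero weights} \big\} \;\asymp\; W^{-\beta/d},$$

where the fixed depth needed to attain this rate increases with the smoothness $\beta$; networks that are too shallow for the given $\beta$ only achieve a strictly worse exponent.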

In the abstract part of the talk, we introduce certain function spaces -- which we call neural network approximation spaces -- that naturally arise when studying the approximation properties of neural networks. We then discuss several properties of these spaces; in particular, we investigate the existence of embeddings between the neural network approximation spaces and classical function spaces. Although such embeddings exist, we will see that the classical function spaces are not a good fit for describing the approximation capabilities of neural networks. A deeper understanding of the new approximation spaces therefore remains a fascinating topic for future research.
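
For orientation, a natural way to formalize such spaces (following the classical approximation-space construction of DeVore and Lorentz; a plausible instantiation, not necessarily the exact definition used in the talk) is as follows. Writing $\Sigma_n$ for the set of networks with at most $n$ nonzero weights and $E_n(f) := \inf_{\Phi \in \Sigma_n} \| f - \Phi \|_X$ for the best approximation error in a Banach space $X$, the approximation space $A^\alpha_q(X)$ consists of all $f \in X$ with

$$\| f \|_{A^\alpha_q} := \Big( \sum_{n=1}^{\infty} \big[ n^{\alpha} \, E_{n-1}(f) \big]^q \, \tfrac{1}{n} \Big)^{1/q} < \infty$$

(with the usual modification for $q = \infty$), so that membership in $A^\alpha_q(X)$ records that $f$ can be approximated by networks at rate $n^{-\alpha}$.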

The balance between the "concrete" and "abstract" parts of the talk will be determined by the interests of the audience.

This is joint work with Rémi Gribonval, Gitta Kutyniok, Morten Nielsen, and Philipp Petersen.
http://univie.ac.at/projektservice-mathematik/e/talks/Voigtlaender_2019-10_ReLU_Approximation.pdf
