"Universal approximation properties of random neural networks"Salanevich, PalinaSingle layer feedforward neural networks (SLFN) have been widely used for solving problems such as classification and regression because of their universal approximation capability. At the same time, iterative methods used for training SLFN suffer from slow convergence, get trapped in local minima, and are sensitivity to the initial choice of parameters. To counter this, both researchers and practitioners aimed to introduce random neural networks, where the number of parameters to be learned is reduced. Based on the original construction of Igelnik and Pao, single layer neural networks with random input-to-hidden layer weights and biases have seen success in practice, but lacked the necessary theoretical justification. In this talk, we start to bridge this gap and provide a non-asymptotic bound on the approximation error for such random neural networks, depending on the number of nodes in the hidden layer. We also discuss how this randomized neural network architecture can be generalized to approximate functions on smooth compact lower-dimensional manifolds and provide theoretical guarantees in both asymptotic and non-asymptotic forms for this generalization. |