8th International Conference on
Computational Harmonic Analysis

September 12-16, 2022

Ingolstadt, Germany

"Limitations of Deep Learning for Inverse Problems on Digital Hardware"

Fono, Adalbert

Due to the success of digital machines in many different fields, computational problems are typically tackled on them. An acknowledged but nevertheless neglected aspect of digital computation is its non-exactness: real-world problems often depend on continuous quantities, whereas digital computers can, by construction, represent only rational values exactly. It is therefore crucial to analyse under which circumstances this approximate behaviour leads to verifiably correct results; otherwise, the output of a computation on a digital machine may be wrong without the user of said machine noticing. The mathematical theory of Turing machines and computability provides the necessary tools to study the reliability of digital computations: Turing machines are an idealized model of digital computers and thereby allow us to study the capabilities and limitations of digital devices.

Our goal is the evaluation of deep neural networks from the perspective of computability theory. Since both training and execution are performed on digital hardware, the limitations of digital hardware also apply to deep learning. By identifying boundaries for algorithmic computation on Turing machines, we also describe fundamental limits of deep learning. Our main focus lies on the class of inverse problems, which, in particular, encompasses any task of reconstructing data from measurements. We prove that finite-dimensional inverse problems are not computable on Turing machines for small relaxation parameters, i.e., there does not exist an algorithm on digital hardware that solves inverse problems to any given accuracy. This establishes a conceptual barrier on the capabilities of neural networks for solving finite-dimensional inverse problems when the computations are performed on digital hardware. An important question is whether this constraint stems from the properties of Turing machines or from inherent characteristics of inverse problems.
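To make the non-exactness of digital computation concrete, here is a minimal Python sketch (our illustration, not part of the abstract): even the decimal number 0.1 has no finite binary expansion, so a digital machine stores only a nearby rational value in its place.

```python
from fractions import Fraction

# Fraction(0.1) recovers the exact rational value that the binary
# floating-point representation of 0.1 actually stores.
stored = Fraction(0.1)
exact = Fraction(1, 10)

# The stored value differs from the mathematically exact one:
print(stored == exact)                 # False
print(float(stored - exact))           # tiny but nonzero representation error
```

The error is minuscule for a single value, but it illustrates the principle the abstract builds on: any computation on continuous quantities performed on digital hardware operates on approximations from the very first step.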
It is therefore of interest to study how powerful the signal-processing unit must be to enable an algorithmic solution of inverse problems. We analyse inverse problems under a more general (analog) computation model based on exact real-number calculations, i.e., arbitrary real numbers can be processed and stored. Just as the Turing machine models digital computation, the Blum-Shub-Smale (BSS) machine is a suitable model for analysing the capabilities of analog computation. Intriguingly, under certain conditions the solution maps of inverse problems can be computed on BSS machines without restrictions. Hence, BSS machines (and thereby analog computers) potentially allow for the implementation of more powerful neural networks than their digital counterparts for solving inverse problems.
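As an illustration of a finite-dimensional inverse problem with a relaxation parameter, one prototypical formulation (our choice of example; the abstract does not fix a specific problem class) is the relaxed basis-pursuit problem:

```latex
\min_{x \in \mathbb{R}^N} \|x\|_1
\quad \text{subject to} \quad
\|Ax - y\|_2 \le \varepsilon,
```

where $A \in \mathbb{R}^{m \times N}$ models the measurement process, $y \in \mathbb{R}^m$ is the observed data, and $\varepsilon > 0$ is the relaxation parameter. In this setting, non-computability means that no single algorithm on digital hardware can approximate the solution map to every prescribed accuracy when $\varepsilon$ is small.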
