Strobl24

More on Harmonic Analysis

June 9th – 15th, 2024

Strobl, AUSTRIA

"Energy Propagation in Scattering Convolution Networks Can Be Arbitrarily Slow"

Getter, Max

Mallat's windowed scattering transform represents an infinitely deep convolutional neural network (DCNN), typically with the modulus as nonlinearity and predetermined filters. Among other desirable properties of such DCNNs, it is particularly important for low computational complexity that the energy contained in the network propagates quickly across the layers. While the energy decays at an exponential rate for all input signals in \(L^2(\mathbb{R}^d)\) when employing uniform covering filters (Czaja & Li, 2019), a similar result is only known for \(d=1\) and Sobolev functions when employing wavelet filters (Wiatowski, Grohs & Bölcskei, 2018). This raises the question of whether exponential energy decay also holds globally on \(L^2(\mathbb{R}^d)\) for the wavelet scattering transform. We analyze how the choice of predetermined filters affects the speed of energy propagation. For a large class of structured filter banks, we show that the energy decay can be arbitrarily slow. Moreover, we prove that for any non-increasing null sequence \((E_N)_{N \in \mathbb{N}}\in \mathbb{R}_{>0}^{\mathbb{N}}\), there is a dense subset of \(L^2(\mathbb{R}^d)\) for which the corresponding network energies are not in \(\mathcal{O}(E_N)\) as the network depth \(N\) goes to infinity. Notably, our results apply to wavelet filters in any dimension, thereby disproving the widespread belief that the energy decays exponentially on all of \(L^2(\mathbb{R}^d)\) for wavelet scattering DCNNs. We complement these findings with positive results by providing rich (filter-dependent) classes of functions whose corresponding network energy decays at a given rate. This is based on joint work with Hartmut Führ.
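For orientation, the layerwise network energy in question can be sketched in the standard notation of the scattering literature (following Mallat and Wiatowski, Grohs & Bölcskei); the symbols \(g_\lambda\), \(U\), \(p\), and \(W_N\) below are illustrative conventions and are not taken from the abstract itself. Given a filter bank \(\{g_\lambda\}_{\lambda \in \Lambda}\), one sets
\[
U[\lambda]f \;=\; |f \ast g_\lambda|, \qquad U[p]f \;=\; U[\lambda_N]\cdots U[\lambda_1]f \quad \text{for a path } p = (\lambda_1,\dots,\lambda_N),
\]
and the energy propagated to layer \(N\) is
\[
W_N(f) \;=\; \sum_{|p| = N} \|U[p]f\|_{L^2}^2.
\]
Exponential energy decay then means \(W_N(f) \le C\, r^N \|f\|_{L^2}^2\) for some \(r \in (0,1)\) independent of \(f\), whereas the negative results above assert that, for suitable dense sets of inputs, \(W_N(f)\) is not even \(\mathcal{O}(E_N)\) for a prescribed null sequence \((E_N)_{N \in \mathbb{N}}\).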
