Computational and Applied Mathematics Seminar: Gregory Ongie, Marquette University, A function space view of infinite-width neural networks
November 3 | 12:45 pm - 1:45 pm EDT
It is well known that nearly any function can be approximated arbitrarily well by a neural network with non-linear activations. However, one cannot guarantee that the weights of the network remain bounded in norm as the approximation error goes to zero, an important consideration when training neural networks in practice. This raises the question: which functions are well approximated by neural networks with bounded-norm weights? In this talk, I will give a partial answer to this question by precisely characterizing the space of functions that can be approximated arbitrarily well by a two-layer neural network with ReLU activations having an unbounded number of units ("infinite width") but whose weights remain bounded in norm. Surprisingly, the characterization involves the Radon transform as used in computational imaging, and I will show how Radon transform analysis yields new insights into function approximation with two-layer and three-layer neural networks.
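As a toy illustration of the objects in the abstract (not material from the talk itself): a two-layer ReLU network computes f(x) = Σᵢ aᵢ·ReLU(wᵢx + bᵢ), and one natural weight norm is the "path norm" Σᵢ |aᵢ|·|wᵢ|. The sketch below, using hypothetical weight choices, shows that |x| is represented exactly by just two ReLU units with path norm 2, so it lies in the bounded-norm function space regardless of how many additional units the network has.

```python
import numpy as np

# Two-layer ReLU network: f(x) = sum_i a_i * relu(w_i * x + b_i).
# Illustrative sketch only: |x| = relu(x) + relu(-x), represented
# exactly by two units, with path norm sum_i |a_i| * |w_i| = 2.
relu = lambda z: np.maximum(z, 0.0)

w = np.array([1.0, -1.0])   # input weights
b = np.array([0.0, 0.0])    # biases
a = np.array([1.0, 1.0])    # output weights

def f(x):
    # evaluate the network at each point of the 1-D array x
    return a @ relu(np.outer(w, x) + b[:, None])

xs = np.linspace(-2.0, 2.0, 9)
assert np.allclose(f(xs), np.abs(xs))  # exact representation of |x|

path_norm = np.sum(np.abs(a) * np.abs(w))
print(path_norm)  # the norm that stays bounded as width grows
```

Adding more units with zero output weight leaves both the function and the path norm unchanged, which is the sense in which "infinite width with bounded norm" is a meaningful limit.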