Abstract
Recovering signals from undersampled measurements is a well-studied topic in mathematics. During the last decade, many attempts have been made to solve this problem using machine learning, yielding reconstruction models that report remarkable performance. However, recent work has revealed major systematic stability issues with these models, such as instability towards adversarial noise. That is, given an image which a neural network recovers correctly, one can easily construct a tiny perturbation such that the perturbed image produces severe artifacts during recovery. Similar phenomena are well established for classification networks, and several regularization methods for reducing the instabilities of classification networks have subsequently been proposed. In this thesis we investigate Parseval networks, in which every layer is constrained to be a contraction, thus limiting how much a perturbation can be amplified through the network. We adapt these techniques to image reconstruction networks and show that, while we seem to sacrifice some performance, the resulting networks do not exhibit the same instabilities.
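As a rough illustration of the contraction constraint mentioned above (not code from this thesis), the Parseval tightness retraction of Cissé et al. can be sketched in numpy: after each training step, a layer's weight matrix W is pulled towards the set of matrices satisfying W Wᵀ = I, which bounds its spectral norm by 1 so the layer cannot amplify perturbations. The matrix size and the retraction coefficient `beta` below are illustrative choices, not values used in the thesis.

```python
import numpy as np

def parseval_retraction(W, beta=0.5):
    # One step of the Parseval retraction:
    #   W <- (1 + beta) W - beta (W W^T) W
    # Each singular value sigma is mapped to (1 + beta) sigma - beta sigma^3,
    # whose stable fixed point is sigma = 1, i.e. rows become orthonormal.
    return (1 + beta) * W - beta * (W @ W.T) @ W

rng = np.random.default_rng(0)
# A hypothetical layer weight matrix with modest initial scale.
W = rng.normal(scale=0.3, size=(4, 8))

for _ in range(50):
    W = parseval_retraction(W)

# The largest singular value is now (approximately) 1, so the layer
# is a contraction in the l2 norm: ||W x|| <= ||x|| for all x.
print(np.linalg.svd(W, compute_uv=False).max())
```

In practice the retraction is interleaved with gradient updates and applied with a small `beta`, so the weights stay near the constraint set throughout training rather than being projected once at the end.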