Over the past decade, compressive sensing and deep learning have emerged as viable techniques for reconstructing images from far fewer samples than Shannon's sampling theory dictates. The two methods are fundamentally different. Compressive sensing relies heavily on the existence of a sparsifying transform, such as the discrete wavelet transform. Deep learning, on the other hand, tries to generalize from large amounts of training data and therefore avoids a priori assumptions about the image. For compressive sensing we have good mathematical results that allow us to control the recovery error; the same cannot be said for deep learning, where we have no bounds on the error and it is unclear whether stable recovery is even possible. Such bounds are important because they guarantee stable recovery, which in turn is vital for applications like medical imaging. Without them we risk worst-case scenarios such as a tumor showing up in an MRI scan of a healthy patient who has moved a few millimeters. In this thesis we look for connections between the two fields. We consider algorithms for solving the compressive sensing problem and show that they can be written as neural networks. This allows us, for the first time, to test the stability of these algorithms when exposed to worst-case noise. We find promising indications that certain algorithms are stable. In addition, we observe a speedup of several orders of magnitude when running the algorithms as neural networks.
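To illustrate the kind of connection the thesis refers to, the sketch below unrolls ISTA (iterative soft-thresholding, a standard proximal-gradient method for the LASSO formulation of compressive sensing) as a neural network. This is a minimal assumed example, not necessarily the algorithms studied in the thesis: each ISTA iteration is an affine map followed by a fixed nonlinearity, i.e. exactly one fully connected network layer.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1-norm; plays the role of the
    # layer's (nonlinear) activation function.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_as_network(A, y, lam, n_layers=500):
    """Approximately solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 with ISTA.

    Each iteration x -> soft_threshold(W1 @ x + W2 @ y, lam/L) has the
    form of a fully connected layer with weight matrices (W1, W2) and a
    fixed activation, so n_layers iterations define an n_layers-deep
    network with tied weights.
    """
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    W1 = np.eye(A.shape[1]) - (A.T @ A) / L   # "weights" applied to the iterate x
    W2 = A.T / L                              # "weights" applied to the data y
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):                 # one loop pass = one network layer
        x = soft_threshold(W1 @ x + W2 @ y, lam / L)
    return x
```

Viewed this way, the iterates can be evaluated with standard deep learning frameworks, which is what makes both worst-case (adversarial) stability tests and GPU-accelerated execution directly applicable.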