While the theory of compressive sensing (CS) in modern signal processing typically indicates that uniformly random sampling facilitates the efficient recovery of sparse signals, such measurements are infeasible in many engineering applications and are not well reflected by the constraints of natural systems, including neuronal networks in the brain. Uniformly random sampling also does not leverage the underlying structure of many signal classes and may therefore be suboptimal in these cases. We address these issues by formulating a novel neural network framework for learning improved CS sampling based on the intrinsic structure present in classes of training signals. Beyond sparsity in an appropriate domain, this approach assumes no knowledge of specific signal statistics and is purely data-driven. The learning methodology is biologically realistic in that it utilizes (1) asymmetric feedback and feedforward connections in the neural network and (2) only information from adjacent layers in training the CS measurement matrix. We observe a broad spectrum of learned sampling paradigms that improve CS signal reconstruction relative to uniformly random sampling, making the learned sampling widely applicable across diverse logistical constraints. Motivated by the receptive field structure of sensory systems, we specifically analyze natural scene inputs and demonstrate improved CS reconstruction after training under several choices of penalization schemes on the sampling weights. Because this learning is effective even under sparse and spatially localized connectivity constraints, as commonly observed in the brain, we hypothesize that neuronal connectivity may have developed to provide a compressive encoding of data by leveraging its sparse structure, thereby achieving efficient signal transmission.
Victor J. Barranca. "Neural Network Learning of Improved Compressive Sensing Sampling and Receptive Field Structure".
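The abstract contrasts learned sampling with the uniformly random CS baseline. As a minimal, self-contained sketch of that baseline (not the paper's learning framework), the following recovers a sparse signal from uniformly random Gaussian measurements via ISTA, iterative soft thresholding for the l1-regularized least-squares problem. The signal length, measurement count, sparsity level, and regularization weight are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ista(Phi, y, lam=0.005, n_iter=2000):
    """Solve min_x 0.5*||Phi x - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    # Step size 1/L, where L is the Lipschitz constant of the gradient
    # of the quadratic term (squared spectral norm of Phi).
    L = np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)   # gradient of the data-fidelity term
        x = x - grad / L               # gradient step
        # Soft-thresholding (proximal step for the l1 penalty).
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 60, 8                  # signal length, measurements, sparsity (assumed)
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(0.0, 1.0, size=k)

# Uniformly random Gaussian sampling matrix: the baseline the abstract
# argues can be improved upon by data-driven, learned sampling.
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
y = Phi @ x_true                      # compressive measurements

x_hat = ista(Phi, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

A learned sampling scheme, as proposed in the paper, would replace the fixed Gaussian `Phi` with a matrix trained on a class of signals, possibly under sparsity or spatial-locality penalties on its entries.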