Published In

SIAM Journal on Applied Dynamical Systems


Neuronal networks vary dramatically in size, connectivity structure, and functionality across downstream layers of the brain. This raises the question of whether information is lost as it is re-encoded along compressive and expansive pathways. In this work, we develop a potential data-driven mechanism for the preservation of information in the activity of neuronal networks across downstream layers, which uses the widespread linearity of individual neuronal responses to sufficiently strong ramped artificial inputs to fit a linear input-output mapping across the network. We analyze the dynamics of several families of two-layer neuronal network models, where the input components far outnumber the downstream neurons, as in compressive pathways, and apply the fitted mapping in conjunction with compressive sensing theory to reconstruct stimuli with sparse structure. The input-output mapping facilitates stimulus reconstructions that use only measurements of downstream neuronal firing rates in response to inputs over a short time duration, furnishing stimulus recovery even when theoretical analysis is intractable or the governing equations of the dynamical system are unknown, as in experiment. Similarly accurate stimulus reconstructions are obtained across different single-neuron models, network coupling functions, and image classes. Reconstructions improve when uniformly random feedforward connectivity is replaced by spatially localized feedforward connectivity akin to receptive fields. We expect that similar principles could be leveraged experimentally in prosthetics as well as in the reconstruction of large-scale network connectivity.
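The reconstruction pipeline the abstract describes can be illustrated with a minimal compressive sensing sketch. Here a random matrix `A` stands in for the fitted linear input-output mapping from stimulus components to downstream firing rates (in the paper this mapping is fit from ramped-input responses; the random stand-in, dimensions, and variable names below are illustrative assumptions), and a sparse stimulus is recovered from fewer measurements than stimulus components via basis pursuit, i.e. l1 minimization subject to the measurement constraint:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Illustrative dimensions: n stimulus components, m < n downstream
# "firing-rate" measurements, k-sparse stimulus.
n, m, k = 60, 30, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)  # stand-in for the fitted linear mapping

# Generate a sparse stimulus and its (noiseless) downstream measurements.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)
y = A @ x_true

# Basis pursuit: minimize ||x||_1 subject to A x = y.
# Standard LP reformulation: write x = u - v with u, v >= 0, so that
# ||x||_1 = sum(u) + sum(v) at the optimum.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

# Error should be near zero when the sparsity level is low enough
# relative to the number of measurements.
print(np.linalg.norm(x_hat - x_true))
```

With a Gaussian measurement matrix and sparsity well below the measurement count, exact recovery is expected with high probability; this is the compressive sensing guarantee that the fitted input-output mapping allows the paper to invoke for neuronal data.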


This work is freely available courtesy of the Society for Industrial and Applied Mathematics.
