How do network architecture and the response properties of individual neurons together shape the response of a population of cells? How are the properties of neuronal networks related to the computations they have evolved to perform? To address these fundamental questions of systems neuroscience, we have recently generalized linear response methods previously used to analyze the population responses of specific neuronal networks. This technique lets us describe explicitly how the correlation structure in the output of a network of spiking neurons depends on the synaptic architecture and the response properties of the constituent cells.

Our technique also allows us to address a question that has recently received much attention: what are the magnitude and distribution of correlations in cortex? In particular, Renart, de la Rocha et al. have shown that even in densely connected networks, a balance between excitation and inhibition can lead to a cancellation of correlations and an asynchronous state. However, experimental evidence has so far not conclusively demonstrated whether cortical dynamics are asynchronous. The question is further complicated by recent simulations of a realistic model of visual area V1, which demonstrated that different layers can exhibit differing correlation patterns.

We examine the impact of spatial structure in networks on the statistics of population activity. Although earlier theoretical studies have frequently considered spatially homogeneous networks, it is known that in cortex the probability that two cells make a synaptic contact depends on the physical distance between them. For instance, inhibition is known to act more locally than excitation. We explore the impact of these connectivity patterns by imposing spatial profiles on the synaptic weights in recurrent networks of spiking neurons, and we successfully apply linear response methods to predict the effects of altering the spatial structure of the network.
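To make the two ingredients above concrete, the following is a minimal sketch of the zero-frequency limit of a linear-response calculation: an effective interaction matrix with distance-dependent Gaussian footprints (inhibition narrower than excitation, as described above) is inverted to propagate intrinsic variability into a predicted covariance matrix. All function names, the ring geometry, and the gain and width parameters are illustrative assumptions, not taken from the study itself.

```python
import numpy as np

def ring_distance(n):
    """Pairwise distances between n neurons placed on a ring (periodic boundary)."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    return np.minimum(d, n - d)

def effective_coupling(n, g_exc, g_inh, sig_exc, sig_inh):
    """Effective interaction matrix: broad excitatory minus narrow inhibitory
    Gaussian footprint (hypothetical parameters; sig_inh < sig_exc encodes
    'inhibition acts more locally than excitation')."""
    d = ring_distance(n)
    w = (g_exc * np.exp(-d**2 / (2 * sig_exc**2))
         - g_inh * np.exp(-d**2 / (2 * sig_inh**2)))
    np.fill_diagonal(w, 0.0)  # no self-coupling
    return w

def predicted_covariance(w, c0):
    """Zero-frequency linear-response prediction: intrinsic (uncoupled)
    variances c0 are propagated through the network via (I - W)^{-1}."""
    n = w.shape[0]
    prop = np.linalg.inv(np.eye(n) - w)
    return prop @ np.diag(c0) @ prop.T

if __name__ == "__main__":
    n = 60
    w = effective_coupling(n, g_exc=0.04, g_inh=0.06, sig_exc=8.0, sig_inh=3.0)
    # The expansion is only valid when the network is stable:
    assert np.max(np.abs(np.linalg.eigvals(w))) < 1.0
    cov = predicted_covariance(w, np.ones(n))
```

Changing `sig_exc` and `sig_inh` alters the spatial profile of the predicted covariance, which is the kind of manipulation the abstract refers to when it speaks of predicting the effects of altering the spatial structure of the network.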
Our aim is to help describe how input and architecture determine population activity, a question central to understanding the neural code.