Conceived and designed the experiments: VP BS SC SR. Performed the experiments: VP. Wrote the paper: VP BS SC SR. Supervised the analysis: BS SC SR.
The authors have declared that no competing interests exist.
Networks are becoming a ubiquitous metaphor for the understanding of complex biological systems, spanning the range between molecular signalling pathways, neural networks in the brain, and interacting species in a food web. In many models we face an intricate interplay between the topology of the network and the dynamics of the system, which is generally very hard to disentangle. A dynamical feature that has been the subject of intense research in various fields is the correlation between the noisy activities of nodes in a network. We consider a class of systems where discrete signals are sent along the links of the network. Such systems are of particular relevance in neuroscience, because they provide models for networks of neurons that use action potentials for communication. We study correlations in dynamic networks with arbitrary topology, assuming linear pulse coupling. With our novel approach, we are able to understand in detail how specific structural motifs affect pairwise correlations. Based on a power series decomposition of the covariance matrix, we describe the conditions under which very indirect interactions have a pronounced effect on correlations and population dynamics. In random networks, we find that indirect interactions may lead to a broad distribution of activation levels with a low average but highly variable correlations. This phenomenon is even more pronounced in networks with distance-dependent connectivity. In contrast, networks with highly connected hubs or patchy connections often exhibit strong average correlations. Our results are particularly relevant in view of new experimental techniques that enable the parallel recording of spiking activity from a large number of neurons, an appropriate interpretation of which is hampered by the currently limited understanding of structure-dynamics relations in complex networks.
Many biological systems have been described as networks whose complex properties influence the behaviour of the system. Correlations of activity in such networks are of interest in a variety of fields, from gene-regulatory networks to neuroscience. Owing to novel experimental techniques that allow the recording of the activity of many pairs of neurons, and because of their importance for the functional interpretation of spike data, spike train correlations in neural networks have recently attracted considerable attention. Although the origin and function of these correlations are not known in detail, they are believed to have a fundamental influence on information processing and learning. We present a detailed explanation of how recurrent connectivity induces correlations in local neural networks and how structural features affect their size and distribution. We examine under which conditions network characteristics such as distance-dependent connectivity, hubs or patches markedly influence correlations and population signals.
Analysis of networks of interacting elements has become a tool for system analysis in many areas of biology, including the study of interacting species
The connection between correlations and structure is of special interest in neuroscience. First, correlations between neural spike trains are believed to play an important role in information processing
Since recurrent connections represent a substantial part of connectivity, it has been proposed that correlations originate to a large degree in the convergence and divergence of direct connectivity and common input
In recent theoretical work recurrent effects have been found to be an important factor in correlation dynamics and can account for decorrelation
In
We find that variations in synaptic topology can substantially influence correlations. We present several scenarios for characteristic network architectures, which show that different connectivity patterns affect correlations predominantly through their influence on statistics of indirect connections. An influential model for local neural populations is the random network model
Part of this work has been published in abstract form
In order to study correlations in networks of spiking neurons with arbitrary connectivity we use the theory derived in
We will use capital letters for matrices and lower case letters for matrix entries, for example
Symbol | Description
------ | -----------
 | spike train vector
 | rate vector
 | interaction kernel matrix, elements
 | external input
 | matrix of integrated kernels, elements
 | diagonal rate matrix, elements; average rate
 | covariance density function matrix, elements
 | time lag
 | integrated covariance density matrix, elements
 | spike counts
 | bin size
 | population count variance
 | number of neurons
 | number of excitatory/inhibitory neurons
 | connection probability
 | excitatory/inhibitory integrated interaction kernel
 | average correlation contribution of order
 | output connections from neuron type
 | input connections to neuron type
 | average interaction
 | average common input
 | radius of bulk spectrum
 | average correlation
 | distance
 | half width of boxcar profile
 | height of boxcar profile
 | average out degree
 | fraction of hub-to-hub connections
 | connection probability in patch
 | patch size
Our networks consist of
where
The effect of presynaptic spikes at time
where we denoted the expectation
where
Network parameters are
We describe correlations between spike trains by the covariance density matrix
and corresponds to the probability of finding a spike after a time lag
for
is known, (6) can be solved and the Fourier transform of the cross covariance density
The definition of the Fourier transform implies that
The rate Equation (4) becomes with these definitions
Equation (8) describes the time-dependent correlation functions of an ensemble of linearly interacting units. In this work we concentrate on purely structure-related phenomena under stationary conditions. Therefore we focus on the integrated covariance densities, which are described by Equation (9). Differences in the shape of the interaction kernels which do not alter the integral do not affect our results. One example is the effect of delays, which only shift interaction kernels in time. Furthermore we restrict ourselves to systems where all eigenvalues
The matrix elements
The integrated cross-correlations
see for example
Strictly speaking, this is only true in the limit of infinitely large bin size. However, the approximation is good for counting windows that are large compared to the temporal width of the interaction kernel. In this sense, the sum of the correlations is a measure of the fluctuations of the population activity. Another widely used measure of correlation is the correlation coefficient,
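As an illustration of the count-based measures above, the correlation coefficient of binned spike counts can be estimated directly from a pair of spike trains. The following sketch is purely illustrative and is not part of the model; the shared Poisson-like source is only a device to produce a known correlation of about one half:

```python
import numpy as np

def count_correlation(spikes_i, spikes_j, t_max, bin_size):
    """Correlation coefficient of spike counts in bins of width bin_size."""
    edges = np.arange(0.0, t_max + bin_size, bin_size)
    n_i, _ = np.histogram(spikes_i, bins=edges)
    n_j, _ = np.histogram(spikes_j, bins=edges)
    c = np.cov(n_i, n_j)                    # 2x2 count covariance matrix
    return c[0, 1] / np.sqrt(c[0, 0] * c[1, 1])

# Two trains that share half of their spikes with a common source should
# have a count correlation near 0.5.
rng = np.random.default_rng(0)
common = rng.uniform(0.0, 1000.0, size=5000)
train_i = np.concatenate([common, rng.uniform(0.0, 1000.0, size=5000)])
train_j = np.concatenate([common, rng.uniform(0.0, 1000.0, size=5000)])
rho = count_correlation(train_i, train_j, t_max=1000.0, bin_size=5.0)
```

Note that, as discussed above, the estimate depends on the bin size relative to the width of the correlation functions.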
We simulated networks of linearly interacting point processes in order to illustrate the theory,
Simulations of linearly interacting point processes were conducted using the NEST simulator
In this section we address how recurrent connectivity affects rates and correlations. Mathematically, the kernel matrix
With the shorthand
Equation (9) becomes
where the rates are given by (10),
The terms of this series describe how the rates result from external and recurrent input. The matrix
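The rate expansion can be checked numerically. The sketch below assumes a small random kernel matrix G, rescaled so that its spectral radius lies below one (all names and parameter values are illustrative); it compares the closed-form stationary rates with the truncated series over paths of increasing length:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
# Illustrative kernel matrix G, rescaled to spectral radius 0.5 so that
# all eigenvalues lie inside the unit circle and the series converges.
G = rng.normal(0.0, 1.0, size=(N, N))
G *= 0.5 / np.max(np.abs(np.linalg.eigvals(G)))
x = rng.uniform(0.5, 1.5, size=N)           # external input rates

# Closed-form stationary rates: r = (I - G)^{-1} x
r_exact = np.linalg.solve(np.eye(N) - G, x)

# Truncated series sum_n G^n x: the order-n term is the external input
# relayed over all paths of length n.
r_series = np.zeros(N)
term = x.copy()
for _ in range(80):
    r_series += term
    term = G @ term
```

With a spectral radius of 0.5, the truncated series agrees with the closed form to within numerical precision after a few dozen terms.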
with
In these expressions, a term like
These paths with two branches are the subgroup of network motifs that contribute to correlations. Further examples are given in
As mentioned before, the sum (14) converges only if the magnitude of all eigenvalues of
Under this condition, the size of higher-order terms, that is the collective influence of paths of length
where
The average correlation across all pairs can be computed by counting the weighted paths between two given nodes. The average contribution of paths of length
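In the Hawkes framework the integrated covariance matrix has the closed form C = (I - G)^{-1} D (I - G)^{-T}, with D the diagonal matrix of rates; expanding both inverses as geometric series yields exactly the path-pair (motif) contributions discussed here. A minimal numerical sketch, with an arbitrary stable kernel matrix as an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 40
G = rng.normal(0.0, 1.0, size=(N, N))
G *= 0.4 / np.max(np.abs(np.linalg.eigvals(G)))   # stable: radius 0.4
D = np.diag(rng.uniform(1.0, 2.0, size=N))        # diagonal matrix of rates

# Closed form: C = (I - G)^{-1} D (I - G)^{-T}
B = np.linalg.inv(np.eye(N) - G)
C_exact = B @ D @ B.T

# Motif expansion: order n collects all pairs of paths with total length n,
# C_n = sum_{k+l=n} G^k D (G^T)^l  (k steps to one neuron, l to the other).
K = 120
P = [np.linalg.matrix_power(G, k) for k in range(K + 1)]
C_series = np.zeros((N, N))
for n in range(K + 1):
    for k in range(n + 1):
        C_series += P[k] @ D @ P[n - k].T
```

Summing contributions order by order reproduces the full covariance matrix, which makes the interpretation in terms of branching motifs explicit.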
Let us separate the contributions from rates to the autocorrelations and define the average correlation
The population fluctuations are determined by
As a first approximation let us assume that every neuron in a given subpopulation
Since input is the same for all neurons, all rates are equal. Their value can be obtained as follows by the expansion of (10),
In a similar manner, analytical expressions for the average correlations can be obtained. Explicit calculations can be found in Section 2 of Supporting
Closed expressions can be derived in the special case where there is a uniform connection probability between all nodes, i.e.
With
and the average correlation
Here,
Equation (22) can be used as an approximation if the degree distribution is narrow. In particular this is the case in large random networks with independent connections, independent input and output and uniform connection probabilities. These conditions ensure that deviations from the fixed out- and in-degrees balance out on average in a large matrix. Numerical examples can be found in the following section.
In this section we analyse networks where connections between all nodes are realised with uniform probability
one can expand the average correlation into contributions corresponding to paths of different shapes and increasing length. In large random networks each node can connect to many other nodes. The node degree is then the sum of a large number of random variables, and the standard deviation of the degrees relative to their mean will be small. In this case, the constant degree assumption is justified, and Equation (21) gives a good approximation of the different motif contributions, see
Top: low connectivity,
such that each term partly cancels the previous one. The importance of higher-order contributions can be estimated from the eigenvalue spectrum of the connectivity matrix. For large random networks of excitatory and inhibitory nodes, the spectrum consists of one single eigenvalue of the size
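The bulk radius can be checked by diagonalising a sample matrix. The sketch below is illustrative: it assumes binary random connectivity with fixed excitatory and inhibitory weights, with inhibition scaled such that the mean input vanishes, so that the circular-law estimate for the bulk radius is radius^2 ≈ p(1-p)(N_E w^2 + N_I (g w)^2). All parameter values are assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(3)
NE, NI, p = 800, 200, 0.1          # illustrative sizes and connectivity
w = 0.005                          # assumed excitatory integrated kernel
g = NE / NI                        # inhibition scaled so mean input is zero
weights = np.concatenate([np.full(NE, w), np.full(NI, -g * w)])

# Each connection is realised independently with probability p; column j
# carries the outgoing weight of neuron j.
A = (rng.random((NE + NI, NE + NI)) < p) * weights[None, :]

radius_num = np.max(np.abs(np.linalg.eigvals(A)))
# Circular-law estimate of the bulk radius:
radius_theory = np.sqrt(p * (1 - p) * (NE * w**2 + NI * (g * w)**2))
```

For networks of this size the numerically obtained spectral radius lies close to the estimate, with deviations due to finite-size fluctuations.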
The value
By correlation distribution we denote the distribution of the entries
Instead of purely random networks we now consider networks of
A sketch for this construction scheme is depicted in
The stability of such a network depends on the radius of the bulk spectrum. In contrast to the random network, besides the eigenvalue corresponding to the mean input of a neuron, a number of additional real eigenvalues exist outside the bulk spectrum. A typical spectrum is plotted in
As in a random network, the degree distribution of nodes in a ring network is narrow, hence Equation (22) is a good approximation for the average correlation if the total connection probability
In this case the average correlation does not depend on the specific connectivity profile. However, the full distribution of correlations depends on the connection profile,
For distance dependent connectivity correlations are also expected to depend on the distance. We define the distance dependent correlation
where
with
and define a distance dependent version of the average common input,
where
one finds for the single contributions
and for the complete correlations
The discrete Fourier transform can be calculated numerically for any given connectivity profile. Results of Equations (27) and (28) are compared to the direct evaluation of (25) in
While the average correlation, and therefore the variance of the population activity, does not depend on structure in the networks considered so far, this is not true for smaller subnetworks. In ring-like structures, small populations of neighbouring neurons are more strongly correlated, and we expect larger fluctuations in their pooled activity. Generalising Equation (12) slightly for a population
This expression can be evaluated numerically using Equation (28). For random networks, correlations do not depend on the distance; hence the population variance increases quadratically with the number of elements. When the population size in ring networks is increased, the added neurons are further apart and only weakly correlated with most of the others, so a large part of their contribution consists of their rate variance and the population variance increases only linearly. An example is shown in
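The two scaling regimes can be reproduced with toy covariance matrices. The sketch below assumes unit single-neuron count variance and a pairwise count covariance c that is either uniform across all pairs (random-network case) or restricted to ring neighbours within a fixed distance (ring case); the pooled-count variance is the sum over the corresponding block of the covariance matrix. All numbers are illustrative:

```python
import numpy as np

N, c = 400, 0.1                    # network size, pairwise count covariance
# Uniform covariance between all pairs (random-network case):
C_uniform = np.full((N, N), c)
np.fill_diagonal(C_uniform, 1.0)   # unit single-neuron count variance

# Covariance restricted to ring neighbours within distance 10 (ring case):
idx = np.arange(N)
dist = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(dist, N - dist)  # distance on the ring
C_ring = np.where(dist <= 10, c, 0.0)
np.fill_diagonal(C_ring, 1.0)

def pooled_variance(C, m):
    """Variance of the summed counts of m adjacent neurons: sum of C[:m,:m]."""
    return C[:m, :m].sum()

v_uniform = [pooled_variance(C_uniform, m) for m in (50, 100, 200)]
v_ring = [pooled_variance(C_ring, m) for m in (50, 100, 200)]
```

Doubling the population roughly quadruples the pooled variance in the uniform case, but only doubles it in the ring case, in line with the quadratic versus linear growth described above.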
We found that in networks with narrow degree distributions average correlations are determined by global parameters like the population sizes
where the parameter
By construction the parameters
Different effects can be observed in networks of neurons with patchy connections and non-homogeneous spatial distribution of neuron types. A simple network with patchy connections can be constructed from neurons arranged in a ring. We consider two variants: one where all inhibitory neurons are situated in the same area of the ring, compare
A comparison of motif contributions to correlations,
We studied the relation between connectivity and spike train correlations in neural networks. Different rules for synaptic connectivity were compared with respect to their effects on the average and the distribution of correlations. Although we address specific neurobiological questions, one can speculate that our results may also be relevant in other areas where correlated activity fluctuations are of interest, such as in the study of gene-regulatory or metabolic networks.
The framework of linearly interacting point processes in
Although Hawkes' equations are an exact description of interacting point processes only for strictly excitatory interactions, numerical simulations show that the predictions are also accurate for networks of excitatory and inhibitory neurons. Hence, correlations can be calculated analytically even in effectively inhibitory networks over a wide range of parameters, as has already been proposed in
The activity of cortical neurons is often characterised by low correlations
We quantified correlations by integrated cross-correlation functions in a stationary state. The shape of the resulting correlation functions, which has been treated for example in
In Hawkes' framework, taking into account contributions to pairwise correlations from direct interactions, indirect interactions, common input and interactions via longer paths is equivalent to a self-consistent description of correlations. This interpretation helps to derive analytical results for simple networks. Furthermore it allows an understanding of the way in which recurrent connectivity influences correlations via multiple feed-back and feed-forward channels. In particular, we showed why common input and direct input contributions are generally not sufficient to describe correlations quantitatively, even in a linear model. We showed that average correlations in networks with narrow degree distributions are largely independent of specific connectivity patterns. This agrees with results from a recent study
Can we estimate the importance of recurrence from experimentally accessible parameters? In
We addressed ring networks with distance-dependent connection probability. Here, average correlations do not depend on the connectivity profile. However, for densely coupled neighbourhoods very broad correlation distributions can arise. A Mexican hat-like interaction has especially strong effects, since in that case higher-order contributions amplify correlations. This is not surprising since it is known that Mexican hat-like profiles can support large-scale activity patterns
A generalisation to two-dimensional networks with distance dependent connectivity could be used to further investigate the relation between neural field models which describe large-scale dynamics
Pairwise correlations affect activity in pooled spike trains
If the degree distribution is wide, networks can be constructed where connection probability depends on the out-degree of postsynaptic neurons. We considered networks where excitatory hubs, defined by a large out-degree, form a more or less densely connected subnetwork. Similar networks have been studied in
In networks with patchy connections, an increase of correlations can be observed when populations of neurons are spatially non-homogeneous. Some information about how network architecture influences correlations can be obtained from examining contributions of individual motifs. In patchy networks, when excitatory and inhibitory neurons are separated, mainly the contributions of symmetric motifs are increased, and these are therefore responsible for the rise in correlations. In networks with hubs, asymmetric motifs also play a role.
We found that fine-scale structure has important implications for the dynamics of neural networks. Under certain conditions, like narrow degree distributions, local connectivity has surprisingly little influence on global population averages. This suggests the use of mean-field models. On the other hand, broad degree distributions or the existence of connected hubs also influence activity at the population level. Such factors represent, in fact, major determinants of the activity state of a network and should therefore be explicitly considered in models of large-scale network dynamics.
As considerable efforts are dedicated to the construction of detailed connection maps of brains on multiple scales, we believe that the analysis of the influence of detailed connectivity data, possibly with more refined models, has much to contribute to a better understanding of neural dynamics.
Supporting information.
(PDF)
We thank Moritz Helias and Moritz Deger for fruitful discussions and providing an implementation of the Hawkes process in the NEST simulator.