
Scalability of Asynchronous Networks Is Limited by One-to-One Mapping between Effective Connectivity and Correlations

  • Sacha Jennifer van Albada ,

    s.van.albada@fz-juelich.de

    Affiliation Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany

  • Moritz Helias,

    Affiliation Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany

  • Markus Diesmann

    Affiliations Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany, Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany, Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany

Abstract

Network models are routinely downscaled compared to nature in terms of numbers of nodes or edges because of a lack of computational resources, often without explicit mention of the limitations this entails. While reliable methods have long existed to adjust parameters such that the first-order statistics of network dynamics are conserved, here we show that limitations already arise if second-order statistics are also to be maintained. The temporal structure of pairwise averaged correlations in the activity of recurrent networks is determined by the effective population-level connectivity. We first show that in general the converse is also true and explicitly mention degenerate cases in which this one-to-one relationship does not hold. The one-to-one correspondence between effective connectivity and the temporal structure of pairwise averaged correlations implies that network scalings should preserve the effective connectivity if pairwise averaged correlations are to be held constant. Changes in effective connectivity can even push a network from a linearly stable to an unstable, oscillatory regime and vice versa. On this basis, we derive conditions for the preservation of both mean population-averaged activities and pairwise averaged correlations under a change in numbers of neurons or synapses in the asynchronous regime typical of cortical networks. We find that mean activities and correlation structure can be maintained by an appropriate scaling of the synaptic weights, but only over a range of numbers of synapses that is limited by the variance of external inputs to the network. Our results therefore show that the reducibility of asynchronous networks is fundamentally limited.

Author Summary

Neural networks have two basic components: their structural elements (neurons and synapses), and the dynamics of these constituents. The so-called effective connectivity combines both components to yield a measure of the actual influence of physical connections. Previous work showed effective connectivity to determine correlations, which quantify the co-activation of different neurons. Conversely, methods for estimating network structure from correlations have been developed. We here extend the range of networks for which the mapping between effective connectivity and correlations can be shown to be one-to-one, and clarify the conditions under which this equivalence holds. These findings apply to a class of networks that is often used, with some variations, to model the activity of cerebral cortex. Since the numbers of neurons and synapses in real mammalian brains are vast, such models tend to be reduced in size for simulation purposes. However, our findings imply that if we wish to retain the original dynamics including correlations, effective connectivity needs to be unchanged, from which we derive scaling laws for synaptic strengths and external inputs, and fundamental limits on the reducibility of network size. The work points to the importance of considering networks with realistic numbers of neurons and synapses.

Introduction

While many aspects of brain dynamics and function remain unexplored, the numbers of neurons and synapses in a given volume are well known, and as such constitute basic parameters that should be taken seriously. Despite rapid advances in neural network simulation technology and increased availability of computing resources [1], memory and time constraints still lead to neuronal networks being routinely downscaled both on traditional architectures [2] and in systems dedicated to neural network simulation [3]. As synapses outnumber neurons by a factor of 10³ − 10⁵, these constitute the main constraint on network size. Computational capacity ranges from a few tens of millions of synapses on laptop or desktop computers, or on dedicated hardware when fully exploited [4, 5], to 10¹² − 10¹³ synapses on supercomputers [6]. This upper limit is still about two orders of magnitude below the full human brain, underlining the need for downscaling in computational modeling. In fact, any brain model that approximates a fraction of the recurrent connections as external inputs is in some sense downscaled: the missing interactions need to be absorbed into the network and input parameters in order to obtain the appropriate statistics. Unfortunately, the implications of such scaling are usually not investigated.

The opposite type of scaling, taking the infinite size limit, is sometimes used in order to simplify equations describing the network (Fig 1A). Although this can lead to valuable insights, real networks in the human brain often contain on the order of 10⁵ − 10⁷ neurons (Fig 1B), too few to simplify certain equations in the limit of infinite size. This is illustrated in Fig 1C using as an example the intrinsic contribution to correlations due to fluctuations generated within the network, and the extrinsic contribution due to common external inputs to different neurons in random networks. Although the intrinsic contribution falls off more rapidly than the extrinsic one, it is the main contribution up to large network sizes (around 10⁸ for the given parameters). Therefore, taking the infinite size limit and neglecting the intrinsic contribution leads to the wrong conclusions: The small correlations in finite random networks cannot be explained by the network activity tracking the external drive [7], but rather require the consideration of negative feedback [8] that suppresses intrinsically generated and externally imprinted fluctuations alike [9].

Fig 1. Framework for neural network scaling.

A Downscaling facilitates simulations, while taking the N → ∞ limit often affords analytical insight. B Relevant scales. The local cortical microcircuit containing roughly 10⁵ neurons is the smallest network where the majority of the synapses (∼ 10⁴ per neuron) can be represented using realistic connection probabilities (∼ 0.1). C Results for the N → ∞ limit may not apply even for large networks. In this example, analytically determined intrinsic and extrinsic contributions to correlations between excitatory neurons are shown. The extrinsic contribution to the correlation between two neurons arises due to common external input, and the intrinsic contribution due to fluctuations generated within the network (cf. [9] Eq 24). The intrinsic contribution falls off more rapidly than the extrinsic contribution, but nevertheless dominates up to large network sizes, here around 10⁸. The crosses indicate simulation results. Adapted from [9] Fig 7. D Scaling transformations may be designed to preserve average single-neuron or pairwise statistics for selected quantities, population statistics, or a combination of these. When average single-neuron and pairwise properties are preserved, the downscaled network of size N behaves to second order like a subsample of the full network of size N0.

https://doi.org/10.1371/journal.pcbi.1004490.g001

Taking the infinite size limit for analytical tractability and downscaling to make networks accessible by direct simulation are two separate problems. We concentrate in the remainder of this study on such downscaling, which is often performed not only in neuroscience [10, 11, 12, 13] but also in other disciplines [14, 15, 16, 17]. Neurons and synapses may either be subsampled or aggregated [18]; here we focus on the former. One intuitive way of scaling is to ensure that the statistics of particular quantities of interest in the downscaled network match those of a subsample of the same size from the full network (Fig 1D). Alternatively, it may sometimes be useful to preserve the statistics of population sums of certain quantities, for instance population fluctuations.

We here focus on the preservation of mean population-averaged activities and pairwise averaged correlations in the activity. We consider both the size and temporal structure of correlations, but not distributions of mean activities and correlations across the network. Means and correlations present themselves as natural quantities to consider, because they are the first- and second-order statistics and as such the most basic measures of the dynamics. If it is already difficult to preserve these measures, it is even less likely that preserving higher-order statistics will be possible, in view of their higher dimensionality. However, other choices are possible, for instance maintaining total input instead of output spike rates [19].

Besides being the most basic dynamical characteristics, means and correlations of neural activity are biologically relevant. Mean firing rates are important in many theories of network function [20, 21], and their relevance is supported by experimental results [22, 23]. For instance, neurons exhibit orientation tuning of spike rate in the visual system [24] and directional tuning in the motor system [25], and sustained rates are implicated in the working memory function of the prefrontal cortex [22]. Firing rates have also been shown to be central to pattern learning and retrieval in highly connected recurrent neural networks [21]. Furthermore, mean firing rates distinguish between states of arousal and attention [26, 27], and between healthy and disease conditions [28]. The relevance of correlations is similarly supported by a large number of findings. They are widely present; multi-unit recordings have revealed correlated neuronal activity in various animals and behavioral conditions [29, 30, 31]. Pairwise correlations were even shown to capture the bulk of the structure in the spiking activity of retinal and cultured cortical neurons [32]. They are also related to information processing and behavior. Synchronous spiking (corresponding to a narrow peak in the cross-correlogram) has for example been shown to occur in relation to behaviorally relevant events [33, 34, 35]. The relevance of correlations for information processing is further established by the fact that they can increase or decrease the signal-to-noise ratio of population signals [36, 37]. Moreover, correlations are important in networks with spike-timing-dependent plasticity, since they affect the average change in synaptic strengths [38]. Correspondingly, for larger correlations, stronger depression is needed for an equilibrium state with asynchronous firing and a unimodal weight distribution to exist in balanced random networks [39]. The level of correlations in neuronal activity has furthermore been shown to affect the spatial range of local field potentials (LFPs) effectively sampled by extracellular electrodes [40]. More generally, mesoscopic and macroscopic measures like the LFP and fMRI depend on interneuronal correlations [41]. Considering the wide range of dynamical and information processing properties affected by mean activities and correlations, it is important that they are accurately modeled.

We allow the number of neurons N and the number of incoming synapses per neuron K (the in-degree) to be varied independently, generalizing the common type of scaling where the connection probability is held constant so that N and K change proportionally. It is well known that reducing the number of neurons in asynchronous networks increases correlation sizes in inverse proportion to the network size [19, 42, 43, 44, 45]. However, the influence of the number of synapses on the correlations, including their temporal structure, is less studied. When reducing the number of synapses, one may attempt to recover aspects of the network dynamics by adjusting parameters such as the synaptic weights J, the external drive, or neurotransmitter release probabilities [11, 19]. In the present work, spike transmission is treated as perfectly reliable. We only adjust the synaptic weights and a combination of the neuronal threshold and the mean and variance of the external drive to make up for changes in N and K.

A few suggestions have been made for adjusting synaptic weights to numbers of synapses. In the balanced random network model, the asynchronous irregular (AI) firing often observed in cortex is explained by a domination of inhibition which causes a mean membrane potential below spike threshold, and sufficiently large fluctuations that trigger spikes [46]. In order to achieve such an AI state for a large range of network sizes, one choice is to ensure that input fluctuations remain similar in size, and adjust the threshold or a DC drive to maintain the mean distance to threshold. As fluctuations are proportional to J²K for independent inputs, this suggests the scaling

J ∝ 1/√K, (1)

proposed in [46]. Since the mean input to a neuron is proportional to JK, Eq (1) leads, all else being equal, to an increase of the population feedback with √K, changing the correlation structure of the network, as illustrated in Fig 2 for a simple network of inhibitory leaky integrate-and-fire neurons (note that in this example we fix the connection probability). This suggests the alternative [42, 44, 45]

J ∝ 1/K, (2)

where now the variance of the external drive needs to be adjusted to maintain the total input variance onto neurons in the network.
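
The following minimal Python sketch (not part of the original study; all parameter values are purely illustrative) contrasts the two scalings of Eqs (1) and (2): the first keeps the input fluctuations J²K fixed, the second keeps the population feedback JK fixed.

```python
import numpy as np

# Sketch: compare the two weight scalings discussed above when the in-degree K
# is changed by a factor kappa. J_sqrt keeps the input fluctuations J^2*K fixed
# (Eq 1), J_lin keeps the population feedback J*K fixed (Eq 2).

def scale_weights(J0, K0, kappa):
    """Return in-degree and scaled synaptic weights for K = kappa * K0."""
    K = kappa * K0
    J_sqrt = J0 * np.sqrt(K0 / K)   # J proportional to 1/sqrt(K)
    J_lin = J0 * K0 / K             # J proportional to 1/K
    return K, J_sqrt, J_lin

J0, K0 = 0.1, 1000                  # hypothetical reference weight (mV) and in-degree
for kappa in (0.5, 1.0, 5.0):
    K, J_sqrt, J_lin = scale_weights(J0, K0, kappa)
    print(f"kappa={kappa:3.1f}: "
          f"feedback J*K: sqrt-scaling {J_sqrt * K:7.3f}, 1/K-scaling {J_lin * K:7.3f}; "
          f"variance J^2*K: sqrt-scaling {J_sqrt**2 * K:7.4f}, 1/K-scaling {J_lin**2 * K:7.4f}")
```

The printout makes the trade-off explicit: the square-root scaling conserves the variance term but lets the feedback grow with √K, while the 1/K scaling conserves the feedback at the expense of the internally generated variance.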

Fig 2. Transforming synaptic strengths J in inverse proportion to the square root of the number of incoming synapses per neuron K (the in-degree) upon scaling of network size N changes the correlation structure when the mean and variance of the input current are maintained.

A reference network of 10,000 inhibitory leaky integrate-and-fire neurons is scaled up to 50,000 neurons, fixing the connection probability and adjusting the external Poisson drive to keep the mean and variance of total (external plus internal) inputs fixed. Single-neuron parameters and connection probability are as in Table 2. Delays are 1 ms, mean and standard deviation of total inputs are 15 mV and 10 mV, respectively, and the reference network has J = 0.1 mV. Each network is simulated for 50 s. A Onset of oscillations induced by scaling of network size N, visualized by changes in the poles z of the covariance function in the frequency domain. Re(z) determines the frequency of oscillations and Im(z) their damping, such that -Im(z) > 0 means that small deviations from the fixed-point activity of the network grow with time [cf. Eq (76)]. The transformation J ∝ 1/K preserves the poles, while J ∝ 1/√K induces a Hopf bifurcation so that the scaled network is outside the linearly stable regime. B Covariance in the network where the coupling strength is scaled as J ∝ 1/K matches that in the reference network, whereas large oscillations appear in the network scaled as J ∝ 1/√K. Colors as in A.

https://doi.org/10.1371/journal.pcbi.1004490.g002

For a given network size N and mean activity level, the size and temporal structure of pairwise averaged correlations are determined by the so-called effective connectivity, which quantifies the linear dependence of the activity of each target population on the activity of each source population. The effective connectivity is proportional to synaptic strength and the number of synapses a target neuron establishes with the source population, and additionally depends on the activity of the target neurons. Effective connectivity has previously been defined as “the experiment and time-dependent, simplest possible circuit diagram that would replicate the observed timing relationships between the recorded neurons” [47]. In our analysis we consider the stationary state, but at different times the network may be in a different state exhibiting a different effective connectivity. The definition of [47] highlights the fact that identical neural timing relationships can in principle occur in different physical circuits and vice versa. However, with a given model of interactions or coupling, the activity may allow a unique effective connectivity to be derived [48]. We define effective connectivity in a forward manner with knowledge of the physical connectivity as well as the form of interactions. We show in this study that with this model of interactions, and with independent external inputs, the activity indeed determines a unique effective connectivity, so that the forward and reverse definitions coincide. This complements the groundbreaking general insight of [47].

We consider networks of binary model neurons and networks of leaky integrate-and-fire (LIF) neurons with current-based synapses to investigate how and to what extent changes in network parameters can be used to preserve mean population-averaged activities and pairwise averaged correlations under reductions in the numbers of neurons and synapses. The parameters allowed to vary are the synaptic weights, neuronal thresholds, and the mean and variance of the external drive. We apply and extend the theory of correlations in randomly connected binary and LIF networks in the asynchronous regime developed in [7, 8, 9, 42, 45, 49, 50, 51, 52, 53], which explains the smallness and structure of correlations experimentally observed during spontaneous activity in cortex [54, 55], and we compare analytical predictions of correlations with results from simulations. The results are organized as follows. In “Correlations uniquely determine effective connectivity: a simple example” we provide an intuitive example that illustrates why the effective connectivity uniquely determines correlation structure. In “Correlations uniquely determine effective connectivity: the general case” we show that this one-to-one relationship generalizes to networks of several populations apart from degenerate cases. In “Correlation-preserving scaling” we conclude that, in general, only scalings that preserve the effective connectivity, such as J ∝ 1/K, are able to preserve correlations. In “Limit to in-degree scaling” we identify the limits of the resulting scaling procedure, demonstrating the restricted scalability of asynchronous networks. “Robustness of correlation-preserving scaling” shows that the scaling J ∝ 1/K can preserve correlations, within the identified restrictive bounds, for different networks either adhering to or deviating from the assumptions of the analytical theory. “Zero-lag correlations in binary network” investigates how to maintain the instantaneous correlations in a binary network, while “Symmetric two-population spiking network” considers the degenerate case of a connectivity with special symmetries, in which correlations may be maintained under network scaling without preserving the effective connectivity. Preliminary results have been published in abstract form [56].

Results

Correlations uniquely determine effective connectivity: A simple example

In this section we give an intuitive one-dimensional example to show that effective connectivity determines the shapes of the average pairwise cross-covariances and vice versa. For the following, we first introduce a few basic quantities. Consider a binary or spiking network consisting of several excitatory and inhibitory populations with potentially source- and target-type-dependent connectivity. For the spiking networks, we assume leaky integrate-and-fire (LIF) dynamics with exponential synaptic currents. The dynamics of the binary and LIF networks are respectively introduced in “Binary network dynamics” and “Spiking network dynamics”. We assume irregular network activity, approximated as Poissonian for the spiking network, with population means να. For the binary network, ν = ⟨n⟩ is the expectation value of the binary variable. For the spiking network, we absorb the membrane time constant into ν, defining ν = τm r where r is the firing rate of the population. The external drive can consist of both a DC component μα,ext and fluctuations with variance σα,ext², provided either by Poisson spikes or by a Gaussian current. The working point of each population, characterized by mean μα and variance σα² of the combined input from within and outside the network, is given by

μα = Σβ Jαβ Kαβ νβ + μα,ext, (3)

σα² = σα,int² + σα,ext², (4)

σα,int² ≡ Σβ Jαβ² Kαβ νβ, (5)

where Jαβ is the synaptic strength from population β to population α, and Kαβ is the number of synapses per target neuron (the in-degree) for the corresponding projection (we use ≡ in the sense of “is defined as”). We call σα,ext² the “external variance” in the following, and σα,int² the “internal variance”. The mean population activities are determined by μα and σα according to Eqs (39) and (67). Expressions for correlations in binary and LIF networks are given respectively in “First and second moments of activity in the binary network” and “First and second moments of activity in the spiking network”.

As a one-dimensional example, consider a binary network with a single population and vanishing transmission delays. The effective connectivity W is just a scalar, and the population-averaged autocovariance a and cross-covariance c are functions of the time lag Δ. We define the population-averaged effective connectivity as

W ≡ K w(J, μ, σ), (6)

where w(J, μ, σ) is an effective synaptic weight that depends on the mean μ [Eq (3)] and the variance σ² [Eq (4)] of the input. For LIF networks, w = ∂rtarget/∂rsource is defined via Eq (68) and can be obtained as the derivative of Eq (67). Note that we treat the effective influence of individual inputs as independent. A more accurate definition of the population-level effective connectivity, beyond the scope of this paper, could be obtained by also considering combinations of inputs in the sense of a Volterra series [57]. When the dependence of w on J is linearized, the effective connectivity can be written as

W = S(μ, σ) J K, (7)

where the susceptibility S(μ, σ) measures to linear order the effect of a unit input to a neuron on its outgoing activity. In our one-dimensional example, W quantifies the self-influence of an activity fluctuation back onto the population. Expressed in these measures, the differential equation for the covariance function [Eq (52)] takes the form

τ dc(Δ)/dΔ = −(1 − W) c(Δ) + (W/N) a(Δ)  for Δ ≥ 0, with a(Δ) = a exp(−Δ/τ), (8)

with initial condition [from Eq (41)]

c(0) = W a / [N (1 − W)], (9)

which is solved by

c(Δ) = (a/N) [exp(−(1 − W)Δ/τ)/(1 − W) − exp(−Δ/τ)]. (10)

Eq (10) shows that the effective connectivity W together with the time constant τ of the neuron (which we assume fixed under scaling) determines the temporal structure of the correlations. Furthermore, since a sum of exponentials cannot equal a sum of exponentials with a different set of exponents, the temporal structure of the correlations uniquely determines W. Hence we see that there is a one-to-one correspondence between W and the correlation structure if the time constant τ is fixed, which implies that preserving correlation structure under a reduction in the in-degrees K requires adjusting the effective synaptic weights w(J, μ, σ) such that the effective connectivity W is maintained. If, in addition, the mean activity ⟨n⟩ is kept constant this also fixes the variance a = ⟨n⟩(1 − ⟨n⟩). Eq (10) shows that, under these circumstances with W and a fixed, correlation sizes are determined by N.
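
As a numerical illustration of this correspondence, the short Python sketch below evaluates the linearized effective connectivity of Eq (7) and the resulting effective decay constant τ/(1 − W) of the averaged covariance, and then recovers W from that decay constant. The parameter values are hypothetical; J is chosen inhibitory so that the population is in the stable regime.

```python
import numpy as np

# Sketch of the one-to-one mapping in the one-population example, under the
# linearization W = S(mu, sigma) * J * K of Eq (7). The averaged covariance
# decays with the effective time constant tau / (1 - W), so W can be read off
# from a measured decay constant when tau is known. Values are illustrative.

tau = 10.0           # neuronal time constant (ms)
S = 0.05             # susceptibility at the working point (1/mV), assumed known
J, K = -0.2, 500     # inhibitory synaptic weight (mV) and in-degree

W = S * J * K                      # effective connectivity, Eq (7)
tau_eff = tau / (1.0 - W)          # decay constant of the averaged covariance

# Inverse direction: recover W from an observed decay constant.
W_recovered = 1.0 - tau / tau_eff
print(f"W = {W:.3f}, effective time constant = {tau_eff:.2f} ms, "
      f"recovered W = {W_recovered:.3f}")
```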

Correlations uniquely determine effective connectivity: The general case

More generally, networks consist of several neural populations each with different dynamic properties and with population-dependent transmission delays dαβ. Since this setting does not introduce additional symmetries, intuitively the one-to-one relationship between the effective connectivity and the correlations should still hold. We here show that, under certain conditions, this is indeed the case.

Instead of considering the covariance matrix in the time domain, for population-dependent dynamic properties we find it convenient to stay in the frequency domain. The influence of a fluctuating input on the output of the neuron can to lowest order be described by the transfer function H(ω). This quantity measures the amplitude and phase of the modulation of the neuronal activity given that the neuron receives a small sinusoidal perturbation of frequency ω in its input. The transfer function depends on the mean μ [Eq (3)] and the variance σ2 [Eq (4)] of the input to the neuron. We here first consider LIF networks; in the Supporting Information we show how the results carry over to the binary model.

In “First and second moments of activity in the spiking network”, we give the covariance matrix including the autocovariances in the frequency domain, C̄(ω), as

C̄(ω) = [1 − M(ω)]⁻¹ A [1 − M(−ω)]⁻ᵀ, (11)

where M has elements Hαβ(ω)Wαβ and A is the diagonal matrix of autocovariance amplitudes. If C̄(ω) is invertible, we can expand the inverse of Eq (11) to obtain

[C̄⁻¹(ω)]αβ = δαβ/Aα − Hαβ(ω)Wαβ/Aα − Hβα(−ω)Wβα/Aβ + Σγ Hγα(−ω)Hγβ(ω)WγαWγβ/Aγ
= δαβ [1 − Wαα(Hαα(ω) + Hαα(−ω))]/Aα − (1 − δαβ) [Hαβ(ω)Wαβ/Aα + Hβα(−ω)Wβα/Aβ] + Σγ Hγα(−ω)Hγβ(ω)WγαWγβ/Aγ, (12)

where we assumed the transfer function to have the form Hαβ(ω) = exp(−iωdαβ)/(1 + iωτα), which is often a good approximation for the LIF model [45]. In the second step we distinguish terms that only contribute on the diagonal (α = β), those that only contribute off the diagonal (α ≠ β), and those that contribute in either case. For α = β, only the first and last terms contribute, and we get

[C̄⁻¹(ω)]αα = 1/Aα − (Wαα/Aα) [exp(−iωdαα)/(1 + iωτα) + exp(iωdαα)/(1 − iωτα)] + Σγ Wγα²/[(1 + ω²τγ²) Aγ]. (13)

If we want to preserve [C̄⁻¹(ω)]αα, this fixes Aα and thereby also Wαα, since it multiplies terms with unique ω-dependence. For α ≠ β, we obtain

[C̄⁻¹(ω)]αβ = −(Wαβ/Aα) exp(−iωdαβ)/(1 + iωτα) − (Wβα/Aβ) exp(iωdβα)/(1 − iωτβ) + Σγ (WγαWγβ/Aγ) exp(iω(dγα − dγβ))/(1 + ω²τγ²). (14)

With Aα fixed, this additionally fixes Wαβ, in view of the unique ω-dependence it multiplies.
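
A small numerical sketch consistent with Eq (11) follows. It assembles M(ω) from the low-pass transfer function with delays assumed above and evaluates the population-level cross-spectrum for an illustrative two-population effective connectivity; none of the numbers are taken from the tables of this paper.

```python
import numpy as np

# Sketch of Eq (11): cross-spectrum from effective connectivity W, delays d,
# time constants tau, and the diagonal noise matrix A. All values illustrative.

tau = np.array([10.0, 10.0])             # time constants (ms)
d = np.array([[1.5, 1.5], [1.5, 1.5]])   # delays d_{alpha beta} (ms)
W = np.array([[2.0, -10.0],              # effective connectivity (illustrative)
              [2.0, -10.0]])
A = np.diag([1.0, 4.0])                  # diagonal autocovariance amplitudes

def M(omega):
    """Element-wise M_{ab}(omega) = H_{ab}(omega) * W_{ab} with the low-pass H."""
    H = np.exp(-1j * omega * d) / (1.0 + 1j * omega * tau[:, None])
    return H * W

def cross_spectrum(omega):
    """C(omega) = (1 - M(omega))^-1 A (1 - M(-omega))^-T, cf. Eq (11)."""
    I = np.eye(2)
    return np.linalg.inv(I - M(omega)) @ A @ np.linalg.inv(I - M(-omega)).T

for f in (0.0, 10.0, 50.0):              # frequencies in Hz
    omega = 2.0 * np.pi * f * 1e-3       # angular frequency in 1/ms
    print(f"{f:5.1f} Hz:\n", np.round(cross_spectrum(omega), 3))
```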

Since C̄(ω) also contains the autocovariances, a constraint on A necessary for preserving C̄(ω) may not translate into the same constraint when we only require the cross-covariances C(ω) to be preserved. However, C(ω) and C̄(ω) have identical ω-dependence, as they differ only by constants on the diagonal (approximating autocorrelations as delta functions in the time domain [45]). To derive conditions for preserving C(ω), we therefore ignore the constraint on A but still require the ω-dependence to be unchanged. A potential transformation leaving the ω-dependent terms in both Eqs (13) and (14) unchanged is Aα → kAα, Wαβ → kWαβ, Wαα → kWαα, but this only works if τα = τγ, dαα − dαβ = dγα − dγβ for some γ, and if the terms for the corresponding γ are also transformed to offset the change in the sum over γ; or if some of the entries of W vanish. The ω-dependence of C̄(ω) and C(ω) would otherwise change, showing that, at least in the absence of such symmetries in the delays or time constants, or zeros in the effective connectivity matrix (i.e., absent connections at the population level, or inactive populations), there is a one-to-one relationship between covariances and effective connectivity. Hence, preserving the covariances requires preserving A and W except in degenerate cases. Note that the autocovariances and hence the firing rates can be changed while affecting only the size but not the shape of the correlations, but that the correlation shapes determine W.

Even in the case of identical transfer functions across populations, including in particular equal transmission delays and identical τ, the one-to-one correspondence between effective connectivity and correlations can be demonstrated except for a narrower set of degenerate cases. The argument for d = 0 proceeds in the time domain along the same lines as “Correlations uniquely determine effective connectivity: a simple example”, using the fact that for a population-independent transfer function, the correlations can be expressed in terms of the eigenvalues and eigenvectors of the effective connectivity matrix (cf. “First and second moments of activity in the binary network” and “First and second moments of activity in the spiking network”). For general delays, a derivation in the frequency domain can be used. Through these arguments, we show in the Supporting Information that the one-to-one correspondence holds at least if W is diagonalizable and has no eigenvalues that are zero or degenerate.

Correlation-preserving scaling

If the working point (μ, σ) is maintained, the one-to-one correspondence between the effective connectivity and the correlations implies that requiring unchanged average covariances leaves no freedom for network scaling except for a possible trade-off between in-degrees and synaptic weights. In the linear approximation W(J, μ, σ) = S(μ, σ)JK, this trade-off is J ∝ 1/K.

When this scaling is implemented naively without adjusting the external drive to recover the original working point, the covariances change, as illustrated in Fig 3B for a two-population binary network with parameters given in Table 1. The results of J ∝ 1/K scaling with appropriate adjustment of the external drive are shown in Fig 3C. The scaling shown in Fig 3B also increases the mean activities (E: from 0.16 to 0.23, I: from 0.07 to 0.11), whereas that in Fig 3C preserves them.

Fig 3. Correlations from theory and simulations for a two-population binary network with asymmetric connectivity.

A Average pairwise cross-covariances from simulations (solid curves) and Eq (55) (dashed curves). B Naive scaling with J ∝ 1/K but without adjustment of the external drive changes the correlation structure. C With an appropriate adjustment of the external drive (σex = 53.4, σix = 17.7), scaling synaptic weights as J ∝ 1/K is able to preserve correlation structure as long as N and K are reduced by comparable factors. D The same holds for J ∝ 1/√K (μex = 43.3, μix = 34.6, σex = 46.2, σix = 15.3), but the susceptibility S is increased by about 20% already for N = 0.75 N0 in this case. In B, C, and D, results of simulations are shown. The curves in C and D are identical because internal inputs, the standard deviation of the external drive, and the distance to threshold due to the DC component of the drive in D are exactly √(K′/K) times those in C. Hence, identical realizations of the random numbers for the connectivity and the Gaussian external drive cause the total inputs to the neurons to exceed the threshold at exactly the same points in time in the two simulations. The simulated time is 30 s, and the population activity is sampled at a resolution of 0.3 ms.

https://doi.org/10.1371/journal.pcbi.1004490.g003

If one relaxes the constraint on the working point while still requiring mean activities to be preserved, the network does have additional symmetries due to the fact that only some combination of μ and σ needs to be fixed, rather than each of these separately. This combination is more easily determined for binary than for LIF networks, for which the mean firing rates depend on μ and σ in a complex manner [cf. Eq (67)]. When the derivative of the gain function is narrow (e.g., having zero width in the case of the Heaviside function used here) compared to the input distribution, the mean activities of binary networks depend only on (μ − θ)/σ [9]. Changing σ while preserving (μ − θ)/σ leads for a Heaviside gain function to a new susceptibility S′ = (σ/σ′)S [cf. Eq (43)]. For constant K, if the standard deviation of the external drive is changed proportionally to the internal standard deviation, we have σ ∝ J and thus J′S′ = JS, implying an insensitivity of the covariances to the synaptic weights J [52]. In particular, this symmetry applies in the absence of an external drive. When K is altered, this choice for adjusting the external drive causes the covariances to change. However, adjusting the external drive such that σ′/σ = (J′K′)/(JK), the change in S is countered to preserve W and correlations. This is illustrated in Fig 3D for J ∝ 1/√K, which is another natural choice, as it preserves the internal variance if one ignores the typically small contribution of the correlations to the input variance ([9] Fig 3D illustrates the smallness of this contribution for an example network). This is only one of a continuum of possible scalings preserving mean activities and covariances (within the bounds described in the following section) when the working point and hence the susceptibility are allowed to change.
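
As a concrete sketch of this alternative (Fig 3D-type) scaling, the following Python fragment computes, for a Heaviside gain function, the rescaled weight, total input standard deviation, and distance to threshold that keep both (μ − θ)/σ and W = SJK fixed when the in-degree is changed by a factor κ. All numerical values are illustrative and are not the parameters of Table 1.

```python
import numpy as np

# Sketch of the alternative scaling of Fig 3D: weights follow J ~ 1/sqrt(K),
# while the external drive and the distance to threshold are rescaled so that
# (mu - theta)/sigma and W = S*J*K stay fixed (Heaviside gain). Illustrative values.

def scale_binary(J0, K0, sigma0, dist0, kappa):
    """Return scaled J, total sigma, and distance to threshold for K = kappa*K0."""
    K = kappa * K0
    J = J0 / np.sqrt(kappa)                 # J proportional to 1/sqrt(K)
    ratio = (J * K) / (J0 * K0)             # equals sqrt(kappa)
    sigma = sigma0 * ratio                  # keeps S'*J'*K' = S*J*K, since S' = (sigma/sigma')*S
    dist = dist0 * ratio                    # keeps (mu - theta)/sigma fixed
    return J, sigma, dist

J0, K0 = 0.3, 800           # reference weight and in-degree (illustrative)
sigma0, dist0 = 10.0, 5.0   # reference input SD and distance to threshold
for kappa in (0.75, 1.0, 1.25):
    J, sigma, dist = scale_binary(J0, K0, sigma0, dist0, kappa)
    print(f"kappa={kappa}: J={J:.4f}, sigma={sigma:.2f}, mu-theta={dist:.2f}")
```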

Limit to in-degree scaling

We now show that both the scaling J ∝ 1/K for LIF networks (for which we do not consider changes to the working point, as analytic expressions for countering these changes are intractable), and correlation-preserving scalings for binary networks (where we allow changes to the working point that preserve mean activities) are applicable only up to a limit that depends on the external variance.

For the binary network, assume a generic scaling K′ = κK, J′ = ιJ and a Heaviside gain function. We denote the variances due to inputs from within the network and due to the external drive by σint² and σext², respectively. The preservation of the mean activities implies S′ = (σ/σ′)S as above, where σ² = σint² + σext². To keep SJK fixed we thus require

σ′ext² = ι²κ²σ² − ι²κ σint²
= ι²κ [(κ − 1) σint² + κ σext²], (15)

where we have used σ² = σint² + σext² in the second line. For σext = 0 this scaling only works for κ > 1, i.e., increasing instead of decreasing the in-degrees. More generally, the limit to downscaling occurs when σ′ext² = 0, or

κ = σint² / (σint² + σext²), (16)

independent of the scaling of the synaptic weights. Thus, larger external and smaller internal variance before scaling allow a greater reduction in the number of synapses. The in-degrees of the example network of Fig 3 could be maximally reduced to 73%. Note that ι could in principle be chosen in a κ-dependent manner such that σ′ext² is fixed or increased instead of decreased upon downscaling, namely ι² ≥ σext² / (κ [(κ − 1)σint² + κσext²]). However, Eq (16) is still the limit beyond which this fails, as ι then diverges at that point.
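
A minimal sketch of these relations follows: it evaluates the external variance required after scaling the in-degrees by κ and the weights by ι, and the minimal κ below which no non-negative external variance exists. The variances used are illustrative numbers, not those of the networks in the figures.

```python
# Sketch of Eqs (15) and (16), using illustrative variances.

def scaled_external_variance(sigma2_int, sigma2_ext, kappa, iota):
    """sigma'^2_ext = iota^2 kappa [(kappa - 1) sigma^2_int + kappa sigma^2_ext], Eq (15)."""
    return iota**2 * kappa * ((kappa - 1.0) * sigma2_int + kappa * sigma2_ext)

def minimal_kappa(sigma2_int, sigma2_ext):
    """kappa_min = sigma^2_int / (sigma^2_int + sigma^2_ext), Eq (16)."""
    return sigma2_int / (sigma2_int + sigma2_ext)

sigma2_int, sigma2_ext = 60.0, 25.0
print("minimal in-degree scaling factor:", round(minimal_kappa(sigma2_int, sigma2_ext), 3))
for kappa in (0.9, 0.8, 0.7):
    iota = 1.0 / kappa                       # J proportional to 1/K
    s2 = scaled_external_variance(sigma2_int, sigma2_ext, kappa, iota)
    flag = "  (infeasible)" if s2 < 0 else ""
    print(f"kappa={kappa}: required external variance = {s2:.2f}{flag}")
```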

Note that the limit to the in-degree scaling also implies a limit on the reduction in the number of neurons for which the scaling equations derived here allow the correlation structure to be preserved, as a greater reduction of N compared to K increases the number of common inputs neurons receive and thereby the deviation from the assumptions of the diffusion approximation. This is shown by the thin curves in Fig 3C,3D.

Now consider correlation-preserving scaling of LIF networks. Reduced K with constant JK does not affect mean inputs [cf. Eq (3)] but increases the internal variance according to Eq (4). To maintain the working point (μ, σ), it is therefore necessary to reduce the variance of the external drive. When the drive consists of excitatory Poisson input, one way of keeping the mean external drive constant while changing the variance is to add an inhibitory Poisson drive. With K′ = K/ι and J′ = ιJ, the change in internal variance is (ι − 1)σint,0², where σint,0² is the internal variance due to input currents in the full-scale model. This is canceled by an opposite change in σext² by choosing excitatory and inhibitory Poisson rates

re = re,0 − (ι − 1)σint,0² / [τm Jext² (1 + g)], (17)

ri = −(ι − 1)σint,0² / [τm Jext² g (1 + g)], (18)

where re,0 is the Poisson rate in the full-scale model, and the excitatory and inhibitory synapses have weights Jext and −g Jext, respectively. Eqs (17) and (18) match Eq (E.1) in [45] except for the 1 + g in the denominator, which was there erroneously given as 1 + g². Since downscaling K implies ι > 1, it is seen that the required rate of the inhibitory inputs is negative. Therefore, this method only allows upscaling. An alternative is to use a balanced Poisson drive with weights Jext and −Jext, choosing the rate of both excitatory and inhibitory inputs to generate the desired variance, and adding a DC drive Iext to recover the mean input,

rext = [σext,0² − (ι − 1)σint,0²] / (2 τm Jext²), (19)

μext = μext,0, (20)

where σext,0² and μext,0 denote the variance and mean of the external drive in the full-scale model, and Iext is chosen to supply the mean μext. In this manner, the network can be downscaled up to the point where the variance of the external drive vanishes. Substituting this condition into Eq (15), the same expression for the minimal in-degree scaling factor Eq (16) is obtained as for the binary network.
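
The sketch below illustrates the balanced-drive construction numerically. It assumes the variance convention used above (each Poisson stream contributing τm Jext² r to the input variance); all parameter values are illustrative and not taken from the tables of this paper.

```python
# Sketch of the balanced-drive construction, cf. Eqs (19) and (20): when the
# in-degrees are reduced (K' = K/iota, J' = iota*J), the internal variance grows
# by (iota - 1) * sigma2_int0; the balanced Poisson rate is chosen so that the
# external variance shrinks by the same amount, while a DC drive keeps the mean.

tau_m = 10e-3        # membrane time constant (s)
J_ext = 0.1          # external synaptic weight (mV)
sigma2_int0 = 50.0   # full-scale internal variance (mV^2)
sigma2_ext0 = 30.0   # full-scale external variance (mV^2)
mu_ext0 = 15.0       # full-scale mean external input (mV)

iota = 1.2           # downscale in-degrees to K/iota
sigma2_ext = sigma2_ext0 - (iota - 1.0) * sigma2_int0
if sigma2_ext < 0:
    raise ValueError("scaling exceeds the limit of Eq (16): no valid drive exists")

r_ext = sigma2_ext / (2.0 * tau_m * J_ext**2)   # rate of each Poisson stream, Eq (19)
mu_dc = mu_ext0                                  # DC drive restores the mean, Eq (20)
print(f"balanced Poisson rate per stream: {r_ext:.1f} spikes/s, DC mean: {mu_dc} mV")
```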

Robustness of correlation-preserving scaling

In this section, we show that the scaling J ∝ 1/K, which maintains the population-level feedback quantified by the effective connectivity, can preserve correlations (within the bounds given in “Limit to in-degree scaling”) under fairly general conditions. To this end, we consider two types of networks: 1. a multi-layer cortical microcircuit model with distributed in- and out-degrees and lognormally distributed synaptic strengths (cf. “Network structure and notation”); 2. a two-population LIF network with different mean firing rates (parameters in Table 2). For both types of models, we contrast the scaling J ∝ 1/K with J ∝ 1/√K, in each case maintaining the working point given by Eqs (3) and (4). Fig 4 illustrates that the former closely preserves average pairwise cross-covariances in the cortical microcircuit model, whereas the latter changes both their size and temporal structure.

Table 2. Full-scale parameters of the two-population spiking networks used to demonstrate the robustness of J ∝ 1/K scaling to mean firing rates.

The two networks are distinguished by their external drives.

https://doi.org/10.1371/journal.pcbi.1004490.t002

Fig 4. Within the restrictive bounds imposed by Eq (16), preserving effective connectivity can preserve correlations also in a complex network.

Simulation results for the cortical microcircuit at full scale and with in-degrees reduced to 90%. Synaptic strengths are scaled as indicated, and the external drive is adjusted to restore the working point. Mean pairwise cross-covariances are shown for population 2/3E. Qualitatively identical results are obtained within and across other populations. The simulation duration is 30 s and covariances are determined with a resolution of 0.5 ms. To enable downscaling with J ∝ 1/K, the excitatory Poisson input of the original implementation of [58] is replaced by balanced inhibitory and excitatory Poisson input with a DC drive according to Eqs (19) and (20). A Scaling synaptic strengths as J ∝ 1/√K changes the mean covariance. Light green curve: stretching the covariance of the scaled network along the vertical axis to match the zero-lag correlation of the full-scale network shows that not only the size but also the temporal structure of the covariance is affected. B Scaling synaptic strengths as J ∝ 1/K closely preserves the covariance of the full-scale network. However, note that this scaling is only applicable down to the in-degree scaling factor given by Eq (16), which for this example is approximately 0.9.

https://doi.org/10.1371/journal.pcbi.1004490.g004

Fig 5 demonstrates the robustness of J ∝ 1/K scaling to the firing rate of the network. In this example, both the full-scale network and the downscaled networks receive a balanced Poisson drive producing the desired variance, while the mean input is provided by a DC drive. By changing the parameters of the external drive, we create two networks each with irregular spiking but with widely different mean rates (3.3 spikes/s and 29.6 spikes/s). Downscaling only the number of synapses but not the number of neurons, both the temporal structure and the size of the correlations are closely preserved. Reducing the in-degrees and the number of neurons N by the same factor, the correlations are scaled by 1/N. Hence, the correlations of the full-scale network of size N0 can be estimated simply by multiplying those of the reduced network by N/N0. In contrast, J ∝ 1/√K changes correlation sizes even when N is held constant, and combined scaling of N and K can therefore not simply be compensated for by the factor N/N0. In the high-rate network, the spiking statistics of the neurons are non-Poissonian, as seen from the gap in the autocorrelations (insets in Fig 5B, 5D). Nevertheless, J ∝ 1/K preserves the correlations more closely than J ∝ 1/√K, showing that the predicted scaling properties hold beyond the strict domain of validity of the underlying theory.

Fig 5. Scaling synaptic strengths as J ∝ 1/K can preserve correlations in networks with widely different firing rates.

Results of simulations of a LIF network consisting of one excitatory and one inhibitory population (Table 2). Average cross-covariances are determined with a resolution of 0.1 ms and are shown for excitatory-inhibitory neuron pairs. Each network receives a balanced Poisson drive with excitatory and inhibitory rates both given by rext [cf. Eq (19)], where rext is chosen to maintain the working point of the full-scale network. The synaptic strengths for the external drive are 0.1 mV and −0.1 mV for excitatory and inhibitory synapses, respectively. A DC drive with strength μext is similarly adjusted to maintain the full-scale working point. All networks are simulated for 100 s. For each population, cross-covariances are computed as averages over all neuron pairs across two disjoint groups of 𝓝 × 1000 neurons, where 𝓝 is the scaling factor for the number of neurons (a given pair has one neuron in each group). Autocovariances are computed as averages over 100 neurons in each population. A, B Reducing in-degrees K to 50% while the number of neurons N is held constant, J ∝ 1/K closely preserves both the size and the shape of the covariances, while J ∝ 1/√K diminishes their size. C, D Reducing both N and K to 50%, covariance sizes scale with 1/N for J ∝ 1/K but with a different factor for J ∝ 1/√K. Dashed curves represent theoretical predictions. The insets show mean autocovariances for time lags Δ ∈ (−30, 30) ms.

https://doi.org/10.1371/journal.pcbi.1004490.g005

Zero-lag correlations in binary network

Although it is not generally possible to keep mean activities and correlations invariant upon downscaling, transformations may be found when only one aspect of the correlations is important, such as their zero-lag values. We illustrate this using a simple, randomly connected binary network of N excitatory and γN inhibitory binary neurons, where each neuron receives K = pN excitatory and γK inhibitory inputs. The parameters are given in Table 3. The linearized effective connectivity matrix for this example is

W = S J K (1, −γg; 1, −γg), (21)

where the rows correspond to the excitatory and inhibitory target populations and the columns to the source populations.

When the threshold θ is ≤ 0, the network is spontaneously active without external inputs. In the diffusion approximation and assuming stationarity, the mean zero-lag cross-covariances between pairs of neurons from each population can be estimated from Eq (41) (see also [52]), yielding Eq (22), where the subscripts e and i respectively denote excitatory and inhibitory populations. Moreover, We is the effective excitatory coupling,

We = S J K, (23)

with S the susceptibility as defined in Eq (43). Furthermore, a is the variance of the single-neuron activity,

a = ⟨n⟩(1 − ⟨n⟩), (24)

which is identical for the excitatory and inhibitory populations. The mean input to each neuron is given by [cf. Eq (3)]

μ = J K ⟨n⟩ (1 − γg), (25)

and, under the assumption of near-independence of the neurons, the variance of the inputs is well approximated by the sum of the variances from each sending neuron [cf. Eq (4)],

σ² = J² K (1 + γg²) ⟨n⟩(1 − ⟨n⟩). (26)

Finally, the mean activity can be obtained from the self-consistency relation Eq (39).
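
The following sketch computes the working point of this binary network from Eqs (25) and (26) and iterates the self-consistency relation for the mean activity to a fixed point. The Gaussian form assumed for Eq (39) (mean activity given by the complementary error function of the normalized distance to threshold, as is standard for a Heaviside gain with Gaussian input) is an assumption here, and the parameter values are illustrative, not those of Table 3.

```python
from math import erfc, sqrt

# Sketch: self-consistent working point of the balanced binary network.
# Eqs (25) and (26) give mean and variance of the input for a given <n>;
# the mean activity is assumed to follow <n> = 1/2 erfc((theta - mu)/(sqrt(2) sigma)).

J, K = 0.1, 500          # excitatory weight and in-degree (illustrative)
g, gamma = 6.0, 0.25     # relative inhibitory weight and relative population size
theta = -2.0             # threshold (<= 0: spontaneously active)

def working_point(n):
    mu = J * K * n * (1.0 - gamma * g)                              # Eq (25)
    sigma = sqrt(J**2 * K * (1.0 + gamma * g**2) * n * (1.0 - n))   # Eq (26)
    return mu, sigma

n = 0.2                  # initial guess for the mean activity
for _ in range(500):
    mu, sigma = working_point(n)
    target = 0.5 * erfc((theta - mu) / (sqrt(2.0) * sigma))
    n += 0.2 * (target - n)          # damped fixed-point iteration

print(f"self-consistent <n> = {n:.3f}, mu = {mu:.2f}, sigma = {sigma:.2f}")
```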

Eq (22) shows that, when excitatory and inhibitory synaptic weights are scaled equally, the covariances scale with 1/N as long as the network feedback is strong (We ≫ 1; for this argument, we assume that ⟨n⟩ is held constant, which may be achieved by adjusting a combination of θ and the external drive). Hence, conventional downscaling of population sizes tends to increase covariances.

We use Eq (22) to perform a more sophisticated downscaling (cf. Fig 6). Let the new size of the excitatory population be N′. Eq (22) shows that the covariances can only be preserved when a combination of We, γ, and g is adjusted. We take γ constant, and apply the transformation

We → f We,  g → g′,  N → N′. (27)

Solving Eq (22) for f and g′ yields the expressions in Eqs (28) and (29) (cf. Fig 6B). The change in We can be captured by K → f K as long as the working point (μ, σ) is maintained. This intuitively corresponds to a redistribution of the synapses so that a fraction f comes from inside the network, and 1 − f from outside (cf. Fig 6A). However, the external drive does not have the same mean and variance as the internal inputs, since it needs to make up for the change in g. The external input can be modeled as a Gaussian noise with parameters

μext = J K ⟨n⟩ [(1 − γg) − f (1 − γg′)], (30)

σext² = J² K ⟨n⟩(1 − ⟨n⟩) [(1 + γg²) − f (1 + γg′²)], (31)

independent for each neuron.

Fig 6. Binary network scaling that approximately preserves both mean activities and zero-lag covariances.

A Increased covariances due to reduced network size can be countered by a change in the relative inhibitory synaptic weight combined with a redistribution of the synapses so that a fraction comes from outside the network. Adjusting a combination of the threshold and external drive restores the working point. B Scaling parameters versus relative network size for an example network. Since γ = 1 in this example, the scaling only works down to g = 1 (indicated by the horizontal and vertical dashed lines): Lower values of g only allow a silent or fully active network as steady states. C, E The mean activities are well preserved both by the conventional scaling in Eq (1) with an appropriate adjustment of θ (panel C), and by the method proposed here (panel E). D, F Conventional scaling increases the magnitude of zero-lag covariances in simulated data (panel D), while the proposed method preserves them (panel F). Dark colors: full-scale network. Light colors: downscaled network. Crosses and dots indicate zero-lag correlations in the full-scale and downscaled networks, respectively.

https://doi.org/10.1371/journal.pcbi.1004490.g006

An alternative is to perform the downscaling in two steps: First change the relative inhibitory weights according to Eq (29) but keep the connection probability constant. The mean activity can be preserved by solving Eq (39) for θ, but the covariances are changed. The second step, which restores the original covariances, then amounts to redistributing the synapses so that a fraction comes from inside the network and the remainder from outside, where the external (non-modeled) neurons have the same mean activity as those inside the network. The mean input contributed by these redistributed synapses is negative, as the balanced regime implies stronger inhibition than excitation. Note that this fraction differs from f, since We changes already in the first step.

The requirement that inhibition dominate excitation places a lower limit on the network size for which the scaling is effective. The reason is that g decreases with network size, so that a bifurcation occurs at g = 1/γ, beyond which the only steady states correspond to a silent network or a fully active one.

Symmetric two-population spiking network

We have seen that the one-to-one relationship between effective connectivity and correlations does not hold in certain degenerate cases. Here we consider such a degenerate case and perform a scaling that preserves mean activities as well as both the size and the temporal structure of the correlations under reductions in both the number of neurons and the number of synapses. The network consists of one excitatory and one inhibitory population of LIF neurons with a population-independent connection probability and vanishing transmission delays. Due to the appearance of the eigenvalues in the numerator of the expression for the correlations in LIF networks [cf. Eqs (70) and (71)], such networks are subject to a reduced number of constraints when W has a zero eigenvalue, as this leaves a freedom to change the corresponding eigenvectors. Furthermore, identically vanishing delays greatly simplify the equations for the covariances.

The single-neuron and network parameters are as in Table 2 except that, here, N = 10,000, J = 0.2 mV, and the external drive is chosen such that the mean and standard deviation of the total input to each neuron are μ = 15 mV, σ = 10 mV. Furthermore, the delay is chosen equal to the simulation time step to approximate d = 0, which we assume here. The effective connectivity matrix for this network is

W = w K (1, −γg; 1, −γg), (32)

where w = ∂rtarget/∂rsource is the effective excitatory synaptic weight obtained as the derivative of Eq (67). Here, we take into account the dependence of w on J to quadratic order. The inhibitory weight is approximated as −gw to allow an analytical expression for the relative inhibitory weight in the scaled network to be derived. The left and right eigenvectors are u1 ∝ (1, −γg) and v1 = (1, 1)ᵀ corresponding to eigenvalue L = w K(1 − γ g), and u2 ∝ (1, −1) and v2 = (γg, 1)ᵀ corresponding to eigenvalue 0. The normalization is chosen such that the bi-orthogonality condition Eq (47) is fulfilled.
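
The rank-one structure of Eq (32) is easy to verify numerically. The sketch below builds the matrix for illustrative values of w, K, g, and γ, confirms the eigenvalues L = wK(1 − γg) and 0, and checks the bi-orthogonality of left and right eigenvectors.

```python
import numpy as np

# Sketch: eigen-decomposition of the rank-one effective connectivity of Eq (32).
# Values of w, K, g, gamma are illustrative only.

w, K, g, gamma = 0.002, 1000, 5.0, 0.25
W = w * K * np.array([[1.0, -gamma * g],
                      [1.0, -gamma * g]])

eigvals, right = np.linalg.eig(W)        # columns of 'right' are right eigenvectors
left = np.linalg.inv(right)              # rows of 'left' are the bi-orthogonal left eigenvectors

print("eigenvalues:", np.round(eigvals, 4), " expected:", w * K * (1 - gamma * g), "and 0")
print("bi-orthogonality check (should be identity):\n", np.round(left @ right, 10))
```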

A transformed connectivity matrix should have the same eigenvalues as W, and can thus be written in the form given by Eqs (33) and (34). Denote the new population sizes by N1 and N2. Equating the covariances before and after the transformation, using Eq (71) and Ajk = vjᵀ A vk [cf. Eq (49)], yields Eq (35). In Eq (35) we have assumed that the working points, and thus a1 and a2, are preserved, which may be achieved with an appropriate external drive as long as the corresponding variance remains positive. The four equations are simultaneously solved by Eq (36), where the product w′K′ may be chosen freely. Thus, the new connectivity matrix reads as in Eq (37), which may also be cast into the form of Eq (38), where γ′ = N2/N1 and g′ is the relative inhibitory synaptic weight of the scaled network.

When the populations receive statistically identical external inputs, we have a1 = a2 = r, since the internal inputs are also equal. Fig 7 illustrates the network scaling for the choice w′ = w. Results are shown as a function of the relative size N1/N of the excitatory population. External drive is provided at each network size to keep the mean and standard deviation of the total inputs to each neuron at the level indicated. The mean is supplied as a constant current input, while the variability is afforded by Poisson inputs according to Eqs (17) and (18) (Fig 7D). It is seen that the transformations (Fig 7B) are able to reduce both the total numbers of neurons and the total number of synapses (Fig 7C) while approximately preserving covariance sizes and shapes (Fig 7E,7F). Small fluctuations in the theoretical predictions in Fig 7E are due to the discreteness of numbers of neurons and synapses, and deviations of the effective inhibitory weight from the linear approximation g w. The fact that the theoretical prediction in Fig 7F misses the small dips around t = 0 may be due to the approximation of the autocorrelations by delta functions, eliminating the relative refractoriness due to the reset. The numbers of neurons and synapses increase again below some N1/N, and diverge as g′ becomes zero. This limits the scalability despite the additional freedom provided by the symmetry.

Fig 7. Spiking network scaling that approximately preserves mean firing rates and covariances.

A Diagram illustrating the network and indicating the parameters that are adjusted. B Excitatory in-degrees K′, relative inhibitory synaptic weight g′, and relative number of inhibitory neurons γ′ versus scaling factor N1/N. The dashed vertical line indicates the limit below which the scaling fails. C Total number of neurons Ntotal = (1 + γ′)N1 and total number of synapses Nsyn = (1 + γ′)² K′N1 versus scaling factor. D Rates of external excitatory and inhibitory Poisson inputs necessary for keeping firing rates constant. Average firing rates are between 23.1 and 23.5 spikes/s for both excitatory and inhibitory populations and all network sizes. E Integrated covariances, corresponding to zero-frequency components in the Fourier domain. Crosses: simulation results, dots: theoretical predictions. F Average covariance between excitatory-inhibitory neuron pairs for different network sizes. The dashed curve indicates the theoretical prediction for N = 10,000. Each network was simulated for 100 s.

https://doi.org/10.1371/journal.pcbi.1004490.g007

Discussion

By applying and extending the theory of correlations in asynchronous networks of binary neurons and networks of leaky integrate-and-fire (LIF) neurons, our present work shows that the scalability of numbers of neurons and synapses is fundamentally limited if mean activities and pairwise averaged activity correlations are to be preserved. We analytically derive a limit on the reducibility of the number of incoming synapses per neuron, K (the in-degree), which depends on the variance of the external drive, and which indirectly restricts the scalability of the number of neurons. Within these restrictive bounds, we propose a scaling of the synaptic strengths J and the external drive with K that can preserve mean activities and the size and temporal structure of pairwise averaged correlations. Mean activities can be approximately preserved by maintaining the mean and variance of the total input currents to the neurons, also referred to as the working point. The temporal structure of pairwise averaged correlations depends on the effective connectivity, a measure of the effective influence of source populations on target populations determined both by the physical connectivity and the working point of the target neurons. When the dependence of the effective connectivity on the synaptic strengths J is linearized, it can be written as SJK, where S is the susceptibility of the target neurons (quantifying the change in output activity for a unit change in input). Scalings and analytical predictions of pairwise averaged correlations are tested using direct simulations of randomly connected networks of excitatory and inhibitory neurons.

Our most important findings are:

  1. The population-level effective connectivity matrix and pairwise averaged correlations are linked by a one-to-one mapping except in degenerate cases. Therefore, with few exceptions, any network scaling that preserves the correlations needs to preserve the effective connectivity.
  2. The most straightforward way of simultaneously preserving mean activities and pairwise averaged correlations is to change the synaptic strengths in inverse proportion to the in-degrees (J ∝ 1/K), and to adjust the variance of the external drive to make up for the change in variance of inputs from within the network. Other scalings, such as J ∝ 1/√K, can in principle also preserve both mean activities and pairwise averaged correlations, but then change the working point (hence the neuronal susceptibility determining the strength of stimulus responses, and the degree to which the activity is mean- or fluctuation-driven), and are analytically intractable for LIF networks due to the complicated dependence of the firing rates and the impulse response on the mean and variance of the inputs.
  3. When downscaling the in-degrees K and scaling synaptic strengths as J ∝ 1/K, the variance of inputs from within the network increases, so that the variance of external inputs needs to be decreased to restore the working point. This is only possible up to the point where the variance of the external drive vanishes. The minimal in-degree scaling factor equals the ratio between the variance of inputs coming from within the network, and the total input variance due to both internal inputs and the external drive. The same limit to in-degree scaling holds more generally for scalings that simultaneously preserve mean activities and correlations. Thus, in the absence of a variable external drive, no downscaling is possible without changing mean activities, correlations, or both.
  4. Within the identified restrictive bounds, the scaling J ∝ 1/K, where the external variance is adjusted to maintain the working point, can preserve mean activities and pairwise averaged correlations also in asynchronous networks deviating from the assumptions of the analytical theory presented here. We show this robustness for an example network with distributed in- and out-degrees and distributed synaptic weights, and for a network with non-Poissonian spiking.
  5. For a sufficiently large change in in-degrees, a scaling that affects correlations can push the network from the linearly stable to an oscillatory regime or vice versa.
  6. Transformations derived using the diffusion approximation are able to closely preserve the relevant quantities (mean activities, correlation shapes and sizes) in simulated networks of binary and spiking neurons within the given bounds. Reducing the number of neurons only increases correlation magnitudes without affecting their structure in this approximation. However, strong deviations from the assumptions of the diffusion approximation can cause also correlation structure to change in simulated networks under scalings originally constructed to maintain correlation structure. This occurs for instance when a drastic reduction in network size is coupled with a less than proportional reduction in in-degrees, leading to large numbers of common inputs and increased synchrony. Thus, the scalability of the number of neurons with available analytical results is indirectly limited by the minimal in-degree scaling factor.

In conclusion, we have identified limits to the reducibility of neural networks, even when only considering first- and second-order statistical properties. Networks are inevitably irreducible in some sense, in that downscaled networks are clearly not identical to their full-scale counterparts. However, mean activity, a first-order macroscopic quantity, can usually be preserved. The present work makes it clear that non-reducibility already sets in at the second-order macroscopic level of correlations. This does not imply a general minimal size for network models to be valid, merely that each network in question needs to be studied near its natural size to verify results from any scaled versions.

Our analytical theory is based on the diffusion approximation, in which inputs are treated as Gaussian noise, valid in the asynchronous irregular regime when activities are sufficiently high and synaptic weights are small. Moreover, external inputs are taken to be independent across populations, and delays and time constants are assumed to be unchanged under scaling. A further assumption of the theory is that the dynamics is stationary and linearly stable.

The one-to-one correspondence between effective connectivity and correlations applies with a few exceptions. For non-identical populations with different impulse responses, an analysis in the frequency domain demonstrates the equivalence under the assumption that the correlation matrix is invertible. An argument that assumes a diagonalizable effective connectivity matrix extends the equivalence to identical populations apart from cases where the effective connectivity matrix has eigenvalues that are zero or degenerate.

The equivalence of correlations and effective connectivity ties in with efforts to infer structure from activity, not only in neuroscience [59, 60, 61, 62, 63, 64, 65, 66] but also in other disciplines [67, 68, 69], as it implies that one should in principle be able to find the only—and therefore the real—effective connectivity that accounts for the correlations. Within the same framework as that used here, [65] shows that knowledge of the cross-spectrum at two distinct frequencies allows a unique reconstruction of the effective connectivity matrix by splitting the covariance matrix into symmetric and antisymmetric parts. The derivation considers a class of transfer functions (the Fourier transform of the neuronal impulse response) rather than any specific form, but the transfer function is taken to be unique, whereas the present work allows for differences between populations. Furthermore, we here present a more straightforward derivation of the equivalence, not focused on the practical aim of network reconstruction, and clarify the conditions under which reconstruction is possible.

In practice, using our results to infer structure from correlations may not be straightforward, due to both deviations from the assumptions of the theory and problems with measuring the relevant quantities. For instance, neural activity is often nonstationary [70], transfer functions are normally not measured directly, and correlations are imperfectly known due to measurement noise. Furthermore, inference of anatomical from functional connectivity (correlations) is often done based on functional magnetic resonance imaging (fMRI) measurements, which are sensitive only to very low frequencies and therefore only allow the symmetric part of the effective connectivity to be reliably determined [66]. The presence of unobserved populations providing correlated input to two or more observed populations can also hinder inference of network structure. Thus, high-resolution measurements (e.g., two-photon microscopy combined with optogenetics to record activity in a cell-type-specific manner [71, 72]) of networks with controlled input (e.g., in brain slices) hold the most promise for network reconstruction from correlations.

The effects on correlation-based synaptic plasticity of scaling-related changes in correlations may be partly compensated for by adjusting the learning parameters. For instance, an increase in the average correlation magnitude, which grows in proportion to 1/N without a change in temporal shape, may to some extent be countered by reducing the learning rate by the corresponding factor. Changes in the temporal structure of the correlations are more difficult to compensate for. When learning is linear or slow, so that the learning function can be approximated as constant (independent of the weights), the mean drift in the synaptic weights is determined by the integral of the product of the correlations and the learning function [73, 74]. Therefore, this mean drift may be kept constant under a change in correlation shapes by adjusting the learning function such that this product is preserved for all time lags. However, given that the expression for the correlations is a complicated function of the network parameters, the required adjustment of the learning function will also be complex. Moreover, the effects of this adjustment on precise patterns of weights are difficult to predict, since the distribution of correlations between neuron pairs may change under the proposed scalings, and this solution does not apply when learning is fast and weight-dependent.
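The compensation argument for slow or linear learning can be illustrated with a small numerical example; the correlation function and learning window below are arbitrary placeholders rather than those of any particular plasticity model.

```python
import numpy as np

lags = np.linspace(-50.0, 50.0, 2001)           # time lags in ms
dt = lags[1] - lags[0]

c = 0.01 * np.exp(-np.abs(lags) / 5.0)           # placeholder pairwise correlation function
W = np.where(lags > 0, np.exp(-lags / 20.0),     # placeholder STDP-like learning window
             -0.5 * np.exp(lags / 20.0))

drift_full = np.sum(c * W) * dt                  # mean weight drift ∝ ∫ c(Δ) W(Δ) dΔ

alpha = 10.0                                     # factor by which correlations grow when downscaling
drift_scaled = np.sum((alpha * c) * (W / alpha)) * dt   # reduce the learning rate by the same factor

print(np.isclose(drift_full, drift_scaled))      # True: the mean drift is preserved
```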

The groundbreaking work of [46] identified a dynamic balance between excitation and inhibition as a mechanism for the asynchronous irregular activity in cortex, and showed that J ∝ 1/√K scaling can robustly lead to a balanced state in the limit N → ∞ for constant K/N. However, it is not necessary to scale synaptic weights as 1/√K in order to obtain a balanced network state, even in the limit of infinite network size (and infinite K). For instance, J ∝ 1/K can retain balance in the infinite size limit in the sense that the sum of the excitatory and inhibitory inputs is small compared to each of these inputs separately. To retain irregular activity with this scaling one merely needs to ensure a variable external drive, as the internal variance vanishes for N → ∞. Moreover, in binary networks with neurons that have a Heaviside gain function (a hard threshold) identical across neurons, one does not even need a variable drive in order to stay in a balanced state [46, p. 1360]. This can be seen from a simple example of a network of N excitatory and γN inhibitory neurons with random connectivity with probability p, where J = J0/N > 0 is the synaptic amplitude of an excitatory synapse, and −gJ the amplitude of an inhibitory synapse. The network may receive a DC drive, which we absorb into the threshold θ. The summed input to each cell is then μ = pNJ(1 − γg) n = pJ0(1 − γg) n, where n ∈ [0, 1] is the mean activity in the network. For a balanced state to arise, the negative feedback must be sufficiently strong, so that the mean activity n settles on a level where the summed input is close to the threshold, μ ≈ θ. This is always achieved if pJ0(1 − γg) < θ < 0: in a completely activated network (n = 1) the summed input is below threshold, and in a silent network (n = 0) the summed input is above threshold, hence the activity settles close to the value n ≈ θ/[pJ0(1 − γg)] (see the numerical check below). As the variance of the synaptic input decreases with network size, this estimate of the mean activity becomes exact in the limit N → ∞. The underlying reason for both the 1/K and the 1/√K scaling leading to a qualitatively identical balanced state is the absence of a characteristic scale on which to measure the synaptic input: the threshold is hard. Only by introducing a characteristic scale, for example distributed values for the thresholds, does the 1/K scaling with a DC drive lead, in the large-N limit, to a freezing of the balanced state due to the vanishing variance of the summed input; with either 1/√K scaling, or 1/K scaling with a fluctuating external drive, the balanced state is conserved.
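A minimal numerical check of this fixed-point argument, with arbitrary example parameters:

```python
# Fixed point of the mean activity in the binary network with a hard threshold
p, N = 0.1, 10000          # connection probability and number of excitatory neurons
gamma, g = 0.25, 8.0       # relative size of the inhibitory population and relative inhibitory weight
J0, theta = 1.0, -0.05     # J = J0/N and threshold (absorbing the DC drive)

assert p * J0 * (1.0 - gamma * g) < theta < 0.0      # balance condition discussed in the text

n_est = theta / (p * J0 * (1.0 - gamma * g))         # activity at which the summed input settles near threshold
mu = p * N * (J0 / N) * (1.0 - gamma * g) * n_est    # summed input mu = p N J (1 - gamma g) n
print(n_est, mu, theta)                              # mu coincides with theta at the fixed point
```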

In [46], 1/√K refers not only to a comparison between differently-sized networks, but also to the assumption that approximately √K excitatory synapses need to be active to reach spike threshold. However, this is also not a necessary condition for balance, which can arise for a wide range of synaptic strengths relative to threshold, as long as inhibition is sufficiently strong compared to excitation. As discussed in “Correlation-preserving scaling”, with appropriately chosen external drive, J even drops out of the mean-field theory for binary networks with a Heaviside gain function altogether [52]. The difficulty in interpreting the results of [46] illustrates a more general point: the primary goal of scaling studies is to identify the mechanisms governing network dynamics. Nevertheless, these studies usually also specify requirements on the robustness of the mechanism, leading to scaling laws for network parameters that may be more restrictive than a description of the mechanism per se. An example is the robustness to strong synapses, defined such that activation of approximately √K excitatory synapses suffices to reach threshold in the absence of an external drive [46, p. 1324]. This scenario was considered in order to create a condition under which dynamic balance is clearly necessary for achieving asynchronous irregular activity in balanced random networks, since the combined inputs would otherwise far exceed the threshold. However, dynamic balance can arise also with weak synapses, e.g., with strength ∼ 1/K of the distance to threshold. Without questioning the value of scaling studies, which can distill essential mechanisms and are sometimes possible where finite-size analytical descriptions are intractable, this shows that scaling laws need to be interpreted with care.

The issue of the interrelation between network size, synaptic strengths, numbers of synapses per neuron, and activity is embedded in the wider context of anatomical and physiological scaling laws observed experimentally. In homeostatic synaptic plasticity, synaptic strengths are adjusted in a manner that keeps the activity of the postsynaptic neurons within a certain operating range [75, 76, 77]. Since postsynaptic activity depends not only on the strength of inputs but also on their number, this may induce a correlation between synaptic strengths and in-degree. In line with this hypothesis, excitatory postsynaptic currents (EPSCs) at single synapses were found to be inversely related to the density of active synapses onto cultured hippocampal neurons [78], and the size of both miniature EPSCs and evoked EPSCs between neurons decreased with network size and with the number of synapses per neuron in patterned cultures [79], although contrasting results have also been reported [80, 81]. In the development of a mammal, the neuronal network grows by orders of magnitude and is continuously modified. For instance, the amplitude of miniature EPSCs is reduced in a period of heightened synaptogenesis in rat primary visual cortex [82]. During such developmental processes, some functions are conserved and new functions emerge. This balance between stability and flexibility is an intriguing theoretical problem. Here, network scaling is deeply related to biological principles. Our results open up a new perspective for analyzing and interpreting such biological scaling laws.

Certainly, most network models will not fit neatly into the categories considered here, and detailed models often provide valuable insights regardless of whether they are scaled in a systematic manner. Nevertheless, it is usually possible to at least mention whether and how a particular model is scaled. When the results are not amenable to mathematical analysis, we suggest investigating through simulations of networks of different sizes how essential characteristics depend on numbers of neurons and synapses (the relevant characteristics depend on the model at hand, and do not necessarily include mean activities or correlations). Thus, while both the investigation of the infinity limit and the exploration of downscaled networks remain powerful methods of computational neuroscience, we argue for a more careful approach to network scaling than has hitherto been customary, making the type of scaling and its consequences explicit. Fortunately, in neuroscience full-scale simulations are now becoming routinely possible due to the technological advances of recent years.

Methods

Software

We verify analytical results for networks of binary neurons and networks of spiking neurons using direct simulations performed with NEST [83] revisions 10711 and 11264 for the spiking networks and revision 11540 for the binary networks. For simulating the multi-layer microcircuit model, PyNN version 0.7.6 (revision 1312) [84] was used with NEST 2.6.0 as back end, single-threaded on 12 MPI processes on a high-performance cluster. All simulations have a time step of 0.1 ms. Spike times in the microcircuit model are constrained to the grid. The other spiking network simulations use precise spike timing [85]. In part, Sage was used for symbolic linear algebra [86]. Pre- and post-processing and numerical analysis were performed with Python.

Network structure and notation

For both the binary and the spiking networks, we derive analytical results where both the number of populations Npop and the population-level connectivity are arbitrary. Specific examples are given of networks with a single, inhibitory population, or with two populations (one excitatory, one inhibitory) with either population-specific or population-independent connectivities. In addition, we discuss a multi-layer spiking cortical microcircuit model consisting of 77,169 neurons with approximately 3 × 108 synapses, with eight populations (2/3E, 2/3I, 4E, 4I, 5E, 5I, 6E, 6I) and population-specific connection probabilities [58], slightly adjusted to enhance the asynchrony of the activity. The adjustments consist of replacing normally by lognormally distributed weights with the same mean and with coefficient of variation 3; and using 4.5 instead of 4 as the relative strength of synapses from 4I to 4E compared to excitatory synaptic strengths. Besides distributed synaptic strengths, the model has binomially distributed in- and out-degrees, and normally distributed delays (clipped at the simulation time step), thereby deviating from the assumptions of our analytic theory. It thus serves to evaluate the robustness of our analytical results to such deviations from the underlying assumptions.

In all cases, pairs of populations are randomly connected. In the binary and one- and two-population LIF network simulations, in-degrees are fixed and multiple directed connections between pairs of neurons (multapses) are disallowed. In the multi-layer microcircuit model, in-degrees are distributed and multapses are allowed. In case of population-specific connectivities, we denote the (unique or mean) in-degree for connections from population β to population α by Kαβ, and synaptic strengths by Jαβ. Population sizes are denoted by Nα. For the example networks with population-independent connection probability, we denote the size of the excitatory population by N, the in-degree from excitatory neurons by K = pN, and the size of the inhibitory relative to the excitatory population by γ, so that the inhibitory in-degree is γK. Synaptic strengths are also taken to only depend on the source population, and are written as J for excitatory and −gJ for inhibitory synapses.
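For concreteness, a fixed in-degree connectivity of the kind used in the binary and one- and two-population LIF simulations can be drawn as in the following sketch (illustrative only; parameter values are placeholders and autapses are not excluded for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

N, gamma = 800, 0.25                      # excitatory population size and relative inhibitory size
N_I = int(gamma * N)
p = 0.1
K, K_I = int(p * N), int(p * gamma * N)   # excitatory and inhibitory in-degrees
J, g = 0.1, 5.0                           # excitatory weight and relative inhibitory strength

N_tot = N + N_I
W = np.zeros((N_tot, N_tot))              # W[i, j]: weight of the synapse j -> i

for i in range(N_tot):
    exc_sources = rng.choice(N, size=K, replace=False)          # fixed in-degree, no multapses
    inh_sources = N + rng.choice(N_I, size=K_I, replace=False)
    W[i, exc_sources] = J
    W[i, inh_sources] = -g * J

# every neuron receives exactly K excitatory and gamma*K inhibitory inputs
assert np.all((W > 0).sum(axis=1) == K) and np.all((W < 0).sum(axis=1) == K_I)
```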

Binary network dynamics

We denote the activity of neuron j by nj(t). The state nj(t) of a binary neuron is either 0 or 1, where 1 indicates activity, 0 inactivity [7, 42, 87]. The state of the network of N such neurons is described by a binary vector n = (n1, …, nN) ∈ {0,1}N. We denote the mean activity by ⟨nj(t)⟩t, where the average ⟨⟩t is over time and realizations of the stochastic activity. The neuron model shows stochastic transitions (at random points in time) between the two states 0 and 1. In each infinitesimal interval [t, t + δt), each neuron in the network has the probability δt/τ to be chosen for update [88], where τ is the time constant of the neuronal dynamics. We use an equivalent implementation in which the time points of update are drawn independently for all neurons. For a particular neuron, the sequence of update points has exponentially distributed intervals with mean duration τ, i.e., update times form a Poisson process with rate τ−1. The stochastic update constitutes a source of noise in the system. Given that the j-th neuron is selected for update, the probability to end in the up state (nj = 1) is determined by the gain function Fj(n(t)) = Θ(∑k Jjk nk(t) − θ), which in general depends on the activity n of all other neurons. Here θ denotes the threshold of the neuron and Θ(x) the Heaviside function. The probability of ending in the down state (nj = 0) is 1 − Fj(n). This model has been considered previously [42, 87, 89]; here we follow the notation introduced in [87] that we also employed in our earlier works, and skip details of the derivation that are already contained in [9].
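A compact sketch of these update dynamics (the equivalent event-based scheme: exponentially distributed update intervals and a Heaviside gain), with an arbitrary placeholder coupling matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

N, tau, theta = 200, 10.0, 0.0          # neurons, update time constant (ms), threshold
J = rng.normal(0.0, 0.5 / np.sqrt(N), size=(N, N))   # placeholder coupling matrix
n = rng.integers(0, 2, size=N).astype(float)         # random initial binary state

T, t = 2000.0, 0.0
while t < T:
    t += rng.exponential(tau / N)       # next update event: Poisson process with total rate N/tau
    j = rng.integers(N)                 # neuron chosen for update
    n[j] = 1.0 if J[j] @ n - theta > 0.0 else 0.0     # Heaviside gain F_j(n)

print("mean activity:", n.mean())
```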

First and second moments of activity in the binary network

The combined distribution of large numbers of independent inputs can be approximated as a Gaussian 𝓝(μ, σ2) by the central limit theorem. The arguments μ and σ are the mean and standard deviation of the synaptic input noise, together referred to as the working point [cf. Eqs (3) and (4)]. The stationary mean activity of a given population of neurons then obeys [7, 9, 46, 52] (39) This equation needs to be solved self-consistently because ⟨n⟩ influences μ, σ through interactions within the population itself and with other populations.
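Since Eq (39) is not reproduced here, the following sketch assumes the error-function form that results from averaging the Heaviside gain over Gaussian input, and solves the self-consistency numerically for a single inhibitory population with an external drive (all parameter values are placeholders):

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import brentq

# single inhibitory population with external drive (placeholder parameters)
K, J, theta = 500, -0.1, -20.0      # in-degree, (inhibitory) weight, threshold
mu_ext, var_ext = 10.0, 4.0         # mean and variance of the external input

def gain(n):
    """Heaviside gain averaged over the Gaussian input implied by mean activity n."""
    mu = K * J * n + mu_ext
    var = K * J**2 * n * (1.0 - n) + var_ext   # Bernoulli variance of binary inputs plus external variance
    return 0.5 * erfc((theta - mu) / np.sqrt(2.0 * var))

# self-consistent solution of <n> = gain(<n>), cf. Eq (39)
n_star = brentq(lambda n: gain(n) - n, 0.0, 1.0)
print("self-consistent mean activity:", n_star)
```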

When network activity is stationary, the covariance of the activities of a pair (j, k) of neurons is defined as cjk(Δ) = ⟨δnj(t + Δ)δnk(t)⟩t, where δnj(t) = nj(t) − ⟨nj(t)⟩t is the deviation of neuron j’s activity from expectation, and Δ is a time lag. Instead of the raw correlation ⟨nj(t + Δ)nk(t)⟩t, here and for the spiking networks we measure the covariance, i.e., the second centralized moment, which is also identical to the second cumulant. To derive analytical expressions for the covariances in binary networks in the asynchronous regime, we follow the theory developed in [7, 9, 42, 52, 53]. We first consider the case of vanishing transmission delays d = 0 and then discuss networks with delays.

Let (40) be the covariance averaged over disjoint pairs of neurons in two (possibly identical) populations α, β, and the population-averaged single-neuron variance aj(Δ) = ⟨δnj(t + Δ)δnj(t)⟩t. Note that for α = β there are only Nα(Nα − 1) disjoint pairs of neurons, so cαα differs from the average pairwise cross-correlation by a factor (Nα − 1)/Nα, but we choose this definition because it slightly simplifies the population-level equations. For sufficiently weak synapses and sufficiently high firing rates, and when higher-order correlations can be neglected, a linearized equation relating these quantities can be derived for the case d = 0 ([42] Eqs (9.14)–(9.16); [7] supplementary material Eq (36), [9] Eq (10)), (41) Here, we have assumed identical time constants across populations, and (42) is the linearized effective connectivity. The susceptibility S is defined as the slope of the gain function averaged over the noisy input to each neuron [9, 52, 53], reducing for a Heaviside gain function to (43)

With the definitions (44) (45) Eq (41) is recognized as a continuous Lyapunov equation (46) which can be solved using known methods. Let vj,uk be the left and right eigenvectors of W, with eigenvalues λj and λk, respectively. Choose the normalization such that the left and right eigenvectors are biorthogonal, (47) Then multiplying Eq (46) from the left with vjT and from the right with vk yields (48) Define (49) for . Then solving Eq (48) for gives (50) as can be verified using Eq (47). This provides an approximation of the population-averaged zero-lag correlations, including contributions from both auto- and cross-correlations.
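Numerically, the zero-lag covariances can also be obtained from a standard Lyapunov solver instead of the eigenvector expansion. The sketch below assumes that Eq (46) takes the conventional form M c(0) + c(0) MT + Q = 0 with M built from W, and also illustrates the biorthogonal left and right eigenvectors entering Eqs (47)–(50); W and Q are placeholder matrices.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eig

# placeholder effective connectivity for two populations and placeholder source term
W = np.array([[0.2, -0.6],
              [0.5, -0.9]])
M = W - np.eye(2)                # assumed drift matrix entering Eq (46)
Q = np.diag([0.2, 0.3])          # assumed inhomogeneity (autocovariance-driven)

# continuous Lyapunov equation M c0 + c0 M^T + Q = 0
c0 = solve_continuous_lyapunov(M, -Q)
print("Lyapunov residual:", np.max(np.abs(M @ c0 + c0 @ M.T + Q)))

# biorthogonal left/right eigenvectors of W as used in Eqs (47)-(50)
lam, U = eig(W)                  # columns of U are right eigenvectors u_k
V = np.linalg.inv(U)             # rows of V are left eigenvectors v_j^T with v_j^T u_k = delta_jk
print("biorthogonality check:", np.max(np.abs(V @ U - np.eye(2))))
```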

To determine the temporal structure of the population-averaged cross-correlations, we start from the single-neuron level, for which the correlations approximately obey ([53] Eq (29)) (51) where wij is the neuron-level effective connectivity (wij = Si Jij if a connection exists and wij = 0 otherwise). This equation also holds on the diagonal, j = k. To obtain the population-level equation, we use Eqs (40) and (44) and count the numbers of connections, which yields a factor Kαβ for each projection. Eq (51) then becomes (52) This step from the single-neuron to the population level constitutes an approximation when the out-degrees are distributed, but is exact for fixed out-degree [8, 53]. The correlations for Δ < 0 are determined by the symmetry c(Δ) = cT(−Δ). With the definition Eq (49), Eq (52) yields (53) Using the initial condition from Eq (50), multiplying Eq (53) by uj ukT, and summing over j and k, we obtain the solution (54) The shape of the autocovariances is well approximated by that for isolated neurons, a(Δ) = a(0) e^{−|Δ|/τ}, with corrections due to interactions being O(1/N) [42]. Substituting this form in Eq (54) leads to (55) equivalent to [42] Eq (6.20). Note that this equation still needs to be solved self-consistently, because the variance of the inputs to the neurons, which goes into S(μ, σ), depends on the correlations. However, correlations tend to contribute only a small fraction of the input variance in the asynchronous regime (cf. [9] Fig 3D). The accuracy of the result Eq (55) is illustrated in Fig 3A for a network with parameters given in Table 1 by comparison with a direct simulation. Note that the delays were not zero but equal to the simulation time step of 0.1 ms, sufficiently small for the correlations to be well approximated by Eq (55).

Now consider arbitrary transmission delay d > 0, and let both d and the input statistics be population-independent. This case is most easily approached from the Fourier domain, where the population-averaged covariances including autocovariances can be approximated as [53] (56) Here, H(ω) is the transfer function (57) which is equal for all populations under the assumptions made. The transfer function is the Fourier transform of the impulse response, which is a jump followed by an exponential relaxation, (58) where Θ is the Heaviside step function.

For the case of population-independent H(ω), Fourier back transformation to the time domain is feasible, and was performed in [53] for symmetric connectivity matrices. Here, we consider generic connectivity (insofar as consistent with equal H(ω)), and again use projection onto the eigenspaces of W to obtain a form similar to Eq (55), i.e., insert the identity matrix (59) both on the left and on the right of Eq (56), and Fourier transform to obtain (60) In the third line of Eq (60), we used Ajk = vjT A vk and collected the frequency-dependent terms for clarity. The exponential e^{iωΔ} does not have any poles, so the only poles stem from fjk, which we denote by zl(λj) and the corresponding residues by Resj,k[zl(λj)]. We only need to consider Δ ≥ 0, since the solution for negative lags follows from transposition and reversal of the time lag, c(Δ) = cT(−Δ). The equation can then be solved by contour integration over the upper half of the complex plane, as the integrand vanishes at ω → +i∞. Stability requires that the poles of the first term of Eq (60) lie only in the upper half plane (note that the linear approximation we have employed only applies in the stable regime). The poles of the second term correspondingly lie in the lower half plane and hence need not be considered. For d > 0, the locations of the poles are given by [53] Eq (12), (61) where Wl is the lth of the infinitely many branches of the Lambert-W function defined by x = W(x)e^{W(x)} [90]. For d = 0, the poles are . Using the residue theorem thus brings Eq (60) into the form (62) where I(γ) = 1 is the winding number of the contour γ around the poles. To see that Eq (62) reduces to Eq (55) when d = 0, substitute the poles in the upper half plane with residues [(2 − λj − λk)]−1 and note that .
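Because Eq (61) is not reproduced above, the sketch below treats the pole locations as an assumption: it takes the delayed low-pass transfer function H(ω) = e^{−iωd}/(1 + iωτ) suggested by Eqs (57) and (58), evaluates candidate poles from branches of the Lambert-W function, and verifies each one against the characteristic equation 1 + iωτ = λ e^{−iωd} numerically.

```python
import numpy as np
from scipy.special import lambertw

tau, d = 10.0, 1.5          # effective time constant and delay in ms (placeholder values)
lam = -0.8 + 0.3j           # placeholder eigenvalue of the effective connectivity matrix W

def pole(lam, branch):
    """Candidate pole of (1 - lam*H(omega))^(-1) for H(omega) = exp(-i*omega*d)/(1 + i*omega*tau)."""
    u = lambertw(lam * d / tau * np.exp(d / tau), k=branch)
    return 1j * (1.0 / tau - u / d)

for branch in (-1, 0, 1):
    z = pole(lam, branch)
    residual = 1.0 + 1j * z * tau - lam * np.exp(-1j * z * d)   # should vanish at a true pole
    print(branch, z, abs(residual))   # Im(z) > 0 indicates a decaying (stable) mode, cf. the stability condition
```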

When the input statistics and hence transfer functions are population-specific, Eq (56) becomes (63) (64) where Mαβ(ω) = Hαβ(ω)Wαβ.

Spiking network dynamics

The spiking networks consist of single-compartment leaky integrate-and-fire neurons with exponential current-based synapses. The subthreshold dynamics of neuron i is given by (65) where we have set the resting potential to zero without loss of generality, and absorbed the membrane resistance into the synaptic current Ii, in line with previous works [45, 91]. Bringing back the corresponding parameters, the dynamics reads (66) Thus, our scaled synaptic amplitudes Jij in terms of the amplitudes of the synaptic current due to a single spike are . Here, τm and τs are membrane and synaptic time constants, EL is the leak or resting potential, Rm is the membrane resistance, d is the transmission delay, is the total synaptic current, and are the incoming spike trains. When Vi reaches a threshold θ, a spike is emitted, and the membrane potential is clamped to a level Vr for a refractory period τref. Threshold and reset potential in physical units are shifted by the leak potential (, ), showing that the assumption EL = 0 in Eq (65) does not limit generality. The intrinsic dynamics of the neurons in the different populations are taken to be identical, so that population differences are only expressed in the couplings.
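A bare-bones forward-Euler sketch of the subthreshold dynamics Eq (66) with an exponential current-based synapse, threshold, reset, and refractory period (placeholder parameters and a Poissonian placeholder input; an illustration only, not the NEST implementation used for the simulations):

```python
import numpy as np

# placeholder parameters (times in ms, potentials in mV, currents in arbitrary units)
tau_m, tau_s, R_m = 10.0, 0.5, 1.0
E_L, theta, V_r, tau_ref = 0.0, 15.0, 0.0, 2.0
d, dt, T = 1.5, 0.1, 200.0

rng = np.random.default_rng(3)
in_spikes = np.sort(rng.uniform(0.0, T, 400))   # placeholder presynaptic spike times
w = 20.0                                        # jump of the synaptic current per presynaptic spike

V, I = E_L, 0.0
refractory_until = -np.inf
spike_times = []

for k in range(int(T / dt)):
    t = k * dt
    I += -dt * I / tau_s                                            # exponential decay of the synaptic current
    I += w * np.count_nonzero(np.abs(in_spikes + d - t) < dt / 2)   # delayed spike arrivals in this step
    if t < refractory_until:
        V = V_r                                                     # clamped during the refractory period
    else:
        V += dt * (-(V - E_L) + R_m * I) / tau_m                    # subthreshold dynamics, Eq (66)
        if V >= theta:
            spike_times.append(t)
            V = V_r
            refractory_until = t + tau_ref

print("output rate (spikes/s):", 1000.0 * len(spike_times) / T)
```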

First and second moments of activity in the spiking network

An approximation of the stationary mean firing rate of LIF networks with exponential current-based synapses was derived in [91], (67) where, based on the diffusion approximation, the summed synaptic input is characterized as Gaussian noise with mean μ and variance σ2, and ζ is the Riemann zeta function.
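Since Eq (67) is not reproduced here, the following sketch assumes it has the standard mean first-passage-time form with the integration boundaries shifted by √2 |ζ(1/2)|/2 · √(τs/τm), as derived in [91]; the exact prefactors should therefore be read off from the original equation, and the parameters below are placeholders.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

ZETA_HALF = -1.4603545088095868     # Riemann zeta(1/2)

def lif_rate(mu, sigma, tau_m=10e-3, tau_s=0.5e-3, tau_ref=2e-3, theta=15.0, V_r=0.0):
    """Assumed form of the stationary LIF rate, Eq (67), for Gaussian input with mean mu and SD sigma."""
    shift = np.sqrt(2.0) * abs(ZETA_HALF) / 2.0 * np.sqrt(tau_s / tau_m)   # boundary shift from synaptic filtering
    y_theta = (theta - mu) / sigma + shift
    y_r = (V_r - mu) / sigma + shift
    integral, _ = quad(lambda u: np.exp(u**2) * (1.0 + erf(u)), y_r, y_theta)
    return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)

print(lif_rate(mu=10.0, sigma=5.0))   # stationary rate in spikes/s at a fluctuation-driven working point
```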

For the covariances, we follow and extend the theory developed in [45, 53], starting with the average influence of a single synapse. Assuming that the network is in the asynchronous state, and that synaptic amplitudes are small, the synaptic influences can be averaged around the mean activity rj of each neuron j. These influences are characterized by linear response kernels hjk(t, t′) defined as the derivative of the density of spikes of spike train sj(t) of neuron j with respect to an incoming spike train sk(t′), averaged over realizations of the remaining incoming spike trains s\sk that act as noise. In the stationary state, the kernel only depends on the time difference t − t′, giving (68) where δsj ≡ sj − rj is the j-th centralized (zero-mean) spike train. Here, wjk is the integral of hjk(t − t′), and h(t − t′) is a normalized function capturing its time dependence, which may be source- and target-specific. The dimensionless effective weights wjk are determined nonlinearly by the synaptic strengths Jjk, the single-neuron parameters, and the working point (μj,σj) (cf. [45] Eq (A.3) but note that β as given there has a spurious factor J). We approximate the impulse response by the form Eq (58), where τ is now an effective time constant depending on the working point (μj,σj) and the parameters of the target neurons. This form of the impulse response, corresponding to a low-pass filter, appears to be a good approximation in the noisy regime when the neuron fires irregularly. In the mean-driven regime (μ ≫ σ) the transfer function of the LIF neuron is known to exhibit resonant behavior with a peak close to its firing rate. In this regime a single exponential response kernel is expected to be a poor approximation (see, e.g., [92] Fig 1). In general, the source population dependence of Eq (58) comes in through the delay d, and the target population dependence through both τ and d.

As for binary networks with delays, the average pairwise covariance functions cij(Δ) ≡ ⟨δsi(t + Δ)δsj(t)⟩t are most conveniently derived starting from the frequency domain. In case of identical transfer functions for all populations, the matrix of average cross-covariances is given by [53] Eq (16) minus the autocovariance contribution, (69) Here, W contains the effective weights of single synapses from population β to population α times the corresponding in-degrees, wαβ Kαβ; and A contains the population-averaged autocovariances, which we approximate as , with rα the mean firing rate, as also done in [45]. In [53], Eq (69) was written using a more general diagonal matrix instead of A, to help clarify close similarities between binary and LIF networks and Ornstein-Uhlenbeck processes or linear rate models; however, for LIF networks, this diagonal matrix corresponds precisely to the autocovariance matrix. We chose the form Eq (69) because it separates terms that vanish at either ω → i∞ or ω → −i∞ depending on Δ. This facilitates Fourier back transformation, as contour integration with an appropriate contour can be used for each term.

To perform the Fourier back transformation, we apply the same method as used for the binary network. Let vj,uj be the left and right eigenvectors of the connectivity matrix W, and λj the corresponding eigenvalues. Insert ∑j uj vjT = 𝟙 into Eq (69) on the left and right, and Fourier transform, (70) As for the binary case, we only need to consider Δ ≥ 0, as the solution for Δ < 0 is given by c(Δ) = cT(−Δ). The contour can then be closed over the upper half plane, where the term containing only H(−ω) has no poles due to the stability condition. When Δ < d, the contour for the term containing only H(ω) can also be closed in the lower half plane where it has no poles, so that the corresponding integral vanishes. Analogously, the integral of the term with only H(−ω) vanishes when 0 > Δ > −d. Therefore, the second and third terms represent ‘echoes’ of spikes arriving after one transmission delay [53]. For Δ = 0 and d > 0, only the first term contributes, and the contour can be closed in either half plane. As before, the poles are given by Eq (61) for d > 0, and by for d = 0. The residue theorem yields a solution of the form Eq (62), the only difference being the precise form of the residues, and the fact that we here consider c as opposed to .

In the absence of delays, an explicit solution can again be derived. For Δ > 0, the poles inside the contour are corresponding to the terms with H(ω)−1. The residue corresponding to is , and the term is finite and evaluates at the pole to . Using Ajk = vjT A vk we get (71) which is reminiscent of but not identical to Eq (55) for the binary network. Note that Eq (71) for the LIF network corresponds to spike train covariances with the dimensionality of 1/t2 due to [Ajk] = [1/t] and the factor 1/τ, whereas the covariances for the binary network are dimensionless.

The population-specific generalization of Eq (69) reads (72) where M(ω) has elements Hαβ(ω)Kαβ wαβ, as before. The covariance matrix including autocovariances can be more simply written as (73) The only difference compared to the expression Eq (63) for the binary network is the form of the diagonal matrix, here analogous to white output noise in a linear rate model, whereas the binary network resembles a linear rate model with white noise on the input side, which is passed through the transfer function before affecting the correlations [53].
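As a purely numerical alternative to the residue calculation, the cross-spectra can be evaluated on a frequency grid and transformed back to the time domain with an FFT. The sketch assumes the propagator form C(ω) = (𝟙 − M(ω))^{−1} A (𝟙 − M(−ω))^{−T} indicated by Eq (73) and subtracts A to obtain the cross-covariance part as in Eq (72); the transfer function, connectivity, and rates are placeholders.

```python
import numpy as np

tau, d = 10e-3, 1.5e-3                      # effective time constant and delay in s (placeholders)
W = np.array([[0.2, -0.6],                  # placeholder effective connectivity (in-degree times effective weight)
              [0.5, -0.9]])
A = np.diag([8.0, 12.0])                    # placeholder autocovariance levels r_alpha (spikes/s)
I2 = np.eye(2)

def M(omega):
    H = np.exp(-1j * omega * d) / (1.0 + 1j * omega * tau)   # assumed delayed low-pass transfer function
    return H * W

n_freq, df = 4096, 2.0                      # frequency grid (Hz), centered on zero
freqs = (np.arange(n_freq) - n_freq // 2) * df
C_omega = np.array([np.linalg.inv(I2 - M(2 * np.pi * f)) @ A @ np.linalg.inv(I2 - M(-2 * np.pi * f)).T - A
                    for f in freqs])        # cross-covariance part, cf. Eq (72)

# inverse Fourier transform; resulting lags are spaced by 1/(n_freq*df) seconds
c_lag = np.fft.ifft(np.fft.ifftshift(C_omega, axes=0), axis=0).real * n_freq * df
lags = np.fft.fftfreq(n_freq, d=df)
print(c_lag[0])                             # population-averaged cross-covariances at zero lag
```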

Fluctuating rate equation and stability condition

An alternative description of the spiking dynamics can be obtained by considering a system of linear coupled rate equations that produces the same moments to second order as the spiking dynamics [53]. The convolution equation (74) with pairwise uncorrelated white noises xj and the response kernel hjk given by Eq (68) can be shown to yield a cross-covariance matrix of the form Eq (69) by considering the Fourier transform of Eq (74), written in matrix notation as (75) We can expand the latter equation into eigenmodes by multiplying from the left with the left-sided eigenvector vk of W and by writing the general solution as a linear combination of right-sided eigenmodes Y(ω) = ∑j ηj(ω) uj to obtain (with the bi-orthogonality relation vkT uj = δkj) (76) The latter equation shows that the same poles z(λk) that appear in the covariance function Eq (70) also determine the evolution of the effective rate equation. Moreover, transforming Eq (76) back to the time domain, we see that the eigenmodes have a time evolution determined by e^{iz(λk)t}. Hence the imaginary part of the pole z(λk) controls whether the mode is exponentially growing (Im(z) < 0) or decaying (Im(z) > 0), while the real part determines the oscillation frequency.

Supporting Information

S1 Text. Derivation of one-to-one relationship between effective connectivity and correlations for binary networks and networks consisting of populations with identical response properties.

https://doi.org/10.1371/journal.pcbi.1004490.s001

(PDF)

Author Contributions

Analyzed the data: SJvA MH MD. Wrote the paper: SJvA MH MD. Conceived and designed the study: SJvA MH MD. Developed the theory: SJvA MH MD. Wrote the simulation and analysis code: SJvA MH. Performed the simulations: SJvA MH.

References

  1. van Albada SJ, Kunkel S, Morrison A, Diesmann M (2014) Integrating brain structure and dynamics on supercomputers. In: Grandinetti L, Lippert T, Petkov N, editors, Brain-Inspired Computing, Springer. pp. 22–32.
  2. Helias M, Kunkel S, Masumoto G, Igarashi J, Eppler JM, et al. (2012) Supercomputers ready for use as discovery machines for neuroscience. Front Neuroinform 6: 26. pmid:23129998
  3. Khan M, Lester D, Plana L, Rast A, Jin X, et al. (2008) SpiNNaker: mapping neural networks onto a massively-parallel chip multiprocessor. In: 2008 International Joint Conference on Neural Networks (IJCNN 2008). Hong Kong: IEEE Press, pp. 2849–2856.
  4. Brüderle D, Petrovici M, Vogginger B, Ehrlich M, Pfeil T, et al. (2011) A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems. Biol Cybern 104: 263–296. pmid:21618053
  5. Sharp T, Petersen R, Furber S (2014) Real-time million-synapse simulation of rat barrel cortex. Front Neurosci 8: 131. pmid:24910593
  6. Kunkel S, Schmidt M, Eppler JM, Masumoto G, Igarashi J, et al. (2014) Spiking network simulation code for petascale computers. Front Neuroinform 8: 78. pmid:25346682
  7. Renart A, De La Rocha J, Bartho P, Hollender L, Parga N, et al. (2010) The asynchronous state in cortical circuits. Science 327: 587–590. pmid:20110507
  8. Tetzlaff T, Helias M, Einevoll G, Diesmann M (2012) Decorrelation of neural-network activity by inhibitory feedback. PLoS Comput Biol 8: e1002596. pmid:23133368
  9. Helias M, Tetzlaff T, Diesmann M (2014) The correlation structure of local cortical networks intrinsically results from recurrent dynamics. PLoS Comput Biol 10: e1003428. pmid:24453955
  10. Wilson M, Bower JM (1992) Cortical oscillations and temporal interactions in a computer simulation of piriform cortex. J Neurophysiol 67: 981–995. pmid:1316954
  11. Tsodyks MV, Sejnowski T (1995) Rapid state switching in balanced cortical network models. Network: Comput Neural Systems 6: 111–124.
  12. Hill S, Tononi G (2005) Modeling sleep and wakefulness in the thalamocortical system. J Neurophysiol 93: 1671–1698. pmid:15537811
  13. Izhikevich EM, Edelman GM (2008) Large-scale model of mammalian thalamocortical systems. Proc Natl Acad Sci USA 105: 3593–3598. pmid:18292226
  14. Winslow RL, Kimball AL, Varghese A, Noble D (1993) Simulating cardiac sinus and atrial network dynamics on the connection machine. Physica D 64: 281–298.
  15. Morris M, Kretzschmar M (1997) Concurrent partnerships and the spread of HIV. AIDS 11: 641–648. pmid:9108946
  16. Ten Tusscher KHWJ, Panfilov AV (2006) Cell model for efficient simulation of wave propagation in human ventricular tissue under normal and pathological conditions. Phys Med Biol 51: 6141–6156. pmid:17110776
  17. Bisset KR, Chen J, Feng X, Kumar VSA (2009) EpiFast: a fast algorithm for large scale realistic epidemic simulations on distributed memory systems. In: Proceedings of the 23rd international conference on Supercomputing. pp. 430–439.
  18. Crook S, Bednar J, Berger S, Cannon R, Davison A, et al. (2012) Creating, documenting and sharing network models. Network: Comput Neural Systems 23: 131–149.
  19. Amit DJ, Brunel N (1997) Dynamics of a recurrent network of spiking neurons before and following learning. Network: Comput Neural Syst 8: 373–404.
  20. Amit D, Tsodyks M (1991) Quantitative study of attractor neural networks retrieving at low spike rates: II. low-rate retrieval in symmetric networks. Network: Comput Neural Systems 2: 275–294.
  21. Gerstner W, van Hemmen JL (1992) Universality in neural networks: the importance of the ‘mean firing rate’. Biol Cybern 67: 195–205. pmid:1498186
  22. Romo R, Brody CD, Hernandez A, Lemus L (1999) Neuronal correlates of parametric working memory in the prefrontal cortex. Nature 399: 470–473. pmid:10365959
  23. Ahissar M, Sosnik R, Haidarliu S (2000) Transformation from temporal to rate coding in somatosensory thalamocortical pathway. Nature 406: 302–306. pmid:10917531
  24. Hubel DH, Wiesel TN (1968) Receptive fields and functional architecture of monkey striate cortex. J Neurophysiol 195: 215–243.
  25. Georgopoulos A, Schwartz A, Kettner R (1986) Neuronal population coding of movement direction. Science 4771: 1416–1419.
  26. Steriade M, Timofeev I, Grenier F (2001) Natural waking and sleep states: a view from inside neocortical neurons. J Neurophysiol 85: 1969–1985. pmid:11353014
  27. Roelfsema P, Engel A, König P, Singer W (1996) The role of neuronal synchronization in response selection: A biologically plausible theory of structured representations in the visual cortex. J Cogn Neurosci 8: 603–625. pmid:23961987
  28. van Albada S, Robinson P (2009) Mean-field modeling of the basal ganglia-thalamocortical system. I: Firing rates in healthy and parkinsonian states. J Theor Biol 257: 642–663. pmid:19168074
  29. Perkel DH, Gerstein GL, Moore GP (1967) Neuronal spike trains and stochastic point processes. II. Simultaneous spike trains. Biophys J 7: 419–440. pmid:4292792
  30. Aertsen AMHJ, Gerstein GL, Habib MK, Palm G (1989) Dynamics of neuronal firing correlation: modulation of ‘effective connectivity’. J Neurophysiol 61: 900–917. pmid:2723733
  31. Kilavik BE, Roux S, Ponce-Alvarez A, Confais J, Gruen S, et al. (2009) Long-term modifications in motor cortical dynamics induced by intensive practice. J Neurosci 29: 12653–12663. pmid:19812340
  32. Schneidman E, Berry MJ, Segev R, Bialek W (2006) Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 440: 1007–1012. pmid:16625187
  33. Ito J, Maldonado P, Singer W, Grün S (2011) Saccade-related modulations of neuronal excitability support synchrony of visually elicited spikes. Cereb Cortex 21: 2482–2497. pmid:21459839
  34. Riehle A, Grün S, Diesmann M, Aertsen A (1997) Spike synchronization and rate modulation differentially involved in motor cortical function. Science 278: 1950–1953. pmid:9395398
  35. Vaadia E, Haalman I, Abeles M, Bergman H, Prut Y, et al. (1995) Dynamics of neuronal interactions in monkey cortex in relation to behavioural events. Nature 373: 515–518. pmid:7845462
  36. Sompolinsky H, Yoon H, Kang K, Shamir M (2001) Population coding in neuronal systems with correlated noise. Phys Rev E 64: 51904.
  37. Zohary E, Shadlen MN, Newsome WT (1994) Correlated neuronal discharge rate and its implications for psychophysical performance. Nature 370: 140–143. pmid:8022482
  38. Izhikevich EM, Desai NS (2003) Relating STDP to BCM. Neural Comput 15: 1511–1523. pmid:12816564
  39. Morrison A, Aertsen A, Diesmann M (2007) Spike-timing dependent plasticity in balanced random networks. Neural Comput 19: 1437–1467. pmid:17444756
  40. Lindén H, Tetzlaff T, Potjans TC, Pettersen KH, Grün S, et al. (2011) Modeling the spatial reach of the LFP. Neuron 72: 859–872. pmid:22153380
  41. Nir Y, Fisch L, Mukamel R, Gelbard-Sagiv H, Arieli A, et al. (2007) Coupling between neuronal firing rate, gamma LFP, and BOLD fMRI is related to interneuronal correlations. Current Biology 17: 1275–1285. pmid:17686438
  42. Ginzburg I, Sompolinsky H (1994) Theory of correlations in stochastic neural networks. Phys Rev E 50: 3171–3191.
  43. van Vreeswijk C, Sompolinsky H (1998) Chaotic balanced state in a model of cortical circuits. Neural Comput 10: 1321–1371. pmid:9698348
  44. Hertz J (2010) Cross-correlations in high-conductance states of a model cortical network. Neural Comput 22: 427–447. pmid:19842988
  45. Helias M, Tetzlaff T, Diesmann M (2013) Echoes in correlated neural systems. New J Phys 15: 023002.
  46. Van Vreeswijk C, Sompolinsky H (1998) Chaotic balanced state in a model of cortical circuits. Neural Comput 10: 1321–1371. pmid:9698348
  47. Aertsen A, Preißl H (1990) Dynamics of activity and connectivity in physiological neuronal networks. In: Schuster HG, editor, Nonlinear Dynamics and Neuronal Networks. VCH, Proceedings of the 63rd W. E. Heraeus Seminar Friedrichsdorf 1990, pp. 281–301.
  48. Friston KJ (2011) Functional and effective connectivity: a review. Brain Connectivity 1: 13–36. pmid:22432952
  49. Lindner B, Doiron B, Longtin A (2005) Theory of oscillatory firing induced by spatially correlated noise and delayed inhibitory feedback. Phys Rev E 72: 061919.
  50. Pernice V, Staude B, Cardanobile S, Rotter S (2011) How structure determines correlations in neuronal networks. PLoS Comput Biol 7: e1002059. pmid:21625580
  51. Trousdale J, Hu Y, Shea-Brown E, Josic K (2012) Impact of network structure and cellular response on spike time correlations. PLoS Comput Biol 8: e1002408. pmid:22457608
  52. Grytskyy D, Tetzlaff T, Diesmann M, Helias M (2013) Invariance of covariances arises out of noise. AIP Conf Proc 1510: 258–262.
  53. Grytskyy D, Tetzlaff T, Diesmann M, Helias M (2013) A unified view on weakly correlated recurrent networks. Front Comput Neurosci 7: 131. pmid:24151463
  54. Okun M, Lampl I (2008) Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nature Neuroscience 11: 535–537. pmid:18376400
  55. Graupner M, Reyes AD (2013) Synaptic input correlations leading to membrane potential decorrelation of spontaneous activity in cortex. The Journal of Neuroscience 33: 15075–15085. pmid:24048838
  56. van Albada SJ, Schrader S, Helias M, Diesmann M (2013) Influence of different types of downscaling on a cortical microcircuit model. BMC Neuroscience 14: P112.
  57. Bronstein IN, Semendjajew KA, Musiol G, Mühlig H (1999) Taschenbuch der Mathematik. Verlag Harri Deutsch, 4th edition.
  58. Potjans TC, Diesmann M (2014) The cell-type specific cortical microcircuit: Relating structure and activity in a full-scale spiking network model. Cereb Cortex 24: 785–806. pmid:23203991
  59. Kamiński M, Ding M, Truccolo WA, Bressler SL (2001) Evaluating causal relations in neural systems: Granger causality, directed transfer function and statistical assessment of significance. Biol Cybern 85: 145–157. pmid:11508777
  60. Friston K, Harrison L, Penny W (2003) Dynamic causal modelling. NeuroImage 19: 1273–1302. pmid:12948688
  61. Nykamp DQ (2007) A mathematical framework for inferring connectivity in probabilistic neuronal networks. Math Biosci 205: 204–251. pmid:17070863
  62. Timme M (2007) Revealing network connectivity from response dynamics. Phys Rev Lett 98: 224101. pmid:17677845
  63. Roudi Y, Hertz J (2011) Mean field theory for nonequilibrium network reconstruction. Phys Rev Lett 106: 048702. pmid:21405370
  64. Pernice V, Rotter S (2012) Reconstruction of connectivity in sparse neural networks from spike train covariances. Front Comput Neurosci Conference Abstract: Bernstein Conference 2012.
  65. Grytskyy D, Helias M, Diesmann M (2013) Reconstruction of network connectivity in the irregular firing regime. In: Proceedings 10th Göttingen Meeting of the German Neuroscience Society. pp. 1192–1193.
  66. Robinson PA, Sarkar S, Pandejee GM, Henderson JA (2014) Determination of effective brain connectivity from functional connectivity with application to resting state connectivities. Phys Rev E 90: 012707:1–6.
  67. D’haeseleer P, Liang S, Somogyi R (2000) Genetic network inference: from co-expression clustering to reverse engineering. Bioinformatics 16: 707–726. pmid:11099257
  68. Steuer R, Kurths J, Fiehn O, Weckwerth W (2003) Observing and interpreting correlations in metabolomic networks. Bioinformatics 19: 1019–1026. pmid:12761066
  69. Psorakis I, Roberts SJ, Rezek I, Sheldon BC (2012) Inferring social network structure in ecological systems from spatio-temporal data streams. J R Soc Interface: rsif20120223.
  70. Tyrcha J, Roudi Y, Marsili M, Hertz J (2013) The effect of nonstationarity on models inferred from neural data. Journal of Statistical Mechanics: Theory and Experiment 2013: P03005.
  71. Helmchen F (2009) Two-photon functional imaging of neuronal activity. In: Frostig R, editor, In Vivo Optical Imaging of Brain Function, Boca Raton (FL): CRC Press, chapter 2, 2nd edition.
  72. Akemann W, Sasaki M, Mutoh H, Imamura T, Honkura N, et al. (2013) Two-photon voltage imaging using a genetically encoded voltage indicator. Scientific Reports 3.
  73. Kempter R, Gerstner W, van Hemmen JL (1999) Hebbian learning and spiking neurons. Phys Rev E 59: 4498–4514.
  74. Kunkel S, Diesmann M, Morrison A (2011) Limits to the development of feed-forward structures in large recurrent neuronal networks. Front Comput Neurosci 4. pmid:21415913
  75. Turrigiano GG (2008) The self-tuning neuron: synaptic scaling of excitatory synapses. Cell 135: 422–435. pmid:18984155
  76. Turrigiano GG, Leslie KR, Desai NS, Rutherford LC, Nelson SB (1998) Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature 391: 892–896. pmid:9495341
  77. Burrone J, Murthy VN (2003) Synaptic gain control and homeostasis. Curr Opin Neurobiol 13: 560–567. pmid:14630218
  78. Liu G, Tsien RW (1995) Properties of synaptic transmission at single hippocampal synaptic boutons. Nature 375: 404–408. pmid:7760934
  79. Wilson NR, Ty MT, Ingber DE, Sur M, Liu G (2007) Synaptic reorganization in scaled networks of controlled size. J Neurosci 27: 13581–13589. pmid:18077670
  80. Ivenshitz M, Segal M (2010) Neuronal density determines network connectivity and spontaneous activity in cultured hippocampus. J Neurophysiol 104: 1052–1060. pmid:20554850
  81. Medalla M, Luebke JI (2015) Diversity of glutamatergic synaptic strength in lateral prefrontal versus primary visual cortices in the rhesus monkey. J Neurosci 35: 112–127. pmid:25568107
  82. Desai NS, Cudmore RH, Nelson SB, Turrigiano GG (2002) Critical periods for experience-dependent synaptic scaling in visual cortex. Nat Neurosci 5: 783–789. pmid:12080341
  83. Gewaltig MO, Diesmann M (2007) NEST (NEural Simulation Tool). Scholarpedia 2: 1430.
  84. Davison A, Brüderle D, Eppler J, Kremkow J, Muller E, et al. (2008) PyNN: a common interface for neuronal network simulators. Front Neuroinform 2.
  85. Hanuschkin A, Kunkel S, Helias M, Morrison A, Diesmann M (2010) A general and efficient method for incorporating precise spike times in globally time-driven simulations. Front Neuroinform 4: 113. pmid:21031031
  86. Stein W, et al. (2013) Sage Mathematics Software (Version 5.9). The Sage Development Team. http://www.sagemath.org.
  87. Buice MA, Cowan JD, Chow CC (2009) Systematic fluctuation expansion for neural network activity equations. Neural Comput 22: 377–426.
  88. Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 79: 2554–2558. pmid:6953413
  89. Hertz J, Krogh A, Palmer RG (1991) Introduction to the Theory of Neural Computation. Perseus Books.
  90. Corless RM, Gonnet GH, Hare DEG, Jeffrey DJ, Knuth DE (1996) On the Lambert W function. Advances in Computational Mathematics 5: 329–359.
  91. Fourcaud N, Brunel N (2002) Dynamics of the firing probability of noisy integrate-and-fire neurons. Neural Comput 14: 2057–2110. pmid:12184844
  92. Brunel N, Chance FS, Fourcaud N, Abbott LF (2001) Effects of synaptic noise and filtering on the frequency response of spiking neurons. Phys Rev Lett 86: 2186–2189. pmid:11289886