The authors have declared that no competing interests exist.
It is a common and good practice in experimental sciences to assess the statistical significance of measured outcomes. For this, the probability of obtaining the actual results is estimated under the assumption of an appropriately chosen null-hypothesis. If this probability is smaller than some threshold, the results are deemed statistically significant and the researchers are content to have revealed, within their own experimental domain, a “surprising” anomaly, possibly indicative of a hitherto hidden fragment of the underlying “ground-truth”. What is often neglected, though, is the actual
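The significance-testing recipe just described can be made concrete with a small surrogate (permutation) test for coincident spikes in two binned spike trains. The binning, test statistic, and surrogate count below are illustrative choices of ours, not a procedure taken from the study:

```python
import random

def coincidence_pvalue(train_a, train_b, n_surrogates=1000, seed=0):
    """Permutation test for the number of coincident spikes in two
    binned (0/1) spike trains, under the null hypothesis that the
    trains are independent given their spike counts."""
    rng = random.Random(seed)

    def coincidences(a, b):
        return sum(1 for x, y in zip(a, b) if x and y)

    observed = coincidences(train_a, train_b)
    b = list(train_b)
    exceeding = 0
    for _ in range(n_surrogates):
        rng.shuffle(b)  # destroys temporal alignment, preserves spike count
        if coincidences(train_a, b) >= observed:
            exceeding += 1
    # add-one correction keeps the estimated p-value strictly positive
    return (exceeding + 1) / (n_surrogates + 1)
```

A small p-value here only flags the coincidence count as "surprising" under this particular null-hypothesis; it says nothing yet about the relevance of the coincident events for the network.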
Systems neuroscience aims at gaining an understanding of how neural networks process information to implement specific functions in sensory, motor, and cognitive processing. To this end, the activities of multiple neurons are recorded simultaneously and analyzed to extract potentially relevant aspects about the task-related interactions among these neurons. If the analysis reveals statistically significant modulations of the recorded neuronal activity
However, the methods used to identify and measure the statistical significance of these patterns do not actually justify any claim regarding their
Let us consider a hypothetical experiment, in which neuronal activity is recorded from a certain brain area and the data is preprocessed to extract spike trains of 900 single neurons over a period of a few seconds (
(A) Raster plot of excitatory (1–700) and inhibitory (701–900) neurons recorded in the simulation experiment. (B) Rows are sorted such that neurons with similar rate modulations appear together. Evidently, a subgroup of neurons fires action potentials in a correlated manner during certain epochs in time (short black lines near the bottom of the frame). (C) Schematic depiction of the underlying network from which neural activity was sampled. (D) The same network reorganized graphically using a force vector algorithm (cf.
It is tempting, at first sight, to conclude that the statistically significant elevations of firing rates and increased correlations among the recorded neurons will have an impact on the dynamics and function of the network. To test whether this is justified, we investigated the topology of the network from which the spiking activity was recorded (
The subpopulation of neurons exhibiting correlated activity in our example, in fact, stems from the smaller subnetwork. The transient increase in firing rates and correlation strengths during certain epochs is the result of a brief activation of the hubs that were designed to have strong uni-directional projections to the smaller subnetwork. Therefore, by construction, the activity of this subnetwork per se does not have any impact on the dynamics of the larger network or the hubs. Thus, knowledge of the network structure reveals that the observed statistically significant events are essentially an
Note that this is not meant to say that the activity of the small subnetwork is irrelevant or epiphenomenal in general. Rather, the message is that not all observed activity modulations of neurons in a task are relevant for the specific task itself, i.e., the subject's performance in the task and the neural computations underlying it (here, the task reduces to the hand movement required of the subject). Of course, the activity modulations in the small subnetwork could be relevant for some other aspect not essential for the task itself (e.g., vision, memory).
This observation has important implications for the understanding of the local network computations. If we assume, for example, that the larger network is part of an area in the motor cortex that controls a limb movement (
In fact, the above scenario is not just a Gedankenexperiment. In human subjects performing a hand motor task, we recently observed that head movement was correlated with hand movement (
Another revealing example comes from studies by Riehle and colleagues investigating neural activity in the monkey motor cortex
These three examples clearly illustrate that statistical significance of recorded neural events is only a
Here, we provide a formal definition of embeddedness. For this we distinguish between structural and effective embeddedness:
“Structural embeddedness” indicates the way neurons are physically embedded in their surrounding network. It can be characterized by graph-theoretical measures such as centrality, betweenness, k-shell index, etc.
“Effective embeddedness” is the influence neurons have on the activity of the surrounding network. Effective embeddedness is determined by structural embeddedness as well as by synaptic and cellular properties, ongoing activity, presence of neuromodulators, etc.
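As a minimal illustration of structural embeddedness, simple descriptors can be computed directly from an adjacency dictionary; the richer measures named above (centrality, betweenness, k-shell index) are provided by graph libraries such as networkx. The toy graph and the `outreach` proxy below are our own illustrative choices:

```python
from collections import deque

def out_degree(succ):
    """Number of outgoing connections per node (succ: node -> set of targets)."""
    return {n: len(succ.get(n, ())) for n in succ}

def outreach(succ):
    """Number of nodes reachable via directed paths (BFS) -- a crude
    proxy for how far a neuron's output can structurally propagate."""
    def reachable(start):
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for target in succ.get(node, ()):
                if target not in seen:
                    seen.add(target)
                    queue.append(target)
        return len(seen) - 1  # exclude the start node itself
    return {n: reachable(n) for n in succ}
```

On a small hypothetical graph, `out_degree({0: {1, 2}, 1: {3}, 2: set(), 3: set()})` assigns node 0 the highest value, and `outreach` additionally credits node 0 for the indirect path to node 3.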
The concept of embeddedness was initially used for socio-economic networks
The importance of the relative position of task-related neurons in the topological space of the network is not restricted to networks with a specific wiring. To test this, we performed a systematic analysis in which we investigated 100 different networks covering a wide range of topologies with variable characteristics (
All networks with
For each network we performed multiple simulations, selectively applying a stimulus to a different subpopulation of 250 excitatory neurons to artificially render the correlations among them statistically significant. Subsequently, we estimated the effect of these statistically significant events on the entire network activity in terms of the peri-stimulus-time-histogram (PSTH) of the network activity (
(A) Network response (PSTH) for identical stimulation of 30 different subpopulations of 250 neurons each (thin blue lines) in one example network. Observe that peak, onset, and rise times of the responses of each subpopulation differ greatly. The thick blue lines depict the smallest and the largest response, respectively. (B) Raster plot of the network when the subpopulation of neurons with the lowest degree of embeddedness was stimulated. Light blue dots denote spikes from all neurons, dark blue dots those from stimulated ones. Inset: Magnified cut-out around 600 ms for neurons 4000–6000. Activation of weakly embedded neurons does not spread much in the network. (C) As in (B), but now the subpopulation with the highest average degree of embeddedness was stimulated, leading to a much bigger impact on the network activity. Activation of these strongly embedded neurons led to a spreading of activity throughout the network. Moreover, feedforward inhibition suppressed the network activity entirely. (D) Response of all stimulated subpopulations (250 neurons each) and all networks pooled together (pale blue dots). On average, there was a positive correlation between out-degree and total network activity (
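A PSTH of the kind used above as the measure of network response can be computed by binning spike times and normalizing by the number of spike trains and the bin width. This is a minimal sketch (times in seconds), not the exact analysis code of the study:

```python
def psth(spike_trains, t_start, t_stop, bin_width):
    """Peri-stimulus-time-histogram: average firing rate (spikes/s)
    per time bin, pooled over a list of spike-time lists."""
    n_bins = int(round((t_stop - t_start) / bin_width))
    counts = [0] * n_bins
    for train in spike_trains:
        for t in train:
            b = int((t - t_start) / bin_width)
            if 0 <= b < n_bins:
                counts[b] += 1
    # normalize to a rate per train
    norm = len(spike_trains) * bin_width
    return [c / norm for c in counts]
```

For example, two trains with one spike each in a 200 ms window, binned at 100 ms, yield a flat rate of 5 spikes/s in both bins.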
This finding demonstrates that it matters which neurons in the network participate in the correlated events. In the networks used here, all stimulated neurons had identical intrinsic properties. Moreover, all their outgoing connections were of equal strength. Thus, the decisive factor determining the impact of a particular neuron on the overall network activity was the way it was embedded in the network. This degree of embeddedness of a node in the network can be quantified by different metrics from graph theory
To investigate the relationship between out-degree and network activity, we computed for each network the population response as a function of the average out-degree of all stimulated groups and all networks pooled together (
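The strength of such a relation between average out-degree and population response can be quantified with a plain Pearson correlation coefficient; the implementation below is a generic sketch and the data in the test are made up:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```

A value near +1 would correspond to the positive out-degree/response relation reported here, with deviations from a perfect correlation reflecting the influence of other topological properties.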
Apart from the out-degree, however, other topological properties also affected the response. This is evident in cases where groups of neurons with comparable out-degrees had a quite different impact on the network activity (
It may not be surprising that both the out-degree and the k-shell-out index of the stimulated neurons more or less adequately describe the neurons' impact on network activity. After all, both descriptors quantify the outreach of a neuron within the network. At the same time, our findings demonstrate that the combination of activated nodes (neurons) and topological properties of the network, irrespective of the method used to quantify them, do influence the network response and, therefore, should be considered in the analysis and interpretation of the recorded network activity.
In the networks investigated here, we observed that the out-degree of a neuron was highly correlated with the impact this neuron had on the network activity. In this case, where neurons with regular-firing properties were used, the out-degree predicted a neuron's influence on the overall network dynamics quite well. However, in certain other, also biologically plausible scenarios, higher-order network metrics, such as the k-shell-out index mentioned above, could be a better estimator of neuron embeddedness.
We illustrate this scenario with a simple toy-network (
Example of a toy-network illustrating that the degree to which any given metric of neuron embeddedness predicts the neurons' impact on the population response may depend on single neuron properties. The small numbers next to each node indicate the corresponding k-shell-out index. (A,B) Neurons exhibited regular firing behavior. (A) A sufficiently strong input activating neuron 5 will yield propagation of activity to neurons 7–14. (B) If the same stimulus arrives in neuron 1, activity will only spread to neurons 2–4 and 6. In this case, the out-degree correctly predicts that the impact of neuron 5 is bigger than that of neuron 1. (C) Neurons exhibited bursting behavior. As before, neuron 1 activates neurons 2–4 and 6. However, the bursting response of these neurons may be sufficient to activate their post-synaptic targets as well, leading to spreading of activity over the entire network. Here, the impact of neuron 1 is clearly larger than that of neuron 5. This effect is not captured by the widely used out-degree measure. However, higher-order network metrics, like the k-shell-out index, correctly assign a higher value to neuron 1, as compared to neuron 5. (D) Total network response in the three cases depicted in panels A–C. Note the higher impact of neuron 1 under some conditions (curve C), compared to that of neuron 5 (curve A).
In this example, simple out-degree-based methods would fail to predict the impact of a neuron. By contrast, the k-shell-out index would be more informative, because it is designed to address cases like the one illustrated here
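A k-shell-out index can be computed by the usual peeling procedure, using out-degrees restricted to the not-yet-removed subgraph. The following is a compact sketch, demonstrated on a hand-made graph rather than the toy network of the figure:

```python
def k_shell_out(succ):
    """k-shell-out index: largest k such that the node belongs to a
    subgraph in which every node has out-degree >= k.
    succ: dict mapping node -> set of postsynaptic targets."""
    nodes = set(succ)
    for targets in succ.values():
        nodes |= set(targets)
    pred = {n: set() for n in nodes}
    for n in nodes:
        for t in succ.get(n, ()):
            pred[t].add(n)
    remaining = set(nodes)
    outdeg = {n: len(set(succ.get(n, ()))) for n in nodes}
    shell, k = {}, 0
    while remaining:
        k = max(k, min(outdeg[n] for n in remaining))
        queue = [n for n in remaining if outdeg[n] <= k]
        while queue:
            n = queue.pop()
            if n not in remaining:
                continue
            shell[n] = k
            remaining.discard(n)
            # removing n lowers the effective out-degree of its predecessors
            for p in pred[n] & remaining:
                outdeg[p] -= 1
                if outdeg[p] <= k:
                    queue.append(p)
    return shell
```

In the example graph of the test, a mutually connected triangle forms the 2-shell, a node that merely projects into it gets index 1, and a node with no outgoing connections gets index 0, regardless of how much input it receives.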
One of the dominant approaches in systems neuroscience to understand the functioning of the brain is to record the activity of neurons under different stimulus and/or behavioral conditions, and to correlate the recorded activity with details of the task (stimuli, behavior). Indeed, since the seminal work of Adrian
However, successfully decoding neuronal activity does not imply an understanding of the actual computations performed by the underlying network. That is, statistical significance may be a
Here, we argue that an additional step towards unraveling the neural code, albeit not a sufficient one either as was elegantly demonstrated by Marom et al.
Finally, we point out that calculated distributions, spectra, or various other measures of network activity, such as pairwise and higher-order correlations
In addition, knowledge of network topology can be used to determine whether increased activity in a neuron is a consequence of local network activity or whether it is simply input driven. Furthermore, the stimulus response shown in
Our results and their implications are not restricted to a particular measure of network response (here: population rate, measured by PSTH). Other descriptors of network activity, e.g., pairwise and higher-order correlations, would have led to similar conclusions. Although we examined a variety of network topologies, we used homogeneous synaptic weights and neuron properties for each network. Studying these properties in topologically diverse networks is an interesting endeavor in its own right and worth exploring further. For instance, as we have discussed above, the spiking behavior of neurons affects how well any specific measure of embeddedness predicts a neuron's impact on the network activity (
In turn, the degree of embeddedness of any given neuron could restrict the impact specific neuron properties may have on the network. That is, although some neurons could exhibit “exotic” firing patterns, these may not have any effect on the network activity if the associated neurons' embeddedness is low. This suggests that additional knowledge about single neuron properties only becomes meaningful once the degree of embeddedness of the neurons is known.
Embeddedness may be less important in classical random networks with a homogeneous topological space (
A number of properties of network connectivity have been shown to be important determinants for network activity dynamics
Moreover, properties of individual neurons, e.g., those defining their firing patterns, may influence the effective connectivity in the network (
We already mentioned k-shell decomposition as an example of a metric that goes beyond standard in- and out-degree measures. Other algorithms have been proposed to incorporate negative interactions between nodes
This theoretical work needs to be paralleled by experimental approaches aiming at ways to measure the structural embeddedness of neurons in vivo. Evidently, knowledge of the full “connectome”
In such experiments, modulation of extracellular activity (spikes and LFP) in a network would provide an estimate of the postsynaptic (suprathreshold) embeddedness of the stimulated neurons. In fact, such selective stimulation experiments would be similar to the ones we have shown and discussed in
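A crude way to picture such a suprathreshold estimate of effective embeddedness is a cascade in which a neuron becomes active once at least `threshold` of its presynaptic partners have fired. This is an illustrative stand-in of ours, not the simulation protocol used in the study:

```python
def effective_embeddedness(succ, stimulated, threshold=1):
    """Fraction of the network recruited by stimulating the set
    `stimulated`, under a simple threshold-cascade rule.
    succ: dict mapping node -> set of postsynaptic targets."""
    nodes = set(succ)
    for targets in succ.values():
        nodes |= set(targets)
    inputs = {n: 0 for n in nodes}
    active = set(stimulated)
    frontier = set(stimulated)
    while frontier:
        newly_active = set()
        for n in frontier:
            for t in succ.get(n, ()):
                inputs[t] += 1
                if t not in active and inputs[t] >= threshold:
                    newly_active.add(t)
        active |= newly_active
        frontier = newly_active
    return len(active) / len(nodes)
```

On a directed chain with one disconnected node, stimulating the head recruits the whole chain, whereas stimulating the isolated node recruits nothing beyond itself; raising `threshold` mimics neurons that need convergent input to fire.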
In an ideal scenario, the brain area under examination could be scanned, before performing the actual experiment, to identify potential neurons to be recorded, based on their structural embeddedness. This would increase the chances of recording from those neurons that are involved in the local network computations in the investigated brain area. Alternatively, in an experiment where calcium imaging is possible, a wide array of stimuli could be used to obtain an average effective connectivity map of the area being recorded
In neurophysiological experiments we see a continuing debate on the choice of appropriate null-hypotheses for testing the statistical significance of recorded spatiotemporal activity patterns
To infer the function of networks in the brain from recorded activity of their member neurons, we need to differentiate between two issues: (1) how network structure and network activity affect a neuron's activity, and (2) how a neuron's activity affects network activity (and, perhaps, structure). The first of these two is increasingly becoming a research issue (see e.g., the Research Topic on “Structure, dynamics and function of brains: Exploring relations and constraints” in
Here, we argue that statistical significance alone is not enough to establish a role of the recorded neurons in the computations performed by the network in the experimental task. This is precisely the second issue mentioned above, and it is here that structural and functional significance cannot be ignored. In fact, as our examples demonstrate, knowledge of the structural significance of the neurons participating in statistically significant activity events is indispensable. Thus, developing tools and methods to extract such information will in the long run facilitate our understanding of neural network functioning. This may eventually lead to the development of more appropriate null-hypotheses, in which the statistical significance of expected activity modulations can be estimated taking the network topology and its activity dynamics into account.
Finally, we emphasize that our results are not restricted to systems neuroscience. Rather, their implications permeate into every scientific discipline where networks are used as a conceptual and mathematical tool to examine and understand observed activation phenomena. For instance, in epidemic research, the spread of diseases will be significantly influenced by the structural embeddedness of the infected nodes (e.g., humans). Here, the spread could be controlled by identifying and isolating highly embedded nodes, thereby removing the potentially high impact of these nodes on the evolution of the spread. Likewise, embeddedness could actually be used in controlling the dynamics of complex networks
For the generation of the different network topologies, we used an in-house Python implementation of the multifractal network generator proposed by
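The core idea of such a multifractal generator can be sketched as follows: a small generating measure (a link-probability matrix `P` with interval lengths `lengths`) is iterated `m` times, each node is assigned to one of the resulting categories, and links are drawn with the product probabilities. The code below is a simplified, undirected variant for illustration only, with parameter names of our own choosing:

```python
import itertools
import random

def multifractal_network(n_nodes, P, lengths, m, seed=0):
    """Simplified multifractal network generator: after m iterations of
    the generating measure, each node falls into one of len(lengths)**m
    categories; the link probability between two nodes is the product of
    the P-entries along their category index sequences."""
    rng = random.Random(seed)
    k = len(lengths)
    categories = list(itertools.product(range(k), repeat=m))
    # probability of a node landing in a category = product of lengths
    weights = []
    for cat in categories:
        w = 1.0
        for i in cat:
            w *= lengths[i]
        weights.append(w)
    node_cat = rng.choices(categories, weights=weights, k=n_nodes)
    edges = set()
    for u in range(n_nodes):
        for v in range(u + 1, n_nodes):
            p = 1.0
            for i, j in zip(node_cat[u], node_cat[v]):
                p *= P[i][j]
            if rng.random() < p:
                edges.add((u, v))
    return edges
```

Varying `P`, `lengths`, and `m` yields networks with widely different degree distributions and community structure from a handful of parameters, which is what makes the approach attractive for scanning many topologies.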
The k-shell-out index of nodes in our networks was calculated by using the k-shell (also known as k-core) decomposition algorithm
The network simulations were performed with NEST
We thank Stefano Cardanobile, Volker Pernice, and Moritz Deger for providing a Python implementation of the multi-fractal network generator. We also thank Clemens Boucsein for helpful discussions. Moreover, we thank all the reviewers for their useful criticism that helped us to improve the quality of the manuscript. All simulations were carried out using the NEST simulation software (