Research Article

Segregation of the Brain into Gray and White Matter: A Design Minimizing Conduction Delays

  • Quan Wen,

    Affiliations: Department of Physics and Astronomy, State University of New York at Stony Brook, Stony Brook, New York, United States of America, Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States of America

  • Dmitri B. Chklovskii

    To whom correspondence should be addressed. E-mail: mitya@cshl.edu

    Affiliation: Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States of America

  • Published: December 30, 2005
  • DOI: 10.1371/journal.pcbi.0010078

Abstract

A ubiquitous feature of the vertebrate anatomy is the segregation of the brain into white and gray matter. Assuming that evolution maximized brain functionality, what is the reason for such segregation? To answer this question, we posit that brain functionality requires high interconnectivity and short conduction delays. Based on this assumption we searched for the optimal brain architecture by comparing different candidate designs. We found that the optimal design depends on the number of neurons, interneuronal connectivity, and axon diameter. In particular, the requirement to connect neurons with many fast axons drives the segregation of the brain into white and gray matter. These results provide a possible explanation for the structure of various regions of the vertebrate brain, such as the mammalian neocortex and neostriatum, the avian telencephalon, and the spinal cord.

Synopsis

Vertebrate brains generally contain two kinds of tissue: gray matter and white matter. Gray matter contains local networks of neurons that are wired by dendrites and mostly nonmyelinated local axons. White matter contains long-range axons that implement global communication via often myelinated axons. What is the evolutionary advantage of segregating the brain into white and gray matter rather than intermixing them? In this study, the authors postulate that brain functionality benefits from high synaptic connectivity and short conduction delays—the time required for a signal from one neuron soma to reach another. Using this postulate, they show quantitatively that the existence of many fast, long-range axons drives the segregation of the brain into gray and white matter. The theory not only provides a possible explanation for the structure of various brain regions such as cerebral cortex, neostriatum, and spinal cord, but also makes several testable predictions such as the scaling estimate of the cortical thickness.

Introduction

A ubiquitous feature of the vertebrate brain is its segregation into white and gray matter (http://www.brainmuseum.org). Gray matter contains neuron somata, synapses, and local wiring, such as dendrites and mostly nonmyelinated axons. White matter contains global, and in large brains mostly myelinated, axons that implement global communication. What is the evolutionary advantage of such segregation [1]? Networks with the same local and global connectivity could be wired so that global and local connections are finely intermixed. Since such a design is not observed, and invoking an evolutionary accident as an explanation has an agnostic flavor, we searched for an explanation based on the optimization approach [2–6], which is rooted in evolutionary theory [7–9].

We started with the assumption that evolution “tinkered” with brain design to maximize its functionality. Brain functionality must benefit from higher synaptic connectivity, because synaptic connections are central for information processing as well as for learning and memory, which are thought to manifest in synaptic modifications [10,11]. However, increasing connectivity requires adding wiring to the network, which comes at a cost. The cost of wiring is due to the metabolic energy required for maintenance and conduction [12–15], guidance mechanisms in development [16], conduction time delays and attenuation [17,18], and wiring volume [6].

Two pioneering studies, by Ruppin et al. [19] and Murre and Sturdy [20], have proposed that the segregation of white and gray matter could be a consequence of minimizing the wiring volume. They modeled the brain by a network consisting of local and global connections, which give rise to gray and white matter, respectively. Although wiring volume minimization is an important factor in the evolution of brain design, their results remain inconclusive because predictions of the volume minimization model for the present problem are not robust and are difficult to compare with empirical observations (see Discussion).

In this paper, we adopted the model of connectivity introduced in Ruppin et al. [19] and Murre and Sturdy [20], including local and global connections, but minimized the conduction delay, i.e., the time it takes a signal (such as an action potential or graded potential) to travel from one neuron's soma to another. To see that high connectivity and short conduction delay are competing requirements, note that adding wiring to the network increases not only its volume, but also the distance between neurons. In turn, this requires longer wiring, which, for the same conduction velocity, introduces longer delays. Longer delays are detrimental because fewer computational steps can be performed within the time frame imposed on animals by the environment, making the brain a less powerful computational machine [12].

We show that the competing requirements for high connectivity and short conduction delay may lead naturally to the observed architecture of vertebrate brain as seen in mammalian neocortex and bird telencephalon. As in any other theoretical analysis, we make several major assumptions. First, given that exact connectivity is not known, we characterized the interneuronal connectivity statistically by requiring a fixed number of connections per neuron. Second, although conduction delays are known to differ between connections, we minimized the mean conduction delay. Finally, it is likely that, in the course of evolution, minimization of conduction delay was accompanied by the increase in connectivity. However, it is not known how to quantify the benefits of increased connectivity in comparison with conduction delay increase. Therefore, we adopted a mathematically sound approach of minimizing conduction delay while keeping network connectivity fixed.

To obtain quantitative results, we used two analytical (nonnumerical) tools borrowed from theoretical physics. First, most of the derivations were done using the scaling approach. In this approach, a relationship between variables takes the form of proportionality rather than equality. In other words, numerical factors of order one are ignored. One can manipulate and combine such proportionality relationships and still get an estimate that is correct by an order of magnitude. A long history of successful applications of the scaling approach supports its validity. Second, we used a perturbation theory approach, which is helpful when the exact analytical solution to a problem is unavailable. In this approach, a simpler problem is solved exactly. Then the exact solution is modified to fit the actual problem by taking advantage of the fact that such modification is minor. Again, the long history of this approach supports its validity, as long as the difference between the exactly solvable and the actual problem is characterized by a parameter that is much smaller than one.

We present our theory in Results, which is organized into seven sections. In the first, we consider competing requirements between small conduction delays and high connectivity in local circuits. We show that local conduction delay limits the size of the local network with all-to-all potential connectivity to the size of the cortical column. The second section models full brain architecture as a small-world network, which combines high local connectivity with small conduction delay. We derive a simple estimate of conduction delay in global connections as a function of the number of neurons. In the third section, we consider spatially integrating local and global connections. We argue that mixing local and global connections substantially increases local conduction delay, while the global conduction delay may be unaffected. In the fourth section, by minimizing local conduction delay we derive a condition under which white/gray matter segregation reduces conduction time delays. The fifth section gives a necessary condition for the segregated design to be optimal, and an example of such design is given in the sixth section. Finally, the seventh section restates our results in terms of the numbers of neurons, interneuronal connectivity, and axon diameter.

Results

Conduction Delays Limit the Size of a Highly Connected Network

We begin by considering the time delay in the local circuits of neocortex, because their mode of operation—thought to involve recurrent computations [21,22]—seems most sensitive to the detrimental impact of time delay. We derive a scaling relationship between local conduction delay and the number of neurons that can have all-to-all potential connectivity. By assuming that the tolerable delay is on the order of a millisecond, we show that the maximum size of such network is close to that of the cortical column.

Local cortical circuits may be viewed as a network of n neurons with all-to-all potential synaptic connectivity, meaning that the axons and dendrites of most neurons come close enough to form a synapse [23–25]. In the following we do not distinguish between axons and dendrites in local circuits, and we refer to them as “local wires.” Mathematical symbols used in this paper are shown in Table 1. The mean conduction delay t in local circuits is given by the average path length between two connected neurons (via potential synapses), ℓ, divided by the conduction velocity, s:

t = ℓ / s    (Equation 1)


Table 1.

Mathematical Symbols Used in the Main Text

doi:10.1371/journal.pcbi.0010078.t001

Experimental measurements [26,27] and theoretical arguments [28,29] suggest that conduction velocity, s, scales sublinearly with the diameter, d, of local wires (nonmyelinated axons and dendrites):

s = β d^θ    (Equation 2)


where β is a constant coefficient and θ is a positive power smaller than one (however, see [30]). By combining Equations 1 and 2, we arrive at the expression for the conduction delay:

t = ℓ / (β d^θ)    (Equation 3)


Equation 3 may give the impression that the conduction delay decreases monotonically with wire diameter d. But this is not necessarily the case, because ℓ can be a function of d. The following argument [17] shows that the conduction delay, t, as a function of wire diameter, d, has a minimum (provided 0 < θ < 1), which defines the optimal wire diameter. Given the branching structure of axons and dendrites and a uniform distribution of neurons, ℓ can be approximated by the linear size of the network [6], which can be easily estimated in the two limiting cases. In the limit when the wire diameter approaches zero, all the nonwire components (such as synapses) are compressed together and take up the space vacated by shrinking wires. Because the volume of the network approaches the volume of the nonwire components, which is constant, the conduction delay diverges as 1/d^θ according to Equation 3 [17].

In the opposite limit when the wire diameter is large, the network volume is determined mostly by the wiring [17]. Because wires run in all directions, they must get longer as they get thicker, and the linear size of the network grows proportionally to the wire diameter. Then, according to Equation 3, the conduction delay increases as d^(1−θ). Therefore, conduction delay is minimized by the optimal wire diameter, for which the nonwire components occupy a fixed fraction of the neuropil volume [17] (see also the first section in Materials and Methods). As a result, the optimal volume of the network is of the same order as the nonwire volume. Assuming that nonwire consists mostly of synaptic components, such as axonal boutons and spine heads, the optimal network volume is of the same order as the total synaptic volume. Therefore, the local network volume is given by:

ℓ^3 ~ n^2 v_s    (Equation 4)


where v_s is the average synapse volume and n is the total number of neurons in the local network. (In a network with all-to-all connectivity, n is also the number of local connections made by a neuron via potential synapses.) For the sake of clarity, we ignore the fact that only a fraction (0.1–0.3) of potential synapses are converted into actual synapses [23]. Such numerical factors are ignored in the equations of the main text of this paper, but can be included straightforwardly (see the first section in Materials and Methods). One consequence of Equation 4 is that the optimal wire diameter is on the same order of magnitude as the synaptic linear size, consistent with anatomical observations [31]:

d ~ v_s^(1/3)    (Equation 5)


By using Equations 3–5 and assuming θ = 1/2, as suggested by cable theory [28,29], we find that the smallest possible mean conduction delay in local networks is given by

t ~ n^(2/3) v_s^(1/6) / β    (Equation 6)


As the smallest possible conduction delay grows with the number of neurons in the network, fixing the conduction delay imposes a constraint on the maximum size of the network. It seems reasonable to assume that the biggest tolerable conduction delay is on the order of a millisecond, a time scale corresponding to physiological events such as the duration of an action potential and the rise time of an excitatory postsynaptic potential [32]. This time scale could be dictated by metabolic costs [33]. If we approximate the synaptic volume as a fraction of a cubic micrometer, and β ~ 1 m/s μm^(−1/2) [28,34,35], the maximum number of neurons in the all-to-all connected network is on the order of 10^4. This corresponds roughly to the size of a cortical column, which is then the largest network that can combine all-to-all potential synaptic connectivity with a tolerable conduction delay.
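As a sanity check, the column-size estimate can be reproduced by plugging numbers into Equation 6 directly. In the sketch below, v_s ≈ 0.1 μm^3 and β ≈ 1 m s^−1 μm^−1/2 are the representative values assumed in the text; all order-one prefactors are dropped, so the result is only an order-of-magnitude figure.

```python
v_s = 0.1    # average synapse volume, um^3 (a fraction of a cubic micrometer)
beta = 1.0   # conduction-velocity coefficient, m/s per um^(1/2) (from the text)

def local_delay(n):
    """Equation 6: t ~ n^(2/3) * v_s^(1/6) / beta, returned in seconds."""
    # n^(2/3) * v_s^(1/6) is in um^(1/2); dividing by beta (m s^-1 um^-1/2)
    # gives um / (m/s), i.e. a factor of 1e-6 s.
    return n ** (2 / 3) * v_s ** (1 / 6) / beta * 1e-6

t_column = local_delay(1e4)   # ~3e-4 s: order of a millisecond for a column

# Largest all-to-all network compatible with a ~1 ms tolerable delay
# (inverting Equation 6); comes out near 10^4-10^5 neurons.
n_max = (1e-3 * beta * 1e6 / v_s ** (1 / 6)) ** 1.5
```

Within the accuracy of the scaling approach, n_max lands at the 10^4 order of magnitude quoted above.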

Small-World Network Combines High Local Connectivity with Small Conduction Delay

Human neocortex contains about 10^10 neurons—many more than could possibly be wired in an all-to-all fashion with a physiologically tolerable conduction delay. In particular, substituting this neuron number into Equation 6, we find that the delay would be on the order of seconds—clearly too slow. Given that the brain is too large to combine high interconnectivity with short conduction delay [36,37], how can it maintain high functionality? In this section, we consider the architecture of the brain as a whole and show that a much shorter global conduction delay can be achieved by sacrificing all-to-all connectivity.

Anatomical evidence suggests that the brain maintains short conduction delays by implementing sparse global interconnectivity while preserving high local interconnectivity [31]. Such a design resembles the small-world network [38], as has been noticed by several authors [39–42]. In a small-world network, a high degree of clustering (the probability of a connection between two neighbors of one neuron) is combined with a small network diameter (the average number of synapses on the shortest path connecting any two neurons). In a neurobiological context this means a combination of high computational power in local circuits with fast global communication [31,36,37,39,40,42]. Thus it is not surprising that evolution adopted this architecture when the size of the network made all-to-all connectivity impractical [36,39,43–45].
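The small-world combination of high clustering and short path length can be illustrated with a minimal Watts–Strogatz-style sketch in pure Python. The graph sizes and rewiring probability below are arbitrary illustration parameters, not a brain model: a ring lattice plays the role of purely local wiring, and a few rewired edges play the role of sparse global shortcuts.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each connected to its k nearest neighbors (k even)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, rng):
    """Watts-Strogatz-style rewiring: each edge is redirected with probability p."""
    n = len(adj)
    for i, j in [(a, b) for a in adj for b in adj[a] if a < b]:
        if rng.random() < p:
            new = rng.randrange(n)
            if new != i and new not in adj[i]:
                adj[i].discard(j); adj[j].discard(i)
                adj[i].add(new); adj[new].add(i)
    return adj

def mean_path_length(adj):
    """Average shortest-path length, via BFS from every node."""
    n, total = len(adj), 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

def clustering(adj):
    """Mean local clustering coefficient."""
    total = 0.0
    for i in adj:
        nb = list(adj[i])
        k = len(nb)
        if k < 2:
            continue
        links = sum(1 for a in range(k) for b in range(a + 1, k)
                    if nb[b] in adj[nb[a]])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

rng = random.Random(0)
lattice = ring_lattice(300, 10)                        # purely local wiring
smallworld = rewire(ring_lattice(300, 10), 0.1, rng)   # a few "global" shortcuts

L_lat, L_sw = mean_path_length(lattice), mean_path_length(smallworld)
C_lat, C_sw = clustering(lattice), clustering(smallworld)
# L_sw is several-fold shorter than L_lat, while C_sw stays high.
```

Rewiring only ~10% of the edges shortens the mean path several-fold while leaving most of the local clustering intact, which is the property the text attributes to cortical wiring.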

How fast could global connections be? The global conduction delay T in a connection of length L with conduction velocity S is given by

T = L / S    (Equation 7)


Here and below, upper-case letters are reserved for parameters of global connections and lower-case letters for parameters of local connections. In big brains, global axons are mostly myelinated, as would be expected given the higher demand on their conduction velocity (unpublished data) [28]. In myelinated axons, conduction velocity S scales linearly with diameter D [28,46]:

S = B D    (Equation 8)


where B is a proportionality coefficient. Combining Equations 7 and 8, we find that the conduction delay is given by

T = L / (B D)    (Equation 9)


The average length of global connections is given by

L ~ V^(1/3)    (Equation 10)


where V is brain volume. In turn, brain volume can be estimated by adopting the following model. Based on anatomical data [31], we assume that most neurons send one global connection to another local network in the brain. Initially, we ignore the volume occupied by local connections. We denote the number of neurons in the brain as N, which can be much larger than the number of local connections (via potential synapses) per neuron, n. Global connections have length L and diameter D. Thus the total volume of the brain can be approximated as

V ~ N L D^2    (Equation 11)


Combining Equations 10 and 11, we find

L ~ N^(1/2) D    (Equation 12)


Substituting this expression into Equation 9, we obtain

T ~ N^(1/2) / B    (Equation 13)


Equation 13 can be used to estimate conduction delay in global axons. By substituting B ~ 5 m/s μm^(−1) [46,47] and the number of neurons in human neocortex, N ~ 10^10, we find that the delay is around 20 ms. Compared with the several-second delay expected in a human brain if it had all-to-all connectivity, this is a significant improvement. For the mouse neocortex, by substituting N ~ 10^7 we find that the delay is around 0.6 ms. This is much better than the 50-ms delay expected, according to Equation 6, if the mouse cortex had all-to-all connectivity. As these estimates are based on the scaling approach, they are reliable only up to an order of magnitude. Yet, they demonstrate that sparse global connections can be much faster than a fully connected network with a comparable number of neurons.
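These global-delay figures follow directly from Equation 13. The sketch below uses B ≈ 5 m s^−1 μm^−1 from the text; with lengths tracked in μm and velocities in m/s, the result carries a factor of 10^−6 s, and prefactors of order one are again dropped.

```python
import math

B = 5.0  # myelinated-axon velocity coefficient, m/s per um of diameter

def global_delay(N):
    """Equation 13: T ~ N^(1/2) / B. With L in um and S in m/s, T is in 1e-6 s."""
    return math.sqrt(N) / B * 1e-6   # seconds

T_human = global_delay(1e10)   # ~0.02 s (20 ms)
T_mouse = global_delay(1e7)    # ~6e-4 s (0.6 ms)
```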

Combining Local and Global Connections Increases Conduction Delays

After having considered conduction delays in local and global connections separately, now we are in a position to analyze how they are combined in the brain. Here we argue that the main difficulty in integration arises when introducing global connections into local networks.

We adopt a model combining both local and global connections proposed by Ruppin et al. [19] and Murre and Sturdy [20]. In this model, each neuron connects (via potential synapses in our case) with n local neurons and sends a global axon to another arbitrarily chosen local network in the brain. For simplicity, we neglect specificity and assume that local connections are made with the n nearest neurons located in a sphere of radius ℓ centered on a given neuron, where ℓ is given by Equation 4. Although local and global connections may be highly specific [22,48–50], this approximation is sufficient to understand brain segregation into white and gray matter.

The effect of combining local and global connections on the conduction delays can be analyzed in two steps. First, consider the effect of introducing local connections into the network of global connections. This leads to swelling of the brain volume beyond that in Equation 11. Thus, global axons must be longer, and Equation 13 gives only a lower bound for the global conduction delay (see the second section in Materials and Methods). Yet the increase in the global conduction delay caused by the swelling of the network can be offset by speeding up global axons, i.e., by making them thicker (Equation 8). We show in the second section in Materials and Methods that the global network can absorb local connections and preserve the required global conduction delay.

Second, the introduction of global connections into local circuits increases local conduction delay, and this increase cannot be compensated for by making local connections thicker (see the third section in Materials and Methods). While conduction velocity depends linearly on the global myelinated axon diameter (Equation 8), it scales sublinearly with the local wire diameter (Equation 2). Thus, the smallest possible mean local conduction delay increases when more global connections are mixed with local connections. To describe this quantitatively, we introduce λ, the ratio of the volume of global axons finely intermixed with local connections to the initial, unperturbed gray matter (i.e., total local circuit) volume. When λ is much smaller than one, we can argue that the initial minimum local conduction delay is only slightly affected by the penetration of global connections into the gray matter. As shown in the third section in Materials and Methods, because of the intermixing of global and local connections, the increase in local conduction delay, Δt, is proportional to the ratio λ:

Δt ~ λ t    (Equation 14)


where t is the conduction delay in the unperturbed local circuits, given by Equation 6. As before, numerical factors are neglected in the spirit of the scaling estimate.

According to our original assumption, brain functionality is maximized when conduction delay is minimized. According to Equation 14, the smallest possible conduction delay in local circuits is achieved when λ = 0, i.e., when global and local connections are fully segregated. But full segregation does not lead to a feasible design, because global connections originate and terminate on neurons in local circuits. Thus, we must find a design that spatially integrates local and global connections.

We note that minimization of local and global conduction delays are competing desiderata, as can be illustrated by varying the global axon diameter, D. Increasing D speeds up signal propagation along global connections and, therefore, reduces global conduction delay. Yet, thicker global axons are detrimental for local conduction delay because of an increase in λ (Equation 14). As the relative contributions to functionality of conduction delays in local and global connections are unknown, we searched for the optimal design that minimizes local conduction delay as a function of D. Our analysis begins with considering small values of D, i.e., λ ≪ 1.

Comparison of the Homogeneous Design and Designs with Gray and White Matter Segregation

In order to determine the optimal design, we need to compare local conduction delays in different designs combining gray and white matter. In general, this problem is difficult to solve analytically. Yet, when the global connections intermixed with the gray matter take up less volume than the local connections do, i.e., λ ≪ 1, the perturbation theory approach allows us to compare local conduction delays in the homogeneous design (HD), in which gray matter and white matter are finely intermixed, with designs in which gray and white matter are segregated.

In HD, local and global connections are finely and uniformly intermixed (Figure 1). Then, according to Equation 14, the relative conduction delay increase due to the penetration of global axons of diameter D into the gray matter is given by

Δt/t ~ N D^2 / G^(2/3)    (Equation 15)


Figure 1. Homogeneous Design

In HD, local and global connections are uniformly and finely intermixed. Inset shows a typical local network containing local axons (thin gray lines) and dendrites (gray and black tree-like structures), and global axons (thick, light-blue lines spanning the whole circle) that perforate gray matter. When the volume of global axons is small, the linear size of the network can be approximated as G^(1/3).

doi:10.1371/journal.pcbi.0010078.g001

where N is the total number of neurons in the network. In this expression, we use Equation 11 for the volume of global connections and the fact that the average length of global axons is given by the linear size of the network, which, for small λ, is given by the linear size of the gray matter, G^(1/3). We note that the perturbation approach remains valid while the relative conduction delay increase in HD is less than one, i.e., ND^2 ≪ G^(2/3).

Another contribution to the mean local conduction delay comes from the boundary effect. Recall that the model requires each neuron in the gray matter to establish connections with n nearest neighbors. If a neuron is far from the boundary of the gray matter, these connections can be implemented in a sphere of radius ℓ given by Equation 4 (Figure 2). Yet neurons within distance ℓ of the gray matter boundary cannot find n neighbors within the sphere of the same size. Therefore, the radius of the local connections sphere must be expanded to find n nearest neighbors (Figure 2).


Figure 2. Boundary Effects in the Gray Matter

The red full circle illustrates the local connection sphere of a neuron that does not experience the boundary effect. Neurons near the external boundary must inflate their local connection sphere to implement the required local connectivity, as illustrated by the thin yellow semicircle. Neurons near white matter tracts penetrating the gray matter must also inflate their local connection sphere, as illustrated by the thick red semicircle. The blue line with arrowhead shows typical routing of global axons. R is the size of the gray matter modules, where global and local connections are finely intermixed.

doi:10.1371/journal.pcbi.0010078.g002

Expanding the range of local connections for neurons near the boundary increases the average local conduction delay. The fraction of neurons that experience the boundary effect is proportional to the volume within distance ℓ from the boundary. As the boundary area in HD is given by G^(2/3), the fraction of affected neurons is given by ℓG^(2/3)/G ~ ℓ/G^(1/3), which is less than one because the linear size of the gray matter G^(1/3) ≫ ℓ. Since the relative increase in delay for each neuron in the affected volume is of order one, this expression also gives the relative increase in the average local conduction delay. As this boundary effect is determined by the external boundary, it is independent of the design and can be ignored. Yet, the logic of this calculation will be used in the following to estimate the effect of the gray and white matter boundary on local conduction delay.

Can segregation of gray and white matter reduce local conduction delay in HD? In HD, global axons are straight and are finely intermixed with the local connections. The contribution of global axons to local conduction delays could be reduced by decreasing the length of global axonal segments within the gray matter, according to Equation 14. Rather than connecting neurons with a straight axon, a typical global axon would go toward the nearest white matter tract (region occupied only by global axons) and travel in the white matter until it is close to the target neuron. Then the axon would leave the white matter and traverse the gray matter toward its target (Figure 2). Such routing may increase the length of global axons, but it would minimize impact on local conduction delays.

To calculate the relative local delay increase in the segregated design, we estimate the relative volume of global axons in the gray matter, λ. We introduce the mean distance between a neuron and the nearest white matter tract, R, which also gives the linear size of the gray matter modules (Figure 2). Then the relative volume of nonfasciculated global axons inside the gray matter in the segregated design is given by

λ ~ N D^2 R / G    (Equation 16)


Comparing Equation 16 with Equation 15, one can see that segregation may be advantageous compared to HD if R ≪ G^(1/3). In other words, introducing a sufficient number of white matter tracts into the gray matter may reduce the length of nonfasciculated global axonal segments in the gray matter and, hence, the local conduction delay.
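The size of this effect can be seen by evaluating Equations 15 and 16 for rough, human-scale numbers. Every value in the sketch below is an order-of-magnitude assumption chosen for illustration, not a measurement.

```python
# All numbers are rough, order-of-magnitude assumptions for a human-sized cortex.
N = 1e10   # global axons (one per neuron)
D = 1.0    # global axon diameter, um
G = 5e14   # gray-matter volume, um^3 (~0.5 liter)
R = 1e3    # module size / mean distance to the nearest white matter tract, um

lam_hd = N * D ** 2 / G ** (2 / 3)   # Equation 15: > 1, perturbation breaks down
lam_seg = N * D ** 2 * R / G         # Equation 16: ~0.02, a small perturbation

# Segregation reduces the intermixed volume by the factor R / G^(1/3) << 1.
reduction = R / G ** (1 / 3)
```

With these numbers, the homogeneous design would already be outside the perturbative regime, while the segregated design with R ≪ G^(1/3) keeps λ small, in line with the argument above.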

Although segregation of gray and white matter may reduce local conduction delay, it has a disadvantage compared to HD in that it may induce a larger boundary effect because of the white matter tracts inside the gray matter. This effect is similar to the external boundary effect in HD, but it cannot be ignored, because it is different for different designs. If a neuron is far from the gray and white matter interface, its local connections can be implemented in the sphere of radius ℓ (Equation 4; Figure 2). If a neuron is close to the interface, the white matter occupies part of the sphere, meaning that the local sphere radius ℓ must be expanded so that a neuron can still find its n nearest neighbors (Figure 2). Therefore, whether the segregated design is preferred or not depends on whether the relative local conduction delay increase through the boundary effect is much smaller than the local delay increase in HD (Equation 15).

To evaluate the mean local conduction delay increase through the boundary effect in the segregated design, we need to specify the geometry of the white matter tracts, because the boundary effect generally depends on the surface area of the tracts. For a typical tract that spans the whole brain (i.e., has length L), we can relate its minimal surface area A_t to its cross-sectional area, Φ:

A_t ~ Φ^(1/2) L    (Equation 17)


In turn, the cross-sectional area of a tract depends on the global axon diameter D, and one may conjecture that whether the segregated designs are advantageous or not depends on D. Indeed, we can formulate the following theorem, which is valid to the first order of ND^2/G^(2/3) (Equation 15) and while our perturbation approach is valid (i.e., provided ND^2 ≪ G/ℓ, as will be shown later).

Theorem 1.

In the regime ND^2 ≪ ℓ^2, local conduction delays in the optimal segregated design and HD are equivalent. In the regime ND^2 ≫ ℓ^2, there is at least one segregated design with local delays less than those in HD.

To prove the first part of the theorem, we calculate the local conduction delay through the boundary effect in the segregated designs and compare it with HD. The length of the global tract segment inside the local sphere is ℓ. The other two dimensions of global tracts are much less than ℓ (Figure 3A), as the minimal boundary effect is achieved by the minimal surface area in Equation 17. Since the total cross-sectional area of the global tracts is ND^2 ≪ ℓ^2, each tract's cross-sectional area, Φ_i, is much less than the cross-sectional area of the local connection sphere (Figure 3A). Inclusion of such a tract into a local sphere increases its radius to (ℓ^2 + Φ_i)^(1/2). Then, the relative increase in the local conduction delay for neurons in that sphere is [(ℓ^2 + Φ_i)^(1/2) − ℓ]/ℓ ≃ Φ_i/ℓ^2 ≪ 1.
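The smallness of this inflation is easy to check numerically. The exact relative increase is ≈ Φ_i/(2ℓ^2), which the scaling argument writes as Φ_i/ℓ^2 up to the factor of two; the values below are arbitrary, chosen only to satisfy Φ_i ≪ ℓ^2.

```python
import math

ell = 100.0            # local connection sphere radius, um (arbitrary)
phi = 0.01 * ell ** 2  # tract cross-section, with phi << ell^2

r_new = math.sqrt(ell ** 2 + phi)   # inflated sphere radius
rel = (r_new - ell) / ell           # relative delay increase for the sphere
# rel ~ phi / (2 * ell**2) ~ 0.005, much smaller than one
```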


Figure 3. Boundary Effect Induced by White Matter Tracts with Different Cross-Sectional Areas

(A) In the case Φ ≪ ℓ^2, two dimensions of the white matter tracts (shown in white) can be much smaller than ℓ. The red circle illustrates the local connection sphere of a neuron.

(B) In the case Φ ≫ ℓ^2, neurons within distance ℓ from the white matter tract experience the boundary effect.

doi:10.1371/journal.pcbi.0010078.g003

Now we add up the conduction delays contributed by all the tracts to neurons in affected spheres. As the number of spheres affected by one tract is given by L/ℓ, the fraction of neurons experiencing the boundary effect induced by one tract is given by ℓ^2 L/G, and the relative local conduction delay increase is given by (Φ_i/ℓ^2)·ℓ^2 L/G ~ Φ_i/G^(2/3). The total relative increase in local delay is the sum of the boundary effects induced by the different tracts,

Δt/t ~ Σ_i Φ_i / G^(2/3) ~ N D^2 / G^(2/3)    (Equation 18)


Notice that even if there are multiple tracts within the local connection sphere (i.e., the sphere radius can be larger than ℓ), the above result is still correct.

By comparing the local conduction delay increase for segregated designs (Equation 18) with that for HD (Equation 15), one can see that they are the same. Therefore, when ND^2 ≪ ℓ^2, the optimal segregated designs and HD are equivalent to the first order of ND^2/G^(2/3).

To prove the second part of the theorem (the ND^2 ≫ ℓ^2 regime), we specify a segregated design with smaller local delays than that in HD. In such a design, global axons belong to M (M ≫ 1) tracts with cross-sectional area Φ ≫ ℓ^2 each and length L ~ G^(1/3). The distance between two tracts is much larger than ℓ. Then, the total affected neuropil volume through the boundary effect is the product of the total surface area of the tracts, MΦ^(1/2)G^(1/3), and ℓ. For a typical neuron within the affected volume, a fraction of its local connection sphere with volume ~ℓ^3 is occupied by the white matter tract, as illustrated in Figure 3B. To implement the required local connectivity, the local sphere radius ℓ should expand by a numerical factor of order one.

Next, we add up the relative local delay increase induced by all global tracts affecting all the neurons in a volume, given by ℓMΦ^{1/2}G^{1/3}/G. Because the total cross-sectional area MΦ ~ ND², the relative local delay increase is

Δt/t ~ ℓND²/(Φ^{1/2}G^{2/3}) (19)
By comparing the relative conduction delay in the segregated design (Equation 19) with that in HD (Equation 15), one can see that because Φ ≫ ℓ² as specified, the segregated design is advantageous in the regime ND² ≪ G^{2/3}.
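
As an illustrative numerical check (ours, not part of the original analysis; all values arbitrary), the affected-volume form of the boundary-effect delay and its form in terms of the total axon cross-section agree once the tract count M is fixed by MΦ ~ ND²:

```python
# Check that ell*M*Phi**(1/2)*G**(1/3)/G equals ell*N*D**2/(Phi**(1/2)*G**(2/3))
# once the number of tracts is fixed by the total cross-section: M = N*D**2/Phi.
ell, N, D, Phi, G = 0.5, 1e8, 1e-3, 1.0, 1e3   # mm-based, purely illustrative values
M = N * D**2 / Phi                              # number of tracts
affected_form = ell * M * Phi**0.5 * G**(1/3) / G        # total affected volume over G
cross_section_form = ell * N * D**2 / (Phi**0.5 * G**(2/3))  # Equation 19 form
assert abs(affected_form - cross_section_form) / cross_section_form < 1e-12
```

The two expressions are algebraically identical, so the check passes for any positive inputs.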

Although in the regime ND² ≫ G^{2/3} we do not have a closed-form expression for the local conduction delay in HD, we can still show that HD has longer conduction delays than the segregated design. We show in the third section in Materials and Methods that the local conduction delay in HD is a monotonically increasing function of λ, and hence a monotonically increasing function of ND². Thus, the relative delay increase in HD exceeds one when ND² ≫ G^{2/3}. Yet, in the regime ND² ≫ G^{2/3}, the relative local delay increase in a segregated design can still be much smaller than one. To prove this, we note that in a segregated design, the local conduction delay increase because of the nonfasciculated global axons intermixed with gray matter, i.e., λ ~ ND²R/G (Equation 16), can be much smaller than one if R ≪ G^{1/3}.

In addition, the relative local delay increase through the boundary effect can also be much smaller than one. To see this, we specify the tracts in such a way that the total surface area of the white matter tracts is on the order of the surface area of the gray matter modules, G/R. Then, using an analysis similar to that illustrated in Figure 3B, the relative local delay increase through the boundary effect is given by ℓ(G/R)/G ~ ℓ/R, which can be much smaller than one if ℓ/R ≪ 1. We note that λ ≪ 1 and R ≫ ℓ can both be satisfied if ND² ≪ G/ℓ. Thus, when ND² ≫ G^{2/3} and ND² ≪ G/ℓ, there is at least one segregated design with a local delay less than that in HD.

Having considered both the ND² ≪ G^{2/3} and the ND² ≫ G^{2/3} regimes, we have proven the second part of the theorem.

Optimality Condition for Segregated Designs

In the previous section, we showed that in the regime ND² ≫ ℓ², there is at least one segregated design with a local conduction delay shorter than that in HD. However, we did not specify which design is optimal. In this section, we give a necessary condition for a segregated design to be optimal in the regime ND² ≫ ℓ² and ND² ≪ G/ℓ.

As the advantage of segregation becomes apparent when the total cross-section of global axons ND² ~ ℓ², it is natural to expect that a similar condition defines the optimal gray matter module size R₀, which minimizes local conduction delays. In other words, the number of neurons in the gray matter module is such that the total cross-sectional area of their global axons is given by ℓ². As the number of neurons in the sphere of radius R₀ is ℓ²/D² and the number of neurons in the sphere of radius ℓ is n, we have

R₀ ~ ℓ(ℓ²/(nD²))^{1/3} ~ (ℓ⁵/(nD²))^{1/3} (20)
Thus, we can formulate the following theorem:

Theorem 2.

In the regime ND² ≫ ℓ² and ND² ≪ G/ℓ, the minimum local conduction delay is achieved by the segregated design with the gray matter module containing ℓ²/D² neurons.

To prove this theorem, we consider designs with gray matter module size smaller and greater than R₀, and show that they have a local conduction delay greater than that in the design with module size R₀.

In the case R₀ ≪ R, by applying Theorem 1 to any module one can see that converting that module from HD to a segregated design can reduce the local conduction delay. For example, fasciculating global axons within that module into multiple tracts would reduce the local conduction delay.

In the other case, if modules of size R₀ contain only global axons from the neurons inside the module, by applying Theorem 1 one can see that any optimal segregated design containing modules of size R ≪ R₀ is equivalent to a design containing modules of size R₀.

Moreover, if the tracts inside the module of size R₀ contain external global axons (i.e., global axons that do not belong to the neurons inside the module of size R₀ and/or do not innervate the neurons inside the module), converting segregated designs with module size R ≪ R₀ to designs with module size R₀ reduces the local conduction delay. This happens because merging all the tracts within the module of size R₀ into one reduces the boundary effect. To see this, note that the minimal surface area of the big tract inside the module of size R₀ is on the order of (ΣᵢΦᵢ)^{1/2}R₀ ≪ Σᵢ(Φᵢ^{1/2})R₀, where Φᵢ is the mean cross-sectional area of a small tract containing external global axons, and Σᵢ(Φᵢ^{1/2})R₀ is the total surface area of the smaller tracts inside the module of size R₀. Even if the tracts run in different directions, most of the tracts can be merged together at the scale R₀, because the typical length of a tract is much greater than that, and a small curvature would not change the total length by more than a factor of order one.

Taken together, by considering the two possible cases, we have proven that the minimum conduction delay in segregated designs is achieved with module size R₀. Such designs may be further classified by the relative dimensions of the gray matter. The total boundary area between gray and white matter (i.e., the total surface area of the white matter tracts), A, could satisfy either A ~ G/R₀ or A ≪ G/R₀. As the local conduction delay through the boundary effect grows with A, the latter design has the shorter delay. In the following, we call segregated designs satisfying A ≪ G/R₀ the perforated design (PD).
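
The counting behind the optimal module size can be sketched numerically: requiring that the module of radius R₀ contain ℓ²/D² neurons, with n neurons per sphere of radius ℓ, i.e., n(R₀/ℓ)³ = ℓ²/D², gives R₀ ~ (ℓ⁵/(nD²))^{1/3} (Equation 20). A minimal check with illustrative values (ours, not the authors' code):

```python
ell, n, D = 500.0, 1e4, 1.0                  # μm; illustrative values only
R0 = (ell**5 / (n * D**2)) ** (1/3)          # optimal module size from the counting argument
neurons_in_module = n * (R0 / ell) ** 3      # neuron count within radius R0
# the module should contain exactly enough neurons that their global axons
# have total cross-section ell**2, i.e., ell**2 / D**2 neurons
assert abs(neurons_in_module - ell**2 / D**2) / (ell**2 / D**2) < 1e-9
```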

Branching Pipe Design—An Example of Perforated Design

In the previous section, we showed that in the optimal segregated designs, the size of the module in which global and local connections are finely intermixed is given by R₀. However, Theorem 2 does not specify other dimensions of the segregated design, such as the total surface area of the white matter tracts. In this section, by considering a specific example, which we name the branching pipe design, we show that the condition A ≪ G/R₀ can be satisfied in the regime in which our perturbation approach is valid. In other words, we prove that PD exists in the regime ND² ≪ G/ℓ.

We specify the branching pipe design as follows (Figure 4). Global axons belong to several cylindrical white matter pipes perforating the gray matter. Higher-order branches split off lower-order pipes at regular intervals. Branches of different orders have different lengths and pipe diameters. The length of the zeroth-order branches (i.e., the main pipes) is given by the linear size of the brain. The length of the (k+1)st-order branches is given by the interpipe distance among the kth-order branches, forming a space-filling structure. The interpipe distance among the finest branches is given by R₀ in Equation 20 (Figure 4).


Figure 4. Branching Pipe Design

Schematic illustration of the branching pipe design with three orders of branches. The distance between kth-order branches determines the length of the (k+1)st-order branches. The distance between the highest-order branches is given by R₀.

doi:10.1371/journal.pcbi.0010078.g004

Although we can calculate the total surface area of the branching pipes for any given order k (as discussed in the fourth section in Materials and Methods), for simplicity we present the main results for the branching pipe design in which only first-order branches exist. We minimize the total surface area of such branching pipes and the local conduction delay by searching for the optimal length and diameter of the first-order branches and the optimal diameter of the zeroth-order branches.

We find that the expression for the minimal total surface area of the first-order branching pipes, A, depends on whether the total white matter volume is greater than the total gray matter volume. In the regime ℓ² ≪ ND² ≪ G^{2/3}, the gray matter occupies most of the brain volume, and A is calculated (see the fourth section in Materials and Methods) as:


In turn, λ can be found by substituting G ~ (N/n)ℓ³ and the optimal R ~ R₀ (Equation 20) into Equation 16:

λ ~ (nD²/ℓ²)^{2/3} (22)
Then the minimal local conduction delay is given by


This dependence of Δt/t on ND² is plotted on a log-log scale in Figure 5 (represented by the thick blue line).


Figure 5. Local Conduction Delay As a Function of Global Axon Diameter in HD and PD

Local conduction delay is calculated for specific values ℓ = 0.5 mm, N = 10⁸, and G = 10³ mm³ and plotted in log-log coordinates. Thin red line, local conduction delay in HD; thick blue line, local conduction delay in PD. Delay in PD is calculated for the branching pipe design containing only first-order branches.

doi:10.1371/journal.pcbi.0010078.g005

In the regime G^{2/3} ≪ ND² ≪ G/ℓ, white matter occupies most of the volume, and the specified segregated design has a different appearance: the gray matter is confined to a thin sheet. The sheet thickness is given by the length of the highest-order branches. Then, the minimum surface area of the branching pipes (as calculated in the fourth section in Materials and Methods) is given by


In this regime, the minimal local conduction delay is given by (Figure 5)


As λ ≪ 1 is equivalent to ND² ≪ G/ℓ (to see this, substitute G ~ (N/n)ℓ³ into ND² ≪ G/ℓ and compare it with Equation 22), we show that for such a branching pipe design, A ≪ G/R₀ in the regime where our perturbation approach is valid. In other words, we verify the existence of PD in the regime ND² ≪ G/ℓ.

We note that when λ approaches one, according to Equations 20 and 22, R₀² ~ ℓ² ~ nD², meaning that the entire cross-section ℓ² of a gray matter module is taken up by the global axons. Therefore, when λ → 1, we must have A ~ G/R₀ ~ G/ℓ ~ ND². This can also be seen from the expressions for A in the branching pipe design, i.e., Equations 21 and 24. Moreover, λ ~ 1 (i.e., ND² ~ G/ℓ) is where our perturbation approach to calculating the local conduction delay in PD breaks down (Figure 5).

When ND² ≫ G/ℓ, i.e., λ ≫ 1, we may consider clusters with a discrete spatial arrangement, each cluster containing n neurons to implement local connectivity. In this case, we can estimate the lower limit of the cluster size, given by n^{1/2}D, assuming that the cluster volume is filled by tightly packed global axons. Because of local connections, the actual cluster size must be even greater. Alternatively, clusters may abut each other to form a sheet, and the sheet thickness could be much smaller than ℓ. In this case, however, we cannot determine the necessary conditions for the design to be optimal. Fortunately, existing anatomical data suggest that actual brains are not even close to the regime where λ ≫ 1, as will be shown later.

Phase Diagram of Optimal Designs

In previous sections we derived conditions under which various designs are optimal in terms of minimizing conduction delays. Specifically, HD is optimal if ND² ≪ ℓ² and PD is optimal if ND² ≫ ℓ² and λ ≪ 1. Next, we illustrate these results on a phase diagram (Figure 6) in terms of basic network parameters such as the local wire diameter d, the number of local connections (via potential synapses) per neuron n, the global axon diameter D, and the total number of neurons in the brain N. To obtain the phase diagram, in first-order perturbation theory, we substitute the expression for ℓ (Equations 4 and 5) into ND² ≫ ℓ², and find that PD is optimal when (N/n)^{1/2}D/(n^{1/6}d) ≫ 1. In the linear-log space of Figure 6, this expression corresponds to the regime above the thick green line.
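
The criterion separating HD from PD can be evaluated directly; the function name below is ours, and the values are the illustrative cortex-like parameters used in Figure 6:

```python
def pd_over_hd_ratio(N, n, D, d):
    """Dimensionless criterion (N/n)**0.5 * D / (n**(1/6) * d); values >> 1 favor PD."""
    return (N / n) ** 0.5 * D / (n ** (1 / 6) * d)

# Mammalian-cortex-like values (n and d as assumed for the data points in Figure 6).
r = pd_over_hd_ratio(N=1e8, n=1e4, D=1.0, d=1.0)   # D, d in μm
assert r > 10   # well above one, i.e., inside the PD regime
```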


Figure 6. Phase Diagram of Optimal Designs

In this phase diagram, we show parameter regimes in which HD or PD is optimal in terms of the global axon diameter D, local wire diameter d, total neuron number N, and the number of local connections per neuron n. We assume n = 10⁴ and d = 1 μm for all empirical data points. Values of D in mammalian brains are from S. S. H. Wang (personal communication) and [60], and values of N in the neocortex are from [44]. The value of N in rat neostriatum is from [62]. For birds, we assume N = 10⁷.

doi:10.1371/journal.pcbi.0010078.g006

Next, we estimate where perturbation theory fails by setting λ to one. By substituting Equations 4 and 5 into the expression for λ (Equation 22), we find that λ can be rewritten as

λ ~ (D/(n^{1/6}d))^{4/3} (26)
Then the condition λ ~ 1 is equivalent to n^{1/6}d/D ~ 1, corresponding to the thin red line in Figure 6.

Discussion

We have shown that the segregation of the brain into gray and white matter may be a natural consequence of minimizing conduction delay in a highly interconnected neuronal network. We related the optimal brain design to the basic parameters of the network, such as the numbers of neurons and connections between them, as well as wire diameters. Although we do not know whether competing desiderata of short time delay and high interconnectivity were crucial factors driving evolution of vertebrate brains, our theory makes testable predictions. Below, we compare these predictions with known anatomical facts.

Scaling Estimate of the Cortical Thickness

As fasciculated fibers are usually not observed in neocortical gray matter (according to Nissl and myelin stains), we identify the cortical thickness with the gray matter module size, R. Our prediction for the optimal module size R₀ (Equation 20) can be rewritten by using Equations 4 and 5:

R₀ ~ n^{7/9}d^{5/3}/D^{2/3} (27)
Using n ~ 10⁴ [22,31], d ~ 1 μm [31], and D ~ 1 μm [31] (also measured in the corpus callosum of macaque monkey; S. S.-H. Wang, personal communication), we predict cortical thickness R₀ ~ 1 mm. This estimate agrees well with existing anatomical data [45,51,52], despite being derived using scaling. By substituting these values into Equation 26, we find that λ is smaller than one, justifying our perturbation theory approach.
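
This estimate can be reproduced with the scaling form R₀ ~ n^{7/9}d^{5/3}/D^{2/3} (our reconstruction of Equation 27, consistent with the quoted numbers; coefficients of order one are dropped):

```python
n, d_um, D_um = 1e4, 1.0, 1.0                       # parameters quoted in the text (μm)
R0_um = n ** (7/9) * d_um ** (5/3) / D_um ** (2/3)  # reconstructed Equation 27, scaling only
assert 1000 < R0_um < 2000                          # ~1 mm, matching the estimate above
```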

Next, we apply our results to the allometric scaling relationship between cortical thickness, R₀, and brain volume, V. We assume that n and D both increase with brain size [8,39,40] according to the following power laws: n ~ V^{1/3} [8,39,40,44] and D ~ V^{1/6} (see the fifth section in Materials and Methods). Then, by using Equation 27 and the constancy of the optimal local wire diameter d across different species [31], we predict that R₀ ~ V^{4/27}. This prediction agrees well with the empirically obtained power law relationship (with exponent 1/9) between cortical thickness and brain volume [39,45,51–53]. Thus, our theory explains why the cortical thickness changes little while brain volume varies by several orders of magnitude between species.
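
The exponent 4/27 follows by composing the assumed power laws with the reconstructed scaling form of Equation 27; a one-line exact-arithmetic check:

```python
from fractions import Fraction

# R0 ~ n**(7/9) * d**(5/3) * D**(-2/3), with n ~ V**(1/3), D ~ V**(1/6), and d constant.
exponent = Fraction(7, 9) * Fraction(1, 3) - Fraction(2, 3) * Fraction(1, 6)
assert exponent == Fraction(4, 27)   # hence R0 ~ V**(4/27), close to the empirical 1/9
```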

Two previous studies [39,53] also discussed the nature of the scaling law between cortical thickness and brain volume. One study [39] relies on the assumption that the number of neurons in a module of the neocortex is constant, with the module volume scaling as R₀³. Because the neuronal density may scale inversely with the cube root of brain volume (see the fifth section in Materials and Methods), R₀ should scale as the one-ninth power of brain volume to keep the number of neurons in a module independent of brain volume. The other study [53] relies on the assumption that the number of such modules scales as the two-thirds power of the total gray matter volume. Hence, the volume of the module scales as the one-third power of the gray matter volume. As the total cortical gray matter volume may scale linearly with brain volume (see the fifth section in Materials and Methods), the size of the module scales as the one-ninth power of brain volume. In this paper, we take a different approach by deriving the expression for the cortical thickness from an optimization principle. However, we obtain a scaling exponent close to, but not exactly equal to, one-ninth.

Comparison of the Cortical Structure and PD

Neocortex has a sheet-like appearance, and the total area of the gray and white matter boundary is given by A ~ G/R₀, where G is the total gray matter volume. According to our theory, such a design is optimal when λ becomes close to one, which may be the case in big brains. Cortical convolutions may correspond to the geometry expected in the pipe design. However, when λ ≪ 1, our theory predicts that the optimal design satisfies A ≪ G/R₀. This prediction does not seem consistent with empirical observations from small brains, such as the smooth, sheet-like mouse cortex. It would be interesting to know whether different requirements on connectivity or other developmental and/or functional constraints could resolve this discrepancy.

Comparison of Mammalian Neostriatum and PD

Neostriatum is named for its striated appearance (in Nissl- and myelin-stained material [54,55]), caused by axons of neostriatal neurons gathering into fiber fascicles and perforating the gray matter [56]. Areas with higher cell density, or lower global fiber density (myelin-poor [54,55]), are called striosomes or patches [57–59]. Because this structure resembles PD, we identify the patch size with R₀ (Equation 27). In a typical rodent (rat or mouse) neostriatum, each principal neuron may locally contact thousands of other neurons [56]. Taking n ~ 10³, d ~ 1 μm [31], and D ~ 0.6 μm [60], we estimate that R₀ ~ 300 μm. This estimate agrees well with existing anatomical data [61]. In addition, we may estimate the average axonal fascicle size. Given that the total number of neurons in the rat neostriatum is about 10⁶ [62], we find that the fascicle diameter is of the same order as ℓ, approximately 100 μm (see Equation 58 in the fourth section in Materials and Methods). This estimate agrees well with observed fascicle size [55] (see also http://www.hms.harvard.edu/research/brain/atlas.html).

Comparison of the Avian Telencephalon and PD

Bird brains also exhibit segregation into gray and white matter and may resemble PD. Distinct fiber fascicles have been identified that connect different brain regions (see http://avianbrain.org/boundaries.html), such as the connections from HVC to RA in songbirds, which are presumably myelinated axons [63]. Interestingly, unlike in mammals, which have a large cortex on the top of other brain structures, in birds the white matter fascicles can be scattered throughout the whole forebrain. However, more precise data would be desirable, such as measurements of large-scale myelin distribution in serial sections of bird telencephalons.

Comparison of the Spinal Cord and PD

While the inner core of the spinal cord contains gray matter, the outer shell contains white matter consisting of long axons from spinal and cortical neurons [56]. According to our theory, such organization is optimal if the inner core diameter is on the same order as R₀. To see if this is the case, note that a principal (motor) neuron in the spinal cord has a very large arbor span [56,64] and may receive 10⁵–10⁶ potential connections. Given n ~ 10⁵, d ~ 1 μm, and D ~ 1 μm, we find R₀ ~ 8 mm according to Equation 27, which is on the same order as the inner core diameter [56].
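
Under the same reconstructed scaling form of Equation 27, the neostriatum and spinal cord estimates quoted above come out as follows (an illustrative check; coefficients of order one are dropped):

```python
def R0_um(n, d_um, D_um):
    # Reconstructed scaling form of Equation 27: R0 ~ n**(7/9) * d**(5/3) / D**(2/3)
    return n ** (7/9) * d_um ** (5/3) / D_um ** (2/3)

striatum = R0_um(n=1e3, d_um=1.0, D_um=0.6)   # patch size, ~3e2 μm
spinal = R0_um(n=1e5, d_um=1.0, D_um=1.0)     # inner core radius scale, ~8e3 μm
assert 200 < striatum < 400
assert 5000 < spinal < 10000
```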

Related Work

Our work builds upon several insights from recent studies. In particular, the idea of minimizing conduction delay has been used to explain why axons and dendrites take a certain fraction of the neuropil [17]. The main result in that paper is further extended in this study to show that local conduction delay must increase after mixing gray and white matter (see Materials and Methods). Also, in our model local circuits are approximated by the network with all-to-all connectivity, which relies on the concept of potential synapses [23]. Adopting this model allowed us to derive explicit results for the total length of local connections (see the first section in Materials and Methods) [6].

We benefited from several previous studies of anatomical and functional connectivity between different cortical areas. These studies helped conceptualize network connectivity by revealing many interesting features of the network [65–71], such as hierarchical [72], clustering [73], and small-world properties [41,74], which helped to generate new models addressing functional specialization and integration [75–80].

We adopted (with the potential synapse caveat) the connectivity model used by Ruppin et al. [19] and Murre and Sturdy [20]. These authors applied the wiring optimization approach to explain the segregation of white and gray matter in the brain. Given a network with local and global connections, they searched for a design having minimum total wiring volume. They attempted to show that a segregated cortex-like design has a smaller volume than does a homogeneous structure.

Murre and Sturdy [20] used the scaling approach to calculate network volume for several network connectivity patterns and layouts. We verified their calculation of the interior (homogeneous) structure volume. However, their calculation of the external (cortex-like) structure volume does not seem to be self-consistent. The volume of axons in the external structure was calculated by using the expression that was unjustifiably adapted from the internal structure calculation, thus undermining their conclusion.

Ruppin et al. [19] did not rely on scaling arguments and calculated the volume of brain structures given their geometric characteristics, under reasonable assumptions of connectivity parameters. These authors showed that segregation of the network into the inner core organization, which has an inner core of gray matter surrounded by white matter, does not lead to volume efficiency compared to a homogeneous structure. They also showed that the external sheet (cortex-like) structure has a smaller volume than the inner core organization. However, this does not prove that the cortex-like structure has a smaller volume than the homogeneous structure, a conclusion relying on a fine balance of numerical factors.

We analyzed the advantages of gray and white matter segregation from the conduction delay perspective. Our results complement previous studies in some respects but differ in many others. Here, we summarize several novel points. First, we showed that the segregation of white and gray matter is consistent with minimizing conduction delay. Second, we determined the maximum number of neurons in the all-to-all connected network with a reasonable conduction delay and showed that local cortical networks are close to that limit. Third, we proposed a possible explanation for the thickness of the neocortex, which varies surprisingly little among mammalian species. Unlike Murre and Sturdy [20], who suggested that cortical thickness is determined by the maximum density of incoming and outgoing global axons (a condition indicated by the thin red line in Figure 6), we argue that in most brains it is the result of minimizing local conduction delay. Fourth, our theory is based on the scaling approach and yields a phase diagram of optimal designs for a wide range of parameters. This allowed us to apply the theory to several structures other than the neocortex. The derived scaling relationships can be tested by future experimental measurements.

Wiring Volume and Conduction Delay Minimization

As features of brain design have been explained by minimizing both the total volume and the conduction time delay, it is natural to wonder how these approaches relate to each other. In general, the evolutionary cost is likely to include both the volume and the time delay. Hopefully, such a unified framework will emerge eventually. In the meantime, since the exact form of the cost function is not known, we sought to construct theories that explain features of brain architecture based on the simplest possible assumptions. Next, we propose how time delay and volume can be related based on the current theory.

In our model, conduction delay in local circuits is minimal when the local wire diameter is at its optimal value, which corresponds to an optimal gray matter volume. (For details, see the first section in Results.) The local conduction delay increases when the local wire diameter d is smaller than the optimal value. In this case, volume cost and conduction delay cost are competing requirements. In the opposite case, when the local wire diameter is thicker than the optimal value, additional conduction delay cost is accompanied by additional volume cost. Therefore, as long as the gray matter volume is greater than its optimal volume, e.g., because of intermixing global axons with gray matter, we may associate the additional conduction delay cost with the volume cost, named the effective volume cost.

However, in the white matter, the relationship between volume and delay is different. Increasing white matter volume by making the global axon diameter thicker does not increase the global conduction delay (see the second section in Materials and Methods). Thus, the effective volume cost of white matter is just the tissue cost. From this perspective, we propose that gray matter has a greater effective volume cost than does white matter. This may have several biological implications: (1) Initial segments of axons originating from pyramidal neurons head straight toward (and are perpendicular to) the boundary between the white and gray matter. Once axons cross the white/gray matter border, they change direction. Although such a design may increase the length of global axons, it largely reduces the effective volume cost of gray matter, because the volume of global axons in the gray matter is minimal. (2) Another implication of differential effective volume costs in the gray and white matter is that the global axons in gray matter may be thinner than in white matter. Such variation in diameter could preserve short conduction delays in local and global connections. Of course, global axons cannot be made infinitesimally thin without sacrificing global conduction delay. Further exploration of this effect would require more experimental measurements of diameter changes at the white/gray matter border. (3) In abutting topographically organized cortical sensory areas, the maps are mirror reflections of each other relative to the border of the areas. The purpose of such organization remains unclear, because interarea connections in the white matter do not benefit from this organization. In particular, placing two cortical areas next to each other (without mirror reflection) would not increase the length of interarea connections in the white matter.
Yet, according to our theory, neurons close to the border would be at a disadvantage, because their local connections would have to reach further to find appropriate targets. Mirror-reflecting maps relative to the interarea border would eliminate a discontinuity in a map and place neurons with similar receptive fields closer to each other. Such an arrangement would benefit intracortical connections.

Materials and Methods

Minimization of conduction delay in a local network with branching axon and dendrite design.

Here we revisit the analysis from [17] using more specific information about the network. Consider wiring up a local network of n neurons with all-to-all potential connectivity. The mean conduction delay in local circuits is given by

t ~ v^{1/3}/(βd^θ) (28)
where d is the local wire diameter and v^{1/3}, the linear size of the local network, approximates the average path length between two potentially connected neurons. We assume a sublinear relationship between conduction velocity and local wire diameter, βd^θ with θ < 1, where β is a proportionality coefficient. From Equation 28, we want to find the minimal local conduction delay and the corresponding optimal local network volume. Therefore, we have to eliminate the wire diameter d from the previous equation and rewrite the delay as a function of the local network volume. To get this expression, we first notice that the total volume of the local network is given by

v ~ n(vₙ + χd²) (29)
where vₙ is the nonwire volume per neuron, which is assumed to be constant, and χ is the total wire length per neuron. Second, for an all-to-all potentially connected network, by applying the branching axon and dendrite design [6], we also have

χ²d/v ~ 1 (30)

This expression is derived as follows [6]. First, the local network volume, v, is divided into cubes of volume d³, i.e., into v/d³ voxels. Then, the number of potential contacts between an axon and a dendrite is given by the number of voxels that contain them both. Each axon occupies χ/d voxels, the same number as a dendrite. The fraction of voxels containing the axon is (χ/d)/(v/d³), the same as the fraction containing the dendrite. Then, the fraction of voxels containing both the axon and the dendrite is the product of the two fractions, χ²d⁴/v². By multiplying this fraction by the total number of voxels, we find the number of voxels containing both the axon and the dendrite, χ²d/v. Then, the condition for having at least one potential contact is given by Equation 30. Combining Equation 29 with Equation 30 and excluding χ yields

d ~ ((v − nvₙ)/(nv^{1/2}))^{2/3} (31)
By combining Equation 28 with Equation 31, we obtain

t ~ (v^{1/3}/β)(nv^{1/2}/(v − nvₙ))^{2θ/3} (32)
In Equation 32, by setting the first derivative with respect to v to zero, we find that the optimal network volume, or gray matter volume, should be

v ~ [(1 + θ)/(1 − θ)]nvₙ (33)
And the minimal local conduction delay is given by

t ~ (1/β)n^{(1+θ)/3}vₙ^{(1−θ)/3} (34)
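
The location of the optimum can be checked numerically. Assuming the delay takes the form t(v) ∝ (nv^{3/2}/(v − nvₙ))^{1/3} for θ = 1/2 (our reconstruction from the voxel argument, not the paper's code), the minimum sits at v = 3nvₙ, i.e., at (1 + θ)/(1 − θ) = 3 times the nonwire volume:

```python
n, vn = 1e4, 1.0   # illustrative values

def t(v):
    # Delay up to constant factors, assuming theta = 1/2; defined for v > n*vn.
    return (n * v ** 1.5 / (v - n * vn)) ** (1 / 3)

grid = [n * vn * (1.01 + 0.001 * k) for k in range(4000)]   # scan v in (n*vn, ~5*n*vn)
v_opt = min(grid, key=t)
assert abs(v_opt - 3 * n * vn) < 0.01 * n * vn              # optimum near v = 3*n*vn
```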
We assume that the nonwire volume consists mostly of synaptic components, such as axonal boutons and spine heads. In addition, only a fraction, f (0.1–0.3), of potential synapses are actual synapses [23]. Therefore, the nonwire volume can be estimated as

vₙ ~ fnvₛ (35)
where vₛ is a single synapse volume. Assuming that θ = 1/2 from classical cable theory and substituting it into Equations 34 and 35, we find that the minimal local conduction delay is proportional to

t ~ f^{1/6}n^{2/3}vₛ^{1/6}/β (36)
For simplicity, after neglecting f, this expression is used in Equation 6. Furthermore, the optimal wire diameter can also be calculated by combining Equations 31, 33, and 35, which gives

d ~ (fvₛ)^{1/3} (37)
After neglecting f, this expression also appears in Equation 5.
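
The voxel-counting step behind Equation 30 — two random sets of χ/d voxels out of v/d³ share about χ²d/v voxels — can be sanity-checked with a small Monte Carlo simulation (our illustration, not the authors' code):

```python
import random

random.seed(0)
total_vox = 10_000      # total voxels, v / d**3
wire_vox = 300          # voxels occupied by one axon (same for a dendrite), chi / d
trials = 2000

shared = 0
for _ in range(trials):
    axon = set(random.sample(range(total_vox), wire_vox))
    dendrite = set(random.sample(range(total_vox), wire_vox))
    shared += len(axon & dendrite)

mean_shared = shared / trials
expected = wire_vox ** 2 / total_vox    # chi**2 * d / v in voxel units (here, 9)
assert abs(mean_shared - expected) / expected < 0.1
```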

Global conduction delay can be preserved after intermixing gray and white matter.

After introducing the local connections (gray matter) into the global connections, the total network volume swells and Equation 11 changes to

V ~ G + NLD² (38)
where G is the total gray matter volume. After substituting L ~ V^{1/3} and D ~ L/(BT), i.e., Equations 9 and 10, into Equation 38, the expression for V can be rewritten as

V ~ G/(1 − N/(B²T²)) (39)
After substituting Equation 39 into D ~ L/(BT) ~ V^{1/3}/(BT), we find that the global axon diameter is given by

D ~ (1/(BT))[G/(1 − N/(B²T²))]^{1/3} (40)
Therefore, as long as T > N^{1/2}/B, we can find the corresponding global axon diameter D.
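
This self-consistency argument can be sketched in a few lines: with L ~ V^{1/3} and D ~ L/(BT), the axonal term in V ~ G + NLD² becomes NV/(B²T²), so V ~ G/(1 − N/(B²T²)), which stays finite and positive only for T > N^{1/2}/B (a hedged reconstruction of Equations 38–40; all numbers illustrative):

```python
def total_volume(G, N, B, T):
    """Solve V = G + N*L*D**2 with L = V**(1/3), D = L/(B*T); valid for T > sqrt(N)/B."""
    denom = 1.0 - N / (B * T) ** 2
    if denom <= 0:
        raise ValueError("no solution: requires T > sqrt(N)/B")
    return G / denom

G, N, B, T = 1e3, 1e8, 1.0, 2e4        # T above the threshold sqrt(N)/B = 1e4
V = total_volume(G, N, B, T)
L = V ** (1 / 3)
D = L / (B * T)
assert V > G                                       # intermixed axons swell the network
assert abs(V - (G + N * L * D ** 2)) / V < 1e-9    # the solution is self-consistent
```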

Local conduction delay increases after intermixing gray and white matter.

Consider again the network described above with n neurons and all-to-all potential connectivity. After white matter perforates the neuropil, its volume inside the gray matter can be expressed as λv, where v is the unperturbed optimal local gray matter volume given by Equation 33 and λ is a positive dimensionless parameter. After such a perturbation, the volume of the local network, i.e., Equation 29, changes to

v′ ~ n(vₙ + χd²) + λv (41)
Second, for an all-to-all potentially connected network, by applying the branching axon and dendrite design [6], Equation 30 changes to

χ²d/v′ ~ 1 (42)
By combining Equations 28, 41, and 42 and excluding χ and d, we can express the local conduction delay as a function of the total local network volume v′:

t′ ~ (v′^{1/3}/β)(nv′^{1/2}/(v′ − λv − nvₙ))^{2θ/3} (43)
Equation 43 shows that t′ is a monotonically increasing function of λ, and we recover the expression for t in Equation 32 at λ = 0. Moreover, when λ ≪ 1, the local network is still close to the unperturbed optimal state, i.e., v′ ≃ v, and we can expand Equation 43 to first order in λ, which yields

t′ ≃ t[1 + ((1 + θ)/3)λ] (44)
After combining Equation 44 with Equations 32 and 33, we obtain the expression for local conduction delay from the perturbation theory,

t′ ≃ (1/β)n^{(1+θ)/3}vₙ^{(1−θ)/3}[1 + ((1 + θ)/3)λ] (45)
or

Δt/t ≃ ((1 + θ)/3)λ (46)
After neglecting the numerical coefficient in the spirit of a scaling estimate, the last expression also appears in Equation 14.

Local conduction delay and surface area in the branching pipe design.

We develop this design stepwise: first, we present general considerations; second, we develop the first-order branching design; and third, we describe the nonbranching pipe design.

First, to calculate the local conduction delay in the branching pipes, we consider a general model in which the white matter pipes have J branching orders in total. A branch at order k (0 ≤ k ≤ J) has length Lₖ and pipe diameter Pₖ. The total number of kth-order branches within the neuropil of linear size Lₖ is given by Mₖ. Then, we can evaluate the relative local conduction delay increase through the boundary effect introduced by the kth-order branches. The affected neuropil volume through the boundary effect is given by the product of the total pipe surface area, MₖPₖLₖ, and the distance ℓ. This means that the ratio of the affected volume to the gray matter volume, or the relative local conduction delay increase, is given by


However, Equation 47 does not by itself give the total local conduction delay, as different branching orders can have different branch lengths and diameters.

To examine this further, we assume that the branching structure is space-filling. In particular, the length of the main branch, L_0, is given by the linear size of the network, G^{1/3}, and the length of a (k+1)st-order branch is given by the interpipe distance among the kth-order branches. For the terminal branches (k = J), the interpipe distance is given by R_0 (Equation 20).

If the length of the (k+1)st-order branches is much larger than the diameter of the kth-order branches, i.e., L_{k+1} ≫ P_k, the interpipe distance between kth-order branches is given by L_k/M_k^{1/2}. Thus, we have


where L_{J+1} is the interpipe distance among the terminal branches, given by R_0 (Equation 20). Denoting by N_k the number of neurons in the neuropil of linear size L_k, N_k and N_{k+1} satisfy the relationship


according to Equation 48, where N_{J+1} is the total number of neurons in the neuropil of linear size R_0. In addition, because the pipes of length L_k contain the global axons from the neurons inside the neuropil of linear size L_k, we should also have


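The space-filling recursion of Equation 48, L_{k+1} = L_k/M_k^{1/2}, is easy to iterate numerically. The sketch below does so for hypothetical values of G and the branch counts M_k (none of these numbers come from the paper; they are illustrative assumptions only):

```python
# A sketch of the space-filling recursion L_{k+1} = L_k / M_k**(1/2)
# (Equation 48). G and the branch counts M_k are hypothetical values.

G = 1.0e12                      # total gray matter volume (arbitrary units)
M = [100, 64, 25]               # assumed branch counts M_0, M_1, M_2 (J = 2)

L = [G ** (1.0 / 3.0)]          # L_0: the linear size of the network
for M_k in M:
    # each next branch length equals the interpipe distance L_k / M_k**0.5
    L.append(L[-1] / M_k ** 0.5)

R0 = L[-1]                      # L_{J+1} = R_0 (Equation 20)
print([round(x, 1) for x in L])  # [10000.0, 1000.0, 125.0, 25.0]
```

The terminal value L_{J+1} plays the role of R_0, the interpipe distance among the terminal branches.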
By substituting Equations 48–50 into Equation 47, we find that


where ℓ N_{J+1}^{1/2} D / L_{J+1}^2 ~ ℓ^2/R_0^2 ~ λ, because, according to Theorem 2, ℓ^2 is the total cross-sectional area of the global axons inside the module of size R_0. Then, the total increase in local conduction delay due to the boundary effect is given by


Minimizing this expression as a function of M_k, we obtain


For J > 1, we also have


Given the total number of neurons in the gray matter, N = N_0, and the total number of branching orders, J, we can obtain M_k explicitly by substituting Equations 53 and 54 into Equation 49. Next, using Equations 48–50, we can find the optimal branch length and diameter for each branching order.

Second, we consider a simple branching model in which only the first-order branches exist. In this case, J = 1, and by substituting Equation 53 into Equation 49, we obtain


By substituting Equations 55 and 53 into Equation 52, the relative local conduction delay increase through the boundary effect is given by


where we neglect a numerical factor of order one in the spirit of the scaling estimate. The total increase in local conduction delay is the sum of Equation 56 and the relative increase due to intermixing nonfasciculated global axonal segments with gray matter, i.e., λ. For the scaling estimate, however, the second term can be ignored, and we obtain Equation 23.

Next, we calculate the total surface area A of the branching pipes. From Equation 56 and Δt/t ~ ℓA/G, we obtain


where the last expression uses the relationship ℓ^2/R_0^2 ~ λ. This expression also appears as Equation 21.

In addition, we can estimate the diameter and length of the first-order branching pipes. P_1 is obtained by combining Equations 49 and 50, which yields


According to Equation 48, L_1 is given by


In the preceding analysis, we assumed that the length of the first-order branches is much larger than the diameter of the main branches, i.e., L_1 ≫ P_0, which allowed us to use Equation 48. This assumption holds when the total white matter volume is much smaller than the gray matter volume, i.e., N D^2 ≪ G^{2/3}.

In the opposite regime, however, L_1 ≪ P_0 must hold, as the volume of the main branching pipe is much larger than the gray matter volume surrounding it. To see this, note that the volume of the main branching pipe is P_0^2 L_0, where L_0 is the length of the main branch, and the volume of the gray matter surrounding an individual pipe is (P_0 + L_1)^2 L_0 − P_0^2 L_0. It is then easy to check that if the gray matter volume is much larger than the white matter pipe volume, we have L_1 ≫ P_0, while in the opposite case we have L_1 ≪ P_0. Geometrically, when N D^2 ≫ G^{2/3}, the gray matter resembles a sheet, whose thickness is given by the length of the first-order branches.
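The volume comparison behind this argument can be checked with a toy calculation; the numbers below are purely illustrative assumptions, chosen only to contrast the two regimes:

```python
# Toy volume comparison for one main pipe (all numbers are illustrative
# assumptions). Pipe volume: P0**2 * L0; surrounding gray matter volume:
# (P0 + L1)**2 * L0 - P0**2 * L0.

def volumes(P0, L1, L0=1.0):
    pipe = P0 ** 2 * L0
    gray = (P0 + L1) ** 2 * L0 - pipe
    return pipe, gray

# If gray matter dominates, the assumed geometry has L1 >> P0:
pipe, gray = volumes(P0=1.0, L1=100.0)
print(gray / pipe)        # ~1.0e4: gray >> pipe when L1 >> P0

# In the opposite case, L1 << P0 and the pipe volume dominates:
pipe, gray = volumes(P0=100.0, L1=1.0)
print(pipe / gray)        # ~50: pipe >> gray when L1 << P0
```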

As the pipe design has a different configuration when N D^2 ≫ G^{2/3}, we expect the expressions for the total pipe surface area and the minimal local conduction delay to differ from those derived above. In this case, the total surface area of the main branching pipes equals the surface area of the gray matter sheet, G/L_1, and the relative increase in local conduction delay due to the boundary effect of the main branches is Δt_0/t ~ ℓ(G/L_1)/G ~ ℓ/L_1.

To calculate the boundary effect induced by the terminal branches, we assume that R_0 ≫ P_1, where P_1 is the diameter of the terminal branches. This condition allows us to use Equations 48–50; we confirm below that it holds. Then L_1 ~ M_1^{1/2} R_0, P_1 ~ M_1^{1/4} ℓ, and the relative delay increase due to the terminal branches is Δt_1/t ~ ℓ P_1 M_1 / L_1^2 ~ λ M_1^{1/4}. Adding up the local delays from the main and the first-order branches, we find that in the regime N D^2 ≫ G^{2/3} the total increase in local conduction delay is given by


Minimizing this expression as a function of M_1, we obtain M_1 ~ λ^{-2/3} and Δt/t ~ λ^{5/6}, as appears in Equation 25.
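The minimization behind Equation 25 can be verified numerically. The sketch below scans M_1 for the scaling form Δt/t ~ λ^{1/2} M^{-1/2} + λ M^{1/4}, with all prefactors set to one (an assumption in the spirit of the scaling estimate), and recovers the exponents −2/3 and 5/6:

```python
import math

# Two boundary-effect terms, with l/R0 ~ sqrt(lam) and unit prefactors:
#   dt/t ~ sqrt(lam) / sqrt(M) + lam * M**(1/4)
def delay(M, lam):
    return math.sqrt(lam) / math.sqrt(M) + lam * M ** 0.25

def argmin_M(lam):
    # brute-force scan over M from 1 to 1e10 on a fine logarithmic grid
    Ms = [10 ** (i / 200.0) for i in range(0, 2001)]
    return min(Ms, key=lambda M: delay(M, lam))

lam1, lam2 = 1e-6, 1e-4
M1, M2 = argmin_M(lam1), argmin_M(lam2)

# Fitted exponents: expect M* ~ lam**(-2/3) and (dt/t)_min ~ lam**(5/6)
expM = math.log(M2 / M1) / math.log(lam2 / lam1)
expT = math.log(delay(M2, lam2) / delay(M1, lam1)) / math.log(lam2 / lam1)
print(round(expM, 2), round(expT, 2))   # close to -0.67 and 0.83
```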

Next, we calculate the total surface area A of the pipes. Since Δt/t ~ ℓA/G ~ λ^{5/6}, the total surface area of the branching pipes is


as appears in Equation 24.

To check whether R_0 ≫ P_1, we note that P_1 ~ M_1^{1/4} ℓ. Then R_0 ≫ P_1 requires R_0 ≫ λ^{-1/6} ℓ, since M_1 ~ λ^{-2/3}. In turn, this requires λ ≪ 1, since ℓ/R_0 ~ λ^{1/2}. Thus, R_0 ≫ P_1 whenever λ ≪ 1, a condition that should always be satisfied for the PD.
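This consistency check amounts to bookkeeping of λ exponents; a minimal sketch with exact rational arithmetic (ℓ set to one), restating the relations in the text:

```python
from fractions import Fraction as F

# Exponents of lambda for each quantity, as stated in the text:
M1 = F(-2, 3)        # M_1 ~ lambda**(-2/3)
P1 = M1 / 4          # P_1 ~ M_1**(1/4) * l  ->  lambda**(-1/6)
R0 = F(-1, 2)        # l/R_0 ~ lambda**(1/2) ->  R_0 ~ lambda**(-1/2)

ratio = P1 - R0      # exponent of P_1 / R_0
assert P1 == F(-1, 6)
assert ratio == F(1, 3)   # P_1/R_0 ~ lambda**(1/3): vanishes as lambda -> 0
```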

Third, the nonbranching pipe model corresponds to J = 0. It does not belong to the PD, because A ≪ G/R_0 does not always hold in such a design when λ ≪ 1, i.e., N D^2 ≪ G/ℓ. To see this, note that in the regime N D^2 ≫ G^{2/3}, A ~ G/R_0 must hold in the nonbranching pipe model, because the pipe diameter P_0 is much larger than R_0. In other words, when G/ℓ ≫ N D^2 ≫ G^{2/3}, the gray matter in the nonbranching pipe model resembles a sheet of thickness R_0.

Scaling of the mammalian neocortex.

The theoretical framework developed in this paper allows us to derive several scaling laws for the neocortex. Provided our perturbation theory is valid, the total neocortical volume G should be proportional to the total nonwire volume. Assuming that the nonwire component consists mostly of synapses, we have


First, from Equation 62, we find that the synaptic density, ρ_s, is constant, since ρ_s ~ Nn/G ~ 1/v_s, where the average synapse volume v_s is assumed to be constant across cortical areas and species. The prediction of constant synapse density is supported by experimental observations [31,40,81,82] from a small number of taxa so far, and has been used as a starting point to derive scaling laws of mammalian brains in several theoretical papers [39,45].

Second, we find the neuronal density ρ ~ N/G ~ N/(N n v_s) ~ 1/n. Since ρ scales inversely as the cube root of total brain volume V across mammalian species (ρ ~ V^{-1/3}) [40,83], and the cortical volume is roughly proportional to the brain volume (G ~ V) [84], we find n ~ V^{1/3}, N ~ V^{2/3}, and n ~ N^{1/2}. We note that Braitenberg [31,44] previously proposed the square-root relationship between n and N. He assumed that the cerebral cortex can be divided into N^{1/2} compartments, each containing N^{1/2} neurons; local connectivity within a compartment is nearly all-to-all, and every compartment is connected to every other by a global axon.
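The exponent arithmetic in this paragraph can be written out explicitly. The following sketch tracks powers of V using exact fractions (a bookkeeping aid restating the argument, not a fit to data):

```python
from fractions import Fraction as F

# Exponents in powers of the brain volume V:
rho = F(-1, 3)   # neuronal density rho ~ V**(-1/3)  [40,83]
G = F(1)         # cortical volume G ~ V              [84]

n = -rho         # rho ~ 1/n     =>  n ~ V**(1/3)
N = rho + G      # N ~ rho * G   =>  N ~ V**(2/3)

assert (n, N) == (F(1, 3), F(2, 3))
assert n == N / 2   # n ~ N**(1/2): Braitenberg's square-root relation
```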

Third, we find that the global axon diameter D scales as V^{1/6}. To see this, note that the total white matter volume W is given by N D^2 V^{1/3}, where the average length of global axons in the white matter is assumed to be proportional to the brain size, V^{1/3}. Since N ~ V^{2/3}, and it has been reported that W ~ V^{4/3} across mammalian species [3,39,84–86], we find D ~ V^{1/6}. This is consistent with recent measurements from the corpus callosum, which indicate that the average diameter of global axons increases monotonically with brain size [40]. Then, using n ~ V^{1/3}, D ~ V^{1/6}, and Equation 27, we obtain R_0 ~ V^{4/27}, the expression used in the first section of the Discussion.
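The derivation of D ~ V^{1/6} is again a matter of exponent bookkeeping; a minimal sketch solving W ~ N D^2 V^{1/3} for the exponent of D:

```python
from fractions import Fraction as F

# Powers of V for each quantity, as stated in the text:
W = F(4, 3)         # white matter volume W ~ V**(4/3)   [3,39,84-86]
N = F(2, 3)         # neuron number N ~ V**(2/3)
axon_len = F(1, 3)  # average global axon length ~ V**(1/3)

# W = N * D**2 * V**(1/3)  =>  exponent of D:
D = (W - N - axon_len) / 2
assert D == F(1, 6)   # D ~ V**(1/6)
```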

Acknowledgments

We are grateful to Alexei Koulakov, Sen Song, and Samuel S. H. Wang for helpful discussions and to Georg Striedter for helpful comments on the manuscript. We also thank Maxim Nikitchenko for suggestions about the figures. This research was supported by the Swartz Foundation, the Klingenstein Foundation, and the National Institutes of Health/National Institute of Mental Health grant 69838.

Author Contributions

QW and DBC conceived the theory, performed the calculations, and wrote the paper.

References

1. Wright RD (1934) Some mechanical factors in the evolution of the central nervous system. J Anat 69: 86–88.
2. Mitchison G (1991) Neuronal branching patterns and the economy of cortical wiring. Proc R Soc Lond B Biol Sci 245: 151–158.
3. Zhang K, Sejnowski TJ (2000) A universal scaling law between gray matter and white matter of cerebral cortex. Proc Natl Acad Sci U S A 97: 5621–5626.
4. Van Essen DC (1997) A tension-based theory of morphogenesis and compact wiring in the central nervous system. Nature 385: 313–318.
5. Cherniak C (1992) Local optimization of neuron arbors. Biol Cybern 66: 503–510.
6. Chklovskii DB (2004) Synaptic connectivity and neuronal morphology: Two sides of the same coin. Neuron 43: 609–617.
7. Allman JM (1999) Evolving brains. New York: Scientific American Library. 235 p.
8. Striedter GF (2005) Principles of brain evolution. Donini G, editor. Sunderland (Massachusetts): Sinauer Associates. 436 p.
9. Jerison HJ (1973) Evolution of the brain and intelligence. New York: Academic Press. 496 p.
10. Squire LR, Kandel ER (2000) Memory: From mind to molecules. New York: Scientific American Library. 246 p.
11. Hebb DO (1949) The organization of behavior: A neuropsychological theory. New York: Wiley. 354 p.
12. Laughlin SB, Sejnowski TJ (2003) Communication in neuronal networks. Science 301: 1870–1874.
13. Lennie P (2003) The cost of cortical computation. Curr Biol 13: 493–497.
14. Levy WB, Baxter RA (1996) Energy efficient neural codes. Neural Comput 8: 531–543.
15. Attwell D, Laughlin SB (2001) An energy budget for signaling in the grey matter of the brain. J Cereb Blood Flow Metab 21: 1133–1145.
16. Dickson BJ, Cline H, Polleux F, Ghosh A (2001) Making connections. Meeting: Axon guidance and neural plasticity. EMBO Rep 2: 182–186.
17. Chklovskii DB, Schikorski T, Stevens CF (2002) Wiring optimization in cortical circuits. Neuron 34: 341–347.
18. Chklovskii DB, Stepanyants A (2003) Power-law for axon diameters at branch point. BMC Neurosci 4: 18.
19. Ruppin E, Schwartz EL, Yeshurun Y (1993) Examining the volume efficiency of the cortical architecture in a multi-processor network model. Biol Cybern 70: 89–94.
20. Murre JM, Sturdy DP (1995) The connectivity of the brain: Multi-level quantitative analysis. Biol Cybern 73: 529–545.
21. Douglas RJ, Koch C, Mahowald M, Martin KA, Suarez HH (1995) Recurrent excitation in neocortical circuits. Science 269: 981–985.
22. Binzegger T, Douglas RJ, Martin KA (2004) A quantitative map of the circuit of cat primary visual cortex. J Neurosci 24: 8441–8453.
23. Stepanyants A, Hof PR, Chklovskii DB (2002) Geometry and structural plasticity of synaptic connectivity. Neuron 34: 275–288.
24. Chklovskii DB, Mel BW, Svoboda K (2004) Cortical rewiring and information storage. Nature 431: 782–788.
25. Kalisman N, Silberberg G, Markram H (2005) The neocortical microcircuit as a tabula rasa. Proc Natl Acad Sci U S A 102: 880–885.
26. Ritchie JM (1995) The axon: Structure, function, and pathophysiology. Physiology of axons. New York: Oxford University Press. pp. 68–96.
27. Hodgkin AL (1954) A note on conduction velocity. J Physiol 125: 221–224.
28. Rushton WA (1951) A theory of the effects of fibre size in medullated nerve. J Physiol 115: 101–122.
29. Koch C (1999) Biophysics of computation: Information processing in single neurons. New York: Oxford University Press. 585 p.
30. Hoffmeister B, Janig W, Lisney SJ (1991) A proposed relationship between circumference and conduction velocity of unmyelinated axons from normal and regenerated cat hindlimb cutaneous nerves. Neuroscience 42: 603–611.
31. Braitenberg V, Schuz A (1998) Cortex: Statistics and geometry of neuronal connectivity. Berlin: Springer. 249 p.
32. Markram H, Lubke J, Frotscher M, Sakmann B (1997) Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 275: 213–215.
33. Attwell D, Gibb A (2005) Neuroenergetics and the kinetic design of excitatory synapses. Nat Rev Neurosci 6: 841–849.
34. Swadlow HA, Waxman SG (1976) Variations in conduction velocity and excitability following single and multiple impulses of visual callosal axons in the rabbit. Exp Neurol 53: 128–150.
35. Griff ER, Greer CA, Margolis F, Ennis M, Shipley MT (2000) Ultrastructural characteristics and conduction velocity of olfactory receptor neuron axons in the olfactory marker protein-null mouse. Brain Res 866: 227–236.
36. Ringo JL (1991) Neuronal interconnection as a function of brain size. Brain Behav Evol 38: 1–6.
37. Ringo JL, Doty RW, Demeter S, Simard PY (1994) Time is of the essence: A conjecture that hemispheric specialization arises from interhemispheric conduction delay. Cereb Cortex 4: 331–343.
38. Watts DJ, Strogatz SH (1998) Collective dynamics of “small-world” networks. Nature 393: 440–442.
39. Changizi MA (2001) Principles underlying mammalian neocortical scaling. Biol Cybern 84: 207–215.
40. Harrison KH, Hof PR, Wang SS (2002) Scaling laws in the mammalian neocortex: Does form provide clues to function? J Neurocytol 31: 289–298.
41. Sporns O, Kotter R (2004) Motifs in brain networks. PLoS Biol 2: e369. DOI: 10.1371/journal.pbio.0020369.
42. Karbowski J (2003) How does connectivity between cortical areas depend on brain size? Implications for efficient computation. J Comput Neurosci 15: 347–356.
43. Deacon T (1990) Rethinking mammalian brain evolution. Amer Zool 30: 629–705.
44. Braitenberg V (2001) Brain size and number of neurons: An exercise in synthetic neuroanatomy. J Comput Neurosci 10: 71–77.
45. Stevens CF (1989) How cortical interconnectedness varies with network size. Neural Comput 1: 473–479.
46. Hursh J (1939) Conduction velocity and diameter of nerve fibers. Amer J Physiol 127: 131–139.
47. Boyd IA, Kalu KU (1979) Scaling factor relating conduction velocity and diameter for myelinated afferent nerve fibres in the cat hind limb. J Physiol 289: 277–297.
48. Stepanyants A, Tamas G, Chklovskii DB (2004) Class-specific features of neuronal wiring. Neuron 43: 251–259.
49. Carmichael ST, Price JL (1996) Connectional networks within the orbital and medial prefrontal cortex of macaque monkeys. J Comp Neurol 371: 179–207.
50. Felleman DJ, Van Essen DC (1991) Distributed hierarchical processing in the primate cerebral cortex. Cereb Cortex 1: 1–47.
51. Prothero JW, Sundsten JW (1984) Folding of the cerebral cortex in mammals. A scaling model. Brain Behav Evol 24: 152–167.
52. Hofman MA (1985) Size and shape of the cerebral cortex in mammals. I. The cortical surface. Brain Behav Evol 27: 28–40.
53. Prothero J (1997) Cortical scaling in mammals: A repeating units model. J Hirnforsch 38: 195–207.
54. Goldman-Rakic PS (1982) Cytoarchitectonic heterogeneity of the primate neostriatum: Subdivision into Island and Matrix cellular compartments. J Comp Neurol 205: 398–413.
55. Herkenham M, Edley SM, Stuart J (1984) Cell clusters in the nucleus accumbens of the rat, and the mosaic relationship of opiate receptors, acetylcholinesterase and subcortical afferent terminations. Neuroscience 11: 561–593.
56. Shepherd GM (1998) The synaptic organization of the brain. New York: Oxford University Press. 648 p.
57. Herkenham M, Pert CB (1981) Mosaic distribution of opiate receptors, parafascicular projections and acetylcholinesterase in rat striatum. Nature 291: 415–418.
58. Graybiel AM, Ragsdale CW Jr., Yoneoka ES, Elde RP (1981) An immunohistochemical study of enkephalins and other neuropeptides in the striatum of the cat with evidence that the opiate peptides are arranged to form mosaic patterns in register with the striosomal compartments visible by acetylcholinesterase staining. Neuroscience 6: 377–397.
59. Gerfen CR (1989) The neostriatal mosaic: Striatal patch-matrix organization is related to cortical lamination. Science 246: 385–388.
60. Partadiredja G, Miller R, Oorschot DE (2003) The number, size, and type of axons in rat subcortical white matter on left and right sides: A stereological, ultrastructural study. J Neurocytol 32: 1165–1179.
61. Brown LL, Feldman SM, Smith DM, Cavanaugh JR, Ackermann RF, et al. (2002) Differential metabolic activity in the striosome and matrix compartments of the rat striatum during natural behaviors. J Neurosci 22: 305–314.
62. Oorschot DE (1996) Total number of neurons in the neostriatal, pallidal, subthalamic, and substantia nigral nuclei of the rat basal ganglia: A stereological study using the Cavalieri and optical disector methods. J Comp Neurol 366: 580–599.
63. Vates GE, Broome BM, Mello CV, Nottebohm F (1996) Auditory pathways of caudal telencephalon and their relation to the song system of adult male zebra finches. J Comp Neurol 366: 613–642.
64. Cullheim S, Fleshman JW, Glenn LL, Burke RE (1987) Membrane area and dendritic structure in type-identified triceps surae alpha motoneurons. J Comp Neurol 255: 68–81.
65. Friston KJ, Frith CD, Liddle PF, Frackowiak RS (1993) Functional connectivity: The principal-component analysis of large (PET) data sets. J Cereb Blood Flow Metab 13: 5–14.
66. Sporns O, Tononi G, Edelman GM (2000) Theoretical neuroanatomy: Relating anatomical and functional connectivity in graphs and cortical connection matrices. Cereb Cortex 10: 127–141.
67. Kotter R, Stephan KE, Palomero-Gallagher N, Geyer S, Schleicher A, et al. (2001) Multimodal characterisation of cortical areas by multivariate analyses of receptor binding and connectivity data. Anat Embryol (Berl) 204: 333–350.
68. Hilgetag CC, Grant S (2000) Uniformity, specificity and variability of corticocortical connectivity. Philos Trans R Soc Lond B Biol Sci 355: 7–20.
69. Stephan KE, Hilgetag CC, Burns GA, O'Neill MA, Young MP, et al. (2000) Computational analysis of functional connectivity between areas of primate cerebral cortex. Philos Trans R Soc Lond B Biol Sci 355: 111–126.
70. Stephan KE, Kamper L, Bozkurt A, Burns GA, Young MP, et al. (2001) Advanced database methodology for the collation of connectivity data on the macaque brain (CoCoMac). Philos Trans R Soc Lond B Biol Sci 356: 1159–1186.
71. Young MP, Scannell JW, O'Neill MA, Hilgetag CC, Burns G, et al. (1995) Non-metric multidimensional scaling in the analysis of neuroanatomical connection data and the organization of the primate cortical visual system. Philos Trans R Soc Lond B Biol Sci 348: 281–308.
72. Hilgetag CC, O'Neill MA, Young MP (2000) Hierarchical organization of macaque and cat cortical sensory systems explored with a novel network processor. Philos Trans R Soc Lond B Biol Sci 355: 71–89.
73. Hilgetag CC, Burns GA, O'Neill MA, Scannell JW, Young MP (2000) Anatomical connectivity defines the organization of clusters of cortical areas in the macaque monkey and the cat. Philos Trans R Soc Lond B Biol Sci 355: 91–110.
74. Sporns O, Zwi JD (2004) The small world of the cerebral cortex. Neuroinformatics 2: 145–162.
75. Tononi G, Sporns O (2003) Measuring information integration. BMC Neurosci 4: 31.
76. Tononi G, Sporns O, Edelman GM (1994) A measure for brain complexity: Relating functional segregation and integration in the nervous system. Proc Natl Acad Sci U S A 91: 5033–5037.
77. Friston K (2002) Functional integration and inference in the brain. Prog Neurobiol 68: 113–143.
78. Penny WD, Stephan KE, Mechelli A, Friston KJ (2004) Modelling functional integration: A comparison of structural equation and dynamic causal models. Neuroimage 23 Suppl 1: S264–S274.
79. Kotter R, Stephan KE (2003) Network participation indices: Characterizing component roles for information processing in neural networks. Neural Netw 16: 1261–1275.
80. Passingham RE, Stephan KE, Kotter R (2002) The anatomical basis of functional localization in the cortex. Nat Rev Neurosci 3: 606–616.
81. Cragg BG (1967) The density of synapses and neurones in the motor and visual areas of the cerebral cortex. J Anat 101: 639–654.
82. Schuz A, Demianenko GP (1995) Constancy and variability in cortical structure. A study on synapses and dendritic spines in hedgehog and monkey. J Hirnforsch 36: 113–122.
83. Tower D (1954) Structural and functional organization of mammalian cerebral cortex: The correlation of neuronal density with brain size. J Comp Neurol 101: 19–52.
84. Hofman MA (1989) On the evolution and geometry of the brain in mammals. Prog Neurobiol 32: 137–158.
85. Hofman MA (1988) Size and shape of the cerebral cortex in mammals. II. The cortical volume. Brain Behav Evol 32: 17–26.
86. Bush EC, Allman JM (2003) The scaling of white matter to gray matter in cerebellum and neocortex. Brain Behav Evol 61: 1–5.