Reader Comments

On the benefits of fluctuations

Posted by tmasquelier on 22 Jun 2012 at 09:28 GMT

A very important property is that the precision of synchrony between trials, as estimated by the width of the SAC (Fig. 5E; see Methods), reflects the similarity of the input signals (measured by the signal to noise ratio), rather than the intrinsic timescale of the signal fluctuations (seen in the autocorrelation of the signal in Fig. 5C, right). In particular, when noise level goes to 0, precision converges to 0 ms rather than to the timescale of input fluctuations (Fig. 5E, left).
http://ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1002561#article1.body1.sec2.sec6.p2

This is indeed a very important property, but as you said, it only works if the neuron is in the fluctuation-driven mode (a.k.a. coincidence-detection mode), as opposed to the mean-driven (a.k.a. integrator) mode.

In the mean-driven mode with suprathreshold input, jitter accumulates, and arbitrarily small noise will eventually desynchronize the spikes completely (between trials, or between otherwise identical neurons).

Now my point is: some stimuli might change too slowly to enable the fluctuation-driven mode per se. Think of a binary stimulus that is either up or down, but stays up for durations >> the membrane time constant. The up state has to be suprathreshold (otherwise there would be no spikes at all), so jitter will accumulate. This is where the shared oscillatory input you mention next (Hopfield) becomes useful. In fact, the shared input does not even need to be periodic: what matters is that it fluctuates on timescales of the order of the membrane time constant (or smaller). That way, by choosing an appropriate threshold, you can ensure the neurons are in the fluctuation-driven mode (with a subthreshold average input).
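
To make the distinction concrete, here is a minimal sketch (my own illustrative code, not the paper's): a leaky integrate-and-fire neuron driven either by a constant suprathreshold input, or by a shared aperiodic input that fluctuates on a ~5 ms timescale with a subthreshold mean; in both cases a small amount of trial-specific noise is added, and all parameter values are assumptions chosen for illustration. Between-trial precision should degrade over the course of the trial in the first case and stay tight in the second.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, T, tau_m = 0.1e-3, 1.0, 20e-3     # time step, trial duration, membrane time constant (s)
    n_steps = int(T / dt)
    v_th, v_reset = 1.0, 0.0              # dimensionless threshold and reset
    n_trials, sigma_noise = 20, 0.05      # trials per condition, trial-specific noise amplitude

    def run_trial(drive):
        """Euler-integrate dv/dt = (-v + drive)/tau_m plus independent noise; return spike times."""
        v, spikes = 0.0, []
        for i in range(n_steps):
            v += dt / tau_m * (-v + drive[i]) + sigma_noise * np.sqrt(dt / tau_m) * rng.standard_normal()
            if v >= v_th:
                spikes.append(i * dt)
                v = v_reset
        return np.array(spikes)

    def precision(trials, t0, t1):
        """Median distance (ms) from each spike in [t0, t1) to the nearest spike of another trial:
        a crude stand-in for the width of the between-trial cross-correlogram."""
        d = []
        for a, sa in enumerate(trials):
            for b, sb in enumerate(trials):
                if a == b or len(sb) == 0:
                    continue
                for t in sa[(sa >= t0) & (sa < t1)]:
                    d.append(np.min(np.abs(sb - t)))
        return 1e3 * np.median(d) if d else float("nan")

    # Mean-driven: constant suprathreshold drive, identical on every trial.
    drive_mean = np.full(n_steps, 1.5)

    # Fluctuation-driven: shared drive with a subthreshold mean (0.8) but fast fluctuations
    # (low-pass filtered noise, ~5 ms correlation time), identical on every trial.
    tau_s, shared = 5e-3, np.zeros(n_steps)
    for i in range(1, n_steps):
        shared[i] = shared[i-1] * (1 - dt / tau_s) + np.sqrt(2 * dt / tau_s) * rng.standard_normal()
    drive_fluct = 0.8 + 0.5 * shared

    for name, drive in [("mean-driven", drive_mean), ("fluctuation-driven", drive_fluct)]:
        trials = [run_trial(drive) for _ in range(n_trials)]
        print(f"{name}: precision {precision(trials, 0.0, 0.25):.2f} ms early in the trial, "
              f"{precision(trials, 0.75, 1.0):.2f} ms late in the trial")

The exact numbers depend on the assumed parameters, but the qualitative pattern (accumulating versus non-accumulating jitter) is the point.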

Note that this leads to the situation you explore in Fig 6: spikes lock to the shared input, not to the stimulus onset, so plotting a single-cell PSTH won't help. LFPs may be a way to estimate the shared input, and there is indeed evidence that spikes tend to lock to LFP oscillations and that their phases may encode information (e.g., phase precession in the hippocampus). But ultimately, downstream neurons only care about relative spike times, not LFPs, so, as you mention in the conclusion, the best experimental approach to validate your theory is indeed multielectrode recordings and cross-correlograms.
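
In case it is useful, here is a minimal sketch of such a cross-correlogram computation (the two spike trains and all values below are made up for illustration): a histogram of pairwise spike-time differences within a ±50 ms window, in which tight relative timing shows up as a narrow central peak.

    import numpy as np

    def cross_correlogram(spikes_a, spikes_b, window=0.05, bin_size=0.001):
        """Histogram of spike-time differences (spikes_b - spikes_a) within +/- window seconds."""
        diffs = []
        for t in spikes_a:
            near = spikes_b[(spikes_b >= t - window) & (spikes_b <= t + window)]
            diffs.extend(near - t)
        bins = np.arange(-window, window + bin_size, bin_size)
        counts, edges = np.histogram(diffs, bins=bins)
        return 0.5 * (edges[:-1] + edges[1:]), counts

    # Hypothetical usage: neuron B fires with ~2 ms of jitter around neuron A's spike times.
    rng = np.random.default_rng(0)
    spikes_a = np.sort(rng.uniform(0.0, 10.0, 200))                  # seconds
    spikes_b = np.sort(spikes_a + 0.002 * rng.standard_normal(200))
    centers, counts = cross_correlogram(spikes_a, spikes_b)
    print(f"correlogram peak at {1e3 * centers[np.argmax(counts)]:.1f} ms lag")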

Now if we take the example of natural vision, it seems that signals fluctuate rapidly enough that the shared input is not necessary (Fig 3 of http://dx.doi.org/10.1007/s10827-011-0361-9). This is at least what I have found by exposing the Virtual Retina simulator (http://dx.doi.org/10.1007/s10827-008-0108-4) to natural videos (http://dx.doi.org/10.1007/s00422-003-0434-6). In accordance with your prediction, this leads to a situation in which the relative spike times (in the model's retina and LGN) are much more precise than the timescales of the videos (my Fig 4), consistent with experiments in cat LGN using the same videos (http://dx.doi.org/10.1038/nature06105 and http://dx.doi.org/10.1371/journal.pbio.0060324). I also modelled downstream V1 neurons, equipped with STDP, and I agree with your Fig 12D: the SRFs of LGN neurons are indeed oriented edges. As a result, orientation selectivity progressively emerged among the V1 neurons, and the responses were contrast invariant, in agreement with the hyperplane SRF (your Fig 4B).
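
Since STDP came up, here is a minimal sketch of a standard pair-based STDP update, with illustrative parameter values (assumptions for this example, not necessarily those of the model mentioned above): the synapse is potentiated when the presynaptic spike precedes the postsynaptic one, and depressed otherwise.

    import numpy as np

    # Illustrative parameters (assumed for this sketch).
    a_plus, a_minus = 0.01, 0.012         # potentiation / depression amplitudes
    tau_plus, tau_minus = 0.017, 0.034    # STDP time constants (s)

    def stdp_dw(t_pre, t_post):
        """Pair-based STDP: weight change for a single pre/post spike pair."""
        dt = t_post - t_pre
        if dt > 0:                                   # pre before post -> potentiation
            return a_plus * np.exp(-dt / tau_plus)
        return -a_minus * np.exp(dt / tau_minus)     # post before (or with) pre -> depression

    # Example: an afferent firing 5 ms before the postsynaptic neuron is strengthened,
    # and weakened if it fires 5 ms after.
    print(stdp_dw(t_pre=0.100, t_post=0.105))   # > 0
    print(stdp_dw(t_pre=0.105, t_post=0.100))   # < 0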

Congrats on your paper. Very appealing theory.

Timothée Masquelier

No competing interests declared.