7/26/2006

Neural Noise and Information Theory

Neural activity in a given population of neurons is never the same twice, even when recording from exactly the same neurons after exposure to exactly the same stimulus. Reassuringly, there is thought to be some static underlying "tuning curve" describing the population's ideal response, and any deviation from that ideal curve represents merely the noisy nature of neural processing - at least, many neuroscientists feel safe in assuming so.

Unfortunately, the deviations of neural population activity from this ideal "tuning curve" are correlated - in other words, the so-called neural "noise" actually appears to carry structure, and perhaps information. Consider the picture at the start of this post, in which the ideal tuning curve is represented as a solid line, with uncorrelated and correlated noise contrasted side by side. It's easy to see why this kind of stochastic behavior would cause a problem: sure, over time, neural activity averages out to the ideal curve, but on any given day the actual measured activity will deviate from the norm in some systematic fashion. Furthermore, this noise is also temporally correlated at a finer scale, so that any two measurements taken close together in time will tend to show similar deviations from the norm.
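To make the contrast concrete, here is a minimal sketch in Python (NumPy assumed; the tuning curve, noise level, and correlation parameter are all illustrative, not taken from the paper) of measurements drawn around an idealized tuning curve with independent versus temporally correlated noise:

```python
import numpy as np

rng = np.random.default_rng(0)

stimulus = np.linspace(-90, 90, 181)               # stimulus values (degrees)
tuning = 30 * np.exp(-stimulus**2 / (2 * 20**2))   # idealized "tuning curve"

# Uncorrelated noise: each measurement deviates independently.
uncorrelated = tuning + rng.normal(0, 3, size=tuning.shape)

# Correlated noise: nearby measurements tend to share their deviation,
# modeled here as an AR(1) process with autocorrelation rho.
rho = 0.9
deviation = np.zeros_like(tuning)
deviation[0] = rng.normal(0, 3)
for t in range(1, len(deviation)):
    deviation[t] = rho * deviation[t - 1] + rng.normal(0, 3 * np.sqrt(1 - rho**2))
correlated = tuning + deviation
```

Plotting uncorrelated and correlated against tuning reproduces the side-by-side contrast described above: the correlated trace wanders systematically above or below the ideal curve for long stretches instead of scattering evenly around it.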

The focus of Averbeck, Latham, and Pouget's recent Nature Reviews Neuroscience (NRN) review article is on the implications this fact has for neuroscience at large. They identify several important points:

First, one must know the correlations among individual neurons within a population in order to determine how much, and what kind of, information that population represents; depending on the degree and direction of the correlations, the information can be greater or smaller than it would be if the neurons were independent.
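One standard way to make this point quantitative is linear Fisher information, I = f'^T Sigma^-1 f', where f' is the vector of tuning-curve slopes and Sigma is the noise covariance matrix. Below is a hedged sketch (Python/NumPy, made-up numbers; the review works with Shannon information, but the qualitative point is the same) for a two-neuron population:

```python
import numpy as np

def fisher_info(slopes, variances, corr):
    """Linear Fisher information for two neurons with noise correlation corr."""
    sd = np.sqrt(variances)
    cov = np.array([[variances[0], corr * sd[0] * sd[1]],
                    [corr * sd[0] * sd[1], variances[1]]])
    f_prime = np.asarray(slopes, dtype=float)
    return f_prime @ np.linalg.inv(cov) @ f_prime

# Two similarly tuned neurons (equal slopes, equal variances).
for rho in (-0.5, 0.0, 0.5):
    print(f"rho = {rho:+.1f}: I = {fisher_info([1.0, 1.0], [1.0, 1.0], rho):.2f}")
# Prints I = 4.00, 2.00, 1.33: for same-sign slopes, positive correlation
# lowers the information and negative correlation raises it.
```

The effect reverses when the two neurons' slopes have opposite signs, which is why both the degree and the direction of correlation matter.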

These correlations can also be stimulus-modulated, such that two neurons may show positively correlated firing rates for one stimulus but negatively correlated firing rates for another.
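A toy illustration of that possibility (made-up firing rates and correlation values, assuming Gaussian trial-to-trial variability) is easy to generate and verify:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_trials(mean_rates, corr, n_trials=5000):
    """Draw trial-by-trial rates for two neurons with a given noise correlation."""
    cov = np.array([[1.0, corr], [corr, 1.0]])
    return rng.multivariate_normal(mean_rates, cov, size=n_trials)

for stim, corr in (("A", +0.4), ("B", -0.4)):
    trials = sample_trials([10.0, 12.0], corr)
    measured = np.corrcoef(trials.T)[0, 1]
    print(f"stimulus {stim}: target correlation {corr:+.1f}, measured {measured:+.2f}")
```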

Information-theoretic analyses of how correlated noise affects the information conveyed by populations of only two neurons show that, on average, the effects are small and can be positive or negative; whether this also holds for large populations is largely unknown. However, based on a simple simulation of both positively and negatively correlated noise within larger neural populations, the authors find (not surprisingly) that the effect of noise correlations on Shannon information is a nonlinear function of population size. Studies that shuffle the temporal order of trials - thereby destroying the noise correlations - may therefore, in some cases, grossly misstate the amount and kind of information contained in measured neural activity.
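The flavor of that nonlinearity can be seen in a standard back-of-the-envelope result for linear Fisher information (a cousin of the Shannon measure the authors analyze, not their actual simulation): for N identically tuned neurons with uniform pairwise noise correlation c, I(N) = N * f'^2 / (sigma^2 * (1 + (N - 1) * c)). A short sketch with illustrative parameters:

```python
def info(n, slope=1.0, sigma=1.0, c=0.0):
    """Linear Fisher information for n identical neurons with uniform correlation c."""
    return n * slope**2 / (sigma**2 * (1 + (n - 1) * c))

for n in (10, 100, 1000, 10000):
    print(f"N = {n:>5}: independent I = {info(n):7.1f}, "
          f"correlated (c = 0.01) I = {info(n, c=0.01):5.1f}")
# Independent information grows linearly with N; with c = 0.01 it levels
# off near 1/c = 100, no matter how many neurons are added.
```

That early plateau is exactly the saturation scenario taken up in the conclusion below.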

Finally, a titillating conclusion: if the noise correlation is positive, it's possible that the information capacity of large neural networks "saturates," or levels off, relatively quickly. Does it saturate far below the information capacity of, say, the retina or the cochlea? Unfortunately, in the absence of a technology capable of recording individual spike trains from thousands of neurons simultaneously, this question is likely unanswerable.

1 Comment:

Blogger SCLin said...

This is very similar to an article by Schneidman et al. The authors studied pairwise correlations in large populations of simultaneously recorded neurons and showed that the information carried by the population does seem to saturate (~200 neurons), precisely because of the correlation structure between spike trains.

9/25/2006 05:02:00 PM  
