Monday, November 12, 2012

Normalization as a canonical neural computation

Carandini, M. & Heeger, D.J. (2012) Normalization as a canonical neural computation. Nature Reviews Neuroscience 13: 51-62.

The normalization equation:

R = D / (s + N)

D is the non-normalized drive of the neuron, s is a constant that prevents divide-by-zero, and N is the normalization factor, typically the pooled activity of other neurons.
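
Here's a minimal sketch of that equation in Python (my own toy example; I'm assuming the normalization factor N is just the summed drive of a pool of neurons):

```python
import numpy as np

def normalize(drive, s=1.0):
    """Divisive normalization: each neuron's drive is divided by the
    summed drive of the pool plus a constant s that prevents divide-by-zero."""
    drive = np.asarray(drive, dtype=float)
    N = drive.sum()           # normalization factor: pooled activity
    return drive / (s + N)

# Scaling all inputs up preserves the shape of the output pattern
# while the responses saturate as the total drive grows:
print(normalize([1.0, 2.0, 4.0]))
print(normalize([10.0, 20.0, 40.0]))
```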

Normalization shows up in a ton of areas. It can be implemented in many different ways, with different underlying mechanisms. Here are the systems they talk about:

  • Invertebrate olfactory system (Drosophila)
  • Retina
  • V1
  • MT, MST

They also include exponents in the normalization equation, which can change the shapes of the curves. They take the equation and fit it to a large amount of data pretty nicely. What's interesting is that many of the figures they describe are not population-code normalization as purely as I've been modeling. They often describe the normalization as a rightward shift of the IO function on a log scale. This means that the input can still reach the saturation level if it is strong enough (but it has to be multiplicatively larger).


So you can see in panels B and D (of a figure from the paper, not reproduced here) that the IO functions aren't just being purely scaled. However, the normalization equation they describe fits the data quite well, and a small sketch of the rightward-shift behavior is below.
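
To convince myself of the rightward-shift point, here's a sketch of the equation with exponents (a Naka-Rushton-style form; the parameter values are mine). Adding drive to the normalization pool raises the effective semi-saturation point, which slides the curve rightward on a log-contrast axis instead of just scaling it down:

```python
import numpy as np

def contrast_response(c, n=2.0, s=0.1, pool=0.0, Rmax=1.0):
    """Normalization with exponent n: R = Rmax * c^n / (s^n + pool^n + c^n).
    'pool' stands in for extra drive from the normalization pool."""
    c = np.asarray(c, dtype=float)
    return Rmax * c**n / (s**n + pool**n + c**n)

contrasts = np.logspace(-2, 0, 5)             # 1% to 100% contrast
print(contrast_response(contrasts))            # baseline curve
print(contrast_response(contrasts, pool=0.3))  # shifted rightward on log axis
```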

They then talk about attention and gain control. Attention "multiplicatively enhances the stimulus drive before normalization".
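
A sketch of that idea (my toy version, in the spirit of the Reynolds & Heeger normalization model of attention): attention is a per-neuron gain applied to the drive before the pooled division, so the attended input also contributes more to the normalization pool:

```python
import numpy as np

def attend_then_normalize(drive, gain, s=1.0):
    """Attention multiplies the stimulus drive *before* normalization."""
    d = np.asarray(drive, dtype=float) * np.asarray(gain, dtype=float)
    return d / (s + d.sum())

drive = [1.0, 1.0, 1.0]
print(attend_then_normalize(drive, gain=[1.0, 1.0, 1.0]))  # no attention
print(attend_then_normalize(drive, gain=[2.0, 1.0, 1.0]))  # attend neuron 0
```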

They make a distinction between two types of gain control: "contrast-gain" is a left-right shift of the IO function on a log scale (horizontal stretching), and "response-gain" is an up-down scaling of the IO function.
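
In terms of the equation, a sketch of the difference (my parameterization): contrast-gain is like changing the semi-saturation constant, while response-gain is like changing the maximum response:

```python
import numpy as np

def io(c, Rmax=1.0, s=0.1, n=2.0):
    """Basic IO (contrast-response) function: R = Rmax * c^n / (s^n + c^n)."""
    c = np.asarray(c, dtype=float)
    return Rmax * c**n / (s**n + c**n)

c = np.logspace(-2, 0, 5)
print(io(c))             # baseline
print(io(c, s=0.2))      # contrast-gain: curve slides rightward on a log axis
print(io(c, Rmax=0.5))   # response-gain: curve is scaled down vertically
```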

There are clues that both feedforward and feedback circuitry are involved in divisive processes.

They briefly mention Holt and Koch. Then they say: "It is now agreed that the effect of conductance increases on firing rates is divisive, but only if the source of increased conductance varies in time", and cite Silver and Chance & Abbott.
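
To build intuition for that claim, here's a toy leaky integrate-and-fire sketch (the parameters and setup are mine, not from any of those papers): a background shunting conductance that is either constant or fluctuating in time, driven by a constant input current:

```python
import numpy as np

def lif_rate(I, g_bg_mean=0.0, g_bg_std=0.0, T=2000.0, dt=0.1, seed=0):
    """Firing rate (spikes/s) of a leaky integrate-and-fire neuron with an
    added background shunting conductance g_bg. Units: nS, mV, pA, pF, ms.
    The idea being illustrated: a constant g_bg mostly shifts the f-I curve
    (subtractive, a la Holt & Koch), while a time-varying g_bg changes its
    gain (divisive). This is a toy illustration, not their actual models."""
    rng = np.random.default_rng(seed)
    gL, EL, C = 10.0, -70.0, 200.0       # leak conductance, rest, capacitance
    Vth, Vreset = -50.0, -70.0           # spike threshold and reset
    V, spikes = EL, 0
    for _ in range(int(T / dt)):
        g_bg = max(0.0, g_bg_mean + g_bg_std * rng.standard_normal())
        V += (-(gL + g_bg) * (V - EL) + I) * dt / C
        if V >= Vth:
            V, spikes = Vreset, spikes + 1
    return 1000.0 * spikes / T           # convert ms to s

for I in (250.0, 400.0, 550.0):          # input current in pA
    print(I,
          lif_rate(I),                                       # control
          lif_rate(I, g_bg_mean=15.0),                       # constant conductance
          lif_rate(I, g_bg_mean=15.0, g_bg_std=30.0))        # fluctuating conductance
```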


So I think what they call contrast-gain and response-gain may be nice to have in the paper. I can also talk more about the temporal gain-control stuff, since it is not needed in my model.

Also, this makes me think about ARBMs. I was wondering what the effect of the top-down signals should be on the output, and if the top-down signals are equivalent to attention, then these papers say attention is gain-increasing. So the top-down effects of the ARBMs should increase the gain of the population that they feed back to. Should look into whether burst firing is somehow like a multiplication of spiking...
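
A purely hypothetical sketch of what that could look like (the gain rule and all names here are mine, not from the ARBM work): top-down feedback enters as a multiplicative gain on the bottom-up drive of the population it targets, rather than as an additive bias:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_activation(v, W, b, topdown_gain=1.0):
    """Hypothetical rule: top-down feedback multiplicatively scales the
    bottom-up drive (a gain increase), instead of adding to it (a bias)."""
    return sigmoid(topdown_gain * (W @ v + b))

rng = np.random.default_rng(0)
v = rng.random(4)                  # visible-layer activity
W = rng.standard_normal((3, 4))    # weights to a 3-unit hidden population
b = np.zeros(3)
print(hidden_activation(v, W, b))                    # no top-down signal
print(hidden_activation(v, W, b, topdown_gain=2.0))  # top-down "attention"
```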
