Wednesday, November 14, 2012

Neuronal arithmetic

Silver, R.A. (2010) Neuronal arithmetic. Nature Reviews Neuroscience 11: 474–489.

So, basically this is the guy whose whole thing is that you need synaptic noise in order to get multiplicative effects from shunting inhibition.

But he also has some good basic ideas about population codes and about why certain types of computations are needed.




Figure 1 | The rate-coded neuronal input–output relationship and possible
arithmetic operations performed by modulatory inputs. a | For rate-coded neuronal
signalling, a driving input typically consists of asynchronous excitatory synaptic input from
multiple presynaptic neurons firing in a sustained manner (shown in red). A neuron may
also receive a modulatory input, such as inhibition (shown in green), that alters the way the
neuron transforms its synaptic input into output firing rate (shown in blue). b | The
input–output (I–O) relationship between the total (or mean) driving input rate (d) and the
response that is represented by the output firing rate (R). The arrow indicates the rheobase
(minimum synaptic input that generates an action potential). c | Rate-coded I–O
relationships can be altered by changing the strength of the modulatory input (m), which
may be mediated by a different inhibitory or excitatory input. If this shifts the I–O
relationship along the x-axis to the right or left, changing the rheobase but not the shape of
the curve, an additive operation has been performed on the input (shown by orange
curves). This input modulation is often referred to as linear integration because the synaptic
inputs are being summed. d | An additive operation can also be performed on output firing.
In this case a modulatory input shifts the I–O relationship up or down along the y-axis
(shown by orange curves). e,f | If the driving and modulatory inputs are multiplied together
by the neuron, changing the strength of a modulatory input will change the slope, or gain,
of the I–O relationship without changing the rheobase. A multiplicative operation can
produce a scaling of the I–O relationship along either the x-axis (input modulation; e)
or the y-axis (output modulation; f). Although both of these modulations change the gain of
the I–O relationship, only output gain modulation scales the neuronal dynamic range (f).

So he talks about "input gain modulation", where the max value doesn't change (e), and "output gain modulation", where the max value is scaled (f). Basically, due to the sigmoidal shape, the slope is changed in both scenarios, and he says that this counts as gain control in both cases.
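To keep panels e and f straight for myself, here's a toy sigmoid I-O curve (all parameter values made up, not from the paper) showing that input gain modulation changes the slope but leaves the maximum response alone, while output gain modulation scales the maximum too:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def io_rate(d, input_gain=1.0, output_gain=1.0, theta=5.0, r_max=100.0):
    """Toy rate-coded I-O curve: R = output_gain * r_max * sigmoid(input_gain * (d - theta)).
    theta plays the role of the rheobase, r_max the top of the dynamic range."""
    return output_gain * r_max * sigmoid(input_gain * (d - theta))

drive = [i * 0.5 for i in range(41)]  # driving input rates 0..20

control    = [io_rate(d) for d in drive]
input_mod  = [io_rate(d, input_gain=0.5)  for d in drive]  # like panel e: x-axis scaling
output_mod = [io_rate(d, output_gain=0.5) for d in drive]  # like panel f: y-axis scaling

# Input modulation flattens the slope but both saturate near r_max;
# only output modulation shrinks the dynamic range.
print(max(control), max(input_mod), max(output_mod))
```

So with these made-up numbers, `max(control)` and `max(input_mod)` both sit near 100 while `max(output_mod)` is near 50, which is exactly the "only output gain modulation scales the dynamic range" point from the caption.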

So yeah, all the early experimental work on shunting inhibition was just based on Ohm's law and currents. He makes it sound amazing, but then he says, yeah, Holt and Koch showed it doesn't work. But then he makes an argument for how it could work if there is "synaptic noise", and he does show some experimental evidence to back it up. I need to go back and look at this.

But right, his mechanism seems strange to me (and, as of now, useless, but let me explain it the best I can). The theoretical idea is explained in the Larry Abbott paper, which I'll give a try and read later. So... the idea is that you have balanced excitation and inhibition, which basically means the background noise is increased. That produces a higher variance in the current fluctuations, and then, through an integrate-and-fire mechanism, the higher variance causes the spiking I-O function to get scaled.

But wouldn't a higher variance make it spike more often, not less? That seems like backwards gain control: the network has a lot of activity, so crank activity up even faster? Hmm... looking at their figure, it looks like the rate goes down. They also show some data: with dynamic clamp they add in excitatory and shunting currents and show that the neuron behaves the way they predict.
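Thinking about it more, I suspect the answer is that noise only increases the rate below rheobase; above it, smearing the effective threshold pulls the rate down, so the whole curve flattens. A crude noisy-threshold sketch of my own (spike probability as a Gaussian cumulative, i.e. erf of the distance to threshold; parameters made up) shows both effects:

```python
import math

def noisy_io(drive, theta=10.0, sigma=1.0, r_max=100.0):
    """Crude noisy-threshold I-O curve: rate proportional to the probability
    that drive + Gaussian noise (std sigma) exceeds the threshold theta,
    i.e. r = r_max * Phi((drive - theta) / sigma)."""
    z = (drive - theta) / (sigma * math.sqrt(2.0))
    return r_max * 0.5 * (1.0 + math.erf(z))

# Above rheobase (drive > theta): more noise LOWERS the rate.
print(noisy_io(12.0, sigma=1.0), noisy_io(12.0, sigma=4.0))

# Below rheobase (drive < theta): more noise RAISES the rate.
print(noisy_io(8.0, sigma=1.0), noisy_io(8.0, sigma=4.0))
```

So the curve pivots around the threshold: higher variance lifts the foot and lowers the shoulder, which reads out as a reduced slope rather than a uniform rate increase. (This is just my own toy picture, not the paper's model.)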

They basically explain the whole "derive Holt and Koch" thing in this paper. It's not as pretty as my derivation, but yeah, they explain why it works that way mathematically (they don't actually derive the linear equation, though). But I need to look at the experimental work more carefully.
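For my own notes, the standard integrate-and-fire version of the Holt and Koch argument (my sketch, not their exact derivation) goes like this, with total conductance $g$ (leak plus shunt), driving current $I$, reset at $V=0$, and spike threshold $\theta$:

```latex
% Membrane equation and its solution from reset:
C\dot{V} = -gV + I
\quad\Rightarrow\quad
V(t) = \frac{I}{g}\left(1 - e^{-t/\tau}\right), \qquad \tau = C/g

% Time to reach threshold, and firing rate, for I > g\theta:
T = -\tau \ln\!\left(1 - \frac{g\theta}{I}\right), \qquad f = \frac{1}{T}

% For I \gg g\theta, expanding the logarithm gives
f \approx \frac{I}{C\theta} - \frac{g}{2C}
```

So increasing the shunting conductance $g$ shifts the curve down by $g/2C$ (subtractive), while the slope $1/C\theta$ doesn't depend on $g$ at all: no divisive gain change from shunting alone, which is their whole point.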

Right, so I think the problem with this idea is that it isn't really controllable. How would one turn the gain up or down? It's like: the circuit gets noisy and the gain goes down, but how would synapses keep that calibrated? Plasticity rules? I'm not sure; I get confused thinking about it.



