Izhikevich is the head of Brain Corp, and he used to be a mathematician at the Neurosciences Institute, where he did a lot of large-scale simulation, modeling, and theoretical work. Polychronization is key to coming up with a spiking neural code, so I decided to go back and reread this paper.
Notes:
Conduction delays can vary over a wide range, from <1 ms to 44 ms. Individual delays, however, are precise and reproducible.
Basic idea: the same neurons, firing with different relative timings, can activate different downstream populations because of the conduction delays. This allows for a much higher-dimensional representation space and a lot more memory capacity. Firing is not synchronous, but it is time-locked = polychrony.
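To make the delay idea concrete, here is a toy sketch (the delay values are made up, within the <1-44 ms range above): the same three neurons drive a common target only when their firing times compensate for the conduction delays.

```python
# Hypothetical conduction delays (ms) from neurons a, b, c to a common target.
delays_ms = {"a": 12.0, "b": 5.0, "c": 1.0}

def arrival_times(firing_times_ms):
    """When each spike reaches the target = firing time + conduction delay."""
    return {n: t + delays_ms[n] for n, t in firing_times_ms.items()}

# Time-locked (polychronous) pattern: a fires first, c fires last, yet all
# three spikes arrive together at t = 15 ms and can drive the target.
print(arrival_times({"a": 3.0, "b": 10.0, "c": 14.0}))  # all arrive at 15.0

# Synchronous firing of the same neurons arrives dispersed and is ineffective.
print(arrival_times({"a": 0.0, "b": 0.0, "c": 0.0}))    # 12.0, 5.0, 1.0
```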
STDP automatically organizes the neurons in the network into polychronous groups, and there can be far more polychronous groups than neurons. He lets the network just settle for 24 hours, and these groups emerge spontaneously from the random initial connectivity.
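For reference, a minimal sketch of pair-based additive STDP, the kind of rule that carves out these groups; the amplitudes and time constant below are typical values, not necessarily the exact ones in the paper.

```python
import numpy as np

A_PLUS, A_MINUS = 0.10, 0.12   # potentiation / depression amplitudes (typical)
TAU = 20.0                     # ms, decay of the STDP window

def stdp_dw(dt_ms):
    """Weight change for a spike pair with lag dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt_ms > 0:
        return A_PLUS * np.exp(-dt_ms / TAU)
    return -A_MINUS * np.exp(dt_ms / TAU)

# A presynaptic spike that consistently arrives just before the postsynaptic
# spike (e.g. via a matching conduction delay) gets strengthened:
print(stdp_dw(2.0), stdp_dw(-2.0))   # ~ +0.090, ~ -0.109
```

The groups that survive are exactly those whose anatomical delays match the spike-timing patterns, which is why the delay structure and the learned connectivity end up consistent with each other.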
He gets gamma oscillations to emerge from the network directly. Not sure how; he does not say anything about exc-inh plasticity. He uses an FS model, so these cells would be prone to firing at 40 Hz, but it is not clear how they get synchronized. We should remake this and couple all the FS cells electrically. We also need an exc-inh learning rule.
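For reference, the fast-spiking variant of his simple neuron model (parameters a=0.1, b=0.2, c=-65, d=2, from Izhikevich 2003), in a bare-bones Euler integration; the injected current and step size here are arbitrary choices.

```python
# Izhikevich simple model with fast-spiking (FS) parameters.
a, b, c, d = 0.1, 0.2, -65.0, 2.0
v, u = c, b * c                 # membrane potential (mV) and recovery variable
dt, I = 0.25, 10.0              # Euler step (ms) and injected current (arbitrary)

spike_times = []
for step in range(int(1000 / dt)):                   # simulate 1 s
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                                    # spike cutoff, then reset
        spike_times.append(step * dt)
        v, u = c, u + d
print(f"{len(spike_times)} spikes in 1 s")           # tonic high-rate firing
```

A single FS cell like this fires fast given drive; the open question above is what locks a whole population of them to a common ~40 Hz rhythm.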
With inputs, new groups emerge, and links between groups form via STDP. State representations should be thought of in terms of these groups and the links between them.
Expansion:
I think this is a very powerful idea. I would change how it works in several different ways.
- Gamma is not an emergent phenomenon - the interneurons are designed to sync up and cause the gamma activity. This forces the polychronous chains to stay time-locked, since the pyramidal cells can only fire during the troughs of gamma inhibition.
- The gamma inhibition should be "multiplicative". Basically this will normalize the population and prevent the network from exploding (see the divisive-normalization sketch after this list).
- This is the neural "clock". It is necessary to keep synchrony to maintain any neural code. Noise will cause slight drift, which will compound over time without a synchronizing force.
- We will want to consider inverse learning phases, in which the polychronous groups activate to remap their representations back onto the inputs that activate them. This is the generative part of the learning.
- The dendritic tree can be used to select the polychronous inputs more precisely. It can act as a full feed-forward neural network and make non-linear classifications (see the dendritic sketch after this list).
- Two dendritic trees on pyramidal cells - one for the feed-forward input, and one for feedback/recurrent input. These can interact such that the feedback connections predict the feed-forward inputs.
- Separate inhibitory populations. Other inhibitory interneurons can act as the negative weights for the dendritic tree network. These would be additive, and would need their own learning rule.
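A sketch of the "multiplicative" inhibition idea from the gamma bullet above: divisive normalization, where the pooled interneuron signal divides the excitatory drive rather than subtracting from it. The function and parameters are mine (sigma is a hypothetical semi-saturation constant); the point is that total output saturates no matter how large the drive gets.

```python
import numpy as np

def divisive_normalization(exc_drive, inh_gain, sigma=1.0):
    """exc_drive: excitatory input to each pyramidal cell (array).
    inh_gain: strength of pooled FS-interneuron inhibition (follows the gamma cycle).
    sigma: semi-saturation constant (keeps the denominator positive)."""
    return exc_drive / (sigma + inh_gain * exc_drive.sum())

drive = np.array([0.5, 2.0, 8.0])
out = divisive_normalization(drive, inh_gain=0.1)
print(out, out.sum())   # total output stays below 1 / inh_gain = 10
```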
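And a sketch of the dendritic-tree bullets: a two-layer dendrite model in the spirit of Poirazi & Mel, where each branch applies its own sigmoidal nonlinearity to a weighted subset of inputs and the soma sums the branch outputs. The weights are hand-picked for illustration; the point is that a single cell can then compute a non-linearly separable function (XOR here) over its polychronous inputs.

```python
import numpy as np

def branch(x, w, theta=0.5):
    """Sigmoidal dendritic-branch nonlinearity on a weighted input subset."""
    return 1.0 / (1.0 + np.exp(-8.0 * (w @ x - theta)))

def neuron(x):
    # Branch 1 detects input A without B; branch 2 detects B without A;
    # the negative cross-weights stand in for the inhibitory contributions.
    b1 = branch(x, np.array([1.0, -1.0]))
    b2 = branch(x, np.array([-1.0, 1.0]))
    return b1 + b2 > 0.5            # somatic threshold on summed branches

for pattern in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(pattern, neuron(np.array(pattern, dtype=float)))  # fires only for XOR
```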