The other idea about ART was that there exists an "orienting system". This system is necessary to provide the signals to learn new things. The idea of the orienting system is that with a novel stimulus the top-down expectation will have a poor match with the bottom-up input. So the orienting system basically "resets" the top-down matching signal. It simultaneously increases the excitability of some neurons such that newly activated neurons can be learned as part of the grouping.
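A minimal sketch of that match-then-reset logic might look like this. The function name, the fuzzy-AND match score, and the vigilance threshold are my own illustrative assumptions, not something specified above:

```python
import numpy as np

def match_and_maybe_reset(bottom_up, top_down, vigilance=0.7):
    """Compare a bottom-up input with a top-down expectation.

    Returns (resonate, boost): resonate=True when the match passes the
    vigilance test (stay and learn); boost=True signals a reset plus
    increased excitability so newly active neurons can join the grouping.
    """
    overlap = np.minimum(bottom_up, top_down).sum()  # fuzzy-AND overlap
    match = overlap / (bottom_up.sum() + 1e-12)      # fraction of input matched
    if match >= vigilance:
        return True, False    # good match: resonance, learning proceeds
    return False, True        # poor match: reset and raise excitability
```

The point is just that a single scalar comparison can decide between "resonate and learn" and "reset and search".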
So let's just start back with the resonances. Part of the idea about the resonance state is that learning is occurring, but also that the resonance state is good - it means what you are doing worked. So I guess that makes sense: learn better the classification states that are good. And this can probably be done quite rapidly in the brain. A good amount of synchronous bursting would, from what we see experimentally, probably become a pretty solid encoding in a short time. Remember resonances are part of consciousness, so we have to think about what we are conscious of and what it might mean for how the brain works.
So my "declarative memory" is my life experience. I can look around me and see and remember what is happening to me basically all the time. There is a tremendous amount of information being encoded in my memories all the time. Some of these memories fade, some are attended to more and encode better, some are recalled soon after encoding - reinforcing the memory further. These have real biophysical correspondences in how our brains work. Plasticity rules in ART would suggest that these encodings happen throughout cortex, through plasticity driven by resonances (aka synchronous bursting). Attention and motivational systems can perhaps increase and decrease the amount of plasticity - both through direct modulatory mechanisms of plasticity (dopamine-like) and through increased excitability leading to stronger resonances and more burstiness.
But how do you do the learning in the first place? How do you get to the really nice resonance states? We have a good rule for what to do while in the resonance states, but what about when we aren't in one? What happens if there is a poor match between top-down and bottom-up?
So it's as if the orienting system is at first turning up the volume on the bottom-up inputs when the top-down activity is not very strong. Or maybe this could happen from inhibitory feedback being reduced by the low activity. Soon the neurons begin to fire. As more and more neurons begin to fire, eventually it settles on a resonant state? I guess it's like you are constantly increasing the dimensionality of the model, and eventually you would activate enough neurons that some of the bottom-up neurons would begin to burst. Bursting = learning. So you could start to learn a resonance in that manner. The search process increases the dimensionality of the representation.
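One way to sketch that search is as a gain knob that the orienting system keeps turning up until enough neurons cross threshold. The thresholds, the step size, and the "enough active cells = resonance" criterion are all illustrative assumptions:

```python
import numpy as np

def search_for_resonance(drive, threshold=1.0, step=0.1,
                         need_active=3, max_iters=100):
    """Ramp up excitability (gain) until at least `need_active` neurons
    fire; the recruited set is the candidate higher-dimensional
    representation on which bursting/learning would start."""
    gain = 1.0
    active = np.flatnonzero(drive * gain >= threshold)
    for _ in range(max_iters):
        active = np.flatnonzero(drive * gain >= threshold)
        if active.size >= need_active:
            return active, gain    # resonance reached: learn on these cells
        gain += step               # orienting system turns up the volume
    return active, gain
```

Note how weakly driven cells (small entries of `drive`) are recruited last, so the dimensionality grows only as far as the search needs.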
So the next time you experience a similar input, a similar pathway will be activated, which now activates a large population pretty strongly. Perhaps strongly enough that divisive normalization quiets some down. But the input will be different in some ways, and some of the top-down will not be activated. This would suggest that new plasticity would then form at the synapses between the neurons that are bursting together.
First let's clarify: in a way we're trying to build something hierarchical. So we consider the input layer to be a layer of pyramidal cells that receive "bottom-up" input (from LGN, and I guess neighboring pyramidal cells). When a stimulus arrives, the bottom-up input will activate a population - this population is the "representation" space and should carry all the information about the stimulus. These bottom-up signals only cause spiking. The spikes are relayed through a layer of tunable synaptic weights (think feed-forward neural network) to a "prototype" layer. This is again a layer of pyramidal cells, and it gets its inputs in the basal tree from the first layer. (So let's call the representation layer L4 and the prototype layer L2/3; these could also be CA3/CA1.) So again, bottom-up is from L4 and neighbors. (What if neighbors connected to the apical tuft of the same cell?) Ok, so L2/3 has more lateral connections and likes to excite itself. This will lead to a more pattern-completing type of layer, so probably more STDP between L2/3 cells (controlled by gamma oscillatory inhibition). Now, those signals feed back onto L4 and excite the apical dendrites of L4 neurons. This causes the L4 cells with both bottom-up and top-down inputs to begin bursting. The bursting is more of a signal that means: do some learning, and maybe listen to me a little more closely (how much more?).
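The loop above can be sketched as one bottom-up/top-down cycle. Everything here is an assumption made for illustration: the thresholds, reusing the transpose of the feed-forward weights as the feedback pathway, and the rule "basal + apical input = burst":

```python
import numpy as np

def one_cycle(stimulus, W, theta_ff=0.5, theta_fb=0.5):
    """One bottom-up / top-down pass through the two layers.

    stimulus : bottom-up drive onto L4 basal trees
    W        : L4 -> L2/3 feed-forward weights; W.T stands in for the
               L2/3 -> L4 feedback for simplicity
    Returns (l4_spikes, l23_spikes, l4_bursts).
    """
    l4_spikes = (stimulus > theta_ff).astype(float)        # bottom-up alone: spiking only
    l23_spikes = (W @ l4_spikes > theta_fb).astype(float)  # prototype layer fires
    apical = W.T @ l23_spikes                              # feedback onto L4 apical tufts
    # cells with BOTH basal (bottom-up) and apical (top-down) input burst
    l4_bursts = l4_spikes * (apical > 0).astype(float)
    return l4_spikes, l23_spikes, l4_bursts
```

So bursting picks out exactly the L4 cells whose bottom-up activity is confirmed by the prototype's feedback.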
But let's pay attention to the learning. Ok, so now we have L2/3 neurons spiking from input from L4, and many of those L4 cells start bursting. This means we have spiking in L2/3 and bursting in L4, so learning at those synaptic weights (make the current pattern in L2/3 more likely given the pattern in L4). So let's say those weights get some LTP, and LTD when there is a bursting presynapse but no spiking in the postsynapse. Does L2/3 at this point begin bursting as well? I imagined that L2/3 would be modulated by some top-down signal. Perhaps the next L2/3 in the hierarchy (making two cortical hierarchies - pattern completing and pattern separating), or possibly by L4 of the next level (making something that is interleaving(?)).
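That burst-gated rule for the L4 → L2/3 weights can be written in a few lines. The learning rates and the non-negativity clip are my assumptions; the sign structure (pre-burst + post-spike = LTP, pre-burst + silent post = LTD) is the rule stated above:

```python
import numpy as np

def burst_gated_update(W, pre_burst, post_spike, eta_ltp=0.1, eta_ltd=0.05):
    """W[i, j] connects presynaptic L4 cell j to postsynaptic L2/3 cell i.

    LTP where the presynapse bursts and the postsynapse spikes;
    LTD where the presynapse bursts but the postsynapse is silent.
    """
    post = post_spike[:, None]           # column: postsynaptic spiking (0/1)
    pre = pre_burst[None, :]             # row: presynaptic bursting (0/1)
    dW = eta_ltp * post * pre - eta_ltd * (1.0 - post) * pre
    return np.clip(W + dW, 0.0, None)    # keep weights non-negative
```

Synapses whose presynaptic cell did not burst are untouched, which is the "bursting = learning" gate in weight form.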
Well, it would seem that there should also be learning in the weights that feed back from L2/3 to L4. (I know Grossberg has an L6 in between, but just go along.) This would reinforce the prototype as having the characteristics of the neurons that are firing. So perhaps in the apical tree, if there was a calcium spike and the neuron began bursting, then the active synapses should get stronger (making it more likely to burst next time the same prototype is drawn up), and LTD if there was a calcium spike and no burst.
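The feedback rule is similarly simple to sketch. The scalar gating variables (`ca_spike`, `burst`), the single learning rate, and the clip are illustrative assumptions; the sign structure follows the text:

```python
import numpy as np

def apical_update(w_fb, presyn_active, ca_spike, burst, eta=0.1):
    """Update the L2/3 -> L4 feedback weights onto one L4 neuron's apical tree.

    Calcium spike + somatic burst  -> strengthen active synapses (LTP).
    Calcium spike without a burst  -> weaken active synapses (LTD).
    No dendritic calcium event     -> no change.
    """
    if not ca_spike:
        return w_fb
    sign = 1.0 if burst else -1.0
    return np.clip(w_fb + sign * eta * presyn_active, 0.0, None)
```

Usage would be per L4 cell per cycle, with `presyn_active` the 0/1 vector of currently firing L2/3 prototype cells.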
So L2/3 holds two competing feedback loops. The bottom-up inputs are all trying to excite the neurons through these learning rules, and the neurons are exciting each other. These synapses are being reinforced by positive feedback from the inputs, except there is some normalization procedure - like homeostasis - in the synaptic weights. Still, a neuron can be driven a lot. Now the neurons turn on each other as well as the inhibitory feedback gain-control. This forces the neurons to also compete with one another. The gain control caps the allowable length of the population vector, preventing L2/3 from just maxing out.
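The gain-control idea can be sketched as divisive normalization by the pooled activity. The semi-saturation constant `sigma` and the cap `max_norm` are illustrative assumptions; the key property is that the output population vector's length is bounded no matter how hard the recurrent loop drives the cells:

```python
import numpy as np

def normalize_population(drive, max_norm=1.0, sigma=0.1):
    """Divisively normalize a population so its vector length is bounded.

    Output length = max_norm * |r| / (|r| + sigma) < max_norm, so
    runaway recurrent excitation cannot push the layer past the cap,
    while relative activity (the pattern) is preserved.
    """
    rates = np.maximum(drive, 0.0)              # rectified drive
    norm = np.linalg.norm(rates)
    return rates * max_norm / (sigma + norm)    # pooled divisive inhibition
```

Since the division rescales every cell by the same factor, the competition shapes the overall gain without distorting which cells dominate.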