Circular convolution performs compression by binding two vectors together: the current item's semantic pointer is bound with a position, where the position is an internally generated position-index semantic pointer:
MemoryTrace = Position1 ⊗ Item1 + Position2 ⊗ Item2 + ...
Position1 = Base
Position2 = Position1 ⊗ Base
Position3 = Position2 ⊗ Base
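A minimal numpy sketch of this binding scheme (the dimensionality, random vectors, and approximate-inverse unbinding are illustrative assumptions, not Spaun's actual parameters):

```python
import numpy as np

def cconv(a, b):
    """Circular convolution of two vectors via FFT."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def inverse(a):
    """Approximate inverse for unbinding: reverse all but the first element."""
    return np.concatenate(([a[0]], a[:0:-1]))

D = 512  # vector dimensionality (illustrative)
rng = np.random.default_rng(0)

# Random unit vectors standing in for semantic pointers
item1 = rng.normal(size=D); item1 /= np.linalg.norm(item1)
item2 = rng.normal(size=D); item2 /= np.linalg.norm(item2)
base  = rng.normal(size=D); base  /= np.linalg.norm(base)

pos1 = base
pos2 = cconv(pos1, base)

# Superposition of position-bound items: the compressed memory trace
trace = cconv(pos1, item1) + cconv(pos2, item2)

# Unbinding with the approximate inverse of a position recovers the item
# bound there, plus noise
recovered = cconv(trace, inverse(pos1))
print(np.dot(recovered, item1), np.dot(recovered, item2))  # high vs. near zero
```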
Conceptual semantic pointers for numbers are constructed similarly to positions, by circular convolution of the base with an AddOne operator:
One = Base
Two = One ⊗ AddOne
Three = Two ⊗ AddOne
These vectors are unitary: they don't change length when convolved with themselves.
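A sketch of how unitary pointers can be built, assuming the usual trick of normalizing every Fourier coefficient to unit magnitude (names and dimension are illustrative):

```python
import numpy as np

def make_unitary(v):
    """Project a vector to a unitary one: give each Fourier
    coefficient unit magnitude, so convolution preserves length."""
    f = np.fft.fft(v)
    return np.fft.ifft(f / np.abs(f)).real

def cconv(a, b):
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

D = 512
rng = np.random.default_rng(1)
base    = make_unitary(rng.normal(size=D))
add_one = make_unitary(rng.normal(size=D))

one   = base
two   = cconv(one, add_one)
three = cconv(two, add_one)

# Unitary vectors keep their length under repeated convolution
print(np.linalg.norm(one), np.linalg.norm(two), np.linalg.norm(three))
```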
Reward evaluation is done via a dopamine-like RL system.
Neurons are LIF: 20 ms membrane time constant, 2 ms absolute refractory period, random maximum firing rates between 100 and 200 Hz. Encoding vectors are chosen randomly from the unit hypersphere. Most projections use 10 ms AMPA synapses; recurrent projections use 50 ms NMDA synapses.
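A single-neuron sketch with those constants (the constant input current J and the unit threshold normalization are my assumptions, just to put the parameters in context):

```python
import numpy as np

tau_rc  = 0.020   # membrane time constant, 20 ms
tau_ref = 0.002   # absolute refractory period, 2 ms
dt      = 0.001   # simulation step, 1 ms

def simulate_lif(J, T=1.0):
    """Count spikes of one LIF neuron driven by constant input J
    (voltage normalized so the spike threshold is 1)."""
    v, refractory, spikes = 0.0, 0.0, 0
    for _ in range(int(T / dt)):
        if refractory > 0:
            refractory -= dt
            continue
        v += dt / tau_rc * (J - v)
        if v >= 1.0:
            spikes += 1
            v = 0.0
            refractory = tau_ref
    return spikes

# Analytic steady-state rate: 1 / (tau_ref + tau_rc * ln(J / (J - 1)))
J = 3.0
rate = 1.0 / (tau_ref + tau_rc * np.log(J / (J - 1.0)))
print(simulate_lif(J), rate)  # ~99 Hz, inside the 100-200 Hz range
```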
The model occupies 24 GB of RAM and takes 2.5 hours of processing per 1 s of simulated time.
The main learning aspect of Spaun is weight changes during the RL task. This does not change the visual/motor hierarchies, only the weights that project to the value system, which are modulated by TD learning. The learning requires an error signal, which he is not sure how to implement (top-down-like vs. bottom-up-like, as in basal and apical dendritic trees).
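A hedged TD(0) sketch of what "only the value-projection weights learn" might look like; the state encoding, learning rate, and discount are illustrative assumptions, not Spaun's actual mechanism:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 64                   # state-representation dimension (illustrative)
w = np.zeros(D)          # only these weights (state -> value) are plastic
alpha, gamma = 0.1, 0.9  # learning rate and discount factor (assumed)

def td_update(w, state, reward, next_state):
    """One TD(0) step: the dopamine-like error signal gates a local
    weight change on the projection into the value system."""
    delta = reward + gamma * np.dot(w, next_state) - np.dot(w, state)
    return w + alpha * delta * state, delta

# Toy episode: random states, reward only on the final transition
states = rng.normal(size=(10, D)) / np.sqrt(D)
for t in range(9):
    r = 1.0 if t == 8 else 0.0
    w, delta = td_update(w, states[t], r, states[t + 1])
```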
Dynamics of the model are highly constrained by the time constants of the synapses.
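One concrete sense of this constraint, sketched with the standard NEF mapping (the oscillator matrix A is illustrative): to realize dx/dt = Ax through a first-order synapse with time constant τ, the recurrent transform must be τA + I, so the achievable dynamics are tied directly to τ.

```python
import numpy as np

tau = 0.050  # 50 ms NMDA recurrent synapse, from the notes

# Target linear dynamics dx/dt = A x (an oscillator, illustrative)
A = np.array([[0.0, 10.0], [-10.0, 0.0]])

# NEF-style mapping for a first-order synapse with time constant tau:
# recurrent transform A' = tau*A + I (input transform would be tau*B)
A_rec = tau * A + np.eye(2)

# Euler check: the synapse-filtered recurrence reproduces the target
dt, T = 0.0001, 1.0
x = np.array([1.0, 0.0])
x_ref = x.copy()
for _ in range(int(T / dt)):
    x = x + dt / tau * (A_rec @ x - x)   # first-order synapse dynamics
    x_ref = x_ref + dt * (A @ x_ref)     # target dynamics
print(x, x_ref)  # the two trajectories coincide
```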
Definitely more papers to read: visual system learning: Shinomoto (2003); spike learning: MacNeil (2011); normalization for probabilistic inference: Eliasmith (2010).
Also, get his book.