Monday, December 10, 2012

Eliasmith 2012 supplemental I

Eliasmith et al. 2012 - Supplemental.

"The central kinds of representation employed in the [Semantic Pointer Architecture] (SPA) are "semantic ponters". Semantic pointers are neurally realized representations of a vector space generated through a compression method." They are generated through compression of the data they represent. They carry info that is derived from their source. Point to more info. Lower dimensional representation of data that they point to. The compression can be learned or defined explicitly.

Interesting way of stating this. It sounds similar to symbol-data binding: the pointer is the low-dimensional symbol that points to the high-dimensional data.
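To make the compression idea concrete, here is a toy Python sketch that uses PCA as a stand-in compression method. The paper's visual pointers come from trained auto-encoders, so the method and all dimensions below are illustrative assumptions, not anything from the supplemental.

import numpy as np

# Toy "semantic pointer" via a learned linear compression (PCA stand-in).
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 784))            # high-dimensional source data

data_centered = data - data.mean(axis=0)
_, _, Vt = np.linalg.svd(data_centered, full_matrices=False)
compress = Vt[:50].T                           # learned 784 -> 50 linear map

pointer = data_centered[0] @ compress          # low-D pointer for one item
approx = pointer @ compress.T                  # "dereference": lossy reconstruction

The pointer is cheap to store and manipulate, and running the compression backwards recovers an approximation of the data it points to, which is the sense in which it "points".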

The Neural Engineering Framework (NEF) is a set of methods for computing functions on neurally represented vectors, i.e., for deciding how to connect populations of neurons. Each neuron has a preferred direction vector. The spiking activity is written as:

a_i(x) = G_i[\alpha_i e_i \cdot x + J_i^{bias}]

where a_i is the spike train, G_i is the neuron nonlinearity, \alpha_i is the gain, e_i is the preferred direction vector, and J_i^{bias} is a bias current. He uses LIF neurons in Spaun.
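A minimal sketch of this encoding equation, assuming a rate-based (steady-state) LIF nonlinearity. All parameter values here (time constants, gain and bias ranges, population size) are illustrative choices, not the ones used in Spaun.

import numpy as np

def lif_rate(J, tau_ref=0.002, tau_rc=0.02):
    """Steady-state LIF firing rate G[J] for input current J."""
    J = np.asarray(J, dtype=float)
    rates = np.zeros_like(J)
    above = J > 1.0                        # neuron is silent below threshold
    rates[above] = 1.0 / (tau_ref - tau_rc * np.log(1.0 - 1.0 / J[above]))
    return rates

rng = np.random.default_rng(1)
n_neurons, dims = 50, 2

# Each neuron i gets a random unit-length preferred direction vector e_i,
# a gain alpha_i, and a bias current J_bias_i.
E = rng.normal(size=(n_neurons, dims))
E /= np.linalg.norm(E, axis=1, keepdims=True)
alpha = rng.uniform(0.5, 2.0, size=n_neurons)
J_bias = rng.uniform(0.5, 1.5, size=n_neurons)

def activities(x):
    """a_i(x) = G[alpha_i * (e_i . x) + J_bias_i] for every neuron."""
    return lif_rate(alpha * (E @ x) + J_bias)

print(activities(np.array([0.3, -0.5])))   # firing rates for this x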

Then you can derive a linear decoder from the population's activity, optimized in a least-squares sense. The decoders can then be combined with the next population's encoders to calculate connection weights that implement a desired transformation function.
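Continuing the encoding sketch above, the decoders fall out of least squares over sampled activities; the ridge term below is an arbitrary regularization choice to keep the solve well-conditioned.

# Sample the represented space and record each neuron's rate.
n_samples = 500
X = rng.uniform(-1, 1, size=(n_samples, dims))
A = np.array([activities(x) for x in X])       # (n_samples, n_neurons)

# Decoders D minimize ||A @ D - X||^2 plus a small ridge penalty.
reg = 0.1 * A.max()
D = np.linalg.solve(A.T @ A + reg**2 * n_samples * np.eye(n_neurons),
                    A.T @ X)                   # (n_neurons, dims)

x_hat = activities(np.array([0.3, -0.5])) @ D  # decoded estimate of x
print(x_hat)

# To decode a function f(x) instead of x itself, regress against f(X) above.
# Connection weights to a downstream population with encoders E2 and gains
# alpha2 then factor as W = (alpha2[:, None] * E2) @ D.T.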

The visual hierarchy in Spaun is constructed by training RBM-based auto-encoders on natural images. For Spaun the input layer is 784-dimensional (28x28 pixel images), with consecutive hidden layers of 1000, 500, 300, and 50 nodes. The first hidden layer is higher dimensional than the actual input image. The network learns many Gabor-like filters in V1. In Spaun the visual hierarchy does not use spiking neurons until IT (the top).
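A rough sketch of the compression path only (784 -> 1000 -> 500 -> 300 -> 50). The RBM pretraining is omitted and the weights below are random, so this shows the shapes of the hierarchy rather than a trained model; the tanh nonlinearity is also just a stand-in.

import numpy as np

rng = np.random.default_rng(2)
layer_sizes = [28 * 28, 1000, 500, 300, 50]
weights = [rng.normal(scale=0.01, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def encode(image):
    """Map a flattened 28x28 image down to a 50-D visual representation."""
    h = image
    for W in weights:
        h = np.tanh(h @ W)                 # stand-in for the trained units
    return h

pointer = encode(rng.uniform(size=28 * 28))
print(pointer.shape)                       # (50,)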

[on page 12, Working memory is next.]
