Wednesday, December 12, 2012

Normalization for probabilistic inference with neurons

Eliasmith, C. and Martens, J. (2011). Normalization for probabilistic inference with neurons. Biological Cybernetics 104:251-262.

A method for maintaining a normalized probability density during neural inference without relying on division.



"the NEF approach:

1. Neural representations are defined by the combination of
nonlinear encoding (exemplified by neuron tuning curves,
and neural spiking) and weighted linear decoding (over
populations of neurons and over time).
2. Transformations of neural representations are functions
of the variables represented by neural populations. Trans-
formations are determined using an alternately weighted
linear decoding.
3. Neural dynamics are characterized by considering neural
representations as control theoretic state variables. Thus,
the dynamics of neurobiological systems can be analyzed
using control theory."

So they show some math that converts between vector spaces and function spaces and demonstrates that the two can be treated as equivalent. Basically, you parameterize the function with a vector of coefficients.
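To make that concrete, here is a toy version of the vector/function correspondence. The Gaussian basis is my choice for illustration, not the paper's setup:

```python
import numpy as np

# Represent a function p(x) as a coefficient vector c over a fixed set
# of Gaussian basis functions phi_j. Basis shape and width are assumed.
xs = np.linspace(-1, 1, 200)
centers = np.linspace(-1, 1, 15)
phi = np.exp(-(xs[:, None] - centers[None, :])**2 / (2 * 0.1**2))  # (200, 15)

p = np.exp(-xs**2 / (2 * 0.3**2))               # target function p(x)
c, *_ = np.linalg.lstsq(phi, p, rcond=None)     # vector parameterization
p_hat = phi @ c                                 # back to function space
print(np.max(np.abs(p_hat - p)))                # close reconstruction
```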

They derive a bias function that compensates for error in the integral (the integral should be 1 for a probability density). It captures distortions introduced by projecting the representation onto the neuron-like encoding. Basically, the bias gets factored into the connection strengths and can account for the nonlinearities. (Not so much what I thought this was going to be about.)
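Here is my own rough illustration of the "fold normalization into the weights" idea, not the paper's actual derivation: impose the unit-integral constraint directly in the offline least-squares solve, so no runtime division over the represented density is needed.

```python
import numpy as np

# Hedged sketch (my construction): instead of dividing the decoded
# density by its integral at run time, bake the normalization
# constraint into the linear solve that produces the weights.
xs = np.linspace(-1, 1, 200)
dx = xs[1] - xs[0]
centers = np.linspace(-1, 1, 15)
phi = np.exp(-(xs[:, None] - centers[None, :])**2 / (2 * 0.1**2))

p = np.exp(-xs**2 / (2 * 0.3**2))
p /= p.sum() * dx                     # target density, integral exactly 1

g = phi.sum(axis=0) * dx              # g[j] = integral of basis function j

# KKT system for: minimize ||phi c - p||^2  subject to  g . c = 1
K = np.block([[2 * phi.T @ phi, g[:, None]],
              [g[None, :],      np.zeros((1, 1))]])
rhs = np.concatenate([2 * phi.T @ p, [1.0]])
c = np.linalg.solve(K, rhs)[:-1]      # drop the Lagrange multiplier

print((phi @ c).sum() * dx)           # integral is 1 by construction
```

The constraint enters as one extra row in a linear system, which is the flavor of "normalization baked into the connection strengths" I take from the paper.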

Anyway, this looks like an interesting paper for the gain control stuff: Ma et al. 2006.
