Wednesday, June 27, 2012

Spatial Scales

One of the challenges that makes the nervous system so hard to study is that it spans many orders of magnitude in spatial scale. The scales we should focus on range from synapses to maps. I also want to add a couple of in-between scales to consider. Here are some highlights of what we know from neuroscience, and some reasons they might be important:


Synapses
  • Can change in strength over the long term - the primary set of parameters to modify in building the system.
  • Can change in strength over the short term - a possible mechanism for working memory, and could itself be a coding feature.
  • Modifications in strength are described by a few "rules":
    • Hebb: fire together, wire together.
    • STDP: the relative timing of pre- and postsynaptic spikes matters (mediated by NMDA receptors).
    • Dopamine modulation: a reinforcement signal can influence the plasticity rules.
  • Neurons typically have only one type of output synapse - excitatory (glutamate) or inhibitory (GABA). There are many other transmitter types, but most are modulatory.
    • Inhibitory plasticity rules are not well characterized (people have tried to study this, but it seems to differ from place to place).
  • Together, these can form a structure like a feed-forward neural network.
Dendrites
  • Regulated by many different kinds of ion channels.
    • Voltage-gated channels can create non-linearities.
    • Dendritic 'weights' can be modified by other channels - e.g., Kv4.2.
  • The spatial arrangement of dendrites even has temporal computational consequences.
    • Some retinal ganglion cells can detect direction of motion due to differences in how their dendrites are activated.
Cell Types
  • Pyramidal cells are the main neurons responsible for representing state - they dominate cortex, making up about 80% of all cells. Cortical pyramidal cells typically have two big dendritic branches - basal and apical.
  • FS (fast-spiking) interneurons multiplicatively inhibit pyramidal cells and are responsible for controlling the gain of the system.
  • There are lots of other inhibitory interneuron types - anywhere between 5 and 30 depending on where you look - and no one really knows quite what they are all for.
    • There are additive interneurons (probably the equivalent of negative synaptic weights).
    • There are multiplicative interneurons (probably for gain control).
    • There are probably so many types because they combinatorially implement different functions: they target different dendritic branches, they act additively or multiplicatively, and they receive inputs from different areas.
Layers
  • Layering is an organizing principle of the brain and its connections.
  • Axons branch in specific layers, and dendrites branch in specific layers; those that branch together are likely connected to each other.
  • Thalamic input enters cortex through Layer IV, targeting a specific dendritic tree (feed-forward).
  • Feedback connections from higher-level cortical areas come in through Layer I and target the apical branches.
  • Recurrent connections also branch out into different layers, targeting specific trees.
  • Interneuron connections are organized by layer as well, targeting specific dendritic trees along with the soma and axon. Interneurons that target the soma/axon can inhibit a neuron to the point of shutting it off completely.
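The plasticity rules listed above can be sketched as toy update functions. This is only an illustration: the amplitudes and time constant below are made-up placeholders, not measured values, and real STDP curves vary by synapse type.

```python
import math

def hebbian_update(w, pre_rate, post_rate, lr=0.01):
    """Hebb: fire together, wire together - dw grows with pre * post activity."""
    return w + lr * pre_rate * post_rate

def stdp_update(w, dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """STDP: dt_ms = t_post - t_pre. Pre-before-post (dt > 0) potentiates,
    post-before-pre (dt <= 0) depresses, with exponential decay in |dt|.
    Constants are illustrative, not measured."""
    if dt_ms > 0:
        return w + a_plus * math.exp(-dt_ms / tau_ms)
    return w - a_minus * math.exp(dt_ms / tau_ms)

w = 0.5
w = hebbian_update(w, pre_rate=1.0, post_rate=1.0)  # both active: strengthens
w = stdp_update(w, dt_ms=5.0)                       # pre just before post: LTP
w = stdp_update(w, dt_ms=-5.0)                      # post just before pre: LTD
```

A reinforcement signal like dopamine could be folded in as an extra multiplicative factor on the learning rate, gating when these updates are applied.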
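The additive vs. multiplicative interneuron distinction above can be shown numerically: subtractive inhibition shifts a response curve down (like a negative weight), while divisive inhibition rescales it (gain control). The input levels and inhibition strengths here are invented for illustration.

```python
import numpy as np

drive = np.array([0.0, 1.0, 2.0, 4.0])  # excitatory input levels to a pyramidal cell

def subtractive_inhibition(r, inh):
    """Additive interneuron: shifts the response down, clipped at zero."""
    return np.maximum(r - inh, 0.0)

def divisive_inhibition(r, gain_inh):
    """Multiplicative (FS-like) interneuron: rescales the response (gain control)."""
    return r / (1.0 + gain_inh)

print(subtractive_inhibition(drive, 1.0))  # [0.  0.  1.  3.]
print(divisive_inhibition(drive, 1.0))     # [0.  0.5 1.  2. ]
```

Note the qualitative difference: subtraction silences weak inputs entirely while leaving strong ones nearly intact, whereas division preserves the relative pattern of responses.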
Thalamo-Cortical Loops
  • There is extensive feedback between cortex and thalamus at all levels. Thalamus is the first data layer of input to the brain - it maintains a raw representation of the input signals.
    • Low-dimensional, overcomplete.
    • The LGN (vision) is a copy of the retina.
  • Cortex is the model, constantly trying to predict what thalamus will look like. The difference between the cortical prediction of the thalamic state and the actual thalamic state would be a key signal for a learning rule.
    • This is like the beginning of an RBM - the LGN is the visual data, and V1 is the first layer.
  • When cortex and thalamus are receiving inputs, cortex is learning the statistics of those inputs.
  • During sleep (or something analogous), cortex does inverse learning - it remaps its feature representations back onto thalamus. Through this inverse learning, each cortical neuron essentially knows what it is representing.
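The prediction-error idea above can be sketched with a toy linear model. This is an assumption-laden illustration, not the actual cortical learning rule: a linear "cortex" with tied feed-forward/feedback weights W learns to reconstruct a fixed "thalamic" pattern x, driven by the mismatch between its prediction and the input.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 8))  # 4 "cortical" features, 8 "thalamic" inputs
x = rng.normal(size=8)                  # a fixed thalamic activity pattern

def step(W, x, lr=0.01):
    h = W @ x                  # feed-forward: cortical feature activity
    x_hat = W.T @ h            # feedback: cortex's prediction of the thalamic state
    err = x - x_hat            # prediction mismatch = the learning signal
    W = W + lr * np.outer(h, err)  # error-driven weight update
    return W, float(np.linalg.norm(err))

errs = []
for _ in range(1000):
    W, e = step(W, x)
    errs.append(e)
# errs shrinks over iterations as the cortical model learns to reconstruct x
```

Because the same weights are used forward (features) and backward (prediction), each model neuron implicitly "knows" what input pattern it stands for - a loose analogue of the inverse-learning idea in the last bullet.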
