Friday, June 29, 2012

Oscillations

When you record population activity in the brain you see many different types of oscillations. These oscillations essentially come from the pooled activity of a bunch of neurons in a circuit. The different circuit configurations will show up as different frequency oscillations. Here are a few highlights:

Gamma - 25-100 Hz (40 Hz typical)
  • Result of fast-spiking (FS) interneurons, which fire at around 40 Hz during these oscillations.
  • These neurons are coupled to one another through electrical synapses (gap junctions), which synchronize action potentials across the whole population.
  • FS neurons target the soma/axon of pyramidal cells. They cause inhibition, and act multiplicatively. This results in gain control of pyramidal cell activity (normalization of the representation).
  • The oscillatory inhibition only allows pyramidal cells to fire during the phase where they are least inhibited. This synchronizes the action potentials of pyramidal cells and allows for the use of a temporal code.
  • This is the oscillation frequency of local, cortical computation. A toy sketch of the coupling-driven synchronization follows this list.
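
Below is a minimal Kuramoto-style sketch of that synchronization: oscillators with natural frequencies scattered around 40 Hz pull one another into phase through mutual coupling. This only loosely stands in for gap-junction coupling, and every number is illustrative:

```python
import numpy as np

# Kuramoto-style sketch: oscillators with natural frequencies near 40 Hz
# pull each other into phase when the coupling is strong enough.
rng = np.random.default_rng(0)
n, K, dt = 50, 50.0, 1e-4
omega = 2 * np.pi * rng.normal(40.0, 2.0, n)   # natural frequencies (rad/s)
theta = rng.uniform(0, 2 * np.pi, n)           # random initial phases

def coherence(phases):
    return np.abs(np.exp(1j * phases).mean())  # 1.0 = perfect synchrony

print(f"coherence before: {coherence(theta):.2f}")
for _ in range(int(0.5 / dt)):                 # simulate 500 ms
    mean_field = np.exp(1j * theta).mean()     # population phase summary
    theta += dt * (omega + K * np.abs(mean_field)
                   * np.sin(np.angle(mean_field) - theta))
print(f"coherence after:  {coherence(theta):.2f}")
```
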
Alpha - 8-12 Hz
  • Likely the result of the thalamocortical feedback loop.
  • Seen during restful awake states and during sleep.
    • Sleep is probably when the brain does inverse learning, and this means activating the thalamocortical loops.
  • Supports long-range synchronization of cortical areas and thalamus. Information from different cortical areas can be routed through thalamus in this manner.
Theta - 4-8 Hz
  • Primarily in the hippocampus. Not sure of the biophysical mechanism.
  • Hippocampus has a map of your location that is encoded with place cells (neurons that fire when you are in a particular location). Across all of hippocampus there is one full cycle of the theta oscillation.
    • Standing wave that covers hippocampus.
    • Keeps the location information available at all times
    • Across hippocampus place cells change their spatial resolution
  • Place cells will fire in order throughout the theta cycle. The neurons whose receptive fields are activated most strongly will fire first, and those with weaker receptive fields will fire later.
    • This sets up a temporal coding mechanism (a toy sketch follows this list).
    • Something like STDP would be able to learn the sequences.
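
Here is a minimal sketch of that phase code, assuming inhibition that ramps down over each ~125 ms theta cycle and cells that fire the moment their receptive-field drive exceeds it (all numbers are illustrative):

```python
import numpy as np

# Theta phase-coding toy model: inhibition decays across one 8 Hz theta
# cycle; a cell fires when its receptive-field drive first exceeds the
# inhibition, so stronger drive -> earlier phase in the cycle.
theta_period = 0.125                          # seconds (8 Hz)
t = np.linspace(0, theta_period, 1000)        # one theta cycle
inhibition = np.linspace(1.0, 0.0, t.size)    # ramps down over the cycle

for name, drive in [("strong RF", 0.9), ("medium RF", 0.6), ("weak RF", 0.3)]:
    crossing = np.argmax(drive > inhibition)  # first index where drive wins
    phase = 360.0 * t[crossing] / theta_period
    print(f"{name}: fires at {1000 * t[crossing]:.0f} ms ({phase:.0f} deg)")
```

The strongly driven cell fires earliest and the weakly driven one last, which is exactly the ordering a causal rule like STDP could stamp into the synapses.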

Thursday, June 28, 2012

Temporal Scales

The brain is also spread across many orders of magnitude in temporal scale. Figuring out how the brain processes information, learns, and builds a generative model across these timescales will be key to extending the deep-learning architectures we have now. This is one of the biggest mysteries, and there are a lot of possibilities, so here are some highlights:

Action Potentials
  • Millisecond timescale
  • Main signaling mechanism, transfers information.
  • Fast voltage-sensitive sodium and potassium channels are the core mechanism, with tons of other modifications on top (a toy sketch follows this list):
    • Adaptation
    • Refractory period
    • Bursting
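
As a cartoon of these ingredients (nothing like a real Hodgkin-Huxley model), here is a leaky integrate-and-fire neuron with a refractory period and spike-frequency adaptation; every constant is an arbitrary illustrative choice:

```python
import numpy as np

# Leaky integrate-and-fire neuron with a refractory period and an
# adaptation current that grows with each spike and slows later firing.
dt, T = 1e-4, 0.5                  # 0.1 ms steps, 500 ms of simulation
tau_m, tau_adapt = 0.02, 0.1       # membrane / adaptation time constants (s)
v_thresh, v_reset = 1.0, 0.0       # threshold and reset (arbitrary units)
refractory, b = 0.002, 0.3         # 2 ms refractory; adaptation step size

v, a, last_spike, spikes = 0.0, 0.0, -1.0, []
for step in range(int(T / dt)):
    t = step * dt
    if t - last_spike < refractory:
        v = v_reset                            # clamped while refractory
    else:
        v += dt / tau_m * (-v + 1.5 - a)       # leak + constant drive - adaptation
    a -= dt / tau_adapt * a                    # adaptation decays back to 0
    if v >= v_thresh:                          # spike!
        spikes.append(t)
        v, a, last_spike = v_reset, a + b, t   # reset, adapt, start refractory

isis = np.diff(spikes)
print(f"{len(spikes)} spikes; first ISI {1000 * isis[0]:.1f} ms, "
      f"last ISI {1000 * isis[-1]:.1f} ms (adaptation slows the train)")
```
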
Gap-Junction Signaling
  • Rapid communication between neurons. Purely electrical, so extremely fast. 
  • Probably important for synchronizing neurons (esp. FS cells). 
  • Some evidence is emerging that these can be regulated. Normally, they are thought of as resistors connecting two cells electrically (a toy resistor model follows this list).
    • They can be unidirectional.
    • Neurotransmitters/modulators may be able to open/close gap junctions.
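
A toy version of the resistor picture: two leaky cells joined by a single gap-junction conductance, with input driving only one of them (units and values are made up for illustration):

```python
# Two leaky cells coupled by a gap junction modeled as a conductance:
# current flows in proportion to the voltage difference, dragging the
# undriven cell toward the driven one.
dt, tau, g_gap = 1e-4, 0.02, 0.5
v1, v2 = 0.0, 0.0
for _ in range(int(0.2 / dt)):            # 200 ms, enough to reach steady state
    i_gap = g_gap * (v2 - v1)             # gap-junction current into cell 1
    v1 += dt / tau * (-v1 + 1.0 + i_gap)  # cell 1 gets external drive
    v2 += dt / tau * (-v2 - i_gap)        # cell 2 gets only the coupling current
print(f"v1 = {v1:.2f}, v2 = {v2:.2f} (set g_gap = 0 and v2 stays at 0)")
```
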
Synaptic Signaling
  • Primary mechanism of computation. Holds the majority of the parameters of the system. Extremely modifiable.
  • Pre-synaptic neurons virtually always release only a single neuro-transmitter (Dale's principle).
    • Glutamate: primary excitatory transmitter
    • GABA: primary inhibitory transmitter
    • Dopamine: positive reinforcement modulator.
    • Tons of other transmitters: Glycine, Serotonin, etc.
  • Post-synaptic neurons, however, have receptors for almost every kind of neuro-transmitter.
    • AMPA (Glutamate): primary fast excitatory receptor. These get modulated for long-term learning.
    • NMDA (Glutamate): regulates plasticity. Triggers mechanisms for AMPA trafficking. Mechanism for STDP.
    • Chloride Channels (GABA): these are inhibitory. 
  • Synaptic timescales range from as fast as ~5 ms to hundreds of ms (a sketch comparing the extremes follows this list).
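
To make that range concrete, here is a sketch comparing single-exponential conductance decays with an AMPA-like ~5 ms time constant and an NMDA-like ~100 ms one. The numbers are rough textbook-scale values, and real synaptic kinetics are not single exponentials:

```python
import numpy as np

# How long do AMPA-like vs NMDA-like conductances linger after a single
# presynaptic spike at t = 0? Model each as a simple exponential decay.
t = np.linspace(0, 0.5, 5001)                      # 500 ms of time
for name, tau in [("AMPA-like", 0.005), ("NMDA-like", 0.100)]:
    g = np.exp(-t / tau)                           # normalized conductance
    lingers = t[g > 0.05][-1]                      # last time above 5% of peak
    print(f"{name} (tau = {1000 * tau:.0f} ms): above 5% for {1000 * lingers:.0f} ms")
```
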
Dendritic/Calcium Spikes
  • Active mechanisms in dendrites that perform computation.
  • NMDA receptors, Voltage-Gated Calcium Channels, Ca-Gated Calcium Channels, and Voltage-Gated Sodium Channels can all contribute.
  • Kv4.2 potassium channels can regulate the "strength" of dendrites.
  • Can last 10s to 100s of ms.
Short-term Plasticity
  • Changes in the strength of synapses over short time-scales. Can be extremely rapid.
  • Can be facilitating (each AP's synaptic response is stronger than the previous) or depressing (each response is weaker). Both of these can be in operation at different time-scales.
    • e.g., depressing if APs arrive above 50 Hz, facilitating at 20-50 Hz
  • 100s of ms to 10s of seconds
  • Could be a mechanism for working memory: Mongillo et al., 2008 (a sketch of this style of model follows this list)
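
Here is a sketch in the spirit of the Tsodyks-Markram model that work like Mongillo et al., 2008 builds on: a facilitation variable u and a resource variable x are updated at each presynaptic spike, and their product sets the response amplitude. The parameter values are mine, picked so moderate-rate trains facilitate and fast trains depress:

```python
import numpy as np

# Tsodyks-Markram-style short-term plasticity: each presynaptic spike
# releases a fraction u of the remaining resource x; u facilitates (rises)
# and x depresses (depletes), each recovering with its own time constant.
U, tau_f, tau_d = 0.1, 0.5, 0.05   # baseline release, facil./depr. recovery (s)

def psp_amplitudes(spike_times):
    u, x, last = U, 1.0, None
    amps = []
    for t in spike_times:
        if last is not None:
            u = U + (u - U) * np.exp(-(t - last) / tau_f)      # u relaxes to U
            x = 1.0 + (x - 1.0) * np.exp(-(t - last) / tau_d)  # x recovers to 1
        u += U * (1.0 - u)    # spike boosts release probability (facilitation)
        amps.append(u * x)    # response amplitude ~ fraction released
        x -= u * x            # spike depletes the resource (depression)
        last = t
    return amps

for hz in (40, 150):
    amps = psp_amplitudes(np.arange(10) / hz)
    trend = "facilitating" if amps[-1] > amps[0] else "depressing"
    print(f"{hz:3d} Hz train: first {amps[0]:.2f}, last {amps[-1]:.2f} -> {trend}")
```
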
Long-term Plasticity
  • Main mechanism for learning. AMPA receptors are recruited to or removed from the synapse.
  • Hebb: fire-together, wire-together.
  • Spike-timing dependent plasticity
    • Pre just before post -> stronger (temporally causal).
    • Can be modulated by dopamine (a sketch of the STDP window follows this list).
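
A sketch of the standard double-exponential STDP window; the amplitudes and the 20 ms time constants are typical modeling choices, not measured values:

```python
import numpy as np

# Exponential STDP window: pre-before-post potentiates, post-before-pre
# depresses, each effect decaying with the spike-time difference.
A_plus, A_minus = 0.010, 0.012        # LTP / LTD amplitudes
tau_plus, tau_minus = 0.020, 0.020    # window time constants (20 ms)

def dw(delta_t):
    """Weight change for delta_t = t_post - t_pre, in seconds."""
    if delta_t > 0:                                # causal: pre drove post
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)  # anti-causal: weaken

for ms in (5, 20, 50, -5, -20):
    print(f"t_post - t_pre = {ms:+3d} ms -> dw = {dw(ms / 1000):+.4f}")
```
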
Neurogenesis
  • Neurons in the hippocampus (dentate gyrus) are constantly being born. These new neurons have been implicated in the creation of our declarative memories: Aimone et al., 2006
  • New-born neurons are very excitable and very plastic. They store your current memories, and easily associate with other new-born neurons.
  • Old neurons become less excitable and lose plasticity, keeping the information of old memories.


Wednesday, June 27, 2012

Spatial Scales


One of the challenges that makes the nervous system so hard to study is that it is spread across many orders of magnitude in spatial scale. The scales we should focus on go from synapses up to maps, with a couple of in-between scales worth considering. Here are some highlights of what we know from neuroscience, and some reasons they might be important:

Synapses

  • Can change in strength over the long term - the primary set of parameters to modify in building the system.
  • Can change in strength over the short term - a possible mechanism for working memory; could be a coding feature.
  • Modifications in strength are described by some "rules":
    • Hebb: Fire together, Wire together
    • STDP: Timing of inputs plays a role (NMDA receptors)
    • Dopamine Modulation: Reinforcement signal can influence plasticity rules.
  • Neurons typically have only one type of output synapse - Excitatory (Glutamate) or Inhibitory (GABA). There are many other transmitters, but most are modulatory.
    • Inhibitory plasticity rules are not well known (people have tried to study this, but it seems to be different all over the place).
Dendrites
  • These can form a structure much like a feed-forward neural network (a toy two-layer sketch follows this list).
  • Regulated by tons of different channels
    • Voltage gated channels can create non-linearities. 
    • Dendrite 'weights' can be modified by other channels - Kv4.2.
  • The spatial arrangement of dendrites even has some temporal computational aspects.
    • Some retinal ganglion cells can detect motion direction due to differences in the activation of their dendrites.
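
A sketch of that two-layer picture (in the spirit of Poirazi and Mel's models): each branch applies its own sigmoid nonlinearity to its summed synaptic input, and the soma sums the branch outputs. The weights, threshold, and input patterns are all invented for illustration:

```python
import numpy as np

# Two-layer "dendrite network": each branch is a nonlinear subunit, and the
# soma linearly sums the branch outputs. Clustered input onto one branch
# drives the cell harder than the same input scattered across branches.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_branches, syn_per_branch = 4, 8

def neuron_output(active):
    """active: (n_branches, syn_per_branch) matrix of 0/1 synapse activity."""
    branch_drive = active.sum(axis=1)          # synapses sum within a branch
    branch_out = sigmoid(branch_drive - 4.0)   # local dendritic nonlinearity
    return branch_out.sum()                    # soma sums the branches

clustered = np.zeros((n_branches, syn_per_branch)); clustered[0, :] = 1
scattered = np.zeros((n_branches, syn_per_branch)); scattered[:, :2] = 1
print(f"8 synapses clustered: {neuron_output(clustered):.2f}")
print(f"8 synapses scattered: {neuron_output(scattered):.2f}")
```

Same total input, very different output - the kind of computation a point-neuron model misses.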
Neurons
  • Pyramidal cells are the main neurons responsible for representation of state - they dominate cortex at roughly 80% of all cells. Pyramidal cells in cortex typically have two big branches of dendrites - basal and apical.
  • FS interneurons multiplicatively inhibit pyramidal cells. Responsible for controlling the gain of the system.
  • Lots of other inhibitory interneuron types - anywhere between 5 and 30 different types depending on where you look. No one really knows quite what they are for.
    • There are additive inhibitory interneurons (these are probably for making the equivalent of negative synaptic weights).
    • There are multiplicative interneurons (these are probably for gain control)
    • There are probably so many types of interneurons because they are combinatorially covering different functions: they target different output branches, they add or multiply, and they get inputs from different areas (a toy add/mult sketch follows this list).
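
The add/mult distinction in one toy rate function (the rectified-linear form is a common modeling convenience, not a biophysical claim):

```python
import numpy as np

# Additive inhibition shifts a neuron's input-output curve sideways;
# multiplicative (gain) inhibition scales its slope.
def rate(drive, gain=1.0, offset=0.0):
    return gain * np.maximum(0.0, drive - offset)

drive = np.linspace(0, 2, 5)
print("drive:          ", drive)
print("baseline:       ", rate(drive))
print("additive inh.:  ", rate(drive, offset=0.5))  # threshold shifts right
print("multiplicative: ", rate(drive, gain=0.5))    # slope (gain) halves
```
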
Layers
  • Organizing principle of the brain and its connections.
  • Axons branch in specific layers, dendrites branch in specific layers. Those that branch together are likely connected with each other.
  • Thalamus enters cortex through Layer IV - this targets a specific dendritic tree (feed-forward).
  • Feedback connections from higher-level cortical areas come in through Layer I. These target the apical branches.
  • Recurrent connections also branch out into different layers, targeting specific trees.
  • Interneuron connections also organized by layers. Used to target specific trees, as well as the soma and axon. Interneurons that target soma/axon can inhibit neurons to the extent where they can be completely shut off.
Thalamo-Cortical Loops
  • There is extensive feedback between cortex and thalamus at all levels. Thalamus is the first data layer of input to the brain - it maintains a raw representation of the input signals.
    • Low-dimensional, overcomplete.
    • LGN (vision) is a copy of retina.
  • Cortex is the model, and is constantly trying to predict what thalamus will look like. The difference between cortex's prediction of the thalamic state and the actual thalamic state will be a key signal for a learning rule (a toy version follows this list).
    • This is like the beginning of an RBM - LGN is the visual data, and V1 is the first layer. 
  • When cortex and thalamus are getting inputs, cortex is learning the statistics of its inputs.
  • During sleep (or something analogous), cortex is doing inverse learning - it is remapping its feature representations back to thalamus. Each neuron in cortex essentially knows what it is representing through this inverse learning.
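
Here is a cartoon of that prediction-error idea under a deliberately crude assumption: the cortical state h is simply handed to us, cortex predicts the thalamic state through a linear generative map W, and the mismatch drives a delta-rule update. This is not a claim about the brain's actual rule; it just shows the cortex-minus-thalamus difference training a generative mapping:

```python
import numpy as np

# Toy prediction-error learning: cortical activity h predicts the thalamic
# state v through generative weights W; the mismatch (v - W h) drives a
# delta-rule update that shrinks the prediction error over time.
rng = np.random.default_rng(0)
n_thal, n_ctx, lr = 20, 10, 0.05
W = 0.1 * rng.normal(size=(n_thal, n_ctx))   # generative (feedback) weights
basis = rng.normal(size=(n_thal, n_ctx))     # true mixing of hidden causes

for step in range(1501):
    h = rng.normal(size=n_ctx)               # hidden causes / cortical state
    v = basis @ h                            # thalamic input they generate
    err = v - W @ h                          # prediction error at thalamus
    W += lr * np.outer(err, h)               # move W to reduce the error
    if step % 500 == 0:
        print(f"step {step}: |error| = {np.linalg.norm(err):.3f}")
```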

Tuesday, June 26, 2012

What is Cortex?

When we look at the brain, especially the human brain, what we see is something people call "isocortex". Isocortex became the name for cortex because it essentially looks the same all throughout the brain. What is remarkable about cortex, however, is the vast diversity of the functions it implements. Cortex handles all of the sensory processing - vision, audition, somatosensation, taste, smell, proprioception (where your muscles are), interoception (what the rest of your body is telling you). And cortex does all the high-level stuff - planning, speaking, thinking, decision-making.

Considering all of these functions and the general uniformity of Cortex, we can see that Cortex is general-purpose and extremely flexible. So what is it doing? What is the common functionality that could be used for so many diverse tasks? I think the answer is that Cortex is building a generative model of its inputs.

A generative model is a system which takes in inputs, builds a parameterized model of those inputs, and reproduces the inputs from the parameters. The reproduction of the inputs is key, as we can use the differences between the true inputs and the reproduced inputs as part of a learning rule. This is very analogous to RBMs, as RBMs essentially build a parameter set and try to regenerate the inputs. The weights of the RBM are modified based on the differences between the true inputs and the generated inputs. Cortex is in many ways a much more powerful generalization of RBMs.
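
Since the analogy leans on RBMs, here is a minimal binary RBM trained with one step of contrastive divergence (CD-1) - the textbook recipe, with toy two-prototype data and bias terms omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
sample = lambda p: (rng.random(p.shape) < p).astype(float)

# CD-1 pushes the weights toward the data statistics and away from the
# model's own reconstructions: learning from the true-vs-generated gap.
n_vis, n_hid, lr = 12, 4, 0.05
W = 0.01 * rng.normal(size=(n_vis, n_hid))

# Toy data: noisy copies of two complementary binary prototypes.
protos = np.array([[1] * 6 + [0] * 6, [0] * 6 + [1] * 6], dtype=float)
data = np.abs(protos[rng.integers(0, 2, 200)] - sample(np.full((200, 12), 0.05)))

for epoch in range(20):
    err = 0.0
    for v0 in data:
        ph0 = sigmoid(v0 @ W)                   # hidden probabilities from data
        v1 = sample(sigmoid(W @ sample(ph0)))   # one-step reconstruction
        ph1 = sigmoid(v1 @ W)
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))  # CD-1 update
        err += np.abs(v0 - v1).mean()
    if epoch % 5 == 0:
        print(f"epoch {epoch}: mean reconstruction error {err / len(data):.3f}")
```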

Now, Cortex isn't exactly the same throughout the human brain. I'm currently very interested in the evolution of Cortex, as it appears that throughout the course of evolution Cortex has been slightly modified and specialized for its various tasks. I think a plausible story of how the brain evolved is that a simple, very general Cortex emerged and was extremely useful. Quickly it expanded to handle processing of all types of sensory modalities and other functions. Evolution is very good at copying structures and utilizing them for other tasks. Once Cortex started being used for all of these different purposes, it started specializing. Evolution figured out tricks to make the different areas of Cortex more efficient - allowing for more cortex and more processing power. It also figured out ways to keep Cortex stable and ways to have Cortex build its generative model more quickly and efficiently. Consider V1 as the first feature-level representation of the world. Neurons in V1 are known to be selective for orientations and edges - this is the first level of a generative model of a 3D world. There is evidence, however, that V1 can produce these features without the need of any inputs. I think evolution has set the parameters of V1 such that its initial conditions are close to correct for making a 3D model, which would make cortex look the same even without any input. However, it is likely that these neurons' feature selectivity is very unrefined, and V1 almost surely needs inputs to be as accurate as it is. Evolution has just figured out a way to get V1's initial conditions close to the ideal local minimum, so that it quickly settles into the correct format and doesn't get stuck in a different local minimum.

Monday, June 25, 2012

Computational Dynamical Systems

The over-arching theory of the whole brain is something I like to call "Computational Dynamical Systems". CDS is essentially the theory of how computation can be done with systems of differential equations. Everything about the brain can be modeled as a system of differential equations - this can be taken down to the smallest details, where we could model the states of every atom and channel. However, we won't need to go into that much detail. In the theoretical direction, we need to explore how you put together systems of differential equations in order to do computation.

There are certain concepts from dynamical systems that will be important in engineering computational dynamical systems. Computation requires a fine balance of reproducibility and malleability. We must be able to produce the same outputs for the same inputs; however, if the input information is changed by only a single bit, the system must be able to transition to a completely different state. Local minima will be the driving causes of these states, and local maxima will define the transitions between states. To be clear, local minima and maxima do not need to be isolated points - I would like to consider stable limit cycles as local minima, and unstable limit cycles as local maxima as well.

If you consider the energy metaphor for dynamical systems - something like a marble rolling around a vast landscape of mountains and valleys (in high-dimensional space) - then we can begin to draw a picture of how computation may be performed by dynamical systems. The state of the system is the location of the marble. Computational time is the time it takes for the marble to settle into a local minimum (or stable limit cycle). But the actual computation has to be done by changing the landscape. When these systems are doing computation, they are not directly changing the state of the system (we do not directly place the marble in certain locations). Rather, they are changing the landscape, and the marble rolls down the hill to a new state. The landscape is defined by tons of parameters, and setting these parameters and how they change with different inputs drives the computational process - programming these systems is setting these parameters. So if we want to transition to a new state, we must alter the landscape to form a new local minimum and make sure there is a gradient that drives the marble from its current state to that new minimum.
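
A one-dimensional sketch of the marble picture: the state follows gradient descent on a double-well energy E(x) = x^4/4 - x^2/2 - c*x, where the "input" c tilts the landscape. Computation is done by changing c; the state just rolls. The particular energy function is an arbitrary illustrative choice:

```python
# Marble-on-a-landscape dynamics: dx/dt = -dE/dx for the double-well
# energy E(x) = x**4/4 - x**2/2 - c*x. Tilting the landscape (changing c)
# can delete a minimum, forcing the state to roll to the other one.
def dEdx(x, c):
    return x**3 - x - c

def settle(x, c, dt=0.01, steps=5000):
    for _ in range(steps):
        x -= dt * dEdx(x, c)   # roll downhill
    return x

x = settle(0.1, c=0.0)         # starts near the barrier, falls into a well
print(f"c =  0.0: marble settles at x = {x:.2f}")
x = settle(x, c=-0.8)          # tilt: the right-hand well disappears
print(f"c = -0.8: marble rolls to  x = {x:.2f}")
```

At c = 0 the marble settles in the right well at x = 1; tilting to c = -0.8 removes that well in a saddle-node bifurcation, and the state has no choice but to roll to the remaining minimum near x = -1.3.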

Now the brain is a specific implementation of a computational dynamical system. The generalization of the theory leads to such a vast parameter space and so many possibilities that it is almost impossible to consolidate everything into a unified theory of CDS. For scope, every cell in the body is also a computational dynamical system: the DNA and protein networks that exist within all of your cells can be thought of in this light too. The mechanisms and parameter space of that system could be vastly different from those of the brain, but each would fall under the realm of CDS theory. It will be important to keep these concepts in the back of our minds and try to make some theoretical progress, but this could be extremely hard and maybe even impossible due to the arbitrary nature of computation.