Sunday, March 31, 2013

Differential signaling via the same axon of neocortical pyramidal neurons

Markram, H., Wang, Y., & Tsodyks, M. (1998). Differential signaling via the same axon of neocortical pyramidal neurons. PNAS 95: 5323-5328.

Somatosensory slices in rats, whole-cell triple recordings. They model facilitation with the parameter u, which increases by U * (1 - u) at each presynaptic spike and decays back to its equilibrium value U between spikes. This gives them facilitation, but it's only truly facilitation over some regime.
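The update rule for u can be sketched in a few lines. This is a minimal sketch of just the facilitation variable, not the full model; the parameter values and the name tau_f (the facilitation decay time constant) are illustrative choices, not numbers from the paper.

```python
import math

# Sketch of the facilitation variable u: at each presynaptic spike u jumps
# by U * (1 - u); between spikes it decays back toward its resting value U
# with time constant tau_f. Parameter values here are illustrative.

def run_u(spike_times, U=0.1, tau_f=0.5):
    """Return the value of u just before each spike in the train."""
    u = U
    us = []
    t_prev = None
    for t in spike_times:
        if t_prev is not None:
            # exponential decay of u back toward U over the inter-spike interval
            u = U + (u - U) * math.exp(-(t - t_prev) / tau_f)
        us.append(u)
        u = u + U * (1.0 - u)   # spike-triggered facilitation step
        t_prev = t
    return us

# a 20 Hz train facilitates: u climbs above its resting value U
train = [i * 0.05 for i in range(10)]
print(run_u(train))
```

Because each jump is proportional to (1 - u), u saturates below 1, which is one reason the behavior is "only truly facilitation over some regime."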

L5 tufted pyramidal-to-pyramidal connections depressed, with different values across pairs. The same pyramidal neuron could show depression onto another pyramidal cell and facilitation onto a bipolar cell.

Different pyramidal cells show slightly different types of facilitation onto the same interneuron:
He talks about super-linear, linear, and sub-linear regimes for the dynamics, which I think is a consequence of his model; the data somewhat show that too.

So, yeah, in his model u goes through this sigmoidal shape as a function of the presynaptic firing rate. This causes different levels of current build-up -- that's his super-linear/linear/sub-linear (SLS) distinction. By SLS he just means the efficacy of each PSP, not the total integral. His model looks quadratic over some regime, depending on all the parameters.
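The sigmoidal dependence of u on rate can be made concrete with a closed-form steady state for a periodic spike train. This is my own derivation from the update rule above (jump by U*(1-u), decay back to U with time constant tau_f), not a formula quoted from the paper, and the parameter values are illustrative.

```python
import math

# Hedged sketch: fixed point of the facilitation variable u under periodic
# stimulation at rate r. Derived from the assumed update u -> u + U*(1-u)
# at each spike and exponential decay back to U between spikes.

def u_steady(rate, U=0.05, tau_f=0.5):
    """Steady-state value of u just after a spike, for a periodic train."""
    x = math.exp(-1.0 / (rate * tau_f))     # decay factor over one interval
    u_minus = U / (1.0 - (1.0 - U) * x)     # fixed point just before a spike
    return u_minus + U * (1.0 - u_minus)    # value just after the jump

# u_steady rises sigmoidally with rate, from U at low rates toward 1 at
# high rates -- one way to read the super-linear/linear/sub-linear regimes.
for r in (1, 5, 20, 100):
    print(r, round(u_steady(r), 3))
```

The derivation: with x = exp(-1/(r*tau_f)), the pre-spike fixed point satisfies u = U + u*(1-U)*x, giving u = U / (1 - (1-U)*x).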

Friday, March 29, 2013

The temporal side of the rate model

So the spiking model derived in the gain paper is now basically capable of fully implementing the rate models, but there is a new dimension: all of the time constants that can be altered without changing the relationship between rate models and spiking models -- there is always some counterbalance to a time constant; decreasing tau could be compensated by increasing w.

So that opens up a lot of possibilities about how the time constants relate to all kinds of neuronal phenomena, and what that could mean for what a rate model would even mean in spiking neurons. Adaptation is a nice illustration: on the surface it seems like a source of variability that would break the rate model, but it can actually stay consistent within rate-model terms. I(t) is a temporal variable, and our model was just the simplest equilibrium direct model -- there are a bunch of ways to maintain the linearity. The spiking-current relationship can stay linear while varying through time. The translation of current to spikes is essentially like running a filter over a temporal signal, which is actually just a linear transformation. It's kind of like the complex plane -- the rate model is like the real-valued numbers, but that is just a single line among an infinitely large set of other lines, and that larger set is the spiking model.
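One way to make the tau/w counterbalance concrete: for an exponential synaptic filter, the quantity a rate model "sees" per spike is the kernel's integral, w * tau, so halving tau while doubling w changes the temporal shape but not the rate-model weight. This is a toy numerical check, not code from the gain paper.

```python
import math

# Toy check of the tau/w trade-off: the kernel w * exp(-t/tau) integrates
# to w * tau, so a fast strong synapse and a slow weak synapse can carry
# the same rate-model weight while having different kinetics.

def kernel_integral(w, tau, dt=1e-4, t_max=5.0):
    """Numerically integrate the exponential kernel w * exp(-t/tau)."""
    total, t = 0.0, 0.0
    while t < t_max:
        total += w * math.exp(-t / tau) * dt
        t += dt
    return total

fast = kernel_integral(w=2.0, tau=0.05)   # fast, strong synapse
slow = kernel_integral(w=1.0, tau=0.10)   # slow, weak synapse
print(fast, slow)  # both ~0.1: same rate-model weight, different kinetics
```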


short-term plasticity: I have this idea about how to implement normalization with STP; I've gone pretty far on it already. It's coming.


long-term plasticity: at this scale we have to ask what exactly we are trying to learn. This is a level above the general population code, and onto the specifics of what exactly the population code is representing or learning. The P-cells are just one of a small set of sensory neurons that all form synapses with some population of interneurons that likely greatly overlaps. The entire sensory environment seems to be projected as a vector in a high-dimensional space onto these interneurons (like in the leech), which represent all the information in a population code. A particular sensory dimension does not point down a particular neuron's axis, but along a linear combination of multiple neurons. In an almost orthogonal direction (or perhaps in a warped subspace or something?) are vectors that represent the other dimensions of the sensory world. The various weights of the projection are tuned to send the population in one direction in a high-dimensional space. LTP sets up all of these weights, and communication between neurons wires the circuit up to form these population representations. What do simple rules like STDP mean in these spiking models? How do the neurons differentiate themselves to form a full basis of the information?
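The projection picture can be sketched numerically. Everything here is hypothetical and illustrative -- the cell counts, the random weight matrix standing in for LTP-tuned weights, and the stimulus labels are my own stand-ins, not data.

```python
import numpy as np

# Hypothetical sketch: a few sensory cells (like the leech P-cells)
# project a stimulus onto a larger pool of interneurons through a weight
# matrix W. Distinct sensory dimensions end up as distinct, roughly
# orthogonal directions in interneuron space, not single-neuron axes.

rng = np.random.default_rng(0)
n_sensory, n_inter = 4, 50
W = rng.normal(size=(n_inter, n_sensory))   # stand-in for LTP-tuned weights

touch = np.array([1.0, 0.0, 0.0, 0.0])      # one sensory dimension
shock = np.array([0.0, 1.0, 0.0, 0.0])      # another sensory dimension

pop_touch = W @ touch                        # population vector for touch
pop_shock = W @ shock                        # population vector for shock

# random high-dimensional projections are nearly orthogonal
cos = pop_touch @ pop_shock / (np.linalg.norm(pop_touch) * np.linalg.norm(pop_shock))
print(round(float(cos), 3))
```

The near-zero cosine is the point: each sensory dimension gets its own direction in the population space even though no single interneuron codes for it.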


neuromodulators: at an intermediate scale, neuromodulators may play a role in maintaining the long-term and slower kinetics of the nervous system (all while maintaining the rate model). It is really interesting how many CPGs are modulated by neuromodulators; perhaps there is a high-dimensional neuromodulator space that can activate many different types of CPGs. The CPGs likely use the neuromodulators as a positive feedback loop to maintain the patterns over longer, but also adjustable, time scales. If dopamine activates swimming, then there is likely a dopamine signal that could get regulated to adjust the length of the swim. I think the CPGs are wired up in feedback circuits and perhaps have intrinsically rhythmic properties to generate the temporal patterns of the rhythms (I guess transmitter kinetics is definitely another time scale). But these longer, 10s-to-100s-of-seconds time scales seem to be controlled by positive feedback loops with neuromodulators. These could have their own slower kinetics that drive seconds-level changes in behavioral transitions, and can obviously be modulated. The neuromodulators are just altering the population code and the transformation of the sensory world into behavioral output. If the sensory cells activated the interneuron population code to point in the direction that means "you just got shocked! start swimming!", then the synapses from the interneurons to the CPG/motor neurons would activate the swim CPG. With no feedback, the sensory shock could set up a positive dopamine feedback loop that could take seconds to decay -- the sensory population code can settle back to its current sensory environment (since it is no longer being shocked, it will no longer be in the shock basin), and the pattern generator can just continue.
The sensory environment can shut down the pattern generator if it wants, or leave it on if it needs by maintaining the neuromodulation -- and by direct shutdown of the motor neurons if it needs to end the pattern faster.
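The feedback-loop idea above can be sketched as a toy two-variable system. All of this is hypothetical: the "dopamine" and "swim drive" variables, the time constants, and the feedback gain are illustrative choices, just to show a loop that outlasts its stimulus by seconds and then decays on its own.

```python
# Toy sketch of a neuromodulator positive-feedback loop: a brief sensory
# shock kicks a "dopamine" variable d, swim drive s follows d, and s feeds
# back onto d. With a feedback gain below the runaway threshold, the loop
# outlasts the 1 s stimulus by several seconds before decaying.

def simulate(t_end=30.0, dt=0.01, tau_d=3.0, tau_s=0.5,
             feedback=0.25, shock_end=1.0):
    d, s = 0.0, 0.0
    swim_trace = []
    t = 0.0
    while t < t_end:
        shock = 1.0 if t < shock_end else 0.0
        # dopamine: driven by the shock and by swim-activity feedback
        d += dt * (-d / tau_d + shock + feedback * s)
        # swim drive: follows dopamine with faster kinetics
        s += dt * (-s / tau_s + d)
        swim_trace.append(s)
        t += dt
    return swim_trace

trace = simulate()
# swimming peaks around the end of the shock and decays over many seconds
print(max(trace), trace[-1])
```

The design choice is that the feedback gain keeps the loop's slowest eigenvalue negative: the pattern persists on an adjustable, seconds-long time scale but does not lock on forever, matching the idea that the sensory environment can shut it down or sustain it.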

So the interganglionic connections would likely be mainly sensory pop-code to sensory pop-code, and CPG/motor to CPG/motor. Perhaps sensory-pop to CPG, but less CPG to sensory-pop. If there were feedback connections from CPG to sensory-pop, it seems like they would just be for something like a reference signal -- perhaps a predictive signal that could even modulate the sensory-pop, but likely would not drive it. It would be a nice signal to have for adjusting the sensory feedback and closing the loop.

Wednesday, March 27, 2013

Olshausen on vision

https://redwood.berkeley.edu/bruno/papers/CNS2010-chapter.pdf

Monday, March 18, 2013

http://www.nature.com/nmeth/journal/vaop/ncurrent/full/nmeth.2434.html

Thursday, March 14, 2013

Newsome and free will

Bill Newsome gave a talk today about his ideas on free will, and I thought they were quite good. The overall premise was that he doesn't quite like the notion of bottom-up determinism as the be-all and end-all of free will -- he didn't say that bottom-up determinism was incorrect, but that the problem doesn't just end there; there is more to it.

In the end, his rationale was based on emergence. Free will only exists above a certain level of a physical causal hierarchy. Quantum mechanics is at the bottom, and ultimately all phenomena are derived from the laws of quantum mechanics, but important laws come out at each level. These laws carry important information and cannot be seen through a lens that just focuses on the pieces. The organization of the pieces, together with the rules of the pieces, creates the rules at a higher level. The organization has information that shapes the rules, and thus without the information about the organization the pieces truly do not sum to the whole. Each higher level is defined by the pieces from a lower level, and each piece follows its own set of rules. Only when the pieces are organized correctly do they produce rules at the higher level. A lion can kill me; the pieces of the lion cannot, unless they are organized into a lion.

So the true definition of free will shouldn't be looked at as something that arises from the laws of physics -- there is a certain level in the hierarchy below which free will becomes meaningless. Free will exists at a higher level. To have free will one must have beliefs and thoughts and aspirations, and then be able to act on those things. These are rules at a higher level of the hierarchy, and yes, these things are still fully determined by the laws that govern the lowest level, but a meaningful definition of free will can come out of this distinction between different levels of causality. This gives some form of free will in a legal/moral sense, although it is still hard to draw a solid line.

Causality and complexity then arise from a hierarchical structure in dynamical systems. At the lowest level, let's just say it's atoms, there are a few equations that govern all of the atoms. These are essentially differential equations, and thus for each atom there is a state space determined by the differential equations and the initial conditions. The initial conditions in a way describe the information about the organization of all the pieces. All the atoms in the Universe form a vast high-dimensional dynamical system which ultimately produces everything else. However, there are lower-dimensional sub-structures which share properties -- i.e., the full system's dimensionality can be reduced in a way that describes a sub-system that behaves as if it has its own set of rules.

So the higher levels share a commonality because they have lower-dimensional projections from the higher-dimensional space of the level below that essentially look the same. In the high-dimensional space they are always separated, but there is a low-dimensional projection where the same rules come out that govern the system. The software of a computer is information about a particular program. At the low level, copying software from one computer to another is fundamentally different -- different transistors are processing the information. So all of the transistors in the world make up the full transistor space, but their dynamics can be projected onto a common lower-dimensional subspace that looks the same. The information of the software determines the system, and the low-dimensional projection has meaningful rules.
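The software analogy can be sketched as a toy linear system: the same 2-D "software" (here a rotation) embedded in two different high-dimensional "hardware" state spaces. The full trajectories differ, but each projects down to the identical low-dimensional rule. Everything here is an illustrative construction, not a claim about real computers or brains.

```python
import numpy as np

# Toy sketch of the projection idea: one 2-D rule (a rotation) embedded
# in two different 10-D state spaces via random orthonormal embeddings.
# The high-dimensional states diverge, but their low-dimensional
# projections follow exactly the same dynamics.

rng = np.random.default_rng(1)

theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # shared 2-D "software"

def embedding(n=10):
    """Random orthonormal map from 2-D 'software' into n-D 'hardware'."""
    Q, _ = np.linalg.qr(rng.normal(size=(n, 2)))
    return Q

E1, E2 = embedding(), embedding()                  # two different "computers"
A1 = E1 @ R @ E1.T                                 # 10-D dynamics, machine 1
A2 = E2 @ R @ E2.T                                 # 10-D dynamics, machine 2

x1 = E1 @ np.array([1.0, 0.0])                     # same 2-D initial state,
x2 = E2 @ np.array([1.0, 0.0])                     # embedded differently

for _ in range(50):
    x1, x2 = A1 @ x1, A2 @ x2

# high-dimensional states differ, low-dimensional projections agree
print(np.allclose(E1.T @ x1, E2.T @ x2))
```

Because the embeddings are orthonormal, E1.T @ x1 and E2.T @ x2 both equal R**50 applied to the initial 2-D state: the "program" is the same even though no transistor-level state is shared.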