Friday, December 12, 2014

Microsoft Research Talk


Check out this video of my talk on my thesis work at Microsoft Research!

Actually, this video doesn't have the slides. For a version with the accompanying slides, check out the video here.

Monday, November 3, 2014

Neural Turing Machines

Graves, A., Wayne, G., Danihelka, I. (2014) Neural Turing Machines. ArXiv

Paper from DeepMind.

"Computer programs make use of three fundamental mechanisms: elementary operations
(e.g., arithmetic operations), logical flow control (branching), and external memory, which
can be written to and read from in the course of computation (Von Neumann, 1945)"

"Recurrent neural networks (RNNs) stand out from other machine learning methods
for their ability to learn and carry out complicated transformations of data over extended
periods of time. Moreover, it is known that RNNs are Turing-Complete (Siegelmann and
Sontag, 1995), and therefore have the capacity to simulate arbitrary procedures, if properly
wired."

"While the mechanisms of working memory remain somewhat
obscure at the level of neurophysiology, the verbal definition is understood to mean
a capacity for short-term storage of information and its rule-based manipulation (Baddeley
et al., 2009 [this is a textbook]). In computational terms, these rules are simple programs, and the stored
information constitutes the arguments of these programs. Therefore, an NTM resembles
a working memory system, as it is designed to solve tasks that require the application of
approximate rules to “rapidly-created variables.” Rapidly-created variables (Hadley, 2009)
are data that are quickly bound to memory slots, in the same way that the number 3 and the
number 4 are put inside registers in a conventional computer and added to make 7 (Minsky,
1967). An NTM bears another close resemblance to models of working memory since the
NTM architecture uses an attentional process to read from and write to memory selectively."


"Fodor and Pylyshyn also argued that neural networks with fixedlength
input domains could not reproduce human capabilities in tasks that involve processing
variable-length structures. In response to this criticism, neural network researchers
including Hinton (Hinton, 1986), Smolensky (Smolensky, 1990), Touretzky (Touretzky,
1990), Pollack (Pollack, 1990), Plate (Plate, 2003), and Kanerva (Kanerva, 2009) investigated specific mechanisms that could support both variable-binding and variable-length
structure within a connectionist framework."

"A crucial innovation to recurrent networks was the Long Short-Term Memory (LSTM)
(Hochreiter and Schmidhuber, 1997). This very general architecture was developed for a
specific purpose, to address the “vanishing and exploding gradient” problem (Hochreiter
et al., 2001), which we might relabel the problem of “vanishing and exploding sensitivity.”"



So the idea is that there is a memory matrix, called M_t, which a neural network is effectively using as RAM. M_t is NxM, where N is the number of memory locations, and M is the vector size of each location.

It seems like they are using the semantic pointer ideas of Eliasmith, and they cite his book.

w_t is a vector of weightings over the N locations, normalized so its elements sum to 1.
r_t is a length-M read vector.

To write to memory, there are two steps: an erase step and an add step.
e_t is a length-M erase vector (elements in (0, 1)).
a_t is a length-M add vector.
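
Here's a minimal NumPy sketch of the read and erase/add write operations, using the shapes defined above (variable names mirror the paper; the concrete values are mine):

import numpy as np

N, M = 128, 20                          # N locations, each a length-M vector
M_t = np.zeros((N, M))                  # the memory matrix
w_t = np.zeros(N); w_t[3] = 1.0         # weighting over locations (sums to 1)

# Reading: r_t is the w_t-weighted sum of the memory rows
r_t = w_t @ M_t                         # length-M read vector

# Writing: erase, then add
e_t = np.full(M, 0.5)                   # erase vector, elements in (0, 1)
a_t = np.ones(M)                        # add vector
M_t = M_t * (1 - np.outer(w_t, e_t))    # row i scaled by (1 - w_t[i] * e_t)
M_t = M_t + np.outer(w_t, a_t)          # row i incremented by w_t[i] * a_t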


They employ both content-based and location-based addressing. They say content-based addressing is the more general of the two (a key could in principle encode location information), but they provide location-based addressing as well, for simplicity and for generalization (e.g., iterating over consecutive locations).

To focus by content, each read/write head produces a length-M key vector k_t that is compared to each row M_t(i) by a similarity measure (cosine similarity). \beta_t is a key strength that sharpens the focus.
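
A sketch of that content-based focusing, as I read the paper's equations (cosine similarity passed through a beta-sharpened softmax):

import numpy as np

def content_weighting(M_t, k_t, beta_t):
    # Cosine similarity between the key k_t and every memory row M_t(i)
    K = (M_t @ k_t) / (np.linalg.norm(M_t, axis=1) * np.linalg.norm(k_t) + 1e-8)
    # beta_t sharpens the focus; the softmax normalizes the weighting to sum to 1
    w = np.exp(beta_t * K)
    return w / w.sum()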


For the location-based addressing, the idea is that the weighting goes through a rotational shift. So if a weighting were focused solely on one memory location (i.e., w = [0 1 0 0 ... 0]), then a rotation of 1 would move it to the next location (i.e., w = [0 0 1 0 ... 0]).
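
This is implemented as a circular convolution of w with a shift distribution s_t over the allowed offsets; a quick sketch:

import numpy as np

def shift_weighting(w_t, s_t, shifts=(-1, 0, 1)):
    """Circular convolution of the weighting with a distribution s_t
    over the allowed rotational shifts."""
    w_t = np.asarray(w_t, dtype=float)
    w_shifted = np.zeros_like(w_t)
    for shift, prob in zip(shifts, s_t):
        w_shifted += prob * np.roll(w_t, shift)
    return w_shifted

# All mass on shift +1 moves the focus to the next location:
# shift_weighting([0, 1, 0, 0], [0, 0, 1]) -> [0, 0, 1, 0]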


Ok, so a neural network controller drives the addressing. They compare an NTM with a feed-forward controller, an NTM with an LSTM controller, and a standalone LSTM without external memory.

The first task is a copy task. Trained by gradient descent, the network appears to have learned to implement the following algorithm:

"We believe that the sequence of operations performed by the network can be summarised by the following pseudocode:

initialise: move head to start location
while input delimiter not seen do
  receive input vector
  write input to head location
  increment head location by 1
end while
return head to start location
while true do
  read output vector from head location
  emit output
  increment head location by 1
end while

This is essentially how a human programmer would perform the same task in a low-level programming language."
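
Just to make the control flow concrete, here is a literal Python rendering of that pseudocode against an explicit memory array (my own hypothetical reconstruction; the real NTM performs these steps with soft attention, not hard indexing):

import numpy as np

def copy_task(inputs, delimiter, n_locations=128, width=8):
    """Hypothetical replay of the learned copy algorithm with an
    explicit memory array."""
    memory = np.zeros((n_locations, width))
    head = 0                                 # move head to start location
    for x in inputs:                         # while input delimiter not seen
        if np.array_equal(x, delimiter):
            break
        memory[head] = x                     # write input to head location
        head += 1                            # increment head location by 1
    n_items, head = head, 0                  # return head to start location
    outputs = []
    for _ in range(n_items):                 # the paper's "while true" loop,
        outputs.append(memory[head])         # terminated here for runnability
        head += 1                            # increment head location by 1
    return outputs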

They next perform a repeat-copy task, where the NTM is shown a pattern and asked to repeat it a given number of times. The NTM does much better than the LSTM.


[I'm wondering about this weighting matrix. In this case they seem to be using an essentially binary weighting, accessing a single row of M with each read or write. But the weighting could be any vector, in which case M could hold a distributed memory.]


The next task (associative recall) requires the network to jump around its memory in order to perform the task (as opposed to stepping through memory sequentially).



"This implies the following memory-access algorithm: when each item delimiter is presented, the controller writes a compressed representation of the previous three time slices of the item. After the
query arrives, the controller recomputes the same compressed representation of the query
item, uses a content-based lookup to find the location where it wrote the first representation,
and then shifts by one to produce the subsequent item in the sequence (thereby combining
content-based lookup with location-based offsetting)."

They then demonstrate a couple more tasks, including a sorting task.




Monday, October 27, 2014

Working Memory Review

Circuits - Seconds

Classics on persistent activity

Fuster, J.M., Alexander, G.E. (1971) Neuron Activity Related to Short-Term Memory. Science 173(3997):652-654.

Recordings in PFC and a thalamic nucleus (nucleus medialis dorsalis). They found some neurons that have persistent activity:

Fig 2. Frequency histograms (0.5-sec bins) of two units (A and B) in PFC during five trials with 18-second delays. Cue presentation periods marked by horizontal lines. Arrows mark the lifting of the blind and unlocking of doors which terminate the delay and immediately precede the animal's response. The two lower excerpts from the unit at left (A) represent tests of stereo tape-recorded cries of monkeys at the time of their daily feeding played back to the experimental animals by means of overhead loudspeakers.


Miller, E.K., Erickson, C.A., Desimone, R. (1996). Neural Mechanisms of Visual Working Memory in Prefrontal Cortex of the Macaque. J. Neurosci. 16(16):5154-5167.

Delayed match-to-sample task. Recorded in PF and IT. More than half of PF neurons show a delay-period activity increase. There are some interesting insights into the encoding scheme -- i.e., using projections of large numbers of neurons to represent the information, as there is a lot of variability in the type of delay activity.





Watanabe, T., Niki, H. (1985). Hippocampal Unit Activity and Delayed Response in the Monkey. Brain Research, 325: 241-254.

Recorded from a bunch of single-units in hippocampus during delay task.

TABLE I
Classification of related hippocampal units during DR

Unit type                                n      %
Cue-light related units                 24    8.8
Cue- and choice-light related units     41   15.0
Choice-light related units              21    7.7
Response-related units                  51   18.8
Reward-error units                      17    6.3
Delay units                            118   43.4
Total                                  272  100.0





Dopamine and working memory



Local depletion of dopamine in primate pfc impairs working memory: Brozoski, Brown, Rosvold, Goldman Science 1979

Dopamine-induced facilitation of NMDA receptor action is mediated by D1 receptors: Cepeda et al. Synapse 1992, Cepeda, Buchwald, Levine, PNAS 1993.

FIG 1. a, Left, schematic sequence of events in each phase of the ODR task (aligned in time with rasters and histograms shown immediately below); right, depiction of the 8 positions of the target (plus the central fixation position, 0) to be remembered for guidance of oculomotor response. b, Effect of SCH 39166 on response of neuron W54 (rastergram above, histogram below; bin, 50 ms) for a target (position 2) in the memory field (left) and for a target (position 7) in nearly the opposite location in space (right). Top two rows, control recording showing significant but weak delay activity. Middle two rows, SCH 39166 (25 nA) produces dramatic enhancement of activity during the delay period when the target is in the memory field but produces an inhibition of activity when the target is in position 7. Bottom two rows, SKF 38393 reverses the effect of SCH 39166 and reduces delay activity to 'background' level. C, cue period; D, delay period; R, response period.

So, this looks like SCH 39166 (D1 antagonist) is enhancing the "gain" of the memory field, while SKF 38393 (partial D1 agonist) reduces/eliminates the delay period activity. 

"It has been shown that D1 receptors are located preferentially on the distal dendrites and spines of pyramidal cells, but D5 receptors can also be observed on spines as well as dendritic shafts": Smiley, Levey, Ciliax, Goldman-Rakic PNAS 1994; Bergson C et al. Soc Neurosci Abstr 1994. (I think this became this paper).

"The enhancement of neuronal activity by D1 antagonists and its reversal by SKF 38393, a D1 agonist, indicates that the normal activiation of dopamine is to constrain neuronal activation during performance of a working memory task. Such an inhibitory role for dopamine is fully in accord with numeraous previous physiological studies [38]. A known mechanism of inhibition by D1 action is the attenuation of a slow inward sodium current which normally supports activation of the cell by excitatory inputs [39]." 38: Thierry, Mantz, Milla, Glowinski, J. Ann. N.Y.Acad.Sci. 1988; 39: Calabresi et al Neuroscience, 1987.

Here's a nice Goldman-Rakic review.



"Specificaly, we argue that working memory requires rapid updating and robust maintenance as achieved by a selective gating mechanism (Braver & Cohen, 2000; Cohen, Braver, & O’Reilly, 1996; O’Reilly, Braver, & Cohen, 1999; O’Reilly & Munakata, 2000)."

Hochreiter and Schmidhuber 1997: LSTM model with gating

Projections from striatum to GPi/SNr, and from GPi/SNr to thalamus, are both inhibitory, and GPi/SNr is tonically active. So striatum inhibits GPi, which inhibits thalamus: striatum disinhibits thalamus.


The basal ganglia are important for initiating the storage of new memories.

Working memory is implemented as attractor states -- in their model, the recurrence supporting the attractor runs from PFC to thalamus and back. [Why not local circuitry?] They then discuss the informational relationship between thalamus and frontal cortex: a purely thalamic loop would need the same order of neurons as frontal cortex, but this is avoided by the recurrent connections within frontal cortex. [Thalamus is a compressed, potentially "blurrier" representation; more information is stored in the recurrent PFC connections.]

"Miller al. (1996) observed tha frontl neurons will end o be activated transintly when irrelevant stimuli are preseted wile mokeys are maintaining other task-relevant stimul." ... "frontal neurons have intrinsic maintenence capabilities." 

Possible mechanisms: Dilmore, Gutkin, & Ermentrout, 1999; Durstewitz, Seamans, & Sejnowski, 2000b; Fellous, Wang, & Lisman, 1998; Gorelova & Yang, 2000; and Lewis & O'Donnell, 2000 show that prefrontal neurons exhibit bistability -- they have up and down states. Other mechanisms take advantage of the properties of the NMDA receptor. In Wang (1999), bistability emerges from interactions between NMDA channels and the balance of excitatory and inhibitory inputs. In Durstewitz's model, dopamine modulates NMDA channels and inhibition to stabilize a set of active neurons.

"To summarize, in our model, active maintenance oper- ates according to the following set of principles. 1. Stimuli generally activate their corresponding frontal representations when they are presented. 2. Robust maintenance occurs only for those stimuli that trigger the intracellular maintenance switch (as a re- sult of the conjunction of external excitation from other cortical areas and Layer 4 activation resulting from basal- ganglia-mediated disinhibition of the thalamocorticalloops). 3. When other stimuli are being maintained, those representations that did not have the intracellularswitch activated will decay quickly following stimulus offset. 4. However, if nothing else is being maintained, recurrent excitation is sufficient to maintain a stimulus untilother stimuli are presented. This “default” maintenanceis important for learning, by trial and error, what it is relevant to maintain."

Striatum: 111 million neurons (Fox & Rafols, 1976)
GPi/SNR: 160 thousand neurons (Lange, Thorner & Hopf, 1976)

"An interesting possible candidate for the regions of the frontal cortex that are independently controled by the basal ganglia are distinctive anatomical structures con- sisting of interconnected groups of neurons, called stripes (Levitt, Lewis, Yoshioka, & Lund, 1993; Pucak, Levitt, Lund, & Lewis, 1996). Each stripe appears to be isolated from the immediately adjacent tissue but interconnected with other more distal stripes, forming a cluster of inter- connected stripes. Furthermore, it appears that connectivity between the PFC and the thalamus exhibits a similar, although not identical, kind of discontinuous stripelike structuring (Erickson & Lewis, 2000). Therefore, it would be plausible that each stripe or cluster of stripes constitutes a separately controlled group of neurons;" -- there could be 20K stripes or 5K stripe clusters.

"One particularly intriguing suggestion is that the convergence of inputs from other frontal areas may be arranged in a hierarchical fashion, providing a means for more anterior frontal areas (which may represent higher level, more abstract task/goal information) to appropriately contextualize more posterior areas (e.g., supplementary and primary motor areas; Gobbel, 1997). This hierarchical structure is reflected in Figure 4."

Comprehensive literature review of FC and BG: Wise, Murray, Gerfen (1996). General theories of Frontal-Basal Ganglia system:
1. Attentional Set Shifting: need FC and BG to dynamically switch between tasks
2. Working Memory: effects of caudate damage on working memory function (Butters & Rosvold, 1968; Divac, Rosvold, & Szwarcbart, 1967; Goldman & Rosvold, 1972).
3. Response Learning: Passingham 1993
4. Supervisory attention: Norman & Shallice, 1986. The supervisory attention system controls action by modulating the operation of the contention scheduling system.
5. Behavior-guiding rule learning: Wise et al. 1996. 

"This notion fits well with the ideas of Fuster (1989), who suggested that the frontal cortex sits at the top of a hierarchy of sensory–motor map- ping pathways, where it is responsible for bridging the longest temporal gaps. This hierarchy can continue within the frontal cortex, with more anterior areas concerned with ever longer time scales and more abstract plans and concepts (see Christoff & Gabrieli, 2000, and Koechlin, Basso, Pietrini, Panzer, & Grafman, 1999, for recent evi- dence consistent with this idea)."

"Perhaps the dominant theme of extant computational models of the basal ganglia is that they support decision making and/or action selection (e.g., Amos, 2000; Beiser & Houk, 1998; Berns & Sejnowski, 1996; Houk & Wise, 1995; Jackson & Houghton, 1995; Kropotov & Etlinger, 1999; Wickens, 1993; Wickens et al., 1995; see Beiser, Hua, & Houk, 1997, and Wickens, 1997, for recent re- views)."

Another relevant Terry paper: Seamans & Sejnowski 2001





Tuesday, October 21, 2014

ICA vs. ROI Signal Extraction


Figure 1: Calculation of Signal-to-Noise Ratio (SNR).
(A) Simultaneous optical and electrophysiological recordings of several cells under several different conditions (pre-synaptic chemical stimulation, pre-synaptic gap-junction stimulation, spontaneous activity, swimming). (B) Raw intracellular voltage trace. (C) The voltage trace is down-sampled to the optical frequency based on a method that mimics the optical sampling mechanism. (D) The raw optical trace is shown in red, and a polynomial is used to fit the optical trace to the ephys trace (dark red). The polynomial removes the bleaching artifact. (E) The fit optical trace is overlaid on the ideal optical trace. (F) The SNR is calculated as the standard deviation of the Ideal trace divided by the standard deviation of the difference between Ideal and Fit (i.e., the residuals).
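
The SNR calculation in (F) reduces to one line; a sketch, assuming ideal and fit are aligned 1-D traces:

import numpy as np

def snr(ideal, fit):
    """SNR = std(Ideal) / std(Ideal - Fit), as in panel (F)."""
    return np.std(ideal) / np.std(ideal - fit)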



Figure 2: Sub-pixel motion correction with ECC.
(A) Raw image of a ganglion stained with VSD, with two ROIs at the edge of the ganglion. (B) Sub-pixel motion artifact is apparent and large compared with VSD signals. (C) Motion artifact can be removed with the ECC image registration algorithm. (D) The registration algorithm performs an affine transformation of the pixels. Each panel corresponds to the warp matrix values over time. (E) The resulting motion artifact that is removed. The separation of blue and black highlights the impact of the skew and rotation values of the affine transformation.
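
OpenCV ships an ECC implementation (findTransformECC); here's a minimal sketch of per-frame affine registration using the OpenCV 4.x signature, not our exact pipeline:

import cv2
import numpy as np

def register_ecc(frames):
    """Register each frame to the first frame with an affine ECC warp.
    frames: float32 array of shape (T, H, W). A sketch, not our exact code."""
    template = frames[0]
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    out = [template]
    for frame in frames[1:]:
        warp = np.eye(2, 3, dtype=np.float32)        # identity affine warp
        cv2.findTransformECC(template, frame, warp, cv2.MOTION_AFFINE,
                             criteria, None, 1)
        h, w = frame.shape
        out.append(cv2.warpAffine(frame, warp, (w, h),
                                  flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP))
    return np.stack(out)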


Figure 3: Comparison of ROI and ICA based signal extraction.
(A) Raw image with red ROI shown during pre-synaptic stimulation. The optical signal is the average of all pixels within the ROI for each frame. (B) ICA pulls out a component, manually selected, coming from the same cell. The ROI is shown on top to compare the localization of the component and the hand-drawn ROI. (C) The Fit trace is overlaid on the Ideal trace for 4 conditions: ROI only, ROI with Motion Correction (ROI+MC), ICA only, and ICA with Motion Correction (ICA+MC). (D) Raw image shown during swim behavior. (E) ICA component. This component shows weight in the bi-lateral pair of cells, as these cells are highly correlated. (F) SNR is compared under the 4 conditions. The 5 mV oscillations are much clearer in the ICA+MC case, even though the SNR increase is fairly small.




Figure 4: SNR comparison of different methods.
(A) The SNR for each method is compared against the ROI method. (B) ICA+MC is compared with ROI+MC as the baseline. (C) The difference in SNR for each trace is plotted as black dots, and the average is plotted as the bar graph. Each method significantly increases SNR (ROI+MC: p = 0.0358, ICA: p = 0.0013, ICA+MC: p=0.0024, t-test). (D) Same as C but with ICA+MC compared to ROI+MC (p=0.004).

Wednesday, August 20, 2014

ICM walkthrough 1

We will illustrate the use and utility of ICM through several example walkthroughs of imaging data analysis. The first data set is calcium imaging from Akinori Mitani in Takaki Komiyama's lab.

This is ICM develop version 11, committed August 20, 2014: 8f6cb87f72cc8922564125cdaf1c29b721a61ff8


Figure 1. ICM screenshot with labeled regions. 

ICM can load data from a .tiff or .mat file using the ICM menu in the interface, or data can be loaded into ICM through the MATLAB functions set_data or add_trial. The data for this example can be found here url:XXX. Data is loaded as a trial, and trials are managed using the trial management list (Fig 1 R2).

The data can be browsed using the ROI Editor (Fig 1 R1), which allows the user to draw manual ROIs and look through the frames of the imaging data. ROIs can be drawn by simply clicking and dragging the mouse on the image in the ROI editor. The signals from manually drawn ROIs are plotted in the Data plot (Fig 1 R6). Multiple sets of both manually and automatically generated ROIs can be managed using the ROI manager (Fig 1 R3).

Each stage of the analysis is set and controlled with a tab menu in the stage tab panel (Fig 1 R4). ICM displays different information depending on which stage of the analysis is being viewed. The current stage of the analysis is displayed beneath the stage tab panel (Fig 1 R5), which also indicates when ICM is busy computing for each stage.

Figure 2: ICM screenshot in pre-processing stage.

In the next step, the preprocessing tab is opened, and the smooth window and down-sampling values are set. The smooth window averages all pixels within a moving MxNxT window (rows, columns, time). The down-sampling keeps only every Mth row, Nth column, and Tth frame, producing a smaller imaging data set. For this example, the original image data is 512x512x1600, but this is too large for the PCA-ICA analysis to be run on a desktop computer. We used a 4x4x2 smooth window and 4x4x2 down-sampling to reduce the image size to 128x128x800. When the "Pre Process" button is pressed, the original data is pre-processed and the result is displayed in the ROI Editor. The original data can still be viewed in the Data tab, and the pre-processing can be removed by clicking the "Reset" button in the Data tab.
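
Conceptually, these two operations are a box smooth followed by strided decimation; a sketch of the equivalent computation (not ICM's code):

import numpy as np
from scipy.ndimage import uniform_filter

def preprocess(data, window=(4, 4, 2)):
    """Moving-average smooth over a (rows, cols, time) window, then keep
    every m-th row, n-th column, and t-th frame."""
    smoothed = uniform_filter(data, size=window)
    m, n, t = window
    return smoothed[::m, ::n, ::t]

# 512x512x1600 -> 128x128x800
# small = preprocess(raw, window=(4, 4, 2))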

Figure 3. ICM screenshot in PCA stage.

Next the principal components are computed. Every pixel in the preprocessed imaging data set is arranged in a matrix, where each row is a pixel and the columns are the values of that pixel over time. When the user clicks the "Run PCA" button, the principal components of this matrix are computed, which decomposes the imaging data into several components, each of which has a “source” and a “score”. Sources are the time series of the extracted components, and the scores indicate the coefficient of the source for each pixel. The scores are rearranged back into an image to produce a “map”, which shows the spatial locations from which the sources are produced. The principal components are typically combinations of cellular signals and do not reveal individual cells.
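
In code, the decomposition amounts to an SVD of the pixels-by-time matrix; a sketch of the source/score/map bookkeeping described above (my reconstruction, not ICM's implementation):

import numpy as np

def pca_decompose(data, n_components):
    """data: (rows, cols, T) movie. Returns sources (n, T) and maps (n, rows, cols)."""
    rows, cols, T = data.shape
    X = data.reshape(rows * cols, T).astype(float)
    X -= X.mean(axis=1, keepdims=True)                # center each pixel's time series
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    sources = Vt[:n_components]                       # component time series
    scores = U[:, :n_components] * S[:n_components]   # coefficient per pixel
    maps = scores.T.reshape(n_components, rows, cols) # scores rearranged into images
    return sources, maps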

The principal components can be browsed in the ROI editor with the PCA tab open. Each frame in the ROI Editor shows the map of a different principal component, and the source of each component is plotted in the Component Plot. Several more example principal components can be seen in Figure 6A. 

Figure 4. ICM screenshot in ICA stage.

To reveal individual cells, the independent components are then computed from a subset of the principal components. Typically the top N principal components are used, where N is slightly larger than the number of neurons being recorded from. This can be set using the “PCs” edit box in the ICA tab (Fig. 4). The ICA algorithm will attempt to find the same number of components as principal components included, so there should be at least as many PCs used as there are cells. Typically, more PCs are needed than cells, because many extracted components correspond to motion, background, bleaching, or other artifacts.

Once the PCs are chosen and "Run ICA" is clicked, ICM computes the independent components using the fastica algorithm. Further changes to the fastica settings can be made through a settings struct that can be changed programmatically (see Documentation), as can the choice of ICA algorithm (infomax and stICA (Mukamel et al., 2009) are also built in). Like PCA, ICA produces sources and scores, where the sources correspond to the independent signals and the scores describe which pixels contribute to each source. The scores are rearranged back into an image to produce a map, and the maps show the spatial locations of the independent components.
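
A sketch of the same computation with scikit-learn's FastICA (an illustrative stand-in for ICM's fastica call), continuing from the PCA sketch above:

import numpy as np
from sklearn.decomposition import FastICA

def ica_decompose(pc_sources, pc_maps, n_pcs):
    """Temporal ICA on the top n_pcs principal components."""
    n, rows, cols = pc_maps.shape
    ica = FastICA(n_components=n_pcs, max_iter=1000)
    # Unmix the PC time courses (T x n_pcs) into independent time courses
    ic_sources = ica.fit_transform(pc_sources[:n_pcs].T).T       # (n_pcs, T)
    # Push the PC maps through the mixing matrix to get the IC maps
    ic_maps = ica.mixing_.T @ pc_maps[:n_pcs].reshape(n_pcs, -1)
    return ic_sources, ic_maps.reshape(n_pcs, rows, cols)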

Several example independent components are shown in Figure 6B and C. The ICA algorithm pulls out both components that are cellular signals (Fig. 6B) as well as components that are artifacts (Fig. 6C). These must be sorted manually using the interface, and components which are artifactual can be removed from further analysis by adding them to the "Remove" edit box. Further post-processing can be performed on the ICs using the tools in the "ICA Post-Processing" box.

Figure 5. ICM screenshot in segment stage.

Regions-of-interest can then be automatically generated from the ICA maps. In the Segment tab, the threshold level and amount of down-sampling are set for the segmentation algorithm. When the "Segment ICs" button is pressed, binary masks are created for each IC, which are displayed in the ROI Editor when the Segment tab is open. Each contiguous region of the binary mask is then matched with the best fitting oval to produce the ROI. The ROIs produced are saved in the ROI manager. 
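
The thresholding and contiguous-region split can be sketched with scipy (the threshold level here is a hypothetical choice, and the oval fitting is omitted):

import numpy as np
from scipy import ndimage

def segment_ic_map(ic_map, thresh_sd=3.0):
    """Threshold an IC map into a binary mask and split it into contiguous
    regions; each region would then be matched with a best-fitting oval."""
    mask = np.abs(ic_map) > thresh_sd * ic_map.std()
    labels, n_regions = ndimage.label(mask)
    return [labels == i for i in range(1, n_regions + 1)]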

ICs can have multiple ROIs because there can be multiple spatially isolated regions for a single IC. In this data, several cell somata are slightly misaligned with the image plane, but their dendritic branches (or axons) are in focus (see IC 26 and 30 at the bottom of Fig. 6B). Because of this, the segmentation algorithm breaks these cells up into multiple ROIs (Fig. 6B, right).


Figure 6. Example Components  

A major advantage of the PCA-ICA extraction is that the algorithm does not depend on spatial localization to extract component signals. Calcium signals from cells or axons that do not have a single localized spatial region would be virtually impossible to extract if only ROIs were used. Because the ICA component decomposition does not depend on spatial localization, clear signals can be extracted from out-of-focus cells or long axons. This suggests that many cells may be missed entirely when relying on ROI methods and that the ICA algorithm can achieve a much higher signal-to-noise ratio in certain imaging conditions. Further, ICA can separate components that have overlapping spatial locations. For example, IC 150 (Fig. 6C) shows clear striations due to artifacts of the imaging acquisition system. These striations cover the entire image, but ICA can separate this artifact from the cellular components because the affected pixels share statistical patterns caused by the artifact. It would be impossible for ROI methods to separate spatially overlapping components.

Figure 7. ICM screenshot in visualization stage.

Finally, visualizations are created in the visualization stage. Two simple visualization systems are built into ICM, and here we illustrate the PCA visualization of the ICA data. The PCA component viewer can be opened in the visualization control tabs (Fig 1. R8), which allows the user to select three principal components to visualize. Each independent component has coefficients in the principal component space, and the user selects which dimensions to use to set the R, G, and B color channels for each component. The component maps are then colored based on the coefficients of the chosen dimensions, and these are overlaid on the image data to create a visualization. The settings of the visualization can be manipulated using the controls in the "Visualization Settings" panel.
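
The coloring reduces to normalizing three chosen PC coefficients per component into RGB channels; a sketch (zero-indexed dims (6, 8, 9) would correspond to PCs 7, 9, and 10):

import numpy as np

def component_colors(coeffs, dims=(6, 8, 9)):
    """coeffs: (n_ics, n_pcs) PCA-space coefficients of each IC.
    Map three chosen PC dimensions onto R, G, B values in [0, 1]."""
    rgb = coeffs[:, list(dims)].astype(float)
    rgb -= rgb.min(axis=0)                    # shift each channel to >= 0
    rgb /= rgb.max(axis=0) + 1e-12            # scale each channel to [0, 1]
    return rgb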

Figure 8. ICM visualization shows IC 9 and 49 are similar.

The visualizations can be useful for rapidly highlighting the functional relationships between neurons. For instance, in Figure 8C we show a visualization of PCs 7, 9, and 10. This reveals two components, 9 and 49, which have the same color. This is reflected in the close proximity of the components in the PCA space (Fig. 8B) and is a consequence of these two components having very similar calcium activity (Fig. 8A).


Tuesday, August 12, 2014

NS&B Lecture


My lecture at NS&B covering PCA, ICA and their application in automatically extracting signals from imaging data.

Friday, July 18, 2014

Swim Oscillator Neuron Review


Collecting papers on identified neurons from previous work.

Briggman, K.L., Kristan Jr., W.B. (2006) Imaging Dedicated and Multifunctional Neural Circuits Generating Distinct Behaviors. J. Neurosci 26(42):10925-10933. 

This is a nice one of CV. You can see it is slightly lagging the DP oscillation, but I'm not entirely sure that the ganglion is 12. My guess is it's 10 and he's recording DP 12, so the DP spikes are a little further behind.

And then he gets some nice simultaneous 208 and 255 recordings. Again, this matches really nicely with my data, as you can see 255 slightly leading 208. This is reflected in 255 being in the red phase and 208 in the purple phase in my data.

And he also recorded from 3 and 4, but these are on the dorsal face.
Weeks, J.C., Kristan Jr., W.B. (1978) Initiation, maintenance and modulation of swimming in the medicinal leech by the activity of a single neurone. J. exp. Biol. 77: 71-88.

Here's the position of 204, which matches nicely


Depolarizing cell 204 leads to the initiation of the swim oscillation.
And then here is 204 during swimming
Ok, so I put 204 in the yellow phase, which I think matches nicely. It is hard to tell because this is DP 12, but you can see how much the phases shift by comparing 204 (10) with 204 (11). The way I have it, 204 is in the yellow phase, which means its depolarization should slightly lead the DP nerve. It's a bit unclear because the duty cycle of 204 is pretty long, which could slightly alter the phase, but I'm fairly happy with it.


Weeks, J.C. (1982) Segmental specialization of a leech swim-initiating interneuron, cell 205. J. Neurosci. 2(7): 972-985.

Another paper from Weeks looking at cell 205, which I don't see how to possibly differentiate from cell 204.

Although, wow, it looks like S has a swim oscillation. She shows that S is connected to cell 205, and that cell 205 gets a bunch of input from sensory neurons -- sounds like it's part of the preparatory network.


But here is a simultaneous 204, 205 recording:

So, these are very similar. Maybe 205 is 263/264?


205 is the only one missing from my data set, but this is only in segment 9 according to Weeks, 1982. 
"Unlike other swim neurons which are segmentally repeated, cell 205 generally is present only in segment 9, and numerous lines of evidence suggest that it is, in fact, a segmentally differentiated homolog of cell 204."


Friesen, W.O., Hocker, C.G. (2001) Functional Analyses of the Leech Swim Oscillator. J. Neurophysiol. 86:824-835.

This schematic is in a lot of Friesen's papers. I think most of the neurons involved in swimming that Friesen studies, however, are on the dorsal face.
And here's an intracellular of DI-1 during swimming:

Another one of these phase diagrams from Friesen:

The rest of this paper is about modeling the swim behavior.


Here is a diagram of the cascade that causes swimming and a recording of DE3

More circuits and recordings. The legend says panel A is from Friesen 1989, and panel B is from Nusbaum 1987, which I need to find.

Not sure what SIN1 is; this is from Brodfuehrer and Burns 1995:


Here they use the VSDs to find cells that are targeted by Tr2, which is a head cell that terminates swimming. He identifies two cells - 256 and 252 that are targeted by Tr2. These cells seem to also terminate swimming, but not really clear if they are oscillators.

Here are ephys recordings, but it doesn't seem that 256/252 are oscillators.



Pretty detailed review. A good source to cite for leech cells being homologous: "Most midbody ganglia have the same neurons and locations of the soma."

7. The nervous system is iterated, with homologous neurons found in most, if not all, 21 segmental ganglia (Fig. 1). So despite having more than 10,000 neurons, the functional unit (i.e. the number of different kinds of neurons) of the leech CNS is relatively small. For instance, there are only 400 neurons per segmental ganglion (Macagno, 1980), and most of these are paired. Thus, in essence, the segmental nerve cord (roughly corresponding to the spinal cord in vertebrates) consists of 42 copies (one on each side of 21 segments) of a basic unit of 200 neurons.

And another version of the circuitry figure, with colors!






Mostly about Tr1. But here's a CV:

And it looks like Retzius has a clear swim oscillation. Hard to tell its phase from this picture, however.

This paper just wanted for this figure to cite the "Do something!" network.



Ok this is Otto's original papers on swimming. The first one is about the motor neurons and their connections:

Here's the classic about the motor neuron coupling:


Yeah so most of the diagrams that I'm interested in originally come from this paper.






This is a good set of ephys recordings. Except here, 60 and 208 are in opposite phases, which is not how I have them labelled. This is the only 60 that I could find, but maybe this 60 is what I call 57 or 58.




This has 161, which is a P-cell follower and matches my data, as well as 212, which I think I will call FCL 212.


So he reports that 161 only gets dorsal ipsilateral input, whereas 162 and 212 get broader input. This could possibly mean that my 161 should be 162.

Perhaps the 159, 169, and 157 he reports could also correspond to my 153, 155, and 151, since these are also LB followers.


In this paper he labels some cells as cells I have identified, but this is all crawling so it is hard to know for sure if they match.






These Wessel papers describe the AP cell. Can cite them as part of "AP cell has unknown function".



Ok, so I think there are a few changes to my table based on this review.

  • My 60 to 61
    • 61 is in phase with 208 and so my 60 is red so this is a switch
    • 61 in Nusbaum, 1986
  • My 58 to 60
    • 60 is anti-phase with 208, which is cyan/green so this could be 57 or 58
    • 60 in Nusbaum, 1987
  • My 257 to 255
    • Got these backwards from Kevin's paper. 255 is the swim oscillator, 257 does not oscillate.
    • Briggman, 2006
  • 205? Not sure what to do for this one
  • Friesen website potential conflicts/matches
    • 153 = 180 oscillation so that matches
    • 152 = CM excitor during crawling
    • 151 = NS
    • 154 = photoreception
    • 161 = EPSP from P cell
    • 212 = EPSP from P cell (perhaps this should be FCL)