OK, so I had this idea to align all the images across trials, so that I could run ICA on a concatenated stack and thus get the same ICs for an entire experiment.
So I found ECC (enhanced correlation coefficient), an image registration algorithm, and it works pretty well.
I've been playing with it as a motion correction algorithm too, and it detects sub-pixel movements pretty well. ECC is just a registration algorithm, so my motion correction is simply registering each frame to the first frame.
The result is a 2x3xNFrames array of affine transforms (one 2x3 matrix per frame). I can then plot the values of the matrices over time to see what kind of motion there is.
The far-right column is the translational component, and the other two columns form the rotation/scaling part. The motion is quite subtle, but it is clearly there.
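I won't paste my whole script, but the idea is simple enough to sketch. OpenCV's findTransformECC implements the same ECC algorithm, so a minimal version of the per-frame registration looks something like this (the array shapes, function name, and iteration settings are placeholders, not my actual code):

```python
# Sketch: register every frame to the first frame with ECC and stack the affine
# transforms into a 2x3xNFrames array (OpenCV's findTransformECC is one ECC implementation).
import numpy as np
import cv2

def ecc_motion(frames):
    """frames: (n_frames, n_rows, n_cols) movie; returns a (2, 3, n_frames) warp stack."""
    template = frames[0].astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    warps = []
    for frame in frames:
        warp = np.eye(2, 3, dtype=np.float32)            # start from the identity transform
        _, warp = cv2.findTransformECC(template, frame.astype(np.float32), warp,
                                       cv2.MOTION_AFFINE, criteria)
        warps.append(warp)
    return np.stack(warps, axis=-1)                      # 2 x 3 x NFrames

# warps[:, 2, :] is the translation over time; warps[:, :2, :] is the rotation/scaling part.
# Plotting warps[0, 2, :] against warps[1, 2, :] traces out the (sub-pixel) trajectory.
```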
Then you can kind of see how the ganglion moves with a picture like this:
The numbers on this axis are pixel values, so this is very subtle movement (that's why it's hard to see by eye). The big circle is the start point. There are two traces, one black and one blue, whose separation indicates the rotational/scaling components.
Yeah, so this works pretty well, and it looks like it helps quite a bit.
One of the problems is that the edges of the image get strange, so the final result has to be clipped at the edges. This is easy to do: we just find the largest translation in the transforms (for this motion it's usually sub-pixel), round up to the nearest pixel, and take away that many pixels from the border.
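A minimal sketch of that clipping step, assuming the warp stack from the sketch above (again, the names and shapes are placeholders):

```python
# Sketch: apply the per-frame warps, then clip the border by the largest translation,
# rounded up to the nearest whole pixel.
import numpy as np
import cv2

def apply_and_clip(frames, warps):
    """frames: (n_frames, n_rows, n_cols); warps: (2, 3, n_frames) from the ECC step."""
    n_frames, h, w = frames.shape
    corrected = np.empty((n_frames, h, w), dtype=np.float32)
    for i in range(n_frames):
        # WARP_INVERSE_MAP because findTransformECC gives the frame -> template warp
        corrected[i] = cv2.warpAffine(frames[i].astype(np.float32), warps[:, :, i], (w, h),
                                      flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

    # largest translation across all frames, rounded up to the nearest pixel
    pad = int(np.ceil(np.abs(warps[:, 2, :]).max()))
    return corrected[:, pad:h - pad, pad:w - pad]
```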
This is x120516-t6, which had pretty scary motion, and whose correction changes the shortening response quite significantly:
Friday, September 20, 2013
Tuesday, September 17, 2013
Validation of Independent Component Analysis for Rapid Spike Sorting of Optical Recording Data
Hill, E.S., Moore-Kochlacs, C., Vasireddi, S.K., Sejnowski, T.J., Frost, W.N. (2010). Validation of Independent Component Analysis for Rapid Spike Sorting of Optical Recording Data. J Neurophysiol 104: 3721-3731.
So, same basic idea: use ICA to extract components from optical data. This is pretty impressive because these are fast VSDs and they are looking at spike trains. It works really well for this; that's pretty much all there is to say. Spikes are ideally suited to ICA because of their sparse nature.
One thing they did was concatenate multiple optical files and run ICA on the concatenated data to get the same neurons across trials. This could work for me, but I would have to realign the images for each trial.
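Here's a rough sketch of what that could look like for my data, assuming the trials have already been realigned (e.g. with the ECC registration from the other post); the component count and variable names are placeholders:

```python
# Sketch: concatenate motion-corrected trials in time and run a single ICA, so each
# component (and its spatial map) is shared across every trial in the experiment.
import numpy as np
from sklearn.decomposition import FastICA

def ica_across_trials(trials, n_components=30):
    """trials: list of aligned movies, each (n_frames_i, n_rows, n_cols)."""
    n_rows, n_cols = trials[0].shape[1:]
    flat = [t.reshape(t.shape[0], -1) for t in trials]   # each (n_frames_i, n_pixels)
    data = np.concatenate(flat, axis=0)                  # stack trials along time
    data = data - data.mean(axis=0)                      # remove the per-pixel mean

    ica = FastICA(n_components=n_components, max_iter=1000)
    traces = ica.fit_transform(data)                     # (total_frames, n_components)
    maps = ica.mixing_.T.reshape(n_components, n_rows, n_cols)  # one spatial map per IC

    # split the shared components' time courses back into per-trial segments
    bounds = np.cumsum([0] + [t.shape[0] for t in trials])
    per_trial = [traces[a:b] for a, b in zip(bounds[:-1], bounds[1:])]
    return maps, per_trial
```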
Low- and high-pass filtering helps. More data helps.
One way to extend this work would be to validate ICA not just for spike traces, but also for sub-threshold changes in membrane potential.
BRAIN Interim Report
https://docs.google.com/file/d/0B1wMEnylMrJcTlZEZXVpMkV3UDQ/edit
The interim report for the BRAIN Initiative came out yesterday. It lays out the initiative's initial goals, focusing mainly on short-term projects for 2014; a final report will come out in June.
The report draws the focus of the BRAIN Initiative to questions about neural coding, neural circuit dynamics and neuromodulation. The analysis of circuits is "particularly rich in opportunity, with potential for revolutionary advances" -- we currently think that the activity and modulation of large ensembles of neurons are what underpin mental experience and behavior.
Answering these questions is daunting -- the human mind is built from an unimaginable tangle of almost 100 billion neurons. Understanding this complexity will require new tools to record from large numbers of neurons, new analysis techniques that can make sense of the "big data" that will be generated, and new computational theory that puts it all together.
The organizers recognize the limitations of a purely human-based approach to studying neuroscience, and that both technical and ethical issues require that the BRAIN Initiative include appropriate model organisms. They emphasize a diversity of approaches and organisms, citing different advantages for all of the different model organisms in neuroscience -- rhesus macaques (evolutionary proximity to humans), mice (mammalian, genetic tools), zebrafish (vertebrate, optical tools), worms and flies (small nervous systems, genetic tools), and molluscs, crabs and leeches (defined nervous systems, electrophysiology). Other species will also highlight important brain functions through their particular niches -- e.g., songbirds are the only animals (besides humans) that have instructed vocal learning.
They summarize 9 "high-priority" research areas for 2014:
1. Generate a Census of Cell Types.
2. Create Structural Maps of the Brain.
3. Develop New Large-Scale Recording Capabilities.
4. Develop a Suite of Tools for Circuit Manipulation.
5. Link Neuronal Activity to Behavior.
6. Integrate Theory, Modeling, Statistics, and Computation with Experimentation.
7. Delineate Mechanisms Underlying Human Imaging Technologies.
8. Create Mechanisms to Enable Collection of Human Data.
9. Disseminate Knowledge and Training.
This is a great start for what could be a revolutionary initiative! The current level of funding is only $40 million, which is less than 1% of what the NIH alone currently gives to neuroscience research ($5.5 B). Hopefully there is more money to come -- $3 B has been floated for the next few years.
Monday, September 16, 2013
Automated Analysis of Cellular Signals from Large-Scale Calcium Imaging Data
Mukamel, E. A., Nimmerjahn, A., Schnitzer, M. J. (2009). Automated Analysis of Cellular Signals from Large-Scale Calcium Imaging Data. Neuron.
This paper basically does the PCA-ICA breakdown of the imaging signals.
Figure 1. Analytical Stages of Automated Cell Sorting
(A) The goal of cell sorting is to extract cellular signals from imaging data (left) by estimating spatial filters (middle) and activity traces (right) for each cell. The example depicts typical fluorescence transients in the cerebellar cortex as observed in optical cross-section. Transients in Purkinje cell dendrites arise across elongated areas seen as stripes in the movie data. Transients in Bergmann glial fibers tend to be more localized, appearing ellipsoidal.
(B) Automated cell sorting has four stages that address specific analysis challenges.
So they do the ICA in both the space and time dimensions, and use both to identify the cell traces. Most of the information comes from the space dimension (like the way I do it), and the best results come from a weighted combination of time and space, with the time weight around 0.1-0.2.
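Here's how I picture that weighted space/time combination working -- a rough sketch of the idea only, not their CellSort code; the SVD-based PCA, the normalization, and the mu value are my own guesses:

```python
# Sketch of spatio-temporal ICA: run one ICA on a weighted concatenation of the spatial
# and temporal principal components, so a single unmixing gives both filters and traces.
import numpy as np
from sklearn.decomposition import FastICA

def spatiotemporal_ica(movie, n_components=20, mu=0.15):
    """movie: (n_frames, n_pixels); mu weights the temporal side (mu = 0 is pure spatial ICA)."""
    X = movie - movie.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)     # U: temporal PCs, Vt: spatial PCs
    U, Vt = U[:, :n_components], Vt[:n_components]

    # weighted concatenation: (n_components, n_pixels + n_frames)
    joint = np.hstack([(1 - mu) * Vt, mu * U.T])

    ica = FastICA(n_components=n_components, max_iter=1000)
    sources = ica.fit_transform(joint.T)                 # (n_pixels + n_frames, n_components)

    n_pixels = Vt.shape[1]
    spatial_filters = sources[:n_pixels].T               # (n_components, n_pixels)
    time_courses = sources[n_pixels:].T                  # (n_components, n_frames)
    return spatial_filters, time_courses
```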
In the image segmentation step, they split spatial filters that contain more than one highly correlated neuron into spatially separate filters. ICA typically handles correlations up to about 0.8 well, but occasionally it picks out two cells as one component.
Yeah, this is basically the paper that I want to write, but apparently it's already been done... Going to check out their toolbox and see if there are any new tricks I can add.
Brain-wide 3D imaging of neuronal activity in Caenorhabditis elegans with sculpted light
Schrodel, T., Prevedel, R., Aumayr, K., Zimmer, M., Vaziri, A. (2013). Brain-wide 3D imaging of neuronal activity in C. elegans with sculpted light. Nature Methods.
WF-TeFo (wide-field temporal focusing) imaging of a nuclear-localized calcium indicator can see 70% of C. elegans neurons.
From what it sounds like, the technique uses a femtosecond laser whose beam is spread spectrally by a grating. This spreads the frequencies of light in time and space, and only at the focus does all of the laser light come back together. That enhances two-photon absorption at the focus and diminishes absorption elsewhere, because regions outside the focus see temporally offset waves of light at different frequencies. They acquire volumes at 5 Hz.
Then they start looking at some of the activity and group the neural responses by "agglomerative hierarchical clustering", which is apparently just the MATLAB function linkage. The distance matrix is based on the correlation/covariance matrix of the responses over time.
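The Python equivalent would be scipy's linkage, which mirrors the MATLAB function. A small sketch, where the 1 - correlation distance and the cluster count are my assumptions:

```python
# Sketch: agglomerative hierarchical clustering of the responses, with the distance
# between neurons taken from their correlation over time.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_traces(traces, n_clusters=6):
    """traces: (n_neurons, n_timepoints) activity; returns one cluster label per neuron."""
    corr = np.corrcoef(traces)                    # pairwise correlation of the responses
    dist = 1.0 - corr                             # turn correlation into a distance
    condensed = squareform(dist, checks=False)    # condensed form expected by linkage
    Z = linkage(condensed, method='average')      # build the agglomerative tree
    return fcluster(Z, t=n_clusters, criterion='maxclust')
```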
Monday, September 2, 2013
Tuned thalamic excitation is amplified by visual cortical circuits
Lien, A.D., Scanziani, M. (2013). Tuned thalamic excitation is amplified by visual cortical circuits. Nature Neuroscience 16(9): 1315-1323.
They record from L4 of visual cortex while showing random white/black dots and drifting gratings, and separate the thalamic component of the excitation from the cortical component by activating PV interneurons with ChR2, which silences the cortex.
Cells receive ON and OFF thalamic inputs (from thalamic neurons that respond to increases and decreases in luminance) whose receptive-field peaks are slightly offset from one another. Orientation selectivity arises because a stimulus at the right orientation, situated appropriately, will activate both the ON and OFF fields.
Interestingly, the integral of the thalamic input is constant regardless of the orientation of the stimulus (see panel h in Figure 4 below). The tuning arises only from the timing of the inputs: the neuron receives a bunch of current from thalamus at every orientation (the total thalamic charge has no orientation tuning), and its preferred orientation is the one at which the two thalamic pathways are activated simultaneously.
Figure 4. Separation of ON and OFF thalamic subfields predicts preferred orientation of thalamic excitation. (a-d) Example recording of EPSC_Thal in which both the ON and OFF receptive fields and the responses to drifting gratings at various orientations were obtained in the same cell. (a) EPSC_Thal in response to black and white squares. Data are averages of five trials per location. (b) Contour plot of the OFF and ON receptive field maps for the cell shown in a. Each contour represents two z-scores. Filled magenta and green circles mark the peaks of the OFF and ON receptive fields, respectively. The dashed black line connects the OFF and ON peaks to define the ON-OFF axis. The preferred orientation predicted from the ON-OFF axis, RF_Pref, is indicated by the small grating. (c) EPSC_Thal in response to drifting gratings of various orientations (average of three trials per direction). The gray rectangle indicates the visual stimulus (1.7 s) and the blue bars represent LED illumination (2.6 s). (d) Orientation tuning curves of F1_Thal (blue) and Q_Thal (gray) in polar coordinates for the responses shown in c. The blue line indicates the preferred orientation of F1_Thal (Grating_Pref) and the black dashed line corresponds to RF_Pref. (e) Data presented as in b and d for three additional cells. Tuning curves in polar coordinates in d and e are normalized to the peak response; the outer circle represents the peak value. (f) RF_Pref plotted against Grating_Pref (n = 8 cells, 7 mice). The black line represents unity. The dashed lines denote the region in which the difference between RF_Pref and Grating_Pref is less than 30 degrees. The distributions of Grating_Pref (n = 42 cells, 33 mice) and RF_Pref (n = 13 cells, 12 mice) across the population of cells in which either value was measured are shown along the top and right, respectively. (g) Absolute difference between RF_Pref and Grating_Pref (Δ Pref Ori) (n = 8 cells, 7 mice). Error bar represents ± s.e.m. (h) Diagram of how orientation tuning of F1_Thal can arise from spatially offset OFF and ON thalamic excitatory input (t1 = time 1, t2 = time 2). The area of the blue shaded region corresponds to Q_Thal. The difference between the peak and the trough of EPSC_Thal corresponds to F1_Thal.
Then, to get the cortical component, they simply subtract the thalamic component from the total. The cortical component is tuned with the thalamic component, but the incoming charge (Q) is now aligned with the preferred orientation, essentially suggesting that cortical neurons with similar preferences wire together more strongly.
Figure 6. Tuning of non-thalamic excitatory F1 modulation. (a) Example cell. Top, EPSC_Sub in response to drifting gratings of various orientations. The gray rectangle represents the visual stimulus (1.5 s) and the blue bar represents LED illumination (2.6 s). Bottom, F1 modulation of EPSC_Sub. Shown are the cycle average (black) and best-fitting sinusoid (green) at the grating temporal frequency (2 Hz). (b) Orientation tuning curves of Q_Sub (dotted curve) and F1_Sub (solid curve) for the example cell shown in a. (c) Population tuning curves of Q_Sub (dotted curve) and F1_Sub (solid curve). Left, population tuning curves in which the Q_Sub and F1_Sub tuning curves for each cell were equally shifted so that the preferred direction of Q_Sub occurred at 0 degrees (Q_Sub reference). Right, population tuning curves in which the Q_Sub and F1_Sub tuning curves for each cell were independently shifted so that the preferred directions of Q_Sub and F1_Sub both occurred at 0 degrees (self reference). (d) OSI of F1_Sub plotted against OSI of Q_Sub for all neurons. (e) Distribution of absolute differences in preferred orientation (Δ Pref Ori) between Q_Sub and F1_Sub. The dark curve represents all cells (n = 42). The gray curve represents cells in the top 50th percentile of F1_Sub OSI (n = 21). (f, g) Data are presented as in d and e for DSI and absolute differences in preferred direction (Δ Pref Dir). Filled green markers in d and f denote the OSI and DSI values of the example cell. Data in c-g are from n = 42 cells from 33 mice. Error bars represent ± s.e.m.
And they show that the cortical component is closely tuned with the thalamic component, possibly with a 40 ms offset (roughly a 30 degree phase delay).
Thalamus provided about 30% of the charge to cortical neurons.