Poster @ SfN2015

I’ll be presenting some new work on the role of cortical layer 6 in representing sensory change at SfN on Sunday, morning session, Poster O17.

GCaMP6s imaging in L6

Posted in Science

Paper on TRN induced modulation of arousal state

Our paper ‘Thalamic reticular nucleus induces fast and local modulation of arousal state’ just went online at eLife.

TRN drive induced local slow waves
In this study we found that optogenetic drive of TRN induces slow waves in associated regions of neocortex, similar to the local sleep observed in sleep-deprived animals. The manipulation also modulated the arousal state of mice very rapidly, reducing muscle tone and the propensity to move around. Our results indicate that thalamic inhibition via TRN can serve as a mechanism for rapid and local modulation of cortical arousal state.

Get the paper at eLife:
Laura D Lewis, Jakob Voigts, Francisco J Flores, Lukas I Schmitt, Matthew A Wilson, Michael M Halassa, Emery N Brown: Thalamic reticular nucleus induces fast and local modulation of arousal state

Posted in Science

Exporting figures from Matlab into Illustrator

Getting nice figures out of Matlab is always a bit of a nightmare.
Here are a few fairly simple steps I’ve adopted that work well for my needs.

I use subplots quite a lot, and try to get my figures as close to publication-ready in Matlab as possible (fixed axes, labels, scale bars in the right place, etc.), and then use plot2svg.m by Juerg Schwizer, which has two crucial properties:

  • It can export transparent (alpha) filled areas, which I use constantly for confidence bounds.
  • It exports images (image/imagesc) as separate PNGs, which means figures don’t end up with semi-corrupted or unrecoverable embedded image data.
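The export step itself is a one-liner. Here’s a minimal sketch (figure content and filename are placeholders; plot2svg.m is assumed to be on the Matlab path):

```matlab
% Minimal export sketch; assumes plot2svg.m is on the path.
% Figure content and filename are placeholders.
fig = figure;
subplot(2,1,1); plot(randn(100,1)); ylabel('signal'); % vector content
subplot(2,1,2); imagesc(randn(50,50)); axis image;    % exported as a separate PNG
plot2svg('myfigure.svg', fig); % write the .svg for import into Illustrator
```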

Now the figures look something like this in illustrator:

Just opened the .svg in illustrator. There should be two subplots here, and axis labels.

There are no axis labels, and the second subplot is gone because incorrect clipping masks have been applied to objects. Selecting anything is also a nightmare, because objects are heavily grouped with no particular rhyme or reason.

The solution is to remove all clipping masks and ungroup everything. This needs to be done recursively over a few iterations, and is a massive hassle to do manually. A while back I found a trick: an Illustrator action that applies these steps many times over, working in parallel on all selected objects. Now the first thing to do after opening a .svg is to select all and run this ‘kill it with fire’ action. When Illustrator complains and asks whether it should continue, this just means that some stacks of nested objects have been fully processed, so keep clicking ‘continue’ a few more times in case there are other, deeper nested stacks of objects.

Download illustrator action here

After recursively ungrouping & removing all clipping masks.

After this step, all objects should be neat and separate, and can be grouped/layered as needed. Sometimes parts of plots will be missing – if that happens, it’s usually enough to reduce the number of simultaneous subplots.

Posted in Matlab | Comments Off on Exporting figures from Matlab into Illustrator

Maintaining local pixel cross-correlation in image registration

This post is about the effect of re-sampling in image registration methods on local pixel cross-correlation when analyzing calcium imaging data.

In awake 2p imaging, animal motion causes brain tissue motion and image motion. While z-motion, which cannot be recovered from a single imaging plane, is usually fairly small, x/y-motion within the imaging plane is usually large enough to make further analysis impossible without first correcting it by shifting/deforming all images back to a common position, usually by aligning them to a template image.

We mostly use a simple, translation-only, FFT-based registration method for this (Matlab code by Manuel Guizar, see also Dario Ringach). This works well enough for our needs (imaging cell bodies with GCaMP in awake behaving mice on a treadmill) as long as frame rates are >10Hz, ensuring that images are mostly just uniformly translated.
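As a rough sketch, the registration loop with Guizar’s dftregistration.m looks something like this (variable names here are placeholders, and usfac sets the subpixel upsampling factor):

```matlab
% Rough sketch of translation-only template alignment using Guizar's
% dftregistration.m (assumed to be on the path); names are placeholders.
template_f = fft2(template);       % template: e.g. mean of a stable subset of frames
usfac      = 10;                   % register to 1/10 pixel
aligned    = zeros(size(stack));   % stack: h x w x nframes
for k = 1:size(stack, 3)
    [out, reg_f] = dftregistration(template_f, fft2(stack(:,:,k)), usfac);
    % out = [error, diffphase, row_shift, col_shift]
    aligned(:,:,k) = abs(ifft2(reg_f)); % frame shifted back onto the template
end
```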

I recently wanted to image larger regions at <10Hz, and increase the registration precision. Because images are acquired line by line, different parts of the frame are translated differently, resulting in non-uniform scaling and shear. Correcting this deformation requires estimating the translation independently for parts of the image (typically either for each (fast) x-direction scan line, tied together with some sort of regularization/splines, or at least for a few vertically stacked sub-regions of the frame) and then applying a non-affine transformation to cancel the estimated deformation.

The main choices in this approach are the method of estimating translations (FFT/Lucas-Kanade/…), the interpolation or smoothing used for deformation coordinates across the entire image (image sub-regions via nearest neighbor/linear/splines/…), and the type of pixel re-sampling (nearest neighbor/bilinear/quadratic/…). A popular method is the spline-based deformation by Greenberg and Kerr (here’s their code, see also Patrick Mineault’s blog entry).

In all cases other than integer-valued translations and shear, the deformation requires re-sampling the image, and output (corrected) frame pixels now contain information from more than one source image pixel and vice-versa. While this is not a problem for extracting brightness value time series for analysis, it can cause issues with methods that rely on neighboring pixels being independent. In our cell body detection, for instance, we rely heavily on computing the cross correlation between neighboring pixels to detect GCaMP-driven pixels. Here’s a nice demonstration of a very similar method in action. Image re-sampling, which mixes up neighboring pixels, causes problems with this method: neighboring pixels that originally contained uncorrelated noise now share brightness values from the same source pixels. This means that even background noise becomes somewhat correlated in 3×3 neighborhoods. This is a problem because this is the exact measure we’d like to use to distinguish background signal from neurites, which in our case are ~1-2px wide.
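As an illustration of the measure in question, here’s a rough Matlab sketch of the average correlation of each pixel with its four direct neighbors across a stack (a generic version for illustration, not our exact detection code):

```matlab
% Rough sketch (not our exact detection code): mean correlation of each
% pixel with its 4 direct neighbors, across a h x w x nframes stack.
F = double(stack);
F = F - mean(F, 3);                  % remove per-pixel mean
S = std(F, 1, 3); S(S == 0) = eps;   % per-pixel std, avoid divide-by-zero
xc = zeros(size(F,1), size(F,2));
shifts = [0 1; 0 -1; 1 0; -1 0];     % 4-neighborhood offsets
for s = 1:size(shifts, 1)
    Fs = circshift(F, shifts(s,:));  % neighbor's time series at each pixel
    Ss = circshift(S, shifts(s,:));
    xc = xc + mean(F .* Fs, 3) ./ (S .* Ss);
end
xc = xc / size(shifts, 1);           % high values = candidate GCaMP-driven pixels
```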

Up-sampling all images by a factor sufficient to make all features in the image bigger than the affected 1-pixel scale won’t solve this issue either, as the spatial noise correlation in the source image would be maintained, and in the limit, one source pixel would contribute 50% of the value of a region of the same size in the output.

We also can’t fully solve the issue by going with nearest neighbor resampling, because occasionally, two neighboring output pixels will be identical. In my data (~250x250px, ~500μm field of view, ~8Hz) this happens on ~1-3 x-lines/frame across the image, whenever parts of the image are ‘pulled apart’ causing one line of source data to stretch over 2 lines of aligned output data. This doesn’t sound like a lot, but the occasionally identical pixels still cause unnecessary smoothing of the cross-correlation across the stack, and the issue would be even worse for more complicated deformations.

Deforming images while retaining independent pixel values

Here’s a quick (and somewhat dirty) solution for this:

  • Run a nearest-neighbor resampling, but keep track of which pixels were assigned more than once in the output image.
  • Now we want to replace the doubled pixels (2nd or more pixels that are identical to the 1st) with independent draws from the same distribution as the first pixel. Because image statistics vary wildly across the frame and across time, we can’t just add back the correct amount of noise to achieve this. Instead, the simplest solution seems to be to just grab these pixels from the preceding (already aligned) frame, which has approximately the same image statistics as the doubled pixels, but contains independent noise.
  • This method will occasionally mix a very small amount of data from preceding frames into the current frame. For ~5-10Hz and GCaMP6s data this should be a negligible issue though. Also, if this could cause problems, one could simply switch back to regular NN data, or bilinear data for the extracted time series once the ROIs are identified.

Here’s the complete demo code for this. The crucial step is very simple:

% start by identifying all output pixels that have fan-out > 1
[C, IA, IC] = unique(ii_nn, 'stable');
% ii_nn holds the source pixel index for each output pixel;
% IA holds the first occurrence of each unique source pixel

fanouts = 1:numel(Iout); % start with a list of all output pixels

fanouts(IA) = []; % remove all 1sts,
% this leaves only the 2nd and further copies of pixels
% from the original to the deformed image

% set doubles to independent samples from the
% prev. (already aligned) image to maintain spatial xcorr
Iout(fanouts) = last_Iout(fanouts);

Here’s some simulated data (the demo code should generate this figure) showing the xcorr from a random pixel to neighboring pixels in a 5000-frame stack of noise, deformed with pretty exaggerated shear and y-stretch. The NN image shows some spurious correlation to the source pixel in the y direction, caused by pixel doubling in frames where the image was stretched vertically. The fan-out replacement method eliminates this artifact.

Comparison of bilinear, NN and the proposed method. Images show xcorr of a randomly chosen pixel to neighboring pixels in aligned/resampled images. All input images were uncorrelated noise. The bilinear method introduces correlations across all neighbors, the NN method (in this case) only in the vertical direction, and less so. Re-sampling doubled pixels avoids inflating neighboring pixels’ xcorr.

In real data the effect is much less pronounced, but depending on the analysis, this trick might be somewhat useful.

Correcting local pixel cross-correlation
A pretty simple way to achieve a similar effect ‘in post’ is to estimate the overall spatial cross-correlation (averaged over ~1000 random positions in the image, in a 3×3 or 5×5 neighborhood) and then divide it out later:

Local xcorr to a ‘source’ pixel in a region without active cells.

Same region/source pixel after dividing out average spatial correlation.
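The estimate-and-divide step can be sketched roughly like this (local_xcorr_map is a hypothetical helper name, included below; stack is a h x w x nframes image stack):

```matlab
% Rough sketch of the 'in post' correction. local_xcorr_map is a
% hypothetical helper (defined below); stack is h x w x nframes.
npos = 1000; r = 2;                          % ~1000 positions, 5x5 neighborhood
avg_xc = zeros(2*r + 1);
for i = 1:npos                               % estimate average spatial xcorr
    y = randi([r+1, size(stack,1)-r]);
    x = randi([r+1, size(stack,2)-r]);
    avg_xc = avg_xc + local_xcorr_map(stack, y, x, r);
end
avg_xc = avg_xc / npos;

% local map around a source pixel of interest (ys, xs), corrected:
corrected = local_xcorr_map(stack, ys, xs, r) ./ avg_xc;

function xc = local_xcorr_map(stack, y, x, r)
% correlation of pixel (y,x) with each pixel in its (2r+1)x(2r+1) neighborhood
F = double(stack); F = F - mean(F, 3);
s = squeeze(F(y, x, :));
P = reshape(F(y-r:y+r, x-r:x+r, :), (2*r+1)^2, []);
xc = reshape((P * s) ./ (sqrt(sum(P.^2, 2)) * norm(s) + eps), 2*r+1, 2*r+1);
end
```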

This can make detecting neighboring small cells much cleaner, and should work even when significant spatial correlation structure is present in the image stack (such as from imperfect motion correction):

ROIs and correlation to source pixel – increased correlation around source pixel is clearly visible.

Same ROIs and source pixel – correcting for baseline correlation makes cell boundaries more easily visible.

Posted in Calcium imaging, Data analysis, Matlab

Opinion piece on open-source electrophysiology

Josh and I (of Open Ephys), Greg Hale (of Arte) and Jon Newman (of Neurorighter, Cyclops & Puggle) just published an opinion piece on the role of open-source approaches and interfaces for large-scale electrophysiology.


It’s a bit of a review of the history of some open-source/lab-developed ephys systems and how they built on each other, and it presents an argument for why we need open interfaces between all components of our tools in order to efficiently carry out increasingly complex experiments involving large-scale recordings, near real-time analysis and closed-loop feedback.
One key point is that there will pretty much always be some mixing of proprietary and open-source system components, and that the two shouldn’t just be seen as alternative choices. Instead, we argue that open standards and interfaces are needed to improve scientific productivity, transparency and quality, while making sure that expertise and development work can be shared freely between academic researchers and industry.

Siegle JH, Hale GJ, Newman JP, Voigts J
Neural ensemble communities: open-source approaches to hardware for large-scale electrophysiology. 2015, Current Opinion in Neurobiology 32, 53-59
[@Curr. Op. Neurobio]

Posted in Electrophysiology, Open Ephys

Tactile Object Localization by Anticipatory Whisker Motion

Our paper on active sensorimotor processing in mice that I worked on with Tansu Celikel is out now. Thanks also to Dave Herman for running a bunch of very challenging moving gap crossing experiments!

Voigts J, Herman DH, Celikel T. Tactile Object Localization by Anticipatory Whisker Motion. 2014, Journal of Neurophysiology 22:jn.00241.2014. doi: 10.1152/jn.00241.2014. (link)

When exploring a target that is suddenly retracted, whisker protraction amplitudes initially stay the same, and the mouse temporarily ‘underestimates’ the target distance.

In a nutshell: The way mice use whiskers to perceive their environment is a great model system for cortical sensory processing, and is especially interesting because whiskers are active sensory organs that are swept through space, requiring the integration of the motor signal of where the whiskers are and the sensory signal of when (and how) they touch objects.

Mice (and rats) don’t slam their whiskers into objects with force, but modulate their whisker protractions in order to only lightly touch the environment. In this study, we found that mice don’t simply stop their whiskers after they touch an object, but instead protract them to where they expect to find an object.

This may seem like a detail, but it has interesting implications for the decoding of sensory information: if whiskers are protracted to expected object positions, then the timing (and force, etc.) of contacts with the actual object doesn’t simply encode the distance to that object, but instead encodes the prediction error. If somatosensory cortex expects to see such a ‘physically pre-computed’ error signal, this probably has implications for how we need to look at all cortical computation in this system. Further, we might be able to use this system as a convenient model for top-down/bottom-up integration.

Posted in Science

Poster @ SfN2014

I’ll be presenting recent work on the role of cortical layer 6 in representing sensory change at SfN on Monday.

Mon, Nov 17, 1PM 441.10/JJ11

Posted in Science

Open Ephys meeting @ SfN2014


The open ephys meeting at this year’s SfN meeting in DC will happen on Monday, Nov. 17, 2014, 6:30 p.m. in room 155 at the Walter E. Washington Convention Center.

We’ll have a number of core developers and users there, we’ll bring some hardware and software to look at, and we’re looking forward to great discussions of future directions of open source neuroscience.

 

Posted in Open Ephys

Recording simultaneous units in cortex with the flexDrive

We’ve been using the flexDrive (wiki) for over a year now, recording almost 100 sessions in 5 mice. I’m just now starting to analyze neural ensemble statistics that require simultaneously recorded neurons.

Here’s the real-world distribution of how many simultaneous neurons in primary somatosensory cortex (with some thalamic electrodes) I could sort over a total of 75 sessions in awake mice with 16 nichrome tetrodes.

Units per session using 16 tetrode flexDrives

The mean yield was 25.8 units per session, with a minimum of 8 and a maximum of 46 units. These numbers include some not-so-great recordings and bad tetrodes that got damaged etc., but only very few sessions were outright discarded, mostly at the beginning of the drive lowering process where it looked like some electrodes were not in cortex yet, so this distribution should be fairly typical of what can realistically be achieved with this type of implant.

All in all, these numbers should be good enough to do some interesting assembly-analysis, though the relatively low density of the tetrode array (250 micron pitch) results in a relatively low occurrence of strong fast-timescale correlations between spike trains.

Posted in Electrophysiology, Technical things

Programming in science – software as a manual tool

Just stumbled upon this great post by John D. Cook that explains very well why most of the software we write for scientific data analysis is poorly constructed and almost never properly tested or documented:

Scientists see their software as a kind of exoskeleton, an extension of themselves. Think Dr. Octopus. The software may do heavy lifting, but the scientists remain actively involved in its use. The software is a tool, not a self-contained product. (Read the whole post)

This is certainly true for at least 90% of the code that I write or see in use. The huge issue with this practice is that it can easily give false results, especially considering that a lot of the analyses we run are pretty complex and we have a tendency to stop ‘debugging’ the code as soon as it gives us a reasonable result.

I think especially in systems neuroscience there is an unnecessary reliance on these hacked in-house tools that are reinvented every time the responsible student/postdoc leaves a lab. There are great exceptions to this rule, with software packages like ImageJ and Chronux, but we’re still lacking some basic tools that should be very well developed and tested by now – for instance, just ask around how to compute a PSTH with good confidence bounds, or how to best extract spikes from a raw voltage trace. These should be basic bread-and-butter tools, but are mostly done with unverified and hacky code.

Our hope with Open Ephys is that, instead of writing everything from scratch, we can start to put the same amount of time and effort (or maybe a bit more) into cleaning up someone else’s code. We’d end up with a similarly functional method, but one that has been vetted by two pairs of eyes.

Posted in Uncategorized