Mirror alignment target for 2-photon microscopes

When aligning the laser path of a system in which mirrors are translated, for instance the x/y/z adjustments on a 2-photon microscope, the beam needs to be kept parallel to each of the translation axes. In almost any system it is also important to keep the beam well centered on the axis of the optical elements. It is therefore common practice (in systems where the mirror mounts are placed precisely in line) to align beam paths by centering the beam on each mirror in the system, using an alignment target that is put in the mirror mount in place of the mirror.

Because the final alignment on a 2p scope often cannot be done at a visible wavelength (either none is available, or the beam angle varies too much across the visible tuning range for it to be useful for alignment), IR viewer cards are needed to check whether the beam hits the center of the target. This is cumbersome, requires at least two hands, and is error prone.

This process can be made much faster with a mirror-mounted IR viewing card that sits at the same plane and x/y position in the mount as the mirror surface. There are a few existing options that look promising (Thorlabs, a 3D-printed cap with a target for a laser cutter), but none of these seem to provide the same precision and repeatability as a machined mirror target that is well seated in the mirror mount.

Here’s an easy recipe for adding an IR viewer card to a mirror alignment target, requiring only a target (mirror-mounted holder + aperture plate), a good IR viewer card or a 1/2″ IR viewer disk (I’m not 100% sure about the quality of these though), and some very common tools:

Ingredients & tools for making the alignment tool

Cut part of an IR viewer card to the same size as the aperture in the mirror mount adapter (1/2″ in this case)

Adapter with viewer card inserted

Insert the viewer card into the adapter and make a small mark in the center. Then hold the card in place with the 1/2″ aperture plate and secure it with the set screw. Make sure that the adapter you’re using places the IR card at the same plane as the mirror surface, or there will be a small position offset.

Alignment tool in action

Posted in Calcium imaging | Comments Off on Mirror alignment target for 2-photon microscopes

Cheap dental drill

A dental drill is a useful, almost required, tool in any systems lab. In addition to the usual applications, it can be used to cut holes in cover slides (with diamond abrasive burrs), cut small openings in drive implant bodies, or smooth out dental cement and even metal parts. However, dental drills are quite expensive when purchased from dental supply vendors. Luckily, the only key part of the system that seems to be hard to find cheaply, the air regulator and foot pedal, can be made from parts available on Amazon etc. just by screwing together some air hose fittings.

This bill of materials adds up to a fully functional, brand-new dental drill for a total of <$100, excluding the bits/burrs. The entire list can be ordered from Amazon, and can likely be had a bit cheaper on AliExpress or similar.

Handpiece, the actual ‘drill’ part:
Any 2-hole handpiece will do; here’s a nice option for ~$30 with a built-in LED that is powered by the turbine. These are also available in low-speed/high-torque versions and at various angles.

Air regulator:
We can make the foot pedal/regulator from a simple foot pedal ($15 on Amazon, 12mm threaded connectors) and a regulator ($9, with 1/4″ NPT thread). We then just need some push-to-connect fittings for 6mm hard plastic tubing that fit the 12mm and 1/4″ NPT threads, for instance these for $8 (they’ll need some teflon tape or epoxy to not leak on the 12mm threads), and some 6mm pneumatic tubing, like this for $10. To attach this to your air outlet, another 6mm push-to-connect fitting with the appropriate thread might be needed.

Instead of the 6mm hard plastic tubing, just about any air hose could be used, but I like this option because the push-to-connect fittings are easy to use, and the hose is easy to cut to length, fairly thin, and doesn’t get in the way. For the low-pressure section, a more flexible tube such as a thick-walled Tygon variant with barb connectors would likely work as well.

Now we just need a standard 2-hole handpiece connector (~$10 here, or anywhere really). Conveniently, the common tube OD on these is ~4mm, which fits snugly into the 6mm pneumatic tubing, and this part of the system runs at pretty low pressure, so a bit of glue and/or heat shrink tubing is enough to connect the handpiece to the pedal/regulator combo. I also removed the water delivery tube from the handpiece connector and cut off the thick protective tube that surrounded the air and water tubes, so only the air tube is left. This makes the drill a bit less robust, but removes almost any tugging from the hose and makes handling the drill easier. If the air hose is ever damaged, it can easily be replaced by any type of tube that fits over the barb in the 2-hole connector.

back of foot pedal

Now these parts fit together in the obvious order: air outlet > regulator (watch the direction – there’s one input and one output) > foot pedal (also has one input and one output, plus a ‘bleed’ output on the side, ignore that one) > handpiece. The regulator could also go right next to the air outlet; here, I screwed it to the foot pedal to make a neat little unit. I just drilled a hole in the top of the pedal housing and used an M6 screw and nut for this.

Top view of the foot pedal

For the burrs, we typically use round #1/4 carbide burrs for craniotomies and burr holes, and sometimes a #2-4 for thinning or to remove large amounts of cement. These are the only part of the system that can’t be ordered on Amazon, but they are quite cheap anyway.

 

Posted in Calcium imaging, Electrophysiology | Comments Off on Cheap dental drill

Open Ephys @ SfN2016


There’s a lot of Open Ephys and open-source science related stuff going on at SfN this year:

We will be hanging out at our poster (Wed., Nov 16, afternoon, MMM62) to talk about the next-generation system and interface standard that we’ve been working on. We’ll also have a prototype system to play with, developed and built by Jon Newman, Aarón Cuevas López, and myself.

The quick summary is that we’re proposing a standard for PCIe-based acquisition systems that delivers very high data rates and sub-millisecond latencies, makes it easy (and cheap) to add new data sources, and will be able to grow with new technology generations. All of this is accomplished by using existing industry standards and interfaces. The project overlaps with Open Ephys, but our hope is that it will serve as a fairly generic interface standard for many applications.

We’ll also have an Open Ephys meeting on Monday, 6:30 pm, Marriott Ballroom A. We’ll bring the poster and a live demo, give a quick overview of what Open Ephys is up to, and have time to chat with current and potential users and developers.

In addition, here are some posters that highlight open-source tools for electrophysiology and imaging:

Saturday afternoon, FFF9 “Validation and biological relevance of a real-time ripple detection module for Open Ephys”

Saturday afternoon, KKK58 “Smartscope 2: automated imaging for morphological reconstruction of fluorescently-labeled neurons”

Sunday afternoon, LLL49 “LabStreamingLayer: a general multi-modal data capture framework”

Sunday afternoon, LLL47 “RTXI: a hard real-time closed-loop data acquisition system with sub-millisecond latencies”

Monday afternoon, U5 “Wirelessly programmable module for custom deep brain stimulation”

Tuesday afternoon, JJJ18 “Low latency, multichannel sharp-wave ripple detection in a low cost, open source platform”

Wednesday afternoon, MMM56 “A multi-target 3D printed microdrive for simultaneous single-unit recordings in freely behaving rats”

Posted in Open Ephys | Comments Off on Open Ephys @ SfN2016

Backlight for high-speed video whisker tracking

Here’s a simple recipe for a very bright, uniform background for high-speed videography. This approach works well for any application where the outline of a small object needs to be measured at high frame rates. For vibrissa tracking or calibrating piezo stimulators, I currently use only a single such backlight and no other light sources – this avoids any reflections on the tracked object and usually gives the cleanest, most interpretable data.

Obligatory warning:  Do not look at LEDs with unprotected eyes – these things get extremely bright and you might damage your retina. 

backlight on

The basic design has three components, bottom to top:

  1. A customizable array of LEDs, attached to a heat-sink, with a power supply and current regulator.
  2. A spacer/reflector made from mirrored acrylic.
  3. A glass diffuser.

LED array

For whisker tracking I use either deep red or NIR LEDs in the common hexagonal packaging, in a grid pattern of roughly one LED every ~2cm, with a sufficiently powerful driver. The density of the LEDs could easily be increased to yield an even more powerful backlight.

I just superglue the LEDs to an aluminum sheet with a thin layer of insulator in between (here I just used lab tape – not ideal, but good enough). The aluminum sheet is then clamped to the optical breadboard so the whole table works as a heat sink. If that is not an option, big CPU coolers are fairly cheap and can remove a lot of heat. As a current regulator I use a BuckPuck from ledsupply.com, driven by a sufficiently powerful DC supply (old laptop power supplies work well, or even ATX supplies if 12V is sufficient for your LEDs). Alternatively, a current-limited bench supply would also work.

Spacer/Reflector

To get a uniform backlight, I use a square box (just 4 sides) made of mirrored acrylic (from McMaster). It sits on the LED array in a couple of guides so it’s always in the same place. Ideally, this reflector would be dimensioned so that the resulting apparent/virtual pattern of LEDs seen by the diffuser is completely uniform, but in practice I found that this does not matter a whole lot.

Diffuser

On top of the spacer, I use a homemade diffuser made from two sheets of cheap frosted glass, joined together (here I just use kapton tape) and held ~5mm apart with spacers. I just cut the glass myself with one of these. This double diffuser works better than much higher-quality single-sheet diffusers and is ~10x cheaper. Make sure that the construction of the diffuser allows for easy cleaning, so avoid tape or glue that can’t tolerate ethanol.

The trick is to play with the spacing of the mirrored box and the LEDs until the light is very uniform. This calibration only really works with a camera, because eyes are surprisingly bad at detecting brightness gradients – also, the brightness of this light can reach unsafe levels, so avoid staring directly at it even with the diffuser in place.

Posted in Technical things | Comments Off on Backlight for high-speed video whisker tracking

Simultaneous 2p imaging and visible-light optogenetics

We recently needed to verify how a large population of neurons reacts to weak optogenetic stimulation. We found that, with a relatively straightforward setup, visible-light optogenetic stimulation can be integrated into existing 2p rigs without producing problematic imaging artifacts. Here, we slightly de- or hyperpolarized cells with a ~1mm beam of light aimed at the imaging window while imaging and delivering sensory stimuli, but the same approach should work for all kinds of experiments with implanted optical fibers, scanned focused light, or even patterned light stimulation.

Setup overview (shown here is full-field diffuse illumination from a bare fiber – other configurations should work exactly the same).

WARNING: Don’t direct light into photomultipliers unless you’ve taken adequate precautions to ensure that they won’t be damaged. None of the methods described here have been tested other than in our specific microscope. Specifically, this method is probably not safe for GaAsP PMTs.

Fast light pulsing outside the frame acquisition times
Our galvo-based 2p setup scans unidirectionally in X, giving us at least 200μs of flyback time per line during which no data is acquired. By stimulating only during this period, visible-light artifacts can be massively reduced. On systems with bidirectional scanning, there should still be some dead time at the frame edges where the galvos stop and reverse.

Schematic of the stimulation scheme – light pulses are delivered at the onset of the galvo flyback when no data is acquired.

Short light pulses (ch 2) inserted after x-line scans (ch 1) – here, only every 8th line is used.

We pipe the line trigger outputs from the galvo controller into an Arduino and generate a 50μs trigger for the LED on every Nth line, just after the previous line has finished scanning. Depending on the details, the resulting pulse rate should be >200Hz, which for ChR2 stimulation should be close to functionally equivalent to constant light (Lin et al. 2009). Power can be adjusted by varying either the duty cycle or the LED brightness.

Here’s some simple Arduino code for the triggering.

The cyclops LED driver

The cyclops LED driver

The method requires a light source that can switch on and back off, with no residual light, within ~50-100μs. We tried a few commercial LED drivers and the ubiquitous CNI-made DPSS lasers, and nothing was even remotely up to the task. We had success with a fast diode laser (Power Technology), but the best solution by far was simple LEDs with a very fast and stable driver circuit: the cyclops LED driver developed by Jon Newman, now at the Wilson lab at MIT. The high linearity and <2μs rise/fall time of the driver mean that no extra light bleeds into the frame even for fast scanning, and power can easily be adjusted by modulating either the duty cycle or the drive current.

One of the 75μs LED light pulses (triggered ~50-100μs after line end via an arduino), measured with a Si Photodiode on a Thorlabs PM100D meter. The rise time/decay are due to the meter’s time constant, the actual rise/fall times are <2μs.

Sign up for our newsletter and we’ll ping you once the cyclops is back in stock.

Avoiding PMT damage
Even though the light pulsing means that the images should be relatively free of the stimulation light, the PMTs still see the full blast of light, which can either cause damage (definitely don’t try this with GaAsPs unless you’re sure that they won’t see the stimulation light by accident!) or at least desensitize them. We tested this on our multi-alkali PMT with around 0.1mW total integrated power at 450nm (a wavelength that isn’t filtered out from the PMTs very well in our scope), delivered from a bare 200um fiber shining light uniformly over the imaging window, which results in moderate back scatter into the imaging optics. We didn’t measure the exact power at the objective back aperture, but after only 50 trials of 1 sec each, the sensitivity of the PMT was reduced enough to make imaging in deep layers of cortex almost impossible.

To resolve this issue, we attempted to filter as much of the LED light out of the detection path as possible. We use an OD 6 NIR blocking filter (Semrock) with notches at ~560nm (halo/arch) and ~470nm (ChR2) that keeps most of the LED power away from the PMTs, plus another step of decent filtering from the primary dichroic, which blocks yellow light. With this arrangement, yellow light (up to 1mW integrated power, >20mW peak, diffuse illumination directed at the imaging window) produces no clear imaging artifact in the line following the stimulation, and the blue LED (similar power) just leaves a very faint streak of brighter pixels across the imaging x-line after each pulse. In both cases, care still needs to be taken to account for residual slight brightening of the images when the LEDs are on. Using these filters, we haven’t been able to detect any significant PMT desensitization over the course of an imaging session.

Removing residual image artifacts
Even when filtering the LED light out of the PMT path to a degree that avoided sensitivity losses, we still observed a visible increase in image brightness over the course of ~half an image line, and weaker but still detectable brightening over other image lines. This artifact is likely caused by a combination of the limited PMT bandwidth (we are, after all, still saturating the signal while the LED is on) and some slower-timescale tissue fluorescence elicited by the visible light.

The simple brute-force solution that we found to work well was to pulse the LED only every 4th or 8th line, and simply interpolate over these lines to get rid of the artifacts. With ~10Hz frame rates, and with the pattern chosen so that a different set of image lines is degraded and interpolated out every frame, the resulting error in the data is minimal. If using a local pixel-correlation method to detect ROIs, it is advisable to either keep track of which lines were interpolated, or detect them later, and exclude these pixels from the local cross-correlation computation to avoid skewed results. Finally, even though the interpolation does a good job of removing most of the artifact, in some cases there was still a very small, predictable increase in image brightness due to the stimulation, which can be accounted for fairly easily by measuring it with neuropil/background ROIs over the entire imaging session and then subtracting it out. Additionally, when using this or any other method that could induce slight brightness changes, it is a good idea to use analysis methods that are not affected by small changes in overall brightness.
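For illustration, this interpolation step can be as simple as the following minimal matlab sketch. This is not our original analysis code; the variable names and the assumption that the stimulated line indices are known per frame are placeholders.

% Minimal sketch: 'frames' is an H x W x nFrames double array and
% stimLines{f} lists the image rows that carried an LED pulse in frame f
% (both names are placeholders).
for f = 1:size(frames, 3)
    for r = stimLines{f}
        rAbove = max(r - 1, 1);
        rBelow = min(r + 1, size(frames, 1));
        % replace the degraded line with the mean of its clean neighbours
        frames(r, :, f) = (frames(rAbove, :, f) + frames(rBelow, :, f)) / 2;
    end
end

Keeping the list of interpolated lines around also makes it easy to exclude exactly these pixels from any local correlation analysis later.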

Posted in Calcium imaging, Open Ephys | Comments Off on Simultaneous 2p imaging and visible-light optogenetics

High quality time series plots

What is the best, or at least a good, way to plot time-series data on a screen? When dealing with time-series data in electrophysiology, a good deal of time is spent looking at plots in order to judge data quality, adjust experiments in progress, or look for patterns during analysis, so optimizing the display quality seems worthwhile.

As long as there is at least one pixel per sample, the situation is easily solved with interpolation and possibly some anti-aliasing. This basic line-drawing problem is solved near-optimally by most existing libraries. However, with electrophysiological data we often need to plot many seconds of data on the screen, so each pixel corresponds to 100-1000 samples (for 30kHz data and ~10 seconds displayed on a typical full-screen display). In many cases there is important information at this fast timescale that would be lost if, for instance, only the average of the samples per pixel were plotted. Examples include spikes, often only 1-4 samples wide, and intermittent high-frequency noise that can indicate recording problems. Zooming out on the time axis exacerbates this issue further.

A few of these challenges are examined in the following synthetic data trace (see here for full matlab code used to generate the examples in this post):

Range within pixel (one color)
Just coloring each pixel column uniformly from the minimum to the maximum sample value is the standard approach for displaying neural data, and is used in most current software.

Range of samples per pixel

Spikes are very visible now, but the distribution of the noise, the density of spikes in a burst, etc. are completely obscured. The sections of ‘clean’ fake data that are overlaid with noise are indistinguishable from pure noise.
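As a rough illustration (not the linked example code), here is a minimal matlab sketch of this min/max approach; the toy trace and the value of spp (samples per pixel column) are placeholders.

% toy trace with a few 'spikes'; all names and numbers are placeholders
x = randn(1, 250 * 800);
x(500:5000:end) = 8;

spp    = 250;                                    % samples per pixel column
nCols  = floor(numel(x) / spp);
xr     = reshape(x(1:nCols * spp), spp, nCols);  % one column of samples per pixel
colMin = min(xr, [], 1);
colMax = max(xr, [], 1);
% draw one uniform-color vertical line per pixel column
figure; plot([1:nCols; 1:nCols], [colMin; colMax], 'k');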

Histogram per pixel (graded color)
In theory, representing the histogram of all samples in each pixel column via the brightness, opacity, or hue of the pixels should display a lot of non-temporal information.

Histogram of samples per pixel

Indeed, the noise distribution etc. becomes very visible, but fast yet tall features such as spikes are now almost invisible, and it is hard to judge the density of spikes.
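A minimal sketch of this per-column histogram, reusing x, xr, and nCols from the sketch above (the bin count is arbitrary):

nBins = 200;
edges = linspace(min(x), max(x), nBins + 1);
H = zeros(nBins, nCols);
for c = 1:nCols
    H(:, c) = histcounts(xr(:, c), edges)';   % sample count per amplitude bin
end
figure; imagesc(1:nCols, edges(1:end-1), H); axis xy; colormap(flipud(gray));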

Range per sample / supersampling (graded color)
The pure histogram display (above) doesn’t take the temporal ordering of samples into account. To solve this, we could plot a line between each pair of consecutive samples (as in the range method above, but at an x-resolution of one pixel per sample) and then downsample the resulting bitmap. Equivalently, we can treat this as a histogram in which the entire range between each consecutive pair of samples is counted. If we spread the same overall count uniformly over that range for each pair, this replicates what an analog oscilloscope does: each pair of samples contributes to the brightness as ~1/range between the samples, because the overall energy deposited for the pair stays constant regardless of how far/fast the value changes. This would make ‘spikes’ fainter the taller they are. Instead, here we count each pair as 1 in every bin across the entire range, giving extra weight to samples that vary a lot from their neighbors:

Supersampling

This plot now does a much better job of capturing spikes, and displays the density of spikes very well (see the ‘burst’ on the right). However, especially for identifying individual spikes, we would still like an even more exaggerated representation of the maximum data range per pixel.
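A minimal sketch of this variant, again reusing xr, edges, nBins, and nCols from the sketches above (for brevity, sample pairs that straddle a column boundary are ignored here):

Hs = zeros(nBins, nCols);
for c = 1:nCols
    b = discretize(xr(:, c), edges);           % bin index of every sample
    for k = 1:numel(b) - 1
        lo = min(b(k), b(k + 1));
        hi = max(b(k), b(k + 1));
        Hs(lo:hi, c) = Hs(lo:hi, c) + 1;       % count the whole spanned range once
    end
end
figure; imagesc(1:nCols, edges(1:end-1), Hs); axis xy; colormap(flipud(gray));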

Combination of all three
By mixing all three methods (range, histogram, supersampling), it should be possible to capture all of the required information and make it easy to configure the display for specific needs just by adjusting the coefficients of the three components. Further, by varying the color or saturation of the components, they can be made more distinct without adding visual complexity to the overall display.

Mix of range, histogram, and supersampling, range is indicated by color.
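One way to sketch such a mix, continuing from the previous three sketches (the weights here are arbitrary and purely illustrative):

R = zeros(nBins, nCols);                        % binary min-to-max range image
for c = 1:nCols
    R(discretize(colMin(c), edges):discretize(colMax(c), edges), c) = 1;
end
norm01 = @(A) A ./ max(A(:));                   % scale each component to [0, 1]
mix = 0.3 * norm01(R) + 0.3 * norm01(H) + 0.4 * norm01(Hs);
figure; imagesc(1:nCols, edges(1:end-1), mix); axis xy; colormap(hot);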

We’re currently testing this method in the Open Ephys GUI. You can check it out by compiling the branch here – but be aware that there are currently no performance optimizations and plenty of bugs. We’ll fold the code into the stable main branch eventually, once everything is well tested and the performance is sorted out.

Posted in Data analysis, Electrophysiology, Open Ephys | Comments Off on High quality time series plots

Fast approximate whisker tracking in the gap-crossing task

After finding the mouse’s nose position (see post on Tracking mouse position in the gap-crossing task), I wanted to get a fast, robust estimate of the basic whisking pattern, together with approximate times when whiskers could have been in contact with a target.

Get the matlab code and some example data on github.

Desired information:

  • Approximate times and numbers of whiskers that appear to be <0.5 mm from the target platform edge.
  • Approximate whisker angle for at least a majority of whiskers in most frames. This should be good enough to compute a rough whisking pattern (frequency, phase, amplitude).

The problems with the dataset are:

  • Size: we have ~100 videos, each containing 20-100k frames that need tracking (frames in which the mouse is in the right position).
  • Image heterogeneity: There are 6 mice with different whisker thicknesses & colors, and somewhat different light/background noise conditions (dust on the back light etc.).
  • Obscured features: The target and home platforms are darker than the back light, which makes whiskers intersecting them appear different, and there is an optical fiber intersecting the whiskers in many frames.
  • Full whiskers: The mice have (unilateral) untrimmed whiskers, which means there are tons of overlapping lines and generally not enough information to even attempt to maintain any whisker identity over pixels or frames.

Raw image – the back light uniformity is not perfect here but good enough.

This is fairly different from ‘real’ whisker tracking, where usually we’re after precise contact times (for electrophysiology etc.) and contact parameters such as whisker bending over time to estimate torques. For those cases, you’d typically use ~1kHz imaging and clipped whiskers, where only one row (usually the C row) or even only one whisker is left, and possibly even go to head-fixed preps (O’Connor et al. 2010). The convnet step detailed here should still be useful in those cases, but you’d use a more sophisticated method to track parametric whisker shapes. Here’s our older paper from 2008 on this; better methods have been published since, like the Clack et al. 2012 tracker (well documented code available), the Knutsen et al. tracker (initial paper in 2005, on github), or the BIOTACT Whisker Tracking Tool (software & docs) (paper).

Locating the head
To start out, we first determine whether there is a mouse in each frame (to avoid tracking empty images), the position of the mouse’s head, and the position of the target platform. See the earlier post on this for details.

Pixel-wise whisker labeling with a convolutional neural network
Next, we want to label all pixels that represent whiskers, ideally independently of lighting conditions, background noise, etc. If this labeling is sufficiently clean, a relatively simple method can be used later to get the location and orientation of individual whisker segments. Here, I’m using a very small convolutional neural network (tutorial) to identify whisker pixels. This code uses the ConvNet code by Sergey Demyanov (github page).

We’ll need a training set of raw images and binary label images in which all whiskers are manually annotated, using Photoshop or some other tool. It is crucial to capture all whiskers in these images with very high precision, and to paint over or mark (and then exclude from the training set) all non-labelled whiskers so that training runs on a clean set. Also make sure that the training set includes enough negative examples, including pixels from all possible occluders such as optical cables, recording tethers, etc. Here, I used just 4 images; a few more would probably have been better.

Example image and annotation

The network for this example is pretty simple:
an input radius of 5px, so we’re feeding the network 11x11px tiles;
a first layer with 8 outputs, a second layer with 4 outputs, and a softmax output layer.

The input radius / size of the image tiles around the pixel to be classified should be as small as possible while still getting the job done. Large radii mean more parameters to learn and slower processing later. We need around 5 pixels to do proper line/ridge detection, and maybe a few more in order to train the NN to avoid labeling whisker-like structures that are part of the target platform etc.

To avoid accidentally tracking pieces of fur that are close to the head but locally look like whiskers, we’d need a fairly large input radius for the CNN so it could be trained to label every hair that is too close to the head as negative. Instead, because locating pixels that are part of the head is dead simple via smoothing and thresholding (the head is the only big, very dark object in the images), we can accept that the CNN will give a few ‘false positives’ here and run a very fast cleanup pass with a much simpler convolution with a larger kernel (a 20px-diameter circle). This way the CNN can run on small, easy-to-train 11×11 tiles and we still avoid labeling fur.
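A minimal sketch of such a cleanup pass is below; the thresholds and variable names (im for the raw frame, whiskerProb for the per-pixel CNN output) are my own placeholders, not the original code.

% head = the only large, very dark blob: smooth and threshold
g = fspecial('gaussian', 21, 5);
headMask = conv2(double(im), g, 'same') < 30;          % threshold is a placeholder
% grow the head region with a ~20px-diameter circular kernel
k = double(fspecial('disk', 10) > 0);
nearHead = conv2(double(headMask), k, 'same') > 0;
% drop 'whisker' labels that fall too close to the fur
whiskerProb(nearHead) = 0;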

To build the training set, I pick all positive examples plus rotated copies, and a large number of negative examples picked from random image locations. Further, to avoid over-fitting to the specific image brightness levels of the training set, I add random offsets to each training sample. Because we’re only using a small number of training images, I’m not using a separate test set to track convergence for now.
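A minimal augmentation sketch along these lines; posTiles (a tileSize x tileSize x nPos stack of tiles cut around hand-labelled whisker pixels) and the offset scale are assumptions, not the original code.

% add 90/180/270 degree rotated copies of every positive tile
aug = cat(3, posTiles, ...
          flip(permute(posTiles, [2 1 3]), 1), ...   % 90 deg
          flip(flip(posTiles, 1), 2), ...            % 180 deg
          flip(permute(posTiles, [2 1 3]), 2));      % 270 deg
% random brightness offset per tile to avoid over-fitting to absolute levels
offsets = reshape(20 * randn(1, size(aug, 3)), 1, 1, []);
aug = bsxfun(@plus, double(aug), offsets);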

Training is then run to convergence, which takes ~4 hrs on a two-year-old Core i7 system.

Input image and input+output of the CNN (the raw CNN output contains no non-whisker image features; only the round ROI was processed)

Now that the whisker labels look OK, I run an approximate whisker angle estimation with a Hough transform. Of course, the labeled image would also make a good input for a proper vibrissa tracking tool, like the ones listed above, that can track a parametric whisker shape and even attempt to establish and maintain whisker identity across frames.

Example frame, NN output with overlaid Hough transform lines.
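A minimal sketch of this step using the Image Processing Toolbox Hough functions; whiskerMask (the CNN output for one frame) and all parameter values are assumptions.

bw = whiskerMask > 0.5;                                     % binarize the CNN output
[Hm, theta, rho] = hough(bw);
peaks = houghpeaks(Hm, 20, 'Threshold', 0.3 * max(Hm(:)));  % up to 20 candidate lines
segs  = houghlines(bw, theta, rho, peaks, 'FillGap', 5, 'MinLength', 20);
angles = [segs.theta];                                      % rough per-segment whisker angles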

Running the tracking on large datasets
Now that the method works in principle, a few small tricks are still needed to make it run at decent speed. First off, given that the nose position is already known, we can restrict the NN to a circular area of the image around it; and given that the direction the animal is heading is known and the whiskers are clipped on one side, we only need to track one side of the face and can cut the circle in half.

This leaves one major avoidable time sink in this implementation: arranging the image data so it can be fed into the neural network. The implementation I use here expects an inputsize X inputsize X outputsize array, with one inputsize X inputsize tile per desired output. This is just a consequence of using a general-purpose convolutional NN implementation. The simple solution of looping over output pixels and copying a section of the input image into an array takes ~2 seconds per frame in my dataset, way longer than the NN run itself, and gives me below 0.5 fps, which means I can only get through a few videos a day.

The solution in matlab is to pre-compute a mapping from each desired output pixel coordinate to the indices of the input pixels that make up the inputsize X inputsize tile for that pixel.

%steps for tiling the image
isteps=inradius+10:size(uim,1)-inradius-10;
% cutting off additional 10px on each side
jsteps = inradius+10:size(uim,2)-inradius-10;

uim_2_ii=zeros(numel(uim),(((inradius*2)+1).^2));
% ^this is the mapping from linear input image pixel index [1:width*height]
% to a list of (inradius*2)+1) X (inradius*2)+1) indices that make up the
% tile to go with that (output) pixel (these indices are again linear).
% Once we have this mapping, we can just feed
% input_image(uim_2_ii(linear desired output pixel index,:))
% into the CNN which is faster than getting the
% -inradius:inradius X -inradius:inradius tile each time.

x=0; % running index over output pixels
for i=isteps
    for j=jsteps
        x=x+1;
        ii=sub2ind(size(uim),i+meshgrid(-inradius:inradius)', ...
            j+meshgrid(-inradius:inradius)); %linear indices for that tile
        uim_2_ii(x,:)= ii(:); % reshape from matrix into vector
    end;
end;
uim_2_ii=uim_2_ii(1:x,:); % now points to the tile for each output/predicted pixel in uim

Once that 2d lookup table is done, feeding data to the CNN becomes negligibly fast.
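For illustration, using the table on a new frame could then look something like this (a sketch, reusing the variable names from the snippet above):

% cut all tiles out of a new image 'uim' in one indexing operation and
% reshape into the inputsize x inputsize x nTiles array the network expects
tileSz = (inradius * 2) + 1;
tiles  = reshape(double(uim(uim_2_ii')), tileSz, tileSz, []);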

Now we’re limited mostly by file access and the CNN itself, and can track at ~5-6fps, which is good enough to get through a decent-sized dataset in a few days.

Whisker tracking example output
To get the approximate whisking pattern, a simple median or mean of the angles coming from the Hough transform does a decent job, and simply averaging (and possibly thresholding) the CNN output at the platform edge gives a decent measure of whether vibrissae overlapped the target in any given frame. This is of course not a direct indicator of whether there was contact between the two, but for many analyses it is a sufficient proxy, and at the very least it gives a clear indication of whisking cycles in which there was no contact.

Get the matlab code here (includes a copy of ConvNet code by Sergey Demyanov (github page)).

Posted in Data analysis, Matlab | Leave a comment

Tracking mouse position in the gap-crossing task

This post is a quick walk-through of some code I recently wrote for tracking rough mouse position and a few other things in high-speed video of a gap-crossing task. The code is just very basic image processing, ugly hacks, and heuristics, but it was fast to implement (around one day including calibration) and gives more or less usable data. It might be useful as an example of how to get similar tasks done quickly and without too much tinkering.

Get the matlab code and some example data on github

Raw image

The setup consists of a camera that gets a top-down view of a mouse crossing from one side of the frame to the other and back. The two platforms between which the mouse moves can themselves be moved: the ‘base’ platform (left) has a variable position and is repositioned by hand (slowly) between trials, while the ‘target’ platform (right) is mostly static but occasionally retracts by a few mm very quickly (same method as in our anticipatory whisking paper).

To make this a bit more interesting, a fiber-optic cable sometimes obscures parts of the image, and the mouse will sometimes assume odd poses rather than just sticking its head across the gap. By far the most important thing to get right with this kind of tracking is the physical setup. Often, small, easy changes can make an otherwise hard image-processing step trivial. Here, the setup was designed with the following things in mind:

  • The platforms and camera are bolted down so they can’t move by even a mm, even when bumped. This means that one set of hand-tuned parameters works for all sessions. Similarly, light levels etc. are never changed unnecessarily. All automatic adjustments in the camera are disabled, and all focus/aperture rings are screwed in tight.
  • I use a high frame rate (~330Hz), low (<250us) exposure times, and a very small aperture (Navitar HR F1.4 16mm – f-stop adjusted down to a depth of field of a few cm). There should be no motion or out-of-focus blurring. This is not important for the mouse tracking, but is vital for detailed tracking of the mice’s whiskers later. The requirements for low exposure times and a small aperture mean that a lot of light is needed.
  • The only light source is a very uniform and bright backlight; we use red (>650nm) because it’s fairly invisible to mice. I made this from ~12 red 700mA LEDs glued to a thin (~2mm) aluminum plate that is bolted to the air table, which then acts as a huge heatsink. On this sits a box (just 4 sides) made from mirrored acrylic, and on top of that two sheets of frosted glass as a diffuser (a few mm between the two sheets makes the diffuser much more efficient). The diffuser needs to be removed for cleaning just about every session, so design it with that in mind. I moved the LEDs around to get pretty decent uniformity – this means I can use simple thresholding for many things, and it is important for whisker tracking later. There are no room lights, and minimal glare from computer screens etc. One reason for this is that I need white or black mice to appear completely black against the backlight.
  • I made sure that the platforms stay decently aligned to the axes of the image. This makes tracking them easier.
  • The platforms are at least somewhat transparent and let some of the backlight through, making it possible, if still hard, to track the mouse once it overlaps them.

Performance considerations
Uncompressed high-speed video can reach around 4-8GB/min, so single sessions can easily reach 200GB, and data access becomes a major bottleneck. I use uncompressed avi straight off the AVT (link) API (data is acquired via the API from matlab, just to make it very easy to load configurations – here is an example of such a minimalistic setup). The uncompressed format is a bit extravagant for mouse tracking, but can be important for whisker tracking, and it means that random access to arbitrary frames in the videos is fast. To reduce the impact of the data volume, the code performs three sequential levels of processing:

  • Go through the video in ~20-frame increments and identify rough periods in which there is motion (a minimal sketch of this first pass is shown after this list). In principle, one could then delete the parts of the video that are ‘empty’ and cut down storage by a lot.
  • Track, at a higher or full frame rate, only the limited periods where something moved.
  • Process and analyze the tracking data for the entire video after the session has been tracked frame-by-frame. This ‘offline’ pass makes the analysis much easier because we can look at the entire session at once and use moving-window medians etc.
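A minimal sketch of the coarse first pass; the file name, frame step, and threshold are placeholders, not the original acquisition/tracking code.

v = VideoReader('session.avi');
step = 20;                                   % only look at every 20th frame
prev = [];
motionFrames = [];
for f = 1:step:v.NumberOfFrames
    cur = double(read(v, f));
    if ~isempty(prev) && mean(abs(cur(:) - prev(:))) > 1.5   % crude change metric
        motionFrames(end + 1) = f;           %#ok<AGROW>
    end
    prev = cur;
end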

The code needs to track the following:

  • Position of both platforms.
  • Mouse position, keeping track of which side the mouse is on at any time.
  • Nose position (the most extreme left and right ends of the mouse in the x-direction), in order to track how close the mouse’s nose is to the platform it is approaching.
  • Optionally, when the laser (for an optogenetic manipulation, via an optical fiber attached to an implanted ferrule) was turned on, when the platforms were moved, etc.

Tracking image features

Mouse:
Because the mouse is darker than any other image feature, just getting rid of the tether with a simple image operation and thresholding is enough to track a rough mouse position. For accurate nose-position tracking, we then re-do this step with a less robust threshold and image blurring; the resulting accurate estimate can later be corrected using the more robust, less accurate one.

se = strel('ball',3,3);

% pre-process image for tracking rough features
imdilated_image=imdilate(frame,se);

% now simple thresholding does the trick: take the leftmost column
% in which >10% of the pixels are very dark (i.e. part of the mouse)
tracknose.lpos(fnum)= min(find(mean((imdilated_image<10))>.1));

Platforms:
Since the platforms are aligned with the imaging axis, a simple vertical step filter and some thresholding does the trick:

ff=[ones(1,5),ones(1,5).*-1 ]; % 1x10 step kernel that picks up vertical edges
Ig=max(conv2(double(imdilated_image),-ff,'same'),0); % keep only one edge polarity
Ig(Ig>350)=0;
Ig(1:70,:)=0;    % we know where platforms cant be..
Ig(:,1:300)=0;   % ..because the setup and camera dont move
Ig(:,400:end)=0; % so we can clean things up here already

% now just average in vertical direction and find max
[~,gp]= max(mean(Ig));
tracknose.retractgap(fnum)=gp;

Other things: 
For tracking when the laser (fiber-optic cable attached to a ferrule on the animal) was turned on, there’s another nice shortcut. Instead of dealing with a TTL sync signal etc., we just split the laser control signal into an IR LED taped to the air table so that a light appears in the corner of the frame (outside the backlight and platform area) whenever the laser is on. Synchronization then becomes a simple thresholding operation. Similarly, one could pipe information from a state machine etc. into the video by using a few more LEDs, and then use the fairly high-frame-rate video as a master clock for everything.
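In code, the sync check can then be as simple as thresholding that corner region in each frame (the corner size, threshold, and variable name are placeholders):

% sync LED sits in the top-right corner of the frame, outside the backlight
cornerROI = frame(1:30, end-29:end);
laser_on(fnum) = mean(double(cornerROI(:))) > 40;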

Post-processing
Once the image features are tracked frame-by-frame, they can be cleaned up using information that is not local to each individual frame. For instance, single-frame breakdowns in the platform positions are easily removed with moving-window median filters.

For the nose position, it works pretty well to have one accurate but less robust and one robust but less accurate estimate. When the accurate method fails, which is usually evident from sudden jumps, we fall back to the less accurate one. Because the mouse’s posture is not known, there is an unknown offset between the two estimates, which we can recover by looking at the last frame before the error. Because the posture changes slowly, using the robust estimate for ~20 frames after adjusting for this offset works very well in practice:

% fix transient tracking breakdowns
% we have two variables here:
% rpos: stable, not precise
% rnose: precise, has transient failures
%
rnose_diff = [0 (diff(tracknose.rnose))];
brkdwn=[find((rnose_diff > 10) .* (mousepresent>0)   ) numel(mousepresent)];
if numel(brkdwn) >0

    for i=1:numel(brkdwn)-1
        t=brkdwn(i)-1:brkdwn(i+1)+1; % time window of breakdown
        if numel(t) >20
            t=t(1:20); %limit to 20 frames
        end;

 % get offset of (stable) rpos and (precise) rnose at onset of breakdown
        d=tracknose.rnose(t(1))-tracknose.rpos(t(1));
        tracknose.rnose_fix(t)=tracknose.rpos(t)+d;
    end;
end;

Now that the rough mouse position, presence, etc. are known, we can track vibrissae – I’ll describe a fairly simple method for that in the next post.

Posted in Data analysis | Comments Off on Tracking mouse position in the gap-crossing task

Open Ephys poster and meeting @SfN2015


There will be two main Open Ephys events at this year’s SfN:

Aaron, Josh, Yogi, Jon, myself, and others will be hanging out at our poster (Sunday afternoon, BB46) to talk about and demo the upcoming plugin interface (C++, Python, Julia, Matlab).

The Open Ephys meeting will be on Monday, 6:30 pm, Convention Center room N230. We’ll give a quick overview of what Open Ephys is up to, as well as demos and presentations of associated projects.

We’ll have time for discussion at both the poster and the meeting, and we’ll set up hands-on demos of hardware and software at both. Jon Newman will be demoing his new ultra-fast and precise open-source LED driver, and we’ll show how easy it is to write new plugins for the GUI, set up closed-loop experiments, integrate video tracking, and much more.

Also, there will be a poster by Chris Black et al. on using the Open Ephys system for EEG with simultaneous transcranial alternating current stimulation (tACS) on Wednesday morning, S2. Drop by to see how the system can be used to minimize artifacts in EEG recordings – and besides the methods, there’s interesting science.

Posted in Open Ephys | Leave a comment

Poster @ SfN2015

I’ll be presenting some new work on the role of cortical layer 6 in representing sensory change at SfN on Sunday, morning session, Poster O17.

GCaMP6s imaging in L6

Posted in Science | Comments Off on Poster @ SfN2015