Preprint: Free head rotation while 2-photon imaging

We just posted the preprint for a method that allows 2-photon imaging while mice freely rotate horizontally and run around a real (or virtual) 2-D environment. The system allows attaching other instruments (ephys, opto, etc.) to the headpost. We think that this approach is useful not only for studies of 2-D navigation, but will more generally allow studies of natural and computationally complex behaviors.

The mice run around on an air-floated maze (similar to Kislin et al. 2014 and Nashaat et al. 2016). Horizontal rotation has been demonstrated to work well for behavior in VR in rats using a harness (Aronov and Tank 2014) and more recently with a head-fixation system in mice (Chen et al. 2018), and seems not only to make the animals more comfortable, but also to preserve head-direction encoding and grid-cell activity.

Our system is well tolerated by mice with minimal habituation, and we get stable 2-photon imaging even during fast head rotations and locomotion (see video).

Jakob Voigts, Mark Harnett: An animal-actuated rotational head-fixation system for 2-photon imaging during 2-d navigation

The main feature of our approach is that the rotation is active – we measure the torque applied by the mouse and move the headpost with a motor which has enough torque to quickly accelerate/decelerate the heavy rotating headpost, making it appear to have low friction and inertia. This means that the weight of the headpost doesn’t matter much, so we could make the system mechanically stable (and you can attach whatever instruments to the headpost – neuropixel probes anyone?).

Also, we modified the usual flat air maze approach (Kislin et al. 2014 and Nashaat et al. 2016) to be rotationally restricted: the maze can translate but not rotate. This is important so that the torque applied by the animal goes entirely into the headpost, where it is measured and actively compensated, rather than spinning the maze.

The system right now depends on a fair bit of strategically applied epoxy, but we’re in the process of turning it into a (somewhat) easily replicated add-on to existing systems.

 

Posted in Calcium imaging, Science

GCaMP imaging in cortical layer 6

For my PhD work I made extensive use of 2-photon imaging of layer 6 cell bodies at depths of up to ~850μm using GCaMP. This is somewhat deeper than we (and others) have been able to image comfortably using other mouse lines. While we didn't empirically test all the edge cases of our protocol to validate which parameters were actually needed to achieve this imaging depth, here is a very rough overview of the likely reasons we were able to acquire reasonable images in L6. In brief: there are no very interesting tricks involved, other than somewhat sparse expression and clean window surgeries.

This is all work that was done in collaboration with Chris Deister in Chris Moore’s lab.

L6 cell bodies, montage from multiple frames where each cell was active. Individual frames usually only show very few active cells.

Sparse expression
We used the NTSR1 line to restrict GCaMP6s expression to L6 CT cells. We used AAV2/1-hSyn-Flex-GCaMP6s (HHMI/Janelia Farm, GENIE Project; produced by the U. Penn Vector Core) at a titer of ~2*10^12/ml, injecting ~0.3μl through a burr hole >2 weeks prior to the window implant surgery. This gives us relatively localized expression in L6 (approximate diameter of the region with cell bodies ~300 μm) and results in relatively little fluorophore above the imaged cells.

Compounding this effect, the L6 CT processes above L4/L5a are relatively sparse. Together, this means that we were able to image at large depths without risking significant excitation of fluorophores above the focal point. See also Durr et al. 2011 for a nice quantification of superficial/out-of-focus fluorescence.

 

“The maximum imaging depth was limited by out-of-focus background fluorescence and not by the available laser power. For specimens with sparser staining patterns or staining limited to deeper layers, larger imaging depths seem entirely possible.” from: Theer, P., Hasan, M.T., and Denk, W. (2003). Two-photon imaging to a depth of 1000 μm in living brains by use of a Ti:Al2O3 regenerative amplifier. Opt. Lett. 28, 1022–1024.

On top of the local expression pattern achieved through the AAV injection, the highly sparse spiking activity of L6 CT cells is very friendly to GCaMP imaging. Because neighboring cells were rarely co-active, identifying cells and segmenting fluorescence traces was relatively easy, even with the significantly degraded z-resolution. Edge case: We imaged a few animals where GCaMP expression was much more spread out, likely due to variation in the AAV spread, and some reporter line crosses that expressed YFP in all L6 CT cells in addition to AAV-mediated GCaMP. Imaging at depths past L4/5 was harder in these animals at laser powers that would safely avoid any tissue heating or bleaching, suggesting that local expression/sparsity of superficial fluorescence was a requirement for imaging at these depths. Part of this was that increased background fluorescence from the dense L4/5 innervation by the L6 CT cells made it harder to distinguish cell bodies, but it seems likely that the overall increase in out-of-focus fluorescence also starts being an issue in some cases.

Window diameter
At depths below L2/3, the window diameter can start to affect imaging quality. With large NA objectives (we almost exclusively used a 16x 0.8NA here), deeper imaging planes, and imaging locations away from the center of the window, progressively more excitation light can get cut off by the edge of the window, resulting in power and effective NA loss.

Here is a plot of the available 2-photon excitation power for a completely uniformly filled 0.8NA objective through a 1mm window, ignoring tissue scattering. Realistic beam profiles that deliver more power at lower angles will be affected less in terms of power, but will still lead to effective NA loss, so this plot only works as an upper bound on how bad things could get. The plot shows the squared fraction of photons that make it to the focal spot, for imaging in the center of the (1mm) window (red), or 200μm off center (black).
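For reference, here is a minimal sketch of the kind of geometric estimate behind such a plot: rays from a uniformly filled back aperture are traced to the focal point and counted as lost if they miss the window opening. All parameter values are assumptions for illustration (n≈1.33 for tissue, no refraction at the cover glass, no scattering), so treat the output as a rough upper bound, just like the plot above.

% Monte-Carlo sketch: fraction of 2p excitation surviving the window-edge cutoff
NA = 0.8; n = 1.33;               % objective NA, assumed tissue refractive index
Rwin = 0.5;                       % window radius in mm (1mm window)
offset = [0 0.2];                 % lateral imaging offset from window center (mm)
depth = 0:0.05:1;                 % imaging depth below the window (mm)
nrays = 1e5;

% uniform fill of the back aperture: sample sin(theta) uniformly over a disk
rho = sqrt(rand(nrays,1)); phi = 2*pi*rand(nrays,1);
sinth = NA .* rho ./ n;           % ray angle inside the tissue
tanth = sinth ./ sqrt(1 - sinth.^2);

frac2p = zeros(numel(depth), numel(offset));
for oi = 1:numel(offset)
    for di = 1:numel(depth)
        % where each ray crosses the window plane, relative to the window center
        x = offset(oi) + depth(di) .* tanth .* cos(phi);
        y = depth(di) .* tanth .* sin(phi);
        pass = (x.^2 + y.^2) <= Rwin^2;   % ray clears the window edge
        frac2p(di,oi) = mean(pass).^2;    % 2p signal ~ (delivered power)^2
    end
end
plot(depth*1000, frac2p); xlabel('depth below window (\mum)');
ylabel('fraction of 2p excitation'); legend('window center','200\mum off center');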

While a 2mm window should be big enough from this point of view when imaging in the window center, we used 3mm imaging windows, giving us plenty of room to search for sensory-driven barrels to image in without risking any light cutoff. Also, the edges of windows are rarely as clear as the center, so the extra safety margin is good to have; this can mean not having to wait an extra week for the window to clear sufficiently, which is a big help. Past 3mm, window size seems to offer little further advantage, at least for S1 imaging, and bigger windows are much harder to position flat on the cortex.

Large windows could also make it somewhat easier to collect the emitted (scattered) visible light. The rule of thumb is that scattered emission photons exit the surface from a region on the order of ~1.5x the imaging depth in size (Beaurepaire & Mertz 2002), so a window that doesn't cut off excitation light should be near optimal for collection as well.

‘Stacked’/’Plug’ Imaging window
We used the window design described in Andermann et al. 2011 and Goldey et al. 2014, made from 3 and 5mm cover slips (Warner CS-3R and CS-5R, ~100-120μm thickness), placed directly on the dura without any agar (or any topical pharmaceuticals). Together with somewhat thinning the skull under the 5mm portion of the glass (especially rostral and caudal of the window for S1 implants – these are the 'high spots' that would otherwise make the window rock in the medial/lateral direction) to ensure that the glass sits flat on the brain, this positions the bottom of the window at, or slightly below, the level of the inner surface of the skull. This pushes back any swelling that occurred during the craniotomy and compensates for the distance between the glass and the brain surface caused by the curvature of the skull.


imaging window ‘plug’ design.

When setting the window into place, it is important to carefully inspect blood flow and to avoid applying so much pressure on the brain that blood flow is chronically affected, especially at the borders of the window. If flow is reduced immediately after window insertion but recovers within a few minutes, we usually had no issues.

The main effect of the window design is that the edge formed by the 3mm cover slips seems to keep dura/bone regrowth out of the imaging area – we’re usually able to image for as long as we want to (>2-3months) – usually AAV over-expression rather than window clarity limits the imaging schedule.

Edge case:
Flat 5mm windows without the stacked 3mm cover slips seem to give approximately the same initial imaging quality, but quickly degrade due to tissue regrowth, suggesting that the flat positioning of the window is not always a limiting factor for good optical access.

Surgery quality
We made sure to minimize any damage to the dura during the craniotomy and window implant. If bleeding occurred post-operatively, or if there was any amount of subdural blood, L6 imaging was impossible. Due to the window design, superficial blood usually cleared up within 1-2 weeks. In some cases, window clarity still improved after ~4 weeks. The main reason we saw bleeding was when we had performed viral injections ~2 weeks before the window implant, and the burr hole left a small spot of dura adhesion that ripped out when removing the bone – it seems possible that performing injections at the time of window implant could be preferable in some cases.

Occasionally windows deteriorated after >2 months – the first sign of this is the appearance of freely moving CSF(?) under the window, and/or increased dura autofluorescence elicited by blue light. In either case, L6 imaging became almost impossible immediately, even though axons/dendrites down to L4 could still be imaged without problems.

Edge case: We had 2 animals with very mild superficial blood in the tissue in which L6 imaging was possible with laser powers of ~70mW total – powers that were just about safe in other animals (that is, we didn't observe bleaching or any evidence of tissue damage) but that caused superficial tissue damage in these mice with mild residual blood. We don't know whether this is due to higher IR absorption and subsequent damage by superficial layers/dura in these mice, whether the blood increased the likelihood of an immune reaction, or whether the problem was purely coincidental. The takeaway is that it's better to wait a few days for windows to clear up rather than pushing to potentially dangerous laser powers.

Microscope optics
We're using a microscope with a 2″ collection path and a Nikon 16x/0.8NA objective. This objective seems to represent a nice sweet spot of good enough NA and great collection efficiency (see also Labrigger). We're slightly under-filling the back aperture, which sacrifices z-resolution but somewhat increases the proportion of photons that make it to the focal spot, because light coming in at angles closer to the optical axis has to traverse less tissue (check the Labrigger post on this). We haven't systematically tested over- vs. under-filling, but the effect on achievable imaging depth looks pretty negligible in our hands, partially because the sparsity of L6 firing makes z-resolution less important than it would be otherwise. Only in cases where L4/5 neurite fluorescence was an issue did overfilling significantly improve matters. We also switched to overfilling for occasional high-magnification scans of individual cells to verify that the cells appeared healthy – at typical imaging resolution, PSF degradation in L6 means that the cell nucleus was almost never clearly visible.

Excitation wavelength & Pre-chirping
We're using a Spectra-Physics Mai Tai DeepSee laser, usually at a wavelength of 980nm, which is a good choice for exciting GCaMP6 and gives us more ballistic photons than shorter wavelengths. Generally, longer wavelengths result in less scattering – this increase in mean free path length at longer wavelengths is a significant factor in deep imaging, because only non-scattered photons contribute to the 2p excitation at the focal volume (see Helmchen & Denk 2005 for a review; Durr et al. 2011 also has some nice quantification of this in non-brain tissue). We observe massively increased tissue autofluorescence at the dura for wavelengths >1000nm, so we settled on 980nm for most deep imaging.

Here's a plot of the available power (Lambert-Beer law, squared to account for 2p excitation) for a few wavelengths; mean free path length estimates are taken from Jacques, 2013. Take this with a grain of salt – the estimates depend heavily on the scattering coefficients of living neural tissue, which vary substantially between reports, but the general trend should apply in any case.


Lambert-Beer exponential decay of non-scattered photons by depth for a few wavelengths (P_0 * exp(-depth/l_s))^2. All mean free path length estimates are approximations, the literature is not fully consistent on the numbers, so the values will not match specific setups.
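A minimal sketch of how such a curve can be generated – the scattering lengths below are placeholders for illustration only (not taken from a specific source), so substitute values appropriate for your tissue and wavelengths:

% squared Lambert-Beer attenuation of ballistic photons vs. depth
depth  = 0:10:900;                 % imaging depth in um
lambda = [800 920 980];            % wavelengths in nm (for the legend only)
ls     = [130 160 190];            % assumed scattering mean free paths in um (placeholders)
P2p = zeros(numel(lambda), numel(depth));
for k = 1:numel(lambda)
    P2p(k,:) = exp(-depth ./ ls(k)).^2;   % (P_0 * exp(-depth/l_s))^2 with P_0 = 1
end
semilogy(depth, P2p); xlabel('depth (\mum)'); ylabel('relative 2p excitation');
legend(arrayfun(@(l) sprintf('%d nm', l), lambda, 'UniformOutput', false));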

For deep imaging past 700μm we typically set our laser power at 980nm to ~160-180mW total with the galvos centered, which corresponds to a maximum of 70-80mW total going into tissue when scanning at ~8-10Hz with an approximate pixel dwell time of 1-2μs. We haven’t systematically tested how much further we could push the power levels. In our experience total delivered powers above 140-150mW damage the tissue, though there is evidence that higher levels could be possible without causing damage (Podgorski et al.) – the details of the surgery, duty cycle of the imaging, area over which the beam is scanned, wavelength, pulse frequency vs energy per pulse etc. seem to start to matter substantially in this regime.

We also use a pre-chirper to maximize 2p excitation. The effect of tuning the pre-chirper is much more pronounced in deep imaging than at L2/3, but most animals with good image quality should still be workable without tuned pre-chirping, albeit at lower yield and with marginally more power. For tuning, we use software that displays a trace of the mean brightness of some large region of the image where we see fluorescence, and we manually select the setting that maximizes brightness.
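As an illustration, the readout can be as simple as the sketch below; getCurrentFrame() is a hypothetical stand-in for whatever frame-grabbing call your acquisition software provides, and the ROI is arbitrary.

% live trace of mean ROI brightness while the pre-chirper is adjusted by hand
roi = false(512,512); roi(100:400,100:400) = true;   % some large region with fluorescence
nframes = 500; trace = nan(1,nframes);
for f = 1:nframes
    frame = getCurrentFrame();            % hypothetical frame grab
    trace(f) = mean(double(frame(roi)));  % mean brightness of the ROI
    plot(trace); ylabel('mean ROI brightness'); drawnow;
end
% pick the pre-chirper setting that maximizes the trace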

GCaMP6s & virus expression time scale
We’re using GCaMP6s to maximize SNR – the slower kinetics of 6s are a good fit for the very low firing rates of L6 CT cells. We haven’t tested 6f yet in this preparation, but with good surgeries it seems like it should work as well, if maybe at a slightly lower yield.

It is also noteworthy that we almost always observe a sudden shift – often from one day to the next – from expression levels that were too low for imaging (just a few barely visible cells) to great expression. We're not sure whether this is due to a nonlinearity in apparent cell brightness on top of a linear increase in indicator level, or whether there's an uptick in indicator expression somewhere ~2-3 weeks post infection.

We used AAV2/1-hSyn-Flex-GCaMP6s, and usually had to wait ~3 weeks for good expression, but in some animals the data quality still improved slightly after week 6. This is fairly typical of AAV2/1 and matches the time scale of the increase in ChR2 photocurrent when using AAV-mediated ChR2.

References

  • Deep tissue two-photon microscopy. 2005, Nat. Methods, Fritjof Helmchen, Winfried Denk (link)
  • Influence of optical properties on two-photon fluorescence imaging in turbid samples. 2000, Applied Optics, Andrew K. Dunn, Vincent P. Wallace, Mariah Coleno, Michael W. Berns, and Bruce J. Tromberg (link)
  • Epifluorescence collection in two-photon microscopy. 2002, Applied Optics, Emmanuel Beaurepaire and Jerome Mertz (link)
  • Effects of objective numerical apertures on achievable imaging depths in multiphoton microscopy. 2004, Microsc. Res. Tech., C.K. Tung, Y. Sun, W. Lo, S.J. Lin, S.H. Jee, C.Y. Dong (link)
  • Maximum imaging depth of two-photon autofluorescence microscopy in epithelial tissues. 2011, J. Biomed. Opt., Nicholas J. Durr, Christian T. Weisspfennig, Benjamin A. Holfeld, and Adela Ben-Yakar (link)
Posted in Calcium imaging, Science

Mirror alignment target for 2-photon microscopes

When aligning the laser path of a system in which mirrors are translated, for instance for the x/y/z adjustments on a 2-photon microscope, the laser path needs to be kept parallel to each of the translation axes. Also, for almost any system, it is important to keep the beam well centered on the axis of the optical elements. It is therefore common practice (in systems where the mirror mounts are placed precisely in line) to align beam paths by centering the beam on each mirror in the system, using an alignment target that is put in the mirror mount in place of the mirror.

Because the final alignment on a 2p scope can often not be done in a visible wavelength (either none is available, or the laser angle varies too much for the visible tuning range to be useful for alignment), IR viewer cards are needed to check if the beam hits the target in the center. This is cumbersome, requiring at least 2 hands, and error prone.

By making a mirror mounted IR viewing card that sits at the same plane and x/y position in the mount as the mirror surface, this process can be made much faster. There are a few existing options that look promising (Thorlabs, 3d printed cap with target for laser cutter), but none of these seem to provide the same precision and repeatability as a machined mirror target that is well seated in the mirror mount.

Here's an easy recipe for adding an IR viewer card to a mirror alignment target, requiring only a target (mirror-mounted holder + aperture plate), a good IR viewer card or a 1/2″ IR viewer disk (I'm not 100% sure about the quality of these though), and some very common tools:


Ingredients & tools for making the alignment tool

Cut part of an IR viewer card to the same size as the aperture in the mirror mount adapter (1/2″ in this case)

Adapter with viewer card inserted

Insert the viewer card into the adapter and make a small mark in the center. Then hold the card in place with the 1/2″ aperture plate and secure it with the set screw. Make sure that the adapter you're using ends up placing the IR card at the same plane as the mirror surface, or there will be a small position offset.

Alignment tool in action

Posted in Calcium imaging

Cheap dental drill

Common dental drills are useful, if not required, in any systems lab. In addition to the usual applications, they can be used to cut holes in cover slips (with diamond abrasive burrs), cut small openings in drive implant bodies, smooth out dental cement or even metal parts, etc. However, dental drills are quite expensive when purchased from vendors of dental supplies. Luckily, the only key part of the system that seems to be hard to find cheaply, the air regulator and foot pedal, can be made with parts available from Amazon etc. just by screwing together some air hose fittings.

This bill of materials adds up to a fully functional, brand-new dental drill for a total of <$100, excluding the bits/burrs. The list can be ordered entirely from Amazon, and can likely be had a bit cheaper on AliExpress or similar.

Hand piece, the actual ‘drill’ part:
Any 2-hole handpiece will do; here's a nice option for ~$30 that has a built-in LED powered by the turbine. These are available in low-speed/high-torque versions as well, and/or at various angles.

Air regulator:
We can make the foot pedal/regulator from a simple foot pedal ($15 at amazon, 12mm threaded connectors) and a regulator ($9 with 1/4″ NPT thread). Now we just need some push-to-connect fittings for 6mm hard plastic tubing that work with the 12mm and 1/4″ NPT threads, so for instance these for $8 (they’ll need some teflon tape or epoxy to not leak on the 12mm threads), and some 6mm pneumatic tubing, like this for $10. To attach this to your air outlet, some other 6mm push-to-connect fitting with an appropriate threading might be needed.

Instead of the 6mm hard plastic tubing, just about any air hose could be used, but I like this option because the push to connect fittings are easy to use and the hose is easy to cut to length and is fairly thin and doesn’t get in the way. For the low pressure section of this, a more flexible tube like some thick wall tygon tube variant with barb connectors could likely work as well.

Now we just need a standard 2-hole style handpiece connector (~$10 here, or anywhere really) – conveniently, the common tube OD on these is ~4mm, which fits snugly into the 6mm pneumatic tubing, and this part of the system is pretty low pressure, so a bit of glue and/or heat-shrink tube is enough to connect the handpiece to the pedal/regulator combo. I also removed the water delivery tube from the handpiece connector, and cut off the thick protective tube that surrounded the air and water tubes, so only the air tube is left. This makes the drill a bit less robust, but removes almost any tugging from the hose and makes handling the drill easier. If the air hose is ever damaged it can easily be replaced by any type of tube that fits over the barb in the 2-hole connector.


back of foot pedal

Now these parts fit together in the obvious order: air outlet > regulator (watch the direction – there’s one input and one output) > foot pedal (also has one input and one output, plus a ‘bleed’ output on the side, ignore that one) > handpiece. The regulator could also go right next to the air outlet. Here, I screwed it to the foot pedal to make a neat little unit. I just drilled out a hole in the top of the pedal housing and used a M6 screw and nut for this.


Top view of the foot pedal

For the burrs, we typically use Round, #1/4 carbide burrs for craniotomies and burr holes, and sometimes a #2-4 for thinning and/or to remove large amounts of cement. This is the only part of the system that can’t be ordered on amazon, but is quite cheap anyways.

 

Posted in Calcium imaging, Electrophysiology

Open Ephys @ SfN2016


There’s a lot of Open Ephys and open-source science related stuff going on at SfN this year:

We will be hanging out at our poster on Wednesday, Nov 16, afternoon (MMM62) to talk about the next-generation system and interface standard that we've been working on. We'll also have a prototype system to play with, developed and built by Jon Newman, Aarón Cuevas López, and myself.

The quick summary is that we're proposing a standard for PCIe-based acquisition systems that delivers very high data rates and sub-millisecond latencies, makes it easy (and cheap) to add new data sources, and will be able to grow with new technology generations. All of that is accomplished by using existing industry standards and interfaces. The project overlaps with Open Ephys, but our hope is that it will serve as a fairly generic interface standard for many applications.

We'll also have an Open Ephys meeting on Monday, 6:30 pm, Marriott Ballroom A. We'll bring the poster and live demo, give a quick overview of what Open Ephys is up to, and have time to chat with current and potential users and developers.

In addition, here are some posters that highlight open-source tools for electrophysiology and imaging:

Saturday afternoon, FFF9 “Validation and biological relevance of a real-time ripple detection module for Open Ephys”

Saturday afternoon, KKK58 “Smartscope 2: automated imaging for morphological reconstruction of fluorescently-labeled neurons”

Sunday afternoon, LLL49 “LabStreamingLayer: a general multi-modal data capture framework”

Sunday afternoon, LLL47 “RTXI: a hard real-time closed-loop data acquisition system with sub-millisecond latencies”

Monday afternoon, U5 “Wirelessly programmable module for custom deep brain stimulation”

Tuesday afternoon, JJJ18 “Low latency, multichannel sharp-wave ripple detection in a low cost, open source platform”

Wednesday afternoon, MMM56 “A multi-target 3D printed microdrive for simultaneous single-unit recordings in freely behaving rats”

Posted in Open Ephys

Backlight for high-speed video whisker tracking

Here's a simple recipe for a very bright, uniform background for high-speed videography. This approach will work well for any application where the outline of a small object needs to be measured at high frame rates. For vibrissa tracking or for calibrating piezo stimulators, I currently use only a single such backlight and no other light sources – this avoids reflections on the tracked object and usually gives the cleanest, most interpretable data.

Obligatory warning:  Do not look at LEDs with unprotected eyes – these things get extremely bright and you might damage your retina. 

backlight on

The basic design has three components, bottom to top:

  1. A customizable array of LEDs, attached to a heat-sink, with a power supply and current regulator.
  2. A spacer/reflector made from mirrored acrylic.
  3. A glass diffuser.

LED array

For whisker tracking I use either deep red or NIR LEDs in the common hexagonal packaging, in a grid pattern of one LED every ~2cm or so, with a sufficiently powerful driver. The density of the LEDs could easily be increased to yield an even more powerful backlight.

I just superglue the LEDs to an aluminum sheet with a thin layer of insulator in between (here I just used lab tape – not ideal but good enough). The aluminum sheet is then clamped to the optical breadboard so the whole table works as a heatsink. If this is not an option, big CPU coolers are fairly cheap and can remove a lot of heat. As a current regulator I use a BuckPuck from ledsupply.com, driven from a sufficiently powerful DC supply (old laptop power supplies work well, or even ATX supplies if 12V is sufficient for your LEDs). Alternatively, a current-limited bench supply would also work.

Spacer/Reflector

To get a uniform backlight, I use a square box (just 4 sides) made from mirrored acrylic (from McMaster). It just sits on the LED array in a couple of guides so it's always in the same place. Ideally, this reflector would be dimensioned so that the resulting apparent/virtual pattern of LEDs seen by the diffuser is completely uniform, but in practice I found that this does not matter a whole lot.

Diffuser

On top of the spacer, I use a home-made diffuser made from two sheets of cheap frosted glass, glued together (here I just use Kapton tape) and held ~5mm apart with spacers. I just cut the glass myself with one of these. This double diffuser works better than much higher-quality single-sheet ones, and is ~10x cheaper. Make sure that the construction of the diffuser allows for easy cleaning, so avoid tape or glue that can't tolerate ethanol.

The trick is to play with the spacing of the mirrored box, and the LEDs until the light is very uniform. This calibration only really works when using a camera, because eyes are surprisingly bad at detecting brightness gradients – also, the brightness of this light can reach unsafe levels so avoid staring directly at the light even with the diffuser.

Posted in Uncategorized

Simultaneous 2p imaging and visible-light optogenetics

We recently needed to verify how a large population of neurons reacts to weak optogenetic stimulation. We found that with a relatively straightforward setup, visible-light optogenetic stimulation can be integrated into existing 2p rigs without resulting in problematic imaging artifacts. Here, we slightly de-/hyperpolarized cells with a ~1mm beam of light aimed at the imaging window while imaging and delivering sensory stimuli, but the same approach should work for all kinds of experiments with implanted optical fibers, scanned focused light, or even patterned light stimulation.


Setup overview (shown here is full-field diffuse illumination from a bare fiber – other configurations should work exactly the same).

WARNING: Don’t direct light into photomultipliers unless you’ve taken adequate precautions to ensure that they won’t be damaged. None of the methods described here have been tested other than in our specific microscope. Specifically, this method is probably not safe for GaAsP PMTs.

Fast light pulsing outside the frame acquisition times
Our 2p setup uses galvos and scans only unidirectionally in the X direction, giving us at least 200μs of flyback time during which no data is acquired. By stimulating only during this period, visible-light artifacts can be massively reduced. On systems with bidirectional scanning, there should still be some dead time at the frame edges where the galvos stop/reverse.


Schematic of the stimulation scheme – light pulses are delivered at the onset of the galvo flyback when no data is acquired.


Short light pulses (ch 2) inserted after x-line scans (ch 1) – here, only every 8th line is used.

We pipe the line trigger outputs from the galvo controller into an Arduino and generate a 50μs long trigger for the LED on every Nth line, just after the previous line has finished scanning. Depending on the details, the resulting pulse rate should be >200Hz, which for ChR2 stimulation should be close to functionally equivalent to constant light (Lin et al. 2009). Power can be adjusted by varying either the duty cycle or the LED brightness.
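As a back-of-the-envelope check on those numbers (frame rate and line count below are generic example values, not our exact scan parameters):

frame_rate      = 10;        % Hz (example)
lines_per_frame = 512;       % example
N               = 8;         % stimulate on every Nth line
pulse_width     = 50e-6;     % s
line_rate  = frame_rate * lines_per_frame;   % ~5100 lines/s
pulse_rate = line_rate / N;                  % ~640 Hz, comfortably >200 Hz
duty_cycle = pulse_rate * pulse_width;       % ~3 percent of the time the LED is on
fprintf('pulse rate %.0f Hz, duty cycle %.1f %%\n', pulse_rate, duty_cycle*100);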

Here’s some simple arduino code for the triggering.


The cyclops LED driver

The method requires a light source that can switch on and back off with no residual light within ~50-100μs. We tried a few commercial LED drivers and the ubiquitous CNI-made DPSS lasers, and nothing was even remotely up to the task. We had success with a fast diode laser (Power Technology), but the best solution by far was simple LEDs with a very fast and stable driver circuit: the Cyclops LED driver that Jon Newman (now at the Wilson lab at MIT) developed. The high linearity and <2μs rise/fall time of the driver mean that no extra light bleeds into the frame even for fast scanning, and power can easily be adjusted by modulating either the duty cycle or the drive current.


One of the 75μs LED light pulses (triggered ~50-100μs after line end via an arduino), measured with a Si Photodiode on a Thorlabs PM100D meter. The rise time/decay are due to the meter’s time constant, the actual rise/fall times are <2μs.

Sign up for our newsletter and we’ll ping you once the cyclops is back in stock.

Avoiding PMT damage
Even though the light pulsing means that the images should be relatively free of the stimulation light, the PMTs would still see the full blast of light, which can either cause damage (definitely don't try this with GaAsPs unless you're sure that they won't see the stimulation light by accident!) or at least desensitize them. We tested this on our multi-alkali PMT with around 0.1mW total integrated power at 450nm (which isn't filtered out from the PMTs very well in our scope) out of a bare 200μm fiber shining light uniformly over the imaging window, which results in moderate back-scatter into the imaging optics. We didn't measure the exact power at the objective back aperture, but after only 50 trials of 1 sec each, the sensitivity of the PMT was reduced enough to make imaging in deep layers of cortex almost impossible.

To resolve this issue, we attempted to filter as much of the LED light out of the detection path as possible. We have an NIR blocking filter (OD 6, Semrock) with notches at ~560nm (halo/arch) and ~470nm (ChR2) that keeps most of the LED power away from the PMTs, plus another step of decent filtering from the primary dichroic, which blocks yellow light. With this arrangement, with yellow light (up to 1mW integrated power, >20mW peak, diffuse illumination directed at the imaging window) we can't see any clear imaging artifact in the line following the stimulation, and the blue LED (similar power) just leaves a very faint streak of brighter pixels across the imaging x-line after each LED pulse. In both cases, care still needs to be taken to account for a residual slight brightening of the images when the LEDs are on. We haven't been able to detect any significant PMT desensitization over the course of an imaging session using these filters.

Removing residual image artifacts
Even when filtering the LED light out of the PMT path to a degree that avoided sensitivity losses, we still observed a visible increase in image brightness over the course of ~half an image line, and weaker, but still detectable, brightening over other image lines. This artifact is likely caused by a combination of the PMT bandwidth (we are, after all, still saturating the signal while the LED is on) and some slower-timescale tissue fluorescence elicited by the visible light.

The simple brute-force solution that we found to work well was to just pulse the LED every 4th or 8th line, and simply interpolate over these lines to get rid of the artifacts. With ~10Hz frame rates, and with the pattern chosen so that a different set of image lines is degraded & interpolated out every frame, the resulting error in the data is minimal. If using a local pixel-correlation method to detect ROIs, it is advisable to either keep track of which lines were interpolated, or detect them later, and exclude these pixels from the local cross-correlation computation to avoid skewed results. Finally, even though the interpolation does a good job at removing most of the artifact, in some cases there was still a very small predictable increase in image brightness due to the stimulation, which can be accounted for fairly easily by measuring it on neuropil/background ROIs over the entire imaging session and then subtracting it out. Additionally, when using a method like this (or any other) that could induce slight brightness changes, it is a good idea to use analysis methods that are not affected by slight changes in overall brightness.
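A minimal sketch of the interpolation step, assuming the raw movie is an H x W x T array and that every Nth line carried a pulse, with the phase shifted each frame (the exact values are examples):

N = 8;
[H, W, T] = size(frames);
cleaned = double(frames);
stim = cell(1, T);                               % remember which lines were replaced
for t = 1:T
    stimLines = (1 + mod(t, N)):N:H;             % lines carrying an LED pulse this frame
    keepLines = setdiff(1:H, stimLines);
    cleaned(stimLines,:,t) = interp1(keepLines, cleaned(keepLines,:,t), ...
                                     stimLines, 'linear', 'extrap');
    stim{t} = stimLines;                         % exclude these later from pixel-correlation ROI detection
end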

Posted in Calcium imaging, Open Ephys

High quality time series plots

What is the best, or at least a good, way to plot time-series data on a screen? When dealing with time-series data in electrophysiology, a good deal of time is spent looking at plots in order to judge data quality, adjust experiments in progress, or look for patterns in analysis, so optimizing the display quality seems worthwhile.

As long as there is at least one pixel per sample, the problem is easily solved with interpolation and possibly some anti-aliasing; this basic line-drawing problem is solved near-optimally by most existing libraries. However, for electrophysiological data we often need to plot many seconds of data on the screen, and each pixel then corresponds to 100-1000 samples of data (for 30kHz data and ~10 seconds on a typical full-screen display). In many cases, there's important information on this fast timescale that would be lost if, for instance, only the average of the samples per pixel was plotted. Examples of such information include spikes, often only 1-4 samples wide, and intermittent high-frequency noise that can indicate recording problems. When zooming out on the time axis, this issue is exacerbated further.

A few of these challenges are examined in the following synthetic data trace (see here for full matlab code used to generate the examples in this post):

Range within pixel (one color)
Just filling each pixel column from the minimum to the maximum sample value with a uniform color is the standard approach for displaying neural data, and is used in most current software.


Range of samples per pixel

Spikes are very visible now, but the distribution of noise, or density of spikes in a burst etc. are completely obscured. The sections of ‘clean’ fake data that are overlaid with noise are completely indistinguishable from pure noise.
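For illustration, a minimal sketch of this standard min/max display (x is the raw trace; the samples-per-pixel value is an arbitrary example):

spp  = 300;                                  % samples per screen pixel, e.g. 30kHz data
npix = floor(numel(x) / spp);
xr = reshape(x(1:npix*spp), spp, npix);      % one column of samples per screen pixel
lo = min(xr, [], 1); hi = max(xr, [], 1);
plot([1:npix; 1:npix], [lo; hi], 'k');       % one vertical min-to-max line per pixel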

Histogram per pixel (graded color)
Theoretically, just representing the histogram of all samples per pixel via the brightness/opacity or hue of the pixels should display a lot of non-temporal information.


Histogram of samples per pixel

Indeed, the noise distribution etc. becomes very visible, but fast yet tall features such as spikes are now almost invisible, and it is hard to judge the density of spikes.

Range per sample / supersampling (graded color)
The pure histogram display (above) doesn't take the temporal ordering of samples into account. To solve this, we could plot a line between each pair of consecutive samples (same as the range method above, but at an x-resolution of one pixel per sample), and then down-sample the resulting bitmap. Equivalently, we can treat this as a histogram in which the entire range between each consecutive pair of samples is counted uniformly. If we counted this range uniformly but added the same overall count for each pair, this would replicate what an analog oscilloscope does: there, each pair of samples contributes to the brightness as ~1/(range between samples), because the overall energy deposited for the pair stays constant regardless of how far/fast the value changes. That would make 'spikes' fainter the taller they are. Instead, here we always count each pair as 1 across the entire range it spans, giving extra weight to samples that vary a lot from their neighbors:


Supersampling

This plot now does a much better job at capturing spikes, and displays the density of spikes very well (see the ‘burst’ on the right). However, especially for identifying individual spikes, we would still like an even more exaggerated representation of the maximum data range per pixel.
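A minimal sketch of the supersampled histogram, reusing x, spp and npix from the sketch above (the bin count is arbitrary): each consecutive sample pair adds one count to every amplitude bin between the two values, in the pixel column the pair falls into.

nbins = 200;
edges = linspace(min(x), max(x), nbins+1);
img = zeros(nbins, npix);
for p = 1:npix
    seg = x((p-1)*spp + (1:spp));
    for s = 1:numel(seg)-1
        b1 = discretize(min(seg(s:s+1)), edges);
        b2 = discretize(max(seg(s:s+1)), edges);
        img(b1:b2, p) = img(b1:b2, p) + 1;   % count the whole range between the pair once
    end
end
imagesc(1:npix, edges(1:end-1), img); axis xy; colormap(flipud(gray));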

Combination of all three
By mixing all three of the methods (range, histogram, supersampling), it should be possible to capture all required information and make it easy to configure the display for specific needs just by adjusting the coefficients of the three components. Further, by varying the color or saturation of the components, they can be made more distinct without adding visual complexity to the overall display.


Mix of range, histogram, and supersampling, range is indicated by color.

We're currently testing this method in the Open Ephys GUI. You can check it out by compiling the branch here – but be aware that there are currently no performance optimizations and plenty of bugs in it. We'll fold the code into the stable main branch eventually, once we're sure everything is well tested and the performance is sorted out.

Posted in Data analysis, Electrophysiology, Open Ephys

Fast approximate whisker tracking in the gap-crossing task

After finding the mouse’s nose position (see post on Tracking mouse position in the gap-crossing task), I wanted to get a fast, robust estimate of the basic whisking pattern, together with approximate times when whiskers could have been in contact with a target.

Get the matlab code and some example data on github.

Desired information:

  • Approximate times and numbers of whiskers that appear to be <0.5 mm from the target platform edge.
  • Approximate whisker angle for at least a majority of whiskers in most frames. This should be good enough to compute a rough whisking pattern (frequency, phase, amplitude).

The problems with the dataset are:

  • Size: we have ~100 videos, containing 20-100k frames each that need tracking (mouse is in the right position in these frames).
  • Image heterogeneity: There are 6 mice with different whisker thicknesses & colors, somewhat different light/background noise conditions (dust on the back light etc.).
  • Obscured features: The target and home platforms are darker than the back light, which makes whiskers intersecting them appear different, and there is an optical fiber intersecting the whiskers in many frames.
  • Full whiskers. The mice have (unilateral) untrimmed whiskers, which means there are tons of overlapping lines and generally not enough information to even attempt to maintain any whisker identity over pixels or frames.

Raw image – the back light uniformity is not perfect here but good enough.

This is fairly different from 'real' whisker tracking, where the goal is usually to get precise contact times (for electrophysiology etc.) and contact parameters such as whisker bending over time to estimate torques. For those cases, you'd typically use ~1kHz imaging and clipped whiskers, where only one row (usually the C row) or even only one whisker is left, and possibly go to head-fixed preps (O'Connor et al. 2010). The convnet step detailed here should still be useful in those cases, but you'd use a more sophisticated method to track parametric whisker shapes. Here's our older paper from 2008 on this; better methods have been published since, like the Clack et al. 2012 tracker (well-documented code available), the Knutsen et al. tracker (initial paper in 2005, on github), or the BIOTACT Whisker Tracking Tool (software & docs) (paper).

Locating the head
To start out, we first determine whether there's a mouse in each frame (to avoid tracking empty images), and locate the position of the mouse's head and of the target platform. See the earlier post on this for details.

Pixel-wise whisker labeling with  a convolutional neural network
Next, we want to label all pixels that represent whiskers, ideally independently of light conditions, background noise etc. If this labeling is sufficiently clean, a relatively simple method can be used later to get the location and orientation of individual whisker segments. Here, I’m using a very small convolutional neural network (tutorial) to identify whisker pixels. This code uses ConvNet code by Sergey Demyanov (github page).

We'll need a training set of raw and binary label images in which all whiskers are manually annotated, using Photoshop or some other tool. It is crucial to get all whiskers in these images with very high precision, and to paint over or mark to ignore (and then exclude from the training set) all non-labelled whiskers so that the training can run on a clean training set. Also make sure that the training set includes enough negative examples, including pixels from all possible occluders such as optical cables, recording tethers etc. Here, I used just 4 images; a few more would probably have been better.


Example image and annotation

The network for this example is pretty simple:
Input radius of 5px, so we’re feeding the network 11x11px tiles,
First layer with 8 outputs, second layer with 4 outputs, softmax for output.

The input radius  / size of the image tiles around the pixel that is to be identified should be as small as possible while getting the job done. Large radii mean more parameters to learn and slow down processing later. We need around 5 pixels to do a proper line/ridge detection, and maybe a few more in order to train the NN to avoid labeling whisker-like structures that are part of the target platform etc.

In order to avoid accidentally tracking pieces of fur that are too close to the head but locally look like whiskers, we'd need a fairly large input radius for the CNN so it could be trained to label every hair that is too close to the head as negative. Instead, because locating pixels that are part of the head is dead simple via smoothing and thresholding (the head is the only big, very dark object in the images), we can just accept that the CNN will give a few 'false positives' here, and run a very fast cleanup pass with a much simpler convolution with a larger kernel (20px diameter circle). This way the CNN can run on small, easy-to-train 11×11 tiles and we still avoid labeling fur.
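A minimal sketch of what such a cleanup pass could look like – the threshold and kernel sizes are illustrative, and cnnOutput stands for the per-pixel whisker probability produced by the network:

headMask = conv2(double(frame < 30), fspecial('disk',10), 'same') > 0.5;          % the big dark blob = head
nearHead = conv2(double(headMask), double(fspecial('disk',10) > 0), 'same') > 0;  % grow the mask by ~20px
whiskerLabels = cnnOutput .* ~nearHead;    % drop CNN positives that are too close to the head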

To make the training set I'm picking all positive examples, plus rotated copies, plus a large number of negative ones picked from random image locations. Further, to avoid over-training to the specific image brightness levels of the training set, I'm adding random offsets to each training sample. Because we're just using a small number of training images, I'm not using a separate test set to track convergence for now.

Training is then run to convergence, which takes around 4 hrs on a 2-year-old Core i7 system.


Input image and input+output of the CNN (the raw cnn output doesn’t show non-whisker image features, only the round ROI was processed)

Now that the whisker labels look OK, I run an approximate whisker-angle tracking with a Hough transform (see the sketch below). Of course the labeled image would also make a good input for a proper vibrissa tracking tool, like the ones listed above, that can track a parametric whisker shape and even attempt to establish and maintain whisker identity over frames.


Example frame, NN output with overlaid Hough transform lines.
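A minimal sketch of this step using the Image Processing Toolbox Hough functions (thresholds and peak counts are illustrative, not the exact values used here):

bw = cnnOutput > 0.5;                        % binarize the per-pixel whisker labels
[H, theta, rho] = hough(bw);
peaks = houghpeaks(H, 15, 'Threshold', 0.3 * max(H(:)));
lines = houghlines(bw, theta, rho, peaks, 'FillGap', 5, 'MinLength', 20);
angles = [lines.theta];                      % one approximate angle per detected segment
whiskAngle = median(angles);                 % rough whisking angle for this frame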

Running the tracking on large datasets
Now that the method works in principle, there are still a few small tricks needed to make it run at decent speed. First off, given that the nose position is already known, we can restrict the NN to run only on a circular area of the image around it; and given that the direction the animal is headed is known and the whiskers are clipped on one side, we only need to track one side of the face and can cut that circle in half.

This leaves one major avoidable time sink in this implementation: arranging the image data so it can be fed into the neural network. The implementation I use here expects an inputsize X inputsize X outputsize array, with one inputsize X inputsize tile per desired output. This is just a consequence of using a general-purpose convolutional NN implementation. The simple solution of looping over output pixels and copying a section of the input image into an array takes ~2 seconds per frame in my dataset – way longer than the NN itself – and gives me below 0.5 fps, which means I can only get through a few videos a day.

The solution in matlab is to just pre-compute a mapping of indices from each desired output pixel coordinate to the indices of the input pixels corresponding to the inputsize X inputsize tile for that pixel.

% steps for tiling the image
% (cutting off an additional 10px on each side)
isteps = inradius+10:size(uim,1)-inradius-10;
jsteps = inradius+10:size(uim,2)-inradius-10;

uim_2_ii = zeros(numel(uim),((inradius*2)+1).^2);
% ^this is the mapping from linear input image pixel index [1:width*height]
% to a list of ((inradius*2)+1) X ((inradius*2)+1) indices that make up the
% tile to go with that (output) pixel (these indices are again linear).
% Once we have this mapping, we can just feed
% input_image(uim_2_ii(linear desired output pixel index,:))
% into the CNN, which is faster than getting the
% -inradius:inradius X -inradius:inradius tile each time.

x = 0; % running count of output pixels
for i = isteps
    for j = jsteps
        x = x+1;
        ii = sub2ind(size(uim), i+meshgrid(-inradius:inradius)', ...
            j+meshgrid(-inradius:inradius)); % linear indices for that tile
        uim_2_ii(x,:) = ii(:); % reshape from matrix into vector
    end;
end;
uim_2_ii = uim_2_ii(1:x,:); % now points to the tile for each output/predicted pixel in uim

Once that 2D lookup table is computed, feeding data to the CNN takes a negligible amount of time.

Now, we’re limited mostly by the file access and the CNN and can track at ~5-6fps, which is good enough to get through a decent sized dataset in a few days.

Whisker tracking example output
Now, to get the approximate whisking pattern, a simple median or mean of the angles coming from the Hough transform does a decent job, and simply averaging (and maybe thresholding) the CNN output at the platform edge gives a decent measure of whether vibrissae overlapped the target in any frame. This is of course no direct indicator of whether there was contact between the two, but for many analyses it is a sufficient proxy, and at the very least it gives a clear indication of whisking cycles where there was no contact.

Get the matlab code here (includes a copy of ConvNet code by Sergey Demyanov (github page)).

Posted in Data analysis, Matlab

Tracking mouse position in the gap-crossing task

This post is a quick walk-through of some code I recently wrote for tracking rough mouse position and a few other things in high-speed video in a gap-crossing task. This code is just very basic image processing, ugly hacks and heuristics, but it was fast to implement, around one day including calibration, and it gives more or less usable data. It might be useful as an example for how to get similar tasks done quickly and without too much tinkering.

Get the matlab code and some example data on github


Raw image

The setup consists of a camera that gets a top-down view of a mouse crossing from one side of the frame to the other and back. Both platforms between which the mouse moves can themselves be moved: the 'base' platform (left) has a variable position and is re-positioned by hand (slowly) between trials, while the 'target' platform (on the right) is mostly static but occasionally retracts by a few mm very quickly (same method as in our anticipatory whisking paper).

To make this a bit more interesting, a fiber-optic cable sometimes obscures parts of the image, and the mouse will sometimes assume odd poses rather than just sticking its head across the gap. By far the most important thing to get right with this kind of tracking is the physical setup: often, small, easy changes can make an otherwise hard image-processing step very easy. Here, the setup was designed with the following things in mind:

  • The platforms and camera are bolted down so they can't move by even a mm, even when bumped. This means that one set of hand-tuned parameters works for all sessions. Similarly, light levels etc. are never changed unnecessarily. All automatic adjustments in the camera are disabled, and all focus/aperture rings are screwed in tight.
  • I use a high frame rate (~330Hz), low (<250μs) exposure times, and a very small aperture (Navitar HR F1.4 16mm – f-stop adjusted down to a depth of field of a few cm). There should be no motion or out-of-focus blurring. This is not important for the mouse tracking, but is vital for detailed tracking of the mice's whiskers later. The requirement for low exposure times and a small aperture means that a lot of light is needed.
  • The only light source is a very uniform and bright backlight, we use red (>650nm), because it’s fairly invisible to mice. I made this from ~12 red 700mA LEDs glued to a thin (~2mm) aluminum plate that is bolted to the air table, which then acts as a huge heatsink. On this sits a box (just 4 sides) made from mirrored acrylic, and on top of that two sheets of frosted glass as a diffuser (a few mm between the two sheets make the diffuser much more efficient). The diffuser needs to be removed for cleaning just about every session so design with that in mind. I moved the LEDs around to get pretty decent uniformity – this means I can use simple thresholding for many things, and is important for whisker tracking later. There are no room lights, and minimal glare from computer screens etc. One reason for this is that I need white or black mice to appear completely black against the backlight.
  •  I made sure that platforms stay decently aligned to the axes of the image. This makes tracking them easier.
  • The platforms are at least somewhat transparent and let some of the backlight through, making it possible, if still hard, to track the mouse once it overlaps them.

Performance considerations
Uncompressed high-speed video can reach around 4-8GB/min, so single sessions can easily reach 200GB, and data access becomes a major bottleneck. I use uncompressed AVI straight off the AVT (link) API (data is acquired via the API from Matlab, just to make it very easy to load configurations – here is an example of such a minimalistic setup). The uncompressed format is a bit extravagant for mouse tracking, but can be somewhat important for whisker tracking. It also means that random access to arbitrary frames in the videos is fast. To reduce the impact of the data volume, the code performs three sequential levels of processing:

  • Go through video in ~20 frame increments and identify rough periods in which there is motion. Theoretically one could now delete parts of the video that are ’empty’ and cut down storage by a lot.
  • Track only the limited periods where something moved at a higher/full frame rate.
  • Process and analyze tracking data for the entire video after the session has been tracked frame-by-frame. This 'offline' pass makes the analysis much easier because now we can look at the entire session at once and do moving-window medians etc.

The code needs to track the following:

  • Position of both platforms.
  • Mouse position, keeping track of which side the mouse is on at any time.
  • Track the nose position (most extreme left and right ends of the mouse in x-direction) in order to track how close the nose of the mouse is to the platform it is approaching.
  • Optionally identify when the laser (for an optogenetic manipulation, via an optical fiber attached to an implanted ferrule) was turned on, when platforms were moved, etc.

Tracking image features

Mouse:
Because the mouse is darker than any other image feature, just getting rid of the tether with a simple image operation and thresholding is enough to track a rough mouse position. For accurate nose-position tracking, we can then re-do this step with a less robust threshold and image blurring; the resulting accurate estimate can then be corrected later using the more robust, less accurate estimate.

se = strel('ball',3,3);

% pre-process image for tracking rough features
imdilated_image=imdilate(frame,se);

%now simple thresholding does the trick
tracknose.lpos(fnum)= min(find(mean((imdilated_image<10))>.1)); % leftmost column with enough dark pixels

Platforms:
Since the platforms are aligned with the imaging axis, a simple vertical step filter and some thresholding does the trick:

ff=[ones(1,5),ones(1,5).*-1 ];
Ig=max(conv2(double(imdilated_image),-ff,'same'),0);
Ig(Ig>350)=0;
Ig(1:70,:)=0;    % we know where platforms cant be..
Ig(:,1:300)=0;   % ..because the setup and camera dont move
Ig(:,400:end)=0; % so we can clean things up here already

% now just average in vertical direction and find max
[~,gp]= max(mean(Ig));
tracknose.retractgap(fnum)=gp;

Other things: 
For tracking when the laser (fiber-optic cable attached to a ferrule on the animal) was turned on, there's another nice shortcut. Instead of dealing with a TTL sync signal etc., we just split the laser control signal into an IR LED taped to the air table so that a light appears in the corner of the frame (outside the backlight or platform area) whenever the laser is on. Now synchronization becomes a simple thresholding operation. Similarly, one could pipe information from a state machine etc. into the video by using a few more LEDs, and then use the high-frame-rate video as a master clock for everything.
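The per-frame readout is then a one-liner, for example (corner coordinates and threshold are illustrative):

cornerROI = frame(1:20, end-20:end);                  % small patch outside backlight/platforms
laserOn(fnum) = mean(double(cornerROI(:))) > 100;     % corner LED visible -> laser was on in this frame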

Post-processing
Once the image features are tracked frame-by-frame, they can be cleaned up with information that is not just local to each individual frame. For instance, single-frame breakdowns in the platform positions are easily cleaned up with moving-window median filters.
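For example (the window length is illustrative; medfilt1 needs the Signal Processing Toolbox, movmedian works just as well):

tracknose.retractgap_clean = medfilt1(double(tracknose.retractgap), 9);  % remove single-frame glitches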

For issues with the nose position, it works pretty well to have one accurate but less robust and one robust but less accurate threshold for estimating the nose position. When the accurate method fails, which is usually evident from sudden jumps, we just fall back to the less accurate one. Because the mouse posture is not known, there is an unknown offset between the two estimates, which we can recover by looking at the last frame before the error. Because the posture changes slowly, using the robust estimate (adjusted for this offset) for ~20 frames works very well in practice:

% fix transient tracking breakdowns
% we have two variables here:
% rpos: stable, not precise
% rnose: precise, has transient failures
%
rnose_diff = [0 (diff(tracknose.rnose))];
brkdwn=[find((rnose_diff > 10) .* (mousepresent>0)   ) numel(mousepresent)];
if numel(brkdwn) >0

    for i=1:numel(brkdwn)-1
        t=brkdwn(i)-1:brkdwn(i+1)+1; % time window of breakdown
        if numel(t) >20
            t=t(1:20); %limit to 20 frames
        end;

 % get offset of (stable) rpos and (precise) rnose at onset of breakdown
        d=tracknose.rnose(t(1))-tracknose.rpos(t(1));
        tracknose.rnose_fix(t)=tracknose.rpos(t)+d;
    end;
end;

Now that the rough mouse position, presence etc. are known, we can track vibrissae – I'll describe a fairly simple method for that in the next post.

Posted in Data analysis