Tracking mouse position in the gap-crossing task

This post is a quick walk-through of some code I recently wrote for tracking rough mouse position, and a few other things, in high-speed video of a gap-crossing task. The code is just basic image processing, ugly hacks and heuristics, but it was fast to implement (around one day, including calibration) and gives more or less usable data. It might be useful as an example of how to get similar tasks done quickly and without too much tinkering.

Get the MATLAB code and some example data on GitHub

Raw image

The setup consists of a camera that gets a top-down view of a mouse crossing from one side of the frame to the other and back. The two platforms between which the mouse moves can both be repositioned: the 'base' platform (left) has a variable position and is moved by hand (slowly) between trials, while the 'target' platform (right) is mostly static but occasionally retracts by a few mm very quickly (same method as in our anticipatory whisking paper).

To make this a bit more interesting, a fiber optic cable occasionally obscures parts of the image, and the mouse will sometimes assume odd poses rather than just sticking its head across the gap. By far the most important thing to get right with this kind of tracking is the physical setup: often, small and easy physical changes can make an otherwise hard image processing step trivial. Here, the setup was designed with the following things in mind:

  •  The platforms and camera are bolted down so they can't move by even a mm, even when bumped. This means that one set of hand-tuned parameters works for all sessions. Similarly, light levels etc. are never changed unnecessarily. All automatic adjustments in the camera are disabled, and all focus/aperture rings are screwed in tight.
  • I use a high frame rate (~330Hz), a low exposure time (<250us) and a very small aperture (Navitar HR F1.4 16mm – f-stop adjusted down to a depth of field of a few cm). There should be no motion or out-of-focus blurring. This is not important for the mouse tracking, but is vital for detailed tracking of the mice's whiskers later. The requirement for low exposure times and a small aperture means that a lot of light is needed.
  • The only light source is a very uniform and bright backlight; we use red (>650nm) because it's fairly invisible to mice. I made this from ~12 red 700mA LEDs glued to a thin (~2mm) aluminum plate that is bolted to the air table, which then acts as a huge heatsink. On this sits a box (just 4 sides) made from mirrored acrylic, and on top of that two sheets of frosted glass as a diffuser (a few mm of spacing between the two sheets makes the diffuser much more efficient). The diffuser needs to be removed for cleaning just about every session, so design with that in mind. I moved the LEDs around to get pretty decent uniformity – this means I can use simple thresholding for many things, and it is important for whisker tracking later. There are no room lights, and minimal glare from computer screens etc. One reason for this is that I need white or black mice to appear completely black against the backlight.
  •  I made sure that platforms stay decently aligned to the axes of the image. This makes tracking them easier.
  • The platforms are at least somewhat transparent and let some of the backlight through, making it possible, if still hard, to track the mouse once it overlaps them.

Performance considerations
Uncompressed high-speed video can reach around 4-8GB/min, so single sessions can easily reach 200GB, which makes data access a major bottleneck. I use uncompressed AVI straight off the AVT (link) API (data is acquired via the API from MATLAB, just to make it very easy to load configurations – here is an example of such a minimalistic setup). The uncompressed format is a bit extravagant for mouse tracking, but can be somewhat important for whisker tracking; it also means that random access to arbitrary frames in the videos is fast. To reduce the impact of the data volume, the code performs three sequential levels of processing:

  • Go through the video in ~20 frame increments and identify rough periods in which there is motion (see the sketch after this list). Theoretically one could now delete the parts of the video that are 'empty' and cut down storage by a lot.
  • Track, at the full frame rate, only the limited periods where something moved.
  • Process and analyze the tracking data for the entire video after the session has been tracked frame by frame. This 'offline' step is much easier because now we can look at the entire session at once and do moving-window medians etc.
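
To give a rough idea of the first, coarse pass, here is a minimal sketch – the file name, the frame-difference threshold and the changed-pixel count are placeholders, not the values from the actual code:

% coarse motion detection: sample every ~20th frame and flag periods
% in which many pixels changed (all thresholds are placeholders)
v       = VideoReader('session.avi');   % placeholder file name
nframes = floor(v.Duration * v.FrameRate);
step    = 20;                           % look at roughly every 20th frame
moving  = false(1, nframes);
prev    = [];

for fnum = 1:step:nframes
    frame = double(read(v, fnum));
    if ~isempty(prev)
        % count pixels that changed noticeably since the previous sample
        nchanged = nnz(abs(frame - prev) > 20);
        moving(fnum:min(fnum+step-1, nframes)) = nchanged > 500;
    end
    prev = frame;
end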

The code needs to track the following:

  • Position of both platforms.
  • Mouse position, keeping track of which side the mouse is on at any time.
  • Nose position (the leftmost and rightmost extent of the mouse in the x-direction), in order to measure how close the nose is to the platform the mouse is approaching.
  • Optionally identify when the laser (for an optogenetic manipulation, via an optical fiber attached to an implanted ferrule) was turned on, when platforms were moved, etc.

Tracking image features

Mouse:
Because the mouse is darker than any other image feature, just getting rid of the tether with a simple image operation and then thresholding is enough to track a rough mouse position. For accurate nose position tracking, we can then re-do this step with a less robust threshold and some image blurring; the resulting accurate estimate can later be corrected using the more robust, less accurate estimate (a sketch of this second pass follows the snippet below).

% grayscale dilation with a small 'ball' element removes thin dark
% features like the tether before thresholding
se = strel('ball',3,3);

% pre-process image for tracking rough features
imdilated_image = imdilate(frame,se);

% now simple thresholding does the trick: take the leftmost column in
% which more than 10% of the pixels are dark (i.e. part of the mouse)
tracknose.lpos(fnum) = find(mean(imdilated_image < 10) > .1, 1, 'first');
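
A sketch of what the second, more precise pass could look like – the blur kernel, the thresholds and the assumption that the nose is the rightmost dark column are placeholders, not the values from the actual code:

% second pass: blur and use a more permissive threshold so the thin
% nose tip is picked up (all values here are placeholders)
blurred = imfilter(double(frame), fspecial('gaussian', [5 5], 2));
dark    = blurred < 25;            % more permissive darkness threshold
cols    = find(mean(dark) > .02);  % columns containing any mouse pixels
if ~isempty(cols)
    tracknose.rnose(fnum) = max(cols); % rightmost dark column ~ nose position
end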

Platforms:
Since the platforms are aligned with the image axes, a simple step filter along x (picking up the vertical platform edge) and some thresholding do the trick:

% step filter along x: responds to horizontal brightness steps,
% i.e. the vertical edge of the platform
ff = [ones(1,5), ones(1,5).*-1];
Ig = max(conv2(double(imdilated_image), -ff, 'same'), 0);
Ig(Ig>350)=0;    % discard implausibly strong edge responses
Ig(1:70,:)=0;    % we know where platforms can't be..
Ig(:,1:300)=0;   % ..because the setup and camera don't move,
Ig(:,400:end)=0; % so we can clean things up here already

% now just average in the vertical direction and find the max
[~,gp] = max(mean(Ig));
tracknose.retractgap(fnum) = gp;
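
The 'base' platform on the left can be found the same way with a mirrored filter. This is only a sketch – the filter sign and the column limits would have to be tuned to the actual images, and 'basegap' is a hypothetical field name:

% same idea for the left ('base') platform: flipped edge polarity and a
% column window on the left side of the frame (limits are placeholders)
Ig2 = max(conv2(double(imdilated_image), ff, 'same'), 0);
Ig2(Ig2 > 350) = 0;
Ig2(1:70, :)   = 0;
Ig2(:, 250:end) = 0;   % the base platform edge sits in the left part of the image
[~, bp] = max(mean(Ig2));
tracknose.basegap(fnum) = bp;   % hypothetical field name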

Other things: 
For tracking when the laser (a fiber optic cable attached to a ferrule on the animal) was turned on, there's another nice shortcut. Instead of dealing with a TTL sync signal etc., we just split the laser control signal into an IR LED taped to the air table, so that a light appears in a corner of the frame (outside the backlight and platform area) whenever the laser is on. Synchronization then becomes a simple thresholding operation. Similarly, one could pipe information from a state machine etc. into the video by using a few more LEDs, and then use the fairly high frame rate video as a master clock for everything.
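
A sketch of that thresholding step – the corner coordinates, the brightness threshold and the 'laser_on' field name are placeholders and depend on where the LED ends up in the frame:

% sync LED detection: the LED sits in a corner outside the backlight,
% so the laser state is just the mean brightness of a small fixed patch
led_roi = frame(1:30, 1:30);                        % placeholder corner patch
tracknose.laser_on(fnum) = mean(led_roi(:)) > 100;  % placeholder threshold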

Post-processing
Once the image features are tracked frame by frame, they can be cleaned up using information that is not just local to each individual frame. For instance, single-frame breakdowns in the platform positions are easily cleaned up with moving-window median filters.
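
For example, something along these lines – the window length is a placeholder and any moving-window median will do:

% remove single-frame glitches from the platform trace with a
% moving-window median (medfilt1 is from the Signal Processing Toolbox)
tracknose.retractgap_clean = medfilt1(tracknose.retractgap, 9);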

For the nose position, it works pretty well to have one accurate but less robust and one robust but less accurate threshold for the estimate. When the accurate method fails, which is usually evident from sudden jumps, we just fall back to the less accurate one. Because the mouse posture is not known, there is an unknown offset between the two estimates, which we can recover by looking at the last frame before the error. Because the posture changes slowly, using the robust estimate (after adjusting for this offset) for ~20 frames works very well in practice:

% fix transient tracking breakdowns
% we have two variables here:
% rpos:  stable, not precise
% rnose: precise, has transient failures
%
tracknose.rnose_fix = tracknose.rnose; % start from the precise estimate

% breakdowns show up as sudden jumps in the precise estimate
rnose_diff = [0 diff(tracknose.rnose)];
brkdwn = [find((rnose_diff > 10) & (mousepresent > 0)) numel(mousepresent)];

if numel(brkdwn) > 1
    for i = 1:numel(brkdwn)-1
        % time window of the breakdown, clamped to the valid frame range
        t = max(brkdwn(i)-1, 1) : min(brkdwn(i+1)+1, numel(mousepresent));
        if numel(t) > 20
            t = t(1:20); % limit to 20 frames
        end

        % get offset of (stable) rpos and (precise) rnose at onset of breakdown
        d = tracknose.rnose(t(1)) - tracknose.rpos(t(1));
        tracknose.rnose_fix(t) = tracknose.rpos(t) + d;
    end
end

Now that the rough mouse position, presence etc. are known, we can track vibrissae – I'll describe a fairly simple method for that in the next post.
