  Aim 3: A web archive of image stacks and software for stereological analysis

We will develop a system to automatically acquire batches of through-focus series in the form of 1- to 2-second digital video clips. The process will involve systematically driving the stage to specific coordinates generated by the Slide-Coordinate database. A through-focus series with a 1-µm z-axis step size will be acquired at each site in under 10 seconds. Z-axis coordinates will be corrected for optical foreshortening (Williams and Rakic, 1988). Using commercial video-processing programs and utilities (DVEdit, Media Cleaner Pro 4, Sorenson Video Pro), we will develop batch-processing methods to collect and process several hundred z-axis stacks per day. The original digital video (DV) for each site will consist of fewer than 60 frames at full DV resolution and will be approximately 7.2 MB in size. These clips will be placed on an FTP server, where two G4 computers running the asymmetric Sorenson Video codec (compression-decompression program) will compress them to under 1 MB. The compressed clips will then be uploaded to the MBL servers as QuickTime 4 (QT4) movie files.
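As a rough illustration of this acquisition loop, the Python sketch below shows how sites drawn from the Slide-Coordinate database might be visited and a 1-µm-step through-focus series recorded at each one. The stage and camera objects, their methods (move_xy, move_z, autofocus, grab), and the refractive indices used for the foreshortening correction are placeholder assumptions, not the actual iScope control software.

```python
# Illustrative sketch only: the stage/camera objects and their methods are
# hypothetical placeholders, not the actual iScope control software.

def true_z_step(nominal_step_um, n_tissue=1.515, n_immersion=1.0):
    """Approximate correction for optical foreshortening (cf. Williams & Rakic, 1988):
    with a dry objective, the focal plane advances through the section roughly
    n_tissue / n_immersion times farther than the nominal stage travel.
    The refractive indices here are assumed values."""
    return nominal_step_um * (n_tissue / n_immersion)

def acquire_stack(stage, camera, site, n_planes=50, step_um=1.0):
    """Record one through-focus series (~50-60 frames, <10 s) at a single site."""
    stage.move_xy(site["x_um"], site["y_um"])    # site from the Slide-Coordinate database
    stage.autofocus()                            # re-zero the z-axis at each new site
    frames = []
    for k in range(n_planes):
        stage.move_z(k * step_um)                # nominal 1-um steps, always in one direction
        frames.append(camera.grab())             # one full-resolution DV frame per focal plane
    return frames                                # ~60 frames, roughly 7.2 MB before compression

def acquire_batch(stage, camera, sites):
    """Visit every queued site and return the collected stacks, keyed by site id."""
    return {site["id"]: acquire_stack(stage, camera, site) for site in sites}
```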
Once we have developed and extensively tested the system described above, we plan to acquire two complementary sets of z-axis stacks.

1. Starting in Year 02 we plan to acquire a systematic, random, and unbiased sample of through-focus series (~200 points/case) from every mouse brain in the MBL for advanced stereological analysis (a sketch of this site-sampling scheme follows this list). This density of sampling is adequate for the analysis of large structures such as the caudate, neocortex, and cerebellum. This online collection can be used to obtain precise estimates of total neuron and glial cell populations, either by manual counting or by using automatic cell-recognition programs. Higher-density sampling (~400 points/case) may be justified in some cases to allow even fairly small nuclei to be analyzed.

2. Starting in Year 03, we plan to acquire stacks from a list of ~100 structures or regions defined semi-automatically by the NeuroCartographer project. As part of Project 2, slide coordinates of these regions will be automatically generated during the segmentation process. Once the segmentation coordinates have been manually verified and adjusted, we will use these NeuroCartographer coordinates to generate a list of "greatest hits" for each brain. This greatest-hits list will provide a way to rapidly compare the cellular architecture of specific regions at very high magnification in different strains and different individuals.
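The minimal sketch below illustrates how a systematic random grid of sampling sites of the kind described in item 1 could be generated for a single section. The grid spacing and the inside_target test (for example, a point-in-outline check against a NeuroCartographer segmentation) are illustrative assumptions.

```python
"""Sketch of generating a systematic random (fractionator-style) grid of
sampling sites across one section; spacing and the target-region test are
illustrative assumptions, not part of the actual MBL software."""

import random

def systematic_random_sites(x_min, x_max, y_min, y_max, spacing_um, inside_target):
    """Lay a regular grid with a single random offset over the bounding box and
    keep the points that fall inside the target region.  The random start makes
    the sample unbiased; the regular spacing lowers the coefficient of error
    relative to a simple random sample of the same size."""
    x0 = x_min + random.uniform(0, spacing_um)   # random offset, chosen once per section
    y0 = y_min + random.uniform(0, spacing_um)
    sites = []
    y = y0
    while y < y_max:
        x = x0
        while x < x_max:
            if inside_target(x, y):              # e.g. test against a segmented outline
                sites.append((x, y))
            x += spacing_um
        y += spacing_um
    return sites
```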

Analysis of image stacks and QT4 movies.

Macros written for NIH Image and IPLab Spectrum make it possible to import and then analyze stacks of images or QT movies. We will use these movies to study several traits that require high-magnification, light-microscopic analysis. We will also be able to quantify the density of vascular beds in different parts of the retina and uvea using these through-focus series.
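As one hypothetical example of such an analysis, the sketch below estimates the volume fraction occupied by vessel profiles in a through-focus series, assuming the QT4 movie has first been exported as a multi-page TIFF. The file name and the simple intensity threshold are placeholders for whatever segmentation rule is actually adopted.

```python
"""Hedged sketch of one quantitative measurement on an exported stack:
the fraction of the sampled volume occupied by (dark) vessel profiles."""

import numpy as np
import tifffile

def vascular_volume_fraction(stack_path, dark_percentile=5):
    """Return the fraction of voxels classified as vessel.  The percentile
    threshold is a crude placeholder for a real segmentation rule."""
    stack = tifffile.imread(stack_path)                       # shape: (planes, rows, cols)
    vessels = stack < np.percentile(stack, dark_percentile)   # darkest pixels treated as vessel
    return float(vessels.mean())

# Hypothetical use on one exported through-focus series:
# print(vascular_volume_fraction("site_0042_stack.tif"))
```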

Mechanics of acquiring through-focus series.  

Most microscope stages have appreciable mechanical backlash. For example, the Nikon Diaphot has a backlash of about 0.6 µm. The new Zeiss Harmonic drive and the older Leitz planetary-gear focusing blocks have backlash that is almost negligible (<0.4 µm). The Zeiss Universal blocks are quite good, with a backlash that I have estimated to be about 0.4 µm (Williams, unpublished). We plan to acquire all image stacks in the same sequence, focusing from the top of the section to the bottom (driving the stage upward or, in a fixed-stage configuration, driving the objective down; see Boddeke et al., 1997). Long-term drift will not be a significant problem: each new slide will be automatically focused in the z-axis, and the client will be able to re-zero the z-axis as needed during a streaming video session.
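A minimal sketch of this anti-backlash convention follows: every focal plane is approached from the same direction, so gear backlash never enters the recorded z-coordinate. The stage interface is a placeholder, not a real driver.

```python
"""Sketch of unidirectional focusing to keep stage backlash (about 0.4-0.6 um
on the stands mentioned above) out of the z-coordinates."""

BACKLASH_MARGIN_UM = 5.0   # overshoot well beyond the worst-case backlash (assumed margin)

def move_z_unidirectional(stage, target_z_um, current_z_um):
    """Approach target_z_um from below regardless of where the stage starts."""
    if current_z_um > target_z_um:
        # drop below the target first, then take up the slack from one direction only
        stage.move_z(target_z_um - BACKLASH_MARGIN_UM)
    stage.move_z(target_z_um)
    return target_z_um
```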

Throughput

Once this system is in place, it should be feasible to choose a specimen and a coordinate and then define the lower and upper focal planes of the region of interest. A click of the mouse should then initiate the capture of images and the construction of QT4 movies. We may need to run each frame through a series of Adobe Photoshop 5 filters (unsharp mask, level adjust, discard color information, resize, etc.) prior to assembling the movie. These operations will be done automatically using the batch-processing feature of Photoshop 5. We expect to be able to generate several thousand QT movies per year.
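For illustration, the per-frame clean-up steps named above (unsharp mask, level adjustment, discarding color, resizing) are expressed below with the Pillow imaging library rather than Photoshop 5 batch actions; the file names, filter parameters, and output size are assumptions.

```python
"""Sketch of the per-frame processing pipeline using Pillow; parameters are
illustrative stand-ins for whatever Photoshop batch settings are adopted."""

from PIL import Image, ImageFilter, ImageOps

def prepare_frame(path, out_size=(640, 480)):
    frame = Image.open(path)
    frame = frame.filter(ImageFilter.UnsharpMask(radius=2, percent=120))  # unsharp mask
    frame = ImageOps.autocontrast(frame)                                  # level adjust
    frame = frame.convert("L")                                            # discard color information
    return frame.resize(out_size)                                         # resize for the movie

# Hypothetical batch use over one captured stack:
# frames = [prepare_frame(f"site_0042_z{k:02d}.png") for k in range(50)]
```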

Sampling considerations for iScope stacks.

A systematic random sample is unbiased and will generally produce a lower coefficient of error (CE = SE/mean) than a simple random sample. A systematic sample of this type is referred to by Gundersen as a fractionator. Our system will make a two-stage systematic sample possible: in the first stage, every nth section through the target is selected for analysis; the second stage consists of sampling counting boxes that are systematically spaced within the target.
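The arithmetic of the resulting two-stage fractionator estimate is standard: the raw count is scaled by the inverse of each sampling fraction. The sketch below illustrates it with made-up numbers.

```python
"""Standard fractionator arithmetic; variable names and example values are illustrative."""

def fractionator_estimate(raw_count,
                          section_sampling_fraction,      # e.g. 1/10: every 10th section
                          area_sampling_fraction,         # counting-box area / grid-cell area
                          height_sampling_fraction=1.0):  # disector height / section thickness
    """Estimated total number of cells in the target structure."""
    return raw_count / (section_sampling_fraction *
                        area_sampling_fraction *
                        height_sampling_fraction)

# Example: 250 cells counted on every 10th section, in boxes covering 1% of the
# target area, with the full section thickness used as the disector height.
total = fractionator_estimate(250, 1/10, 0.01, 1.0)   # -> 250,000 cells
```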

Sample size considerations.  

From the point of view of obtaining robust estimates that apply to an entire population or genotype of animals, the precision of an individual estimate of cell number from a nucleus of a single animal should approximately match the variability of the genotype or strain of mice that is the subject of analysis (West and Gundersen, 1990; Glaser and Wilson, 1998). However, this latter value is generally not known in advance, nor is the magnitude of technical error generally well characterized. For this reason, the sampling error of individual estimates should by design be somewhat smaller than the estimated standard deviation of the population of animals. For example, if the number of cells in the brain of C57BL/6J animals has a standard deviation of 10 million cells (a CV of 10%), then it would be reasonable to target our estimates to have a sampling error (CE) of about 5 million cells, or 5%. Glaser and Wilson have demonstrated, not surprisingly, that the CE is an inverse function of the number of cells counted per case when the counting-box or disector volume is held constant. For both practical and statistical reasons, at a fixed total count it is better to obtain the count from a higher density of smaller counting boxes than from a lower density of larger counting boxes (Williams and Rakic, 1988). The Scheaffer-Mendenhall-Ott approach to computing the CE of a sample is summarized well by Glaser and Wilson (1998).
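As a hedged sketch of how the CE of a set of per-site counts might be computed, the function below uses the standard survey-sampling estimator of the variance of a total, with a finite-population correction, which is in the spirit of the Scheaffer-Mendenhall-Ott approach; the exact formulation used by Glaser and Wilson (1998) should be checked against the original.

```python
"""Hedged sketch of a CE calculation for per-site counts; it assumes the
textbook survey-sampling estimator rather than reproducing Glaser & Wilson's
exact formula."""

import statistics

def coefficient_of_error(site_counts, total_possible_sites):
    """CE = SE(estimated total) / estimated total for a systematic sample of
    counting boxes; site_counts is the list of cells counted in each box."""
    n = len(site_counts)
    mean = statistics.mean(site_counts)
    var = statistics.variance(site_counts)          # sample variance (n - 1 denominator)
    fpc = 1.0 - n / total_possible_sites            # finite-population correction
    se_of_mean = (var / n * fpc) ** 0.5
    return se_of_mean / mean                        # CE of the total equals CE of the mean

# Hypothetical use: 200 sampled sites out of ~20,000 possible grid positions.
# ce = coefficient_of_error(counts, total_possible_sites=20_000)
```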

Obtaining this level of precision will require a minimum of ~200 sample sites per brain. Furthermore, to obtain a robust estimate of the within-genotype variation we need a reasonable number of animals; this is why our target for the MBL is 12 animals per genotype or strain. For genetic crosses such as our 10th-generation cross, each individual mouse is genetically unique, and a somewhat different sampling strategy must be employed (Williams et al., 1996). In essence, all animals are initially analyzed using a low-density fractionator, and outliers are then re-analyzed using higher-density fractionators.
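The variance bookkeeping behind these targets can be made explicit: the observed between-animal relative variance is approximately the sum of the true biological variance and the mean sampling variance, so the per-animal CE is chosen to be clearly smaller than the expected biological CV. The numbers below are the illustrative values from the example above, not measurements.

```python
"""Sketch of the variance partition OCV^2 ~ CV_biological^2 + CE^2, using the
illustrative values from the text (assumed, not measured)."""

biological_cv = 0.10   # assumed strain CV: SD of 10 million cells on a mean of ~100 million
target_ce = 0.05       # per-animal sampling CE targeted at about half the biological CV

observed_cv = (biological_cv**2 + target_ce**2) ** 0.5
sampling_share = target_ce**2 / observed_cv**2

print(f"Expected observed CV: {observed_cv:.3f}")                   # ~0.112
print(f"Share of variance due to sampling: {sampling_share:.0%}")   # ~20%
```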
 
   
   
   
 
