[Insight-users] Shape prior level sets: question and suggestion

Zachary Pincus zpincus at stanford.edu
Wed Jan 26 18:16:22 EST 2005


Hello all,

For the last few days, I've been going through the ITK shape prior 
level set segmentation code trying to understand how it all works. The 
good news is that I've been pretty successful. I've got a couple 
questions about the documentation, and one question about how the 
ShapePriorMAPCostFunction is implemented.

The code looks great (as usual with ITK), and the provided example 
that uses the GeodesicActiveContourShapePriorLevelSetImageFilter class 
is really helpful. I did have one question and one suggestion for the 
documentation (which, it seems, might wind up in the next version of 
the software guide?).

First, on lines 65-67 of 
GeodesicActiveContourShapePriorLevelSetImageFilter.cxx in 
Insight/Examples/Segmentation, we are told:

   // The process pipeline begins with centering the input image using
   // the \doxygen{ChangeInformationImageFilter} to simplify the
   // estimation of the pose of the shape, to be explained later.

then on line 319,

  // The \doxygen{ChangeInformationImageFilter} is the first filter in
  // the preprocessing stage and is used to force the image origin to
  // the center of the image.

Unfortunately, I couldn't figure out why it helps to change the origin 
of the image. If someone has some insight on this, I would be most 
appreciative. (For example, if I am looking for an object that's not 
in the center of the image, should I move the origin to the estimated 
location of the object to simplify pose estimation? Or should one 
*always* center the origin, regardless of where the initial level set 
is to be laid down?)
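
If I've read the example correctly, the centering step boils down to 
something like the sketch below (just my reading, not verbatim from 
the example; the typedefs and the inputImage name are mine, and I'm 
assuming the CenterImageOn() switch on ChangeInformationImageFilter, 
which is how I read the header):

   #include "itkImage.h"
   #include "itkChangeInformationImageFilter.h"

   typedef itk::Image< float, 2 >                                 InternalImageType;
   typedef itk::ChangeInformationImageFilter< InternalImageType > CenterFilterType;

   // Shift the origin so that physical coordinate (0,0) lands at the
   // center of the image buffer; downstream filters then work in a
   // "centered" coordinate frame.
   CenterFilterType::Pointer center = CenterFilterType::New();
   center->SetInput( inputImage );  // e.g. the output of the file reader
   center->CenterImageOn();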

The next bit of documentation that I found a bit confusing is on lines 
668 to 671:

   // Further we assume that the shape modes have been normalized
   // by multiplying with the corresponding singular value. Hence,
   // we can set the principal component standard deviations to all
   // ones.

Since the ImagePCAShapeModelEstimator filter reports eigenvalues, not 
singular values, this statement was a bit perplexing until I 
remembered that PCA eigenvalues are the squares of the corresponding 
SVD singular values, and that each PCA eigenvalue is the variance of 
that mode of variation. So multiplying by a singular value is the same 
as multiplying by sqrt(eigenvalue), which is to say, multiplying by 
the standard deviation of that mode.

I would personally re-phrase those lines as follows:

   // Further we assume that each shape mode has been normalized
   // by multiplying it with the standard deviation of that mode
   // (so that high-variation modes are proportionately weighted).
   // In this case, we can set the principal component standard
   // deviations to all ones, because we've already incorporated
   // that information.
   // These standard deviations can be computed from the
   // eigenvalues reported by the \doxygen{ImagePCAShapeModelEstimator}.
   // Each eigenvalue corresponds to the variance of a PCA mode,
   // so the standard deviation is found by taking the square root
   // of that eigenvalue. If the training images were calculated by
   // a singular value decomposition procedure, then the standard
   // deviations are precisely the singular values.
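
To make that concrete: if the shape modes had *not* been pre-scaled, I 
believe one would fill the standard-deviation array from the 
estimator's eigenvalues rather than with ones, roughly like the 
untested sketch below. Here pcaEstimator stands in for an 
ImagePCAShapeModelEstimator instance (the example actually reads the 
mean and mode images from disk), CostFunctionType and numberOfPCAModes 
are the names I believe the example uses, and I'm assuming the 
estimator's GetEigenValues() accessor and the cost function's 
SetShapeParameterStandardDeviations() method from my reading of the 
headers:

   // Per-mode standard deviation = sqrt(eigenvalue), since each PCA
   // eigenvalue is the variance of the corresponding mode.
   CostFunctionType::ArrayType pcaStandardDeviations( numberOfPCAModes );
   for ( unsigned int i = 0; i < numberOfPCAModes; i++ )
     {
     pcaStandardDeviations[i] = vcl_sqrt( pcaEstimator->GetEigenValues()[i] );
     }
   costFunction->SetShapeParameterStandardDeviations( pcaStandardDeviations );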

I'll ask my ShapePriorMAPCostFunction question in a later email...

Thanks,

Zach Pincus

Department of Biochemistry and Program in Biomedical Informatics
Stanford University School of Medicine


