[Insight-users] Shape prior level sets: question about MAPCostFunction

Zachary Pincus zpincus at stanford.edu
Wed Jan 26 19:26:18 EST 2005


Hello,

In my perusal of the ITK code implementing Mike Leventon's paper on 
shape modeling, I had a question about the implementation of the 
ShapePriorMAPCostFunction class.

Specifically, my question concerns the calculation of what Leventon 
calls the Gradient Term. (See page 5 and figure 7 of his paper, which 
is available here: 
http://www.spl.harvard.edu:8000/pages/papers/leventon/cvpr00/cvpr00.pdf 
)

This term is calculated in itkShapePriorMAPCostFunction.txx in the 
ComputeLogGradientTerm() method (lines 132 to 166).

In Leventon's approach, this term is calculated by taking the Laplacian 
of the goodness-of-fit of a best-fit Gaussian relating the current 
signed distance function to the magnitude of the image gradient at 
that point.

In the ITK approach, this term is calculated by taking the Laplacian of 
the goodness-of-fit between a zero-mean, unit-variance Gaussian evaluated 
at the signed distance, and (1 - the feature image) at that point.
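To make the comparison concrete, here is a rough Python sketch of the ITK-style fit as I've described it above -- this is not the actual itkShapePriorMAPCostFunction code, and the (distance, feature) sample values are made up for illustration:

```python
import math

def unit_gaussian(d):
    """Zero-mean, unit-variance Gaussian evaluated at signed distance d."""
    return math.exp(-0.5 * d * d) / math.sqrt(2.0 * math.pi)

# Hypothetical narrow-band samples: (signed distance, feature value) pairs.
samples = [(-2.0, 0.9), (-1.0, 0.7), (0.0, 0.1), (1.0, 0.7), (2.0, 0.9)]

# ITK-style fit: compare the unit Gaussian at the signed distance
# against (1 - feature) at the same point; sum of squared residuals
# stands in for the goodness-of-fit.
sum_sq = sum((unit_gaussian(d) - (1.0 - f)) ** 2 for d, f in samples)
print(sum_sq)  # ~0.262, dominated by the mismatch at the edge (d = 0)
```

Note that even with the edge perfectly aligned at d = 0, the residual there is large, which is exactly the range-mismatch issue discussed below.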

I think shifting from the image gradient to (1 - feature image) is a 
reasonable step. However, I'm wondering how assuming a unit Gaussian, 
rather than actually fitting one, changes the calculations.

First off, by using a unit-variance Gaussian instead of fitting a 
Gaussian, this implementation assumes that 95% of the edge intensity in 
the feature image (or in 1 - that image) is contained within two pixels 
of the "actual" edge. This assumption is violated when users provide a 
feature image that has been smoothed with too large a Gaussian.
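The 95%-within-two-pixels figure is just the usual two-sigma rule applied with sigma = 1 pixel; a quick check:

```python
import math

# Fraction of a zero-mean, unit-variance Gaussian's mass lying within
# +/- 2 units -- i.e., within two pixels of the zero level set when
# sigma is 1 pixel.
frac = math.erf(2.0 / math.sqrt(2.0))
print(frac)  # ~0.9545
```

A feature image smoothed with, say, a 4-pixel Gaussian spreads its edge intensity well outside this band, violating the assumption.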

Now, the actual pose parameters that maximize the fit between the unit 
Gaussian and the image intensity will be the same as the pose 
parameters that maximize the fit between a best-fit Gaussian and the 
image intensity. However, the goodness-of-fit metric will be much worse 
in the former case (i.e. it will have a much larger negative log), which 
could cause the Gradient Term to unexpectedly dominate the MAP cost 
function with certain feature images.

Next, while (1 - feature image) varies between 1 at maximum edge 
intensity and 0 at minimum edge intensity, a unit Gaussian ranges from 
(1 / sqrt(2pi)), or about 0.4, down to 0. Again, this fact will not 
change the best pose parameters, but it does make the goodness-of-fit 
measure much worse (larger in terms of negative log), which again makes 
the gradient term a lot more dominant than would otherwise be expected.
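The range mismatch is easy to quantify -- the unit Gaussian's peak value is fixed, so even a perfectly aligned edge leaves an irreducible residual:

```python
import math

# Peak of a zero-mean, unit-variance Gaussian.
peak = 1.0 / math.sqrt(2.0 * math.pi)
print(peak)  # ~0.3989

# At a perfectly aligned edge point (signed distance 0, 1 - feature = 1),
# the residual is 1 - 0.3989, i.e. about 0.6 -- a misfit that no choice
# of pose parameters can remove.
residual = 1.0 - peak
print(residual)  # ~0.6011
```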

Clearly, it would be *possible* to determine the appropriate variance 
for the Gaussian fit between the distance function values and the (1 - 
feature image) values (hint: this is not the same as the variance of the 
(1 - feature image) values). However, this would take an extra iteration 
through the narrow band at each evaluation of the cost function. 
Likewise, it would be possible to rescale the (1 - feature image) 
values to better fit a normalized Gaussian (either by determining 
the appropriate scale at each time step, or just by forcibly dividing by 
sqrt(2pi) at each time step). I'm just wondering what real benefits 
there would be...
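Here is a toy sketch of the simpler of the two fixes -- rescaling (1 - feature image) by 1/sqrt(2pi) before computing the misfit. Again, this is not the ITK code, and the sample values are invented; they are chosen so the edge is already perfectly aligned:

```python
import math

def unit_gaussian(d):
    """Zero-mean, unit-variance Gaussian evaluated at signed distance d."""
    return math.exp(-0.5 * d * d) / math.sqrt(2.0 * math.pi)

# Hypothetical narrow-band samples with the edge exactly at d = 0.
samples = [(-2.0, 0.95), (-1.0, 0.76), (0.0, 0.0), (1.0, 0.76), (2.0, 0.95)]

def misfit(scale):
    """Sum of squared residuals between the unit Gaussian and
    scale * (1 - feature)."""
    return sum((unit_gaussian(d) - scale * (1.0 - f)) ** 2
               for d, f in samples)

raw = misfit(1.0)                               # current behavior
rescaled = misfit(1.0 / math.sqrt(2.0 * math.pi))  # divide by sqrt(2pi)
print(raw, rescaled)  # the rescaled misfit is far smaller
```

The best pose is the same either way, but the rescaled misfit no longer carries the large constant penalty at the edge, so the gradient term would be weighted more sensibly against the other terms in the MAP cost.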

I know it's not acceptable to change the current code, which people have 
already tuned their results against, but I do wonder whether the 
MAPCostFunction would be better behaved over a larger range of feature 
images with these changes in place... Also, these changes would allow 
the cost function to compute something more closely approaching a true 
posterior probability for the image/shape fit, which might be useful in 
some circumstances.

If there's any reason why having an alternate MAPCostFunction that 
works that way would benefit anyone, I might code one up. Otherwise 
I'll leave well enough alone!

Zach Pincus

Department of Biochemistry and Program in Biomedical Informatics
Stanford University School of Medicine
