[Insight-users] Re: Mutual Information question in ITK

Luis Ibanez luis.ibanez at kitware.com
Thu Aug 12 15:02:14 EDT 2004


Hi Xiaosong,

There are two interpretations of "performance"
that unfortunately get conflated too often.

1) Performance as 'time required for a computation'
2) Performance as 'quality of results'


Comparing (1) between two methods only makes sense
when we can assume that both of them produce the same
quality of results.


In your case, you seem to be concerned by the second
interpretation, that is, performance in the sense of
correctness, or quality of the final registration.

The values of different metrics cannot be compared
with one another. It would be like comparing apples
and oranges. For example, Mutual Information is
measured in "bits" (the bit is the unit of information),
while MeanSquares is measured in squared intensity
units; if you were using CT images, that would be
squared Hounsfield units, for example. You cannot
compare the values of MutualInformation with the
values of MeanSquares or the values of
NormalizedCorrelation.


There are *many* factors that will affect the evaluation
of the Mutual Information Metric. Among them:


1) The interpolator used for resampling the moving image

2) The number of histogram bins used in the estimation
    of the probability distribution

3) The number of samples used for the estimation (although
    it has been shown that this number is not very critical).

4) The non-linearities of the transform that is mapping
    the coordinates of the fixed image into the reference
    frame of the moving image.


One thing that you should look at is the level of noise
in the values of MI, *even* when you reevaluate the metric
with the same transform parameters. This noise makes the
metric badly suited for GradientDescent-like optimization
algorithms.

You should consider using an Evolutionary (Genetic) algorithm
for optimizing MutualInformation, since those algorithms are
better suited for noisy functions such as MI.

You will find examples of the use of the ITK OnePlusOne
evolutionary optimizer in the directory


     Insight/Examples/Registration/
                   ImageRegistration11.cxx
                   ImageRegistration14.cxx
                   ModelToImageRegistration1.cxx


Before you deal with the optimizer issues, it is important
to *sample* the values of the MI metric for your images,
with a specific interpolator and a range of the Transform
parameters. In other words, you want to explore the landscape
of the cost function, because that's the topography that you
are asking the optimizer to walk through. The shape of that
landscape will give you hints regarding the type of optimization
methods that may be appropriate for finding an extremum, as
well as the settings for such optimizers.



Regards,


    Luis



-----------------------
yuanx3 at rpi.edu wrote:

> Dear Dr. Ibanez,
> 
...

> I evaluated the performance of
> different registration methods on the same group of images. I plotted the
> metric values of different methods in order to see how well the gradient
> descent optimization performs. I found that the metric values of Mutual
> Information are always much worse than those of Mean Square Difference (MSD).
> I have tried various parameters of MI but the iteration always stops far
> away from the accurate alignment parameters. How do you evaluate MI performance?
> 
> What factors will affect the MI method? Thank you very much for your
> explanation.
> 
> Best Regards,
> -Xiaosong
> 
> 
