[Insight-users] Mutual information

Luis Ibanez luis . ibanez at kitware . com
Sat, 13 Jul 2002 20:42:55 -0400


Hi Bjorn,

The MutualInformation metric in ITK was nicely
implemented by Lydia Ng (@insightful.com).

She put a lot of focus on following precisely the
implementation described in the paper by Viola & Wells.
She even went into Wells' dissertation in order to make
sure that the method was implemented as faithfully as
possible.

Any visible differences in the ITK implementation may be
the consequence of the C++ Generic Programming style of ITK
and the effort we put into making sure that the components
of the Registration Framework were reusable.

For example, the use of Jacobians for computing Metric
derivatives and the possibility of plugging in custom
image interpolators are features that may seem unnecessary
if you are interested in implementing *only* the method
described in the paper by Viola & Wells.
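
Just to illustrate the point about pluggable interpolators,
this is roughly what it looks like in code. Take it only as
a sketch with most of the setup omitted; the class names
(MutualInformationImageToImageMetric,
LinearInterpolateImageFunction) are the ones I recall from
the toolkit and may differ slightly in your version:

   typedef itk::Image< float, 3 >                     ImageType;
   typedef itk::MutualInformationImageToImageMetric<
                             ImageType, ImageType >   MetricType;
   typedef itk::LinearInterpolateImageFunction<
                             ImageType, double >      InterpolatorType;

   MetricType::Pointer       metric       = MetricType::New();
   InterpolatorType::Pointer interpolator = InterpolatorType::New();

   // Any interpolator honoring the InterpolateImageFunction
   // interface (e.g. a BSpline interpolator) could be used here.
   metric->SetInterpolator( interpolator );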

The decomposition of registration methods into reusable
components in ITK makes it possible to produce a large
taxonomy of registration methods.

The use of the RandomImageIterator for selecting the sampling
points is also an ITK-ism.
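
In case you want to see what that sampling looks like, here
is a rough sketch. The class I have in mind is
itk::ImageRandomConstIteratorWithIndex (the exact name may
have changed over releases), "fixedImage" is just a
placeholder for an already allocated image, and this is not
the exact code used inside the metric:

   typedef itk::Image< float, 3 >                              ImageType;
   typedef itk::ImageRandomConstIteratorWithIndex< ImageType > IteratorType;

   IteratorType it( fixedImage, fixedImage->GetBufferedRegion() );
   it.SetNumberOfSamples( 50 );  // visit 50 randomly chosen pixels

   for ( it.GoToBegin(); !it.IsAtEnd(); ++it )
     {
     // it.Get() and it.GetIndex() give the intensity and position
     // of one randomly selected sample point.
     }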

--

BTW
If you are in need of an extra chapter for your thesis
and you are not in a hurry to get your degree...  :-)

I would suggest studying the problem of parameter tuning
in MutualInformation.  That is, parameters such as these
(a rough code sketch follows the list):

1) Number of sampling points
2) Sigma of the normal distribution used for estimation
3) Learning rate of the gradient descent optimizer
4) Number of iterations
5) Number of levels for multiresolution
6) Scale factor between translations and rotations
    when an affine transform is used.
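
To make the list above concrete, here is roughly where each
of these knobs lives in the ITK registration framework. Take
it as an illustration only: the numeric values are
placeholders, not recommendations, and the exact class and
method names may vary between releases.

   typedef itk::Image< float, 3 >                       ImageType;
   typedef itk::MutualInformationImageToImageMetric<
                             ImageType, ImageType >     MetricType;
   typedef itk::GradientDescentOptimizer                OptimizerType;
   typedef itk::MultiResolutionImageRegistrationMethod<
                             ImageType, ImageType >     RegistrationType;

   MetricType::Pointer       metric       = MetricType::New();
   OptimizerType::Pointer    optimizer    = OptimizerType::New();
   RegistrationType::Pointer registration = RegistrationType::New();

   // 1) Number of sampling points drawn at each iteration
   metric->SetNumberOfSpatialSamples( 50 );

   // 2) Sigma of the normal (Parzen) kernels used for the
   //    density estimation
   metric->SetFixedImageStandardDeviation(  0.4 );
   metric->SetMovingImageStandardDeviation( 0.4 );

   // 3) Learning rate and 4) number of iterations of the
   //    gradient descent optimizer (Viola & Wells MI is maximized)
   optimizer->SetLearningRate( 1e-4 );
   optimizer->SetNumberOfIterations( 200 );
   optimizer->MaximizeOn();

   // 5) Number of levels in the multiresolution pyramid
   registration->SetNumberOfLevels( 3 );

   // 6) Scale factors balancing the matrix coefficients against
   //    the translations of a 3D affine transform (12 parameters,
   //    the last 3 being translations)
   OptimizerType::ScalesType scales( 12 );
   scales.Fill( 1.0 );
   for ( unsigned int i = 9; i < 12; i++ )
     {
     scales[ i ] = 1.0 / 1000.0;
     }
   optimizer->SetScales( scales );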

We have observed that the selection of these parameters
is a kind of "Art" that is learned in lengthy sleepless
sessions of trial and error, which end up generating a
Numeric Mythology and a series of Superstitions about
which parameters should be changed and which should not.

Some ITK users have expected MutualInformation to be
a black box in which, by plugging in two images, magic
results will appear at the output. They got frustrated
when they realized that a lot of parameter tuning is
required in order to make it work. Those who are patient
and methodical enough have succeeded in using this method
for registration.

It would be quite useful for the Medical Image community
if somebody provided good rules about how to tune parameters
for Mutual Information.

The importance of parameter tuning went unnoticed
during the dark pre-ITK ages because papers on medical
image processing could be published without making the code
available, thereby violating the basic scientific
principle that a publication must provide enough detail to
allow a third party to reproduce an experiment.

Because of the lack of code and standard images, paper
readers would not even attempt to reproduce an experiment,
and never got to the point of asking themselves what
the order of magnitude should be for a learning rate...

Modern journals like MEDIA, in which every issue is
complemented with a CD, will hopefully make it possible
to produce more serious publications where:

      ideas + code + test data + parameters

are actually provided to readers eager to use
them in real life.



   Luis


====================================================

Bjorn Hanch Sollie wrote:

> I have a question regarding the implementation of the mutual
> information registration metric in ITK.  Are there any significant
> differences between the method as implemented in ITK and the
> description in the article by Viola & Wells it is based on?  If there
> are, what has been changed, and in what way?
> 
> I'm currently writing a thesis, and I need to document the mutual
> information method exactly in the way it is implemented in ITK.
> 
> Thanks in advance,
> 
> -Bjorn
>