FW: [Insight-users] MultiResMIRegistration Example

Luis Ibanez luis.ibanez@kitware.com
Wed, 10 Apr 2002 21:46:11 -0400


David,

Yes, I understand that registering the image
against itself was only an approach for testing
the code, and it is quite a reasonable test.

The modifications to the metrics are only in
the CVS version. The installation is about
the same as what you already did for the Beta
version. Please let us know if you encounter
any problems building it.

The CVS version also reorganizes the structure
of the registration methods so that you can
switch components at run time. For example,
you can now switch the metric in the middle
of the registration process.
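
As a rough sketch of what that pluggable design
looks like (using class names as I recall them from
the CVS tree; please verify against your checkout),
each component is a separate object that can be
replaced on the same registration method:

    #include "itkImage.h"
    #include "itkImageRegistrationMethod.h"
    #include "itkMeanSquaresImageToImageMetric.h"
    #include "itkMutualInformationImageToImageMetric.h"
    #include "itkTranslationTransform.h"
    #include "itkLinearInterpolateImageFunction.h"
    #include "itkGradientDescentOptimizer.h"

    typedef itk::Image< float, 3 >  ImageType;
    typedef itk::ImageRegistrationMethod< ImageType, ImageType >
                                    RegistrationType;

    void ConfigureComponents( RegistrationType * registration )
    {
      // Each piece is plugged in independently, so any of them
      // can be exchanged without rebuilding the rest.
      registration->SetTransform(
        itk::TranslationTransform< double, 3 >::New() );
      registration->SetInterpolator(
        itk::LinearInterpolateImageFunction< ImageType, double >::New() );
      registration->SetOptimizer(
        itk::GradientDescentOptimizer::New() );
      registration->SetMetric(
        itk::MutualInformationImageToImageMetric< ImageType,
                                                  ImageType >::New() );
    }

    // Later, e.g. between resolution levels, a different metric
    // can be plugged into the same registration object.
    void SwitchToMeanSquares( RegistrationType * registration )
    {
      registration->SetMetric(
        itk::MeanSquaresImageToImageMetric< ImageType,
                                            ImageType >::New() );
    }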

We are putting together a test experiment for
profiling performance and memory consumption.

To that end, I propose that you use the
following images, which are available on the
ftp site:

ftp://public.kitware.com/pub/itk/Data/BrainWeb/

Those images are from the BrainWeb project,

    http://www.bic.mni.mcgill.ca/brainweb/

Text headers have been created for them so we
can read them as MetaImage files.
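
In case it is useful, a MetaImage header is just a
small ASCII file that sits next to the raw data and
looks roughly like the sketch below. The dimensions,
pixel type and file name here are only illustrative;
the real values are in the headers on the ftp site.

    ObjectType = Image
    NDims = 3
    DimSize = 181 217 181
    ElementSpacing = 1.0 1.0 1.0
    ElementType = MET_UCHAR
    ElementByteOrderMSB = False
    ElementDataFile = brainweb_T1.raw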

You can also get to the images through the
"Data" link in the ITK Home page:

    http://www.itk.org/HTML/Data.htm


If we all use the same set of images, it will be
much easier to reproduce the experiments.

----

About the MI metric:

Yes, it draws a new set of samples at each iteration.
This is necessary to keep the statistics on your
side: because the number of samples is so small, it
is unlikely that a single set could represent the
information of all the pixels. So the algorithm keeps
replacing the set of samples at each iteration, which
means that, on average, the union of all the sample
sets is a better representation of the distribution
of gray levels in the image. It may seem wasteful at
first sight, but considering that you are using just
hundreds of points to represent the information of
thousands (maybe millions) of pixels, it is still a
good trade-off.
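
For reference, the number of points drawn on every
iteration is a parameter of the metric itself. A
minimal sketch, assuming the method names of the
Viola-Wells metric in the CVS version:

    #include "itkImage.h"
    #include "itkMutualInformationImageToImageMetric.h"

    typedef itk::Image< float, 3 >  ImageType;
    typedef itk::MutualInformationImageToImageMetric< ImageType,
                                                      ImageType > MetricType;

    void ConfigureMetric( MetricType * metric )
    {
      // A few hundred samples, re-drawn at random on every
      // iteration, stand in for the full population of voxels.
      metric->SetNumberOfSpatialSamples( 500 );
    }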


You are right in assuming that multiple transforms
are attempted at each level of the pyramid before
moving to the next level. The optimizer is the one
that decides when a level is done; that is why you
provide the maximum number of iterations per level
of the pyramid.
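
Roughly, those two numbers plug in as sketched below,
assuming the multi-resolution method and the gradient
descent optimizer from the CVS tree (the names may
have shifted since):

    #include "itkImage.h"
    #include "itkMultiResolutionImageRegistrationMethod.h"
    #include "itkGradientDescentOptimizer.h"

    typedef itk::Image< float, 3 >  ImageType;
    typedef itk::MultiResolutionImageRegistrationMethod< ImageType,
                                                         ImageType >
                                    RegistrationType;

    void ConfigurePyramid( RegistrationType * registration,
                           itk::GradientDescentOptimizer * optimizer )
    {
      // Three levels in the pyramid: coarse, medium, full resolution.
      registration->SetNumberOfLevels( 3 );

      // The optimizer decides when a level is finished; here it
      // simply stops after a fixed number of iterations per level.
      optimizer->SetNumberOfIterations( 250 );
      registration->SetOptimizer( optimizer );
    }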

What you propose is an interesting thing to try:
reusing the same set of samples for a couple (or
maybe up to 10?) iterations could save some
computation. My guess is that this becomes more and
more attractive as the set of sampling points grows
larger (so that it is a better representation of the
statistical distribution of gray values). It may end
up being a trade-off between using 10 different sets
of 100 pixels or a single set of 1000 pixels, but the
fact that in the second case you factor out a lot of
the mathematical operations may still yield some
profit.

Most of the "science" of these algorithms lies in the
selection of parameters, and this is basically an
experimentation process. We have discussed that it
would be worthwhile to have a database of parameters
recommended for typical circumstances (e.g. a
recommended number of samples and standard deviations
for CT-to-MR registration, another set for CT-to-CT,
and so on).
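
Such an entry could be as simple as a small record
per modality pair; the numbers below are placeholders
only, not validated recommendations:

    // Sketch of one possible "parameter database" entry.
    struct MIRegistrationParameters
    {
      unsigned int numberOfSamples;     // spatial samples per iteration
      double       fixedImageStdDev;    // Parzen kernel width, fixed image
      double       movingImageStdDev;   // Parzen kernel width, moving image
      double       learningRate;        // optimizer step size
      unsigned int iterationsPerLevel;  // maximum iterations per level
    };

    // Placeholder values, to be filled in by experimentation.
    const MIRegistrationParameters ctToMrDefaults = { 50, 0.4, 0.4, 1e-3, 200 };
    const MIRegistrationParameters ctToCtDefaults = { 50, 0.4, 0.4, 1e-4, 100 };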


Once we identify where the bottlenecks are, we will
be in a better position to design variants of the
algorithm, and that could well result in new, more
efficient algorithms.


Please let us know if you encounter any difficulty
getting the images from the ftp server.


Thanks


   Luis




==================================================================
David Rich wrote:

I appreciate your feedback.  However, I first tried running the code 
with dissimilar images.  The results were so bad that I slowly stepped 
backwards to try to find something that would work.  The image with 
itself was actually the last step backwards.  Maybe there is something 
else I could do with the parameters to get the dissimilar images to 
work, but I have not yet discovered what that might be.  I will try 
incorporating the region limitation as you have mentioned.  Is that in 
the beta code yet, or do I need to start working with CVS?

In the process of trying to understand the code (which I have not come 
close to), it appears that the metric creates new samples and new 
derivatives, with all attendant calculations, on every pass.  This seems 
to be a terrible waste as far as the target image is concerned.  I am 
assuming that multiple transforms are attempted at each level before 
concluding that the best fit for that level has been achieved.  It would 
appear that the calculations for the target could be accomplished only 
once for each level, thereby speeding up the calculations by 
approximately 25% to 45% (25% assumes that only 3 transforms are 
calculated at each level; about 45% improvement would occur if a large 
number of transforms are calculated at each level but all but one of 
the calculation sets for the target image are removed).  Is there a 
technical reason I have missed for sampling the target image and 
re-doing the calculations for each new attempted fit?  Or have I 
misunderstood the code and this is not actually occurring?