FW: [Insight-users] MultiResMIRegistration Example

David Rich David.Rich@trw.com
Wed, 10 Apr 2002 19:24:31 -0400


Luis,
I appreciate your feedback.  However, I first tried running the code with
dissimilar images.  The results were so bad that I slowly stepped
backwards to try to find something that would work.  The image with itself
was actually the last step backwards.  Maybe there is something else I
could do with the parameters to get the dissimilar images to work, but I
have not yet discovered what that might be.  I will try incorporating the
region limitation as you mentioned.  Is that in the beta code yet, or
do I need to start working with cvs?

In the process of trying to understand the code (which I have not come
close to doing), it appears that the metric creates new samples and new
derivatives, with all attendant calculations, on every pass.  This seems
to be a terrible waste as far as the target image is concerned.  I am
assuming that multiple transforms are attempted at each level before
concluding that the best fit for that level has been achieved.  It would
appear that the calculations for the target could be performed only once
per level, thereby speeding up the computation by approximately
25% to 45% (25% assumes that only 3 transforms are calculated at each
level; about 45% improvement would occur if a large number of transforms
are calculated at each level and you remove all but one of the calculation
sets for the target image).  Is there a technical reason I have missed for
sampling the target image and re-doing the calculations for each new
attempted fit?  Or have I misunderstood the code, and this is not actually
occurring?
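
The caching described above could look roughly like this (an
illustrative sketch in plain C++, not the actual ITK implementation;
all names and the shift-based "transform" are invented for the
example):

```cpp
#include <cassert>
#include <vector>

// A sample of the target (fixed) image: a position and its intensity.
struct Sample { int index; double value; };

// Sample the target image once per pyramid level.
std::vector<Sample> sampleTargetOnce(const std::vector<double>& target) {
  std::vector<Sample> samples;
  for (int i = 0; i < (int)target.size(); ++i)
    samples.push_back({i, target[i]});
  return samples;
}

// Each candidate transform reuses the cached target samples; only the
// source image is re-evaluated (the transform is modeled here as a
// simple integer shift for illustration).
double evaluate(const std::vector<Sample>& cached,
                const std::vector<double>& source, int shift) {
  double sum = 0.0;
  for (const Sample& s : cached) {
    int j = s.index + shift;
    if (j < 0 || j >= (int)source.size()) continue;  // outside overlap
    double d = s.value - source[j];
    sum += d * d;  // squared-difference contribution
  }
  return sum;
}
```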

Thanks,
Dave

>>> "Luis Ibanez" <luis.ibanez@kitware.com> 04/10/02 04:35PM >>>

Hi David,

Thanks for your feedback on your experience with
the toolkit.

ITK is not intended only for education and research
purposes. It has been designed to be used in
commercial products as well.  Particular attention
has been paid to the terms of the license so that
commercial use becomes a reality.

Performance, in both memory and computing time, is an
important element of ITK's design. The heavy use of
templated code, for example, responds to the need to
simplify the maintenance of the code and improve
its performance. Templates allow a lot of code to be
inlined, and they replace the run-time mechanism
of virtual function dispatch with faster
compile-time instantiation.
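
As a generic illustration of that trade-off (this is not ITK code;
the class and function names are invented), compare a metric called
through a virtual function with one passed as a template parameter,
which the compiler can inline:

```cpp
#include <cassert>

// Run-time polymorphism: every call goes through the vtable.
struct MetricBase {
  virtual ~MetricBase() = default;
  virtual double value(double a, double b) const = 0;
};

struct SquaredDifference : MetricBase {
  double value(double a, double b) const override {
    double d = a - b;
    return d * d;
  }
};

// Compile-time polymorphism: the metric type is a template
// parameter, so the per-sample call can be inlined.
template <typename TMetric>
double accumulate(const TMetric& m, const double* x,
                  const double* y, int n) {
  double sum = 0.0;
  for (int i = 0; i < n; ++i)
    sum += m.value(x[i], y[i]);
  return sum;
}
```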

However, the toolkit needs to pass through the natural
process of code maturation. In this sense, feedback
from users is of fundamental importance and is always
greatly appreciated.

Your report on memory consumption is quite interesting.
We certainly want to track down the reasons for the
excess of memory use and come up with a solution for it.

The fact that you observed the memory growing as the
registration ran leads us to suspect that the image pyramids
may be implicated. At each stage of the registration,
only images at the same level of the pyramid are required
to be in memory. It is likely that the current algorithm
is not releasing the memory used by previous levels of
the pyramid. Also, the subsampling is performed by first
smoothing the images with Gaussian blurring filters.
These filters allocate memory for internal copies of the
images for intermediate computations. It is also likely
that this memory is failing to be released. If this is
the case, it should be relatively easy to fix. We will
explore these possibilities.
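
If the pyramid turns out to be the culprit, the fix could look
roughly like this (an illustrative sketch in plain C++, not the
actual ITK pyramid code; the types and method are invented):

```cpp
#include <cassert>
#include <vector>

using Image = std::vector<double>;

// A toy multi-resolution pyramid: level 0 is the coarsest copy.
struct Pyramid {
  std::vector<Image> levels;

  // Explicitly free the buffers of every level below 'current',
  // instead of keeping all levels alive for the whole run.
  void releaseLevelsBelow(std::size_t current) {
    for (std::size_t i = 0; i < current && i < levels.size(); ++i) {
      Image().swap(levels[i]);  // swap with an empty vector frees capacity
    }
  }
};
```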


You also mention an option for increasing performance
by computing the metric over a restricted region of
interest in the image. This modification has recently
been made to the metrics. In order to take advantage
of it, you just have to provide the metric with a region
of the Fixed image. The rest of the Fixed image will
be excluded from the computations required for registration.

The use is like:

       metric->SetFixedImageRegion( region );
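
The effect of such a region restriction can be sketched in plain
C++ (this illustrates the idea only, it is not ITK code; the
function name and the flat volume layout are invented):

```cpp
#include <cassert>
#include <vector>

// Mean-squares metric computed only over slices [zStart, zEnd) of an
// nx * ny * nz volume stored slice by slice. Blank slices at the
// front and back of the volume add no cost when excluded this way.
double meanSquaresInRegion(const std::vector<double>& fixedImg,
                           const std::vector<double>& movingImg,
                           int nx, int ny, int zStart, int zEnd) {
  const int sliceSize = nx * ny;
  double sum = 0.0;
  int count = 0;
  for (int z = zStart; z < zEnd; ++z) {
    for (int i = 0; i < sliceSize; ++i) {
      const double d = fixedImg[z * sliceSize + i]
                     - movingImg[z * sliceSize + i];
      sum += d * d;
      ++count;
    }
  }
  return count ? sum / count : 0.0;
}
```

The same principle answers the question about skipping blank slices:
choose the region so that it covers only the slices containing data.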




A comment about Mutual Information: this metric is not
particularly well suited for images of the same modality,
and especially not for registering an image with itself.
The reason is that the joint histogram of an image with
itself (assuming the two copies are already perfectly
registered) will have only its diagonal elements filled.
In consequence, only N out of NxN bins will be contributing
information to the matching. Mutual Information behaves
better with multimodal images, because in that case the
joint histogram has a better distribution of samples (larger
joint entropy). The registration will naturally slow down when
the images get close to the optimal position, because at
that point only the regions of the image with large gradients
will contribute to variations in the metric (and hence to the
computation of the metric derivatives that drive the gradient
descent optimization method). The chance of a random point
falling into a gradient area is in general pretty small, so
most of the points will contribute only to the diagonal
of the joint histogram. That is, most of them just report
that the images seem to be registered. That may explain
why your test required a larger-than-normal population
of points to progress, and why it fails to register after
getting to 1 or 2 pixels of distance from the optimal position
(which seems reasonable to expect for an image with itself).
Multimodal images have better chances of producing joint
entropy gradients with smaller populations of sampling points.
Same-modality images will register much better using a
MeanSquare metric for low resolutions and a PatternIntensity
metric for high resolutions. [I understand that you did this
only for the purpose of testing the code, so this is just a
comment for the record.]
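
The diagonal-only joint histogram described above can be
demonstrated with a small sketch (plain C++ with invented names,
not ITK code):

```cpp
#include <cassert>
#include <vector>

// Joint intensity histogram of two images quantized into 'bins'
// levels, stored as a flat bins x bins array.
std::vector<int> jointHistogram(const std::vector<int>& a,
                                const std::vector<int>& b,
                                int bins) {
  std::vector<int> h(bins * bins, 0);
  for (std::size_t i = 0; i < a.size(); ++i)
    ++h[a[i] * bins + b[i]];  // row = intensity in a, column = in b
  return h;
}

// Count bins off the diagonal that received any samples. For an
// image paired with itself this is zero: only N of the N*N bins
// carry information, which is the situation described above.
int nonZeroOffDiagonal(const std::vector<int>& h, int bins) {
  int count = 0;
  for (int r = 0; r < bins; ++r)
    for (int c = 0; c < bins; ++c)
      if (r != c && h[r * bins + c] != 0) ++count;
  return count;
}
```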


We'll profile the memory consumption of the method and get
back to you as soon as we identify the sources.


Thanks a lot for your feedback



   Luis


==================================================
>>
>>While watching this operate for over an hour, I observed that
>>the executable started out at about 20MB but increased to
>>almost 60MB before completion!  Such excessive memory usage
>>becomes intolerable in real-world applications - what if I were
>>trying to register two 40MB images?
>>
>>So, my first question is whether or not ITK is intended to be
>>used in commercial applications, or is it only designed as a
>>tool for people who want to experiment with many different
>>models?  The templated classes would certainly facilitate the
>>latter, but the complexity combined with size and time
>>constraints does not contribute to the former.
>>
>>In an attempt to better understand the parameters that could
>>be entered, I continued testing.  The initial results in
>>about 23 minutes of operation were:
>>
>>Final parameters:
>>-0.000313974  -0.000876174  -0.000350401  1  -0.431262
>>0.368347  -0.0612012
>>
>>The rotation parameters indicate a close fit although the
>>translations are disconcerting, considering that an exact
>>match should be more feasible (Note: the voxel size is 2.72
>>in all directions).
>>
>>I changed the default input of 0.4 standard deviation for the
>>registration to 0.2 and used 100 sample points, and the
>>results were miserable:
>>
>>Final parameters:
>>0.574853  -0.761886  -0.298448  -0.00139123  -19.8732
>>28.4372  -79.6355
>>
>>With a standard deviation of 0.6 and 100 sample points the
>>results are different but still miserable:
>>
>>Final parameters:
>>-0.0671739  0.0841046  -0.994183  0.00385258  -14.183
>>-1.85797  -1.66959
>>
>>The conclusion seems to be that with enough sample points the
>>basic algorithm may provide satisfactory results.  However,
>>from past experience the time delay becomes critical for
>>medical personnel.  And if the algorithm requires a minimum
>>of 500 data points, 60MB RAM, and 23 minutes on a 1.6MB image
>>registered with itself, what would be required for two
>>less-similar images of larger sizes?
>>
>>An examination of the source code to try to understand the
>>parameters, and whether or not the memory can be handled more
>>efficiently, again reminded me of the Microsoft ATL wizard.
>>Only this time it seemed necessary to understand the
>>complexities of the ATL wizard for the purpose of creating a
>>specific implementation.  And again it occurred to me that
>>the purpose of ITK seems to be that of creating a framework
>>for research and experimentation for those who are not
>>concerned with commercial requirements of speed and hardware
>>constraints.  Am I trying to use this contrary to the intent
>>of the toolkit?
>>
>>On the other hand, is it possible that ITK could be developed
>>into something more like the ATL wizard?  That is, ITK, with
>>the power of the templates built in, could be used to
>>generate a basic code structure for the specific
>>implementation requested.  With such a code framework
>>generated, it might be more feasible for the researcher as
>>well as the commercial user to customize the specific
>>implementation for speed and memory, or to modify the
>>mathematical calculations to fit specific requirements.
>>
>>At this point I feel little more than frustrated.  Although I
>>would like to be able to provide commercial support for public
>>code, customizing it for customers but being able to leave
>>the source with them, I can only recommend that ITK is not
>>the right framework for commercial applications.  It is slow,
>>cumbersome, and requires significant hardware resources.  If
>>anyone wants to delve into the complexities of the code, they
>>have to peel away multiple layers of typedefs and templated
>>classes to try to evaluate the implemented constructs and
>>operations.  If the end result in operation, size, and
>>effectiveness were tremendous, the complexities could be
>>overlooked.  That does not seem to be the case.
>>
>>I would like to report more positive feedback.  Can you help me?
>>1) For the MultiResMIRegistration example, how can I identify
>>parameters that will be stable for generic image sets?
>>2) How can I improve the time and memory requirements?
>>3) Can I tell the algorithm to skip the blank slices at the
>>front and/or back of the data of interest?
>>4) Or, is this toolkit just not intended to be used in
>>software used by medical personnel and others whose time is critical?
>>
>>Any suggestions that might help me understand the toolkit
>>better, or how to make it effective, would be greatly appreciated.
>>
>>Dave Rich