FW: [Insight-users] MultiResMIRegistration Example

Lydia Ng lng@insightful.com
Wed, 10 Apr 2002 16:05:24 -0700


Luis, David:

In terms of memory - I think the multi-resolution image pyramid
is indeed the culprit.

Historically, it computed the images at each level one at
a time. That was subsequently changed to computing all the levels,
following comments from the code review. To support large images I think
this will need to change back to outputting one level at a time.

-however-

What I am more concerned about is that David is reporting
that the underlying algorithm is not working.

Luis: you are correct when you say that when dealing with
intra-modality images one might be better off using something other
than MI. But both you and I have seen working demos in which the
same MI code successfully registers MR to MR with
significant translation and rotation.

David:
[1] How quantitatively bad were the results
when you only used 50 sample points?
[2] Have you tried tests where you have shifted and/or rotated
the image by a known amount? What happened?
[3] Would you mind posting your parameter file?

- Lydia

> -----Original Message-----
> From: Luis Ibanez [mailto:luis.ibanez@kitware.com]
> Sent: Wednesday, April 10, 2002 3:35 PM
> To: David Rich
> Cc: Lydia Ng; insight-users@public.kitware.com
> Subject: Re: FW: [Insight-users] MultiResMIRegistration Example
>
> Hi David,
>
> Thanks for your feedback on your experience with
> the toolkit.
>
> ITK is not intended only for education and research
> purposes. It has been designed to be used for
> commercial products as well. Particular attention
> has been paid to the terms of the license so that
> commercial use becomes a reality.
>
> Performance, in both memory and computing time, is an
> important element of ITK's design. The heavy use of
> templated code, for example, reflects the need to
> simplify maintenance of the code and improve its
> performance. Templates allow a lot of code to be
> inlined and also replace the run-time mechanism of
> virtual function dispatch with faster compile-time
> instantiations.
>
> However, the toolkit needs to pass through the natural
> process of code maturation. In this sense, feedback
> from users is of fundamental importance and is always
> greatly appreciated.
>
> Your report on memory consumption is quite interesting.
> We certainly want to track down the reasons for the
> excessive memory use and come up with a solution for it.
>
> The fact that you observed the memory growing as the
> registration ran leads us to suspect that the image pyramids
> may be implicated. At each stage of the registration,
> only images at the same level of the pyramid need
> to be in memory. It is likely that the current algorithm
> is not releasing the memory used by previous levels of
> the pyramid. Also, the subsampling is performed by first
> smoothing the images with Gaussian blurring filters.
> These filters allocate memory for internal copies of the
> images for intermediate computations. It is also likely
> that this memory is failing to be released. If this is
> the case, it should be relatively easy to fix. We will
> explore these possibilities.
>
> You also mention an option for increasing performance
> by computing the metric over a restricted region of
> interest in the image. This modification has recently
> been made to the metrics. In order to take advantage
> of it you just have to provide the metric with a region
> of the Fixed image. The rest of the Fixed image will
> be excluded from the computations required for registration.
>
> The usage is:
>
>        metric->SetFixedImageRegion( region );
>
> A comment about Mutual Information: this metric is not
> particularly well suited for images of the same modality,
> and especially not for registering an image with itself.
> The reason is that when you produce the joint histogram
> of an image with itself (assuming they are already
> perfectly registered), only the diagonal elements of the
> joint histogram get filled. As a consequence,
> only N out of NxN bins will be contributing information to
> the matching. Mutual Information behaves better when it is
> used with multimodal images because in that case the joint
> histogram has a better distribution of samples (larger
> joint entropy). The registration naturally slows down when
> the images get close to the optimal position, because in
> that case only the regions of the image with large gradients
> will contribute to variations in the metric (and hence to the
> computation of the metric derivatives that drive the gradient
> descent optimization method). The chances of a random point
> falling into a gradient area are pretty small in general, so
> most of the points will only be contributing to the diagonal
> of the joint histogram. That is, most of them are just
> reporting that the images seem to be registered. That may
> explain why your test required a larger-than-normal population
> of points to progress, and why it fails to register after
> getting to 1 or 2 pixels of distance from the optimal position
> (which seems reasonable to expect for an image registered with
> itself). Multimodal images have better chances of producing joint
> entropy gradients with smaller populations of sampling points.
> Same-modality images will register much better using a
> MeanSquares metric at low resolutions and a PatternIntensity
> metric at high resolution. [I understand that you did this
> only for the purpose of testing the code, so this is just a
> comment for the record.]
>
> We'll profile the memory consumption of the method and get
> back to you as soon as we identify the sources.
>
> Thanks a lot for your feedback.
>
>    Luis
>
> =================================================
> >>
> >>While watching this operate for over an hour, I observed that
> >>the executable started out at about 20MB but increased to
> >>almost 60MB before completion!  Such excessive memory usage
> >>becomes intolerable in real-world applications: what if I were
> >>trying to register two 40MB images?
> >>
> >>So, my first question is whether or not ITK is intended to be
> >>used in commercial applications, or is it only designed as a
> >>tool for people who want to experiment with many different
> >>models?  The templated classes would certainly facilitate the
> >>latter, but the complexity combined with size and time
> >>constraints do not contribute to the former.
> >>
> >>In an attempt to better understand the parameters that could
> >>be entered, I continued testing.  The initial results in
> >>about 23 minutes of operation were:
> >>
> >>Final parameters:
> >>-0.000313974  -0.000876174  -0.000350401  1  -0.431262
> >>0.368347  -0.0612012
> >>
> >>The rotation parameters indicate a close fit, although the
> >>translations are disconcerting, considering that an exact
> >>match should be more feasible (Note: the voxel size is 2.72
> >>in all directions).
> >>
> >>I changed the default input of 0.4 standard deviation for the
> >>registration to 0.2 and used 100 sample points, and the
> >>results were miserable:
> >>
> >>Final parameters:
> >>0.574853  -0.761886  -0.298448  -0.00139123  -19.8732
> >>28.4372  -79.6355
> >>
> >>With a standard deviation of 0.6 and 100 sample points the
> >>results are different but still miserable:
> >>
> >>Final parameters:
> >>-0.0671739  0.0841046  -0.994183  0.00385258  -14.183
> >>-1.85797  -1.66959
> >>
> >>The conclusion seems to be that with enough sample points the
> >>basic algorithm may provide satisfactory results.  However,
> >>from past experience the time delay becomes critical for
> >>medical personnel.  And if the algorithm requires a minimum
> >>of 500 data points, 60MB RAM, and 23 minutes on a 1.6MB image
> >>registered with itself, what would be required for two
> >>less-similar images of larger sizes?
> >>
> >>An examination of the source code to try to understand the
> >>parameters and whether or not the memory can be handled more
> >>efficiently again reminded me of the Microsoft ATL wizard.
> >>Only this time it seemed like it was necessary to understand
> >>the complexities of the ATL wizard for the purpose of
> >>creating a specific implementation.  And again it occurred to
> >>me that the purpose of ITK seems to be that of creating a
> >>framework for research and experimentation for those who are
> >>not concerned with commercial requirements of speed and
> >>hardware constraints.  Am I trying to use this contrary to
> >>the intent of the toolkit?
> >>
> >>On the other hand, is it possible that ITK could be
> >>developed into something more like the ATL wizard?  That is,
> >>ITK, with the power of the templates built in, could be used
> >>to generate a basic code structure for the specific
> >>implementation requested.  With such a code framework
> >>generated, it might be more feasible for the researcher as
> >>well as the commercial user to customize the specific
> >>implementation for speed and memory or to modify the
> >>mathematical calculations to fit specific requirements.
> >>
> >>At this point I feel little more than frustrated.  Although I
> >>would like to be able to provide commercial support to public
> >>code, customizing it for customers but being able to leave
> >>the source with them, I can only recommend that ITK is not
> >>the right framework for commercial applications.  It is slow,
> >>cumbersome, and requires significant hardware resources.  If
> >>anyone wants to delve into the complexities of the code, they
> >>have to peel away multiple layers of typedefs and templated
> >>classes to try to evaluate implemented constructs and
> >>operations.  If the end result in operation, size, and
> >>effectiveness were tremendous, the complexities could be
> >>overlooked.  That does not seem to be the case.
> >>
> >>I would like to report more positive feedback.  Can you help me?
> >>1) For the MultiResMIRegistration example, how can I identify
> >>parameters that will be stable for generic image sets?
> >>2) How can I improve the time and memory requirements?
> >>3) Can I tell the algorithm to skip the blank slices at the
> >>front and/or back of the data of interest?
> >>4) Or, is this toolkit just not intended to be used in
> >>software used by medical personnel and others whose time is
> >>critical?
> >>
> >>Any suggestions that might help me understand the toolkit
> >>better or how to make it effective would be greatly appreciated.
> >>
> >>Dave Rich