[Insight-users] mutual information registration

Luis Ibanez luis . ibanez at kitware . com
Tue, 11 Nov 2003 22:47:53 -0500


Hi John,


One option that you can try for smoothing
the output of the MutualInformation metric
is to increase the number of spatial samples.

Note that the computation time of the metric
(and its derivatives) increases with the square
of the number of samples.  However, the smoothness
of the metric also improves as the number
of samples is increased.

In your case, averaging 500 metric evaluations
would be equivalent (in computing time) to increasing
the number of spatial samples by a factor of about 22X,
since sqrt(500) is roughly 22 and the cost is quadratic
in the number of samples.

You can easily verify the smoothing effect that
results from increasing the number of samples, by
playing with the parameters in the test:

     itkMutualInformationMetricTest.cxx

available in Insight/Testing/Code/Algorithms.


-----

The stochastic nature of MutualInformation makes it
in fact a good candidate cost function to be optimized
with Evolutionary Algorithms instead of Gradient
Descent-like ones.  You could experiment with the
OnePlusOneEvolutionaryOptimizer

http://www.itk.org/Insight/Doxygen/html/classitk_1_1OnePlusOneEvolutionaryOptimizer.html

Evolutionary algorithms are known to be resistant
to noise and to an abundance of secondary minima.
Notice that this optimizer also has a number of
parameters to tune, so it will require some
experimentation on your part.

--

The Mattes implementation of MutualInformation
provides smoother results but also incurs
longer computation times. You may want to
give it a try.

---

About the scaling factors associated with rotation,
yes, 1.0 is the nominal value.

---

For the CenteredRigid2D transform to work correctly,
you must make sure that you initialize the rotation
center and invoke "ComputeOffset()" before using
the transform to map points.

---


FYI, a group from Imperial College London contributed
a set of classes to ITK for performing 3D/2D registration,
which is just the case you seem to be working on.

A DRR-like image is generated from the volume by using
a RayCast interpolator:
http://www.itk.org/Insight/Doxygen/html/classitk_1_1RayCastInterpolateImageFunction.html

You may want to use it for generating the DRR.
In this framework the registration is not done in 2D
but in 3D by rotating the 2D image around the 3D volume.

This helps to compensate for errors in the pose
of the volume when the DRR is generated.


----

Please let us know of your progress,


Thanks



    Luis



----------------------
Dill, John wrote:
> Hi Luis,
> 
> 
>>Averaging 500 iterations of the metric is overkill.
>>You shouldn't be needing to do this.
> 
> 
> I sure hope so ;)
> 
> 
>>Are you computing
>>this average for the same set of transform parameters ?
> 
> 
> I average over each transformation vector, so I evaluate the metric 500
> times, and average the metric for each transformation vector.  In my range,
> they go like
> 
> [x y r]
> [-10 -10 -5] each one of these is 500 iter of metric for the average
> [-10 -10 -4]
> [-10 -10 -3]
> etc...
> 
> I did a plot of the cost function with respect to a line in the
> transformation space (neglecting rotation) and plotted the cost, and found
> that it was very noisy (for one evaluation of the metric).  I then started
> averaging  from 100 iterations, and kept increasing in increments of 100
> until I found a nice curve.  At 500, the curve was very smooth.  Then I did
> all my registrations with those parameters, and got good results for about
> 90-95% of the test cases.  I decreased the avg iterations down to 400, and
> evaluated over a couple of patients and got results that are noticeably
> different, which differed from the original between 2-6 pixels in either
> direction (and worse registrations than 500 avg iterations).  It's a bit
> unusual that I seem to need this many also.
> 
> However the image registrations are quite more complicated than the trivial
> test cases given in the toolkit.  There are several sources of errors and
> problems which make this more difficult.  The first is that since these
> images are projections, there are out of plane rotations which can not be
> corrected for.  Since patient immobilization is not perfect, a 1 or 2 degree
> out of plane rotation can cause difficulties for determining what the
> correct registration is.  I might get the top half of the skull lined up,
> but the jaw is off due to this out of plane rotation.
> 
> Second, there are large artifacts present in the portfilm that are not there
> in the CT due to patient immobilization.  There is typically a large high
> intensity white artifact coming from the table underneath the patient in the
> portfilm, sometimes taking up 30 to 40 pixels in height, and almost the
> whole width in the portfilm (size of images is about 280x280).
> 
> Lastly, there is misinformation along the borders of the images.  Since the
> drr and portfilm are not aligned, any misregistration in taking the film
> introduces misinformation along the borders.  I have seen this occur within
> about 5-8 pixels (~3-4mm) of alignment error, which introduces up to 16
> pixel-width of misinformation (8 from the drr's left side, and 8 from the
> portfilm's right side for example).  In extreme cases, the tumor is close to
> the edge of the ct volume, thus my drr is cut off, in some cases up to half
> of my drr is not there!
> 
> 
>>Are you doing this with the Viola-Wells MutualInformation
>>or with the Mattes MutualInformation metric ?
>>
> 
> 
> Viola-Wells
> 
> 
>>Have you plotted the values that you get from the metric ?
> 
> 
> I've plotted the x, y, and rotation with respect to iteration number.  With
> just the translation, I have found values for learning rate and variance for
> which they converged, but do not converge to a registration that is
> acceptable.  The noise stays within one pixel from what it converges to, but
> the majority of my cases, they converge to unacceptable registrations.
> 
> I am new to using CenteredRigid2DTransform, and from my test adding rotation
> to the search, the rotation does not converge yet at all, (and since the
> rotation doesn't converge, the translation does not either).
> 
> 
>>It is to be expected that the Viola-Wells metric will be quite
>>noisy, while the Mattes implementation will be smoother
>>(still noisy...).
>>
>>The response to this noise level is to use an appropriate
>>optimizer and tune its parameters.
> 
> 
> Are you saying Viola-Wells is not appropriate?  I will try out Mattes
> algorithm.
> 
> 
>>--
>>
>>Note that the scaling parameters that you pass to the
>>optimizer are a critical piece of the registration process.
>>The scaling for the translation parameters should be on
>>the order 1.0 / (image diagonal measured in millimeters).
> 
> 
> Any scale value suggestion for the rotation, is it an implied 1.0?
> 
> 
>>If you want to get familiar with the CenteredRigid2DTransform
>>you can play with the examples in:
>>
>>     Insight/Examples/Registration/ImageRegistration5.cxx
>>     Insight/Examples/Registration/ImageRegistration6.cxx
>>     Insight/Examples/Registration/ImageRegistration7.cxx
>>     Insight/Examples/Registration/ImageRegistration8.cxx
> 
> 
> I am continuing to play with them, although I've only looked at 5 so far.
> 
> 
>>Note that a key piece for using this class, is to initialize
>>it with the CenteredTransformInitializer.
>>
>>http://www.itk.org/Insight/Doxygen/html/classitk_1_1CenteredTransformInitializer.html
>>
>>In your case, you want to use the initializer in "Moments"
>>mode. You will find details about this in the SoftwareGuide
>>
>>   http://www.itk.org/ItkSoftwareGuide.pdf
>>
>>
>>Section 8.5.1, pdf-page 263, paper-page 289.
>>
>>
>>
>>Please let us know if you have further questions.
>>
> 
> 
> I appreciate your advice.
> 
> Best regards,
> John
>