[Insight-users] Re: Mutual information - Registration of US to CT

Luis Ibanez luis . ibanez at kitware . com
Mon, 15 Jul 2002 08:11:09 -0400


Hi Jon,

1) One way of avoiding having to deal with the Learning Rate is to
     use the RegularStepGradientDescent optimizer instead of the
     GradientDescent optimizer.  The LR of the latter has the
     serious drawback that the length of the step is proportional
     to the derivative of the metric, and it is pretty hard to get
     an idea of how this derivative will change as the registration
     evolves. For this reason, even if you select a good LR for the
     initial steps of the registration, the value may become
     inappropriate at any moment. In the same way, low-gradient
     regions of the Metric can easily make the step skyrocket,
     which is the well-known defect of Newton-like methods.

     The RegularStepGradientDescent, on the other hand, does not use
     the magnitude of the Metric gradient; it only uses its direction.
     The length of the step starts from an initial value that you
     provide. Every time the direction changes by more than 90 degrees,
     the step length is divided by two. A good initial value for Affine
     transforms is "0.1", which is equivalent to a moderate rotation of
     about 5 degrees.

     You can compare the behavior of the two optimizers in the:

     Testing/Code/Numerics directory,

     just run "itkNumericsTest" and select:

     a - itkGradientDescentOptimizerTest
     b - itkRegularStepGradientDescentOptimizerTest
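     For intuition, here is a tiny self-contained sketch (plain 2-D
     C++, deliberately not the actual ITK code) of the RegularStep
     rule described above: the gradient magnitude is ignored, the step
     has a fixed length, and that length is halved whenever the
     gradient direction changes by more than 90 degrees.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Toy 2-D version of the RegularStepGradientDescent update rule.
// Only the *direction* of the metric gradient is used; the step
// length is user-provided and is halved whenever the direction
// changes by more than 90 degrees (negative dot product).
struct RegularStepState
{
  double stepLength;                // current step length
  std::array<double, 2> prevGrad;   // gradient from the previous iteration
};

std::array<double, 2>
advance( RegularStepState & s,
         std::array<double, 2> pos,
         std::array<double, 2> grad )
{
  // Direction changed by more than 90 degrees <=> dot product < 0.
  const double dot = s.prevGrad[0] * grad[0] + s.prevGrad[1] * grad[1];
  if ( dot < 0.0 ) { s.stepLength /= 2.0; }
  s.prevGrad = grad;

  const double norm = std::sqrt( grad[0] * grad[0] + grad[1] * grad[1] );
  if ( norm == 0.0 ) { return pos; }   // flat region: stay put

  // Move a fixed distance opposite the gradient direction; the
  // gradient magnitude plays no role in the step size.
  pos[0] -= s.stepLength * grad[0] / norm;
  pos[1] -= s.stepLength * grad[1] / norm;
  return pos;
}
```

     Compare this with plain gradient descent, where the displacement
     would be LR * gradient and could explode or stall depending on
     the gradient magnitude.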


2) The option of using the edge map of the CT in order to register it
     against the US is an interesting one.  Note that MutualInformation
     will try to maximize the matching of regions regardless of the
     intensities appearing in those regions. So, MI is immune to the
     basic differences in gray levels due to image modality, but it
     cannot do much if the images in question don't actually have
     enough potential matching regions.  If you observe that the edge
     map of the CT results in a distribution of regions similar to the
     one in the US, that's an option worth trying. In general, any
     variation of the images that improves this region matching should
     result in a better registration.  Note also that recent papers
     have proposed variations of MI in which gradient information is
     added. It could be worth adding one of these metric variations to
     ITK since published results seem to be positive.
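     The "immunity" of MI to gray-level differences can be seen in a
     toy joint-histogram computation. This is a sketch for intuition
     only: ITK's MutualInformation metric uses a Parzen-window density
     estimate (Viola-Wells), not a discrete histogram.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// Toy mutual information between two intensity arrays, computed from
// a discrete joint histogram:
//   MI = sum_{a,b} p(a,b) * log( p(a,b) / ( p(a) * p(b) ) )
double mutualInformation( const std::vector<int> & a,
                          const std::vector<int> & b )
{
  std::map< std::pair<int, int>, double > joint;
  std::map< int, double > pa;
  std::map< int, double > pb;
  const double n = static_cast<double>( a.size() );
  for ( std::size_t i = 0; i < a.size(); ++i )
    {
    joint[ { a[i], b[i] } ] += 1.0 / n;
    pa[ a[i] ] += 1.0 / n;
    pb[ b[i] ] += 1.0 / n;
    }
  double mi = 0.0;
  for ( const auto & kv : joint )
    {
    const double pab = kv.second;
    mi += pab * std::log( pab / ( pa[kv.first.first] * pb[kv.first.second] ) );
    }
  return mi;
}
```

     With this measure, relabeling the gray levels of one image does
     not change the MI at all, while an image with no region structure
     in common with the other gives zero MI.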


3) About the way in which pixels are mapped, here is an extract from
     the itkMeanSquaresImageToImageMetric.txx file:

     FixedIteratorType ti( fixedImage, this->GetFixedImageRegion() );
     ...
     while( !ti.IsAtEnd() )
       {
       // Map the grid index of the fixed pixel to physical space.
       index = ti.GetIndex();
       InputPointType inputPoint;
       fixedImage->TransformIndexToPhysicalPoint( index, inputPoint );

       // Map the physical point through the current registration transform.
       OutputPointType transformedPoint =
            m_Transform->TransformPoint( inputPoint );

       // Interpolate the moving image at the (generally non-grid) point.
       if( m_Interpolator->IsInsideBuffer( transformedPoint ) )
         {
         const RealType movingValue =
                  m_Interpolator->Evaluate( transformedPoint );
         const RealType fixedValue  = ti.Get();
         m_NumberOfPixelsCounted++;
         const RealType diff = movingValue - fixedValue;
         measure += diff * diff;
         }
       ++ti;
       }


     Note that pixels are taken from a restricted region of the
     Fixed image. This region is provided by the user and can act as
     an ROI. The index of every pixel is first converted to a point
     in physical space using the fixed image's index-to-physical
     mapping, and then mapped through the current registration
     transform. The resulting point is passed to the interpolator,
     which is connected to the Moving image.

4) This also helps to answer your question about the initial position
     of the image. Every ITK image can have an arbitrary Transform
     attached to it. The transform indicates how the integer grid
     positions are mapped to physical space. The default transform is
     an Affine one, but you can use any other. So, if you know that
     your images are rotated 90 degrees due to patient position, you
     may compensate for such a rotation in the
     IndexToPhysicalTransform(). The same goes for initially known
     translations. This can be done in both the Fixed and Moving
     images. Note that this additional mapping is always applied, so
     taking advantage of the corrections will not make your
     registration any slower.
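     As a toy illustration (plain 2-D C++, not the actual ITK image
     API), folding a known 90-degree patient rotation into the
     index-to-physical mapping looks like this:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Toy 2-D index-to-physical mapping (not the ITK image API): grid
// indices are scaled by the voxel spacing, rotated by a known patient
// rotation, and shifted by the origin. Folding the known rotation in
// here means the registration starts from a compensated position.
struct IndexToPhysical
{
  std::array<double, 2> spacing;
  std::array<double, 2> origin;
  double angleRad;   // known rotation due to patient position

  std::array<double, 2> map( std::array<int, 2> index ) const
  {
    const double x = index[0] * spacing[0];
    const double y = index[1] * spacing[1];
    const double c = std::cos( angleRad );
    const double s = std::sin( angleRad );
    return { origin[0] + c * x - s * y,
             origin[1] + s * x + c * y };
  }
};
```

     A known initial translation goes into the origin in the same way.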

5) A good criterion for selecting the translation scale is to use the
     size in millimeters (or whatever units you use for the slice
     spacing) of the whole volume.  The goal is to make the
     translation coefficients fall in the [-1,1] range, which is
     about the same range expected for the rotation components.

     Otherwise, the optimizer would have to explore a space in which
     some dimensions (the translational ones) need steps of [100->500]
     while others need steps on the order of 0.1.
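     As a sketch of the idea (the function and layout here are
     hypothetical, not the actual ITK optimizer-scales API): dividing
     the translational parameters by the volume extent brings them
     into the same range as the rotation coefficients.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch: divide the translational parameters by the
// physical extent of the volume so that, like the rotation
// coefficients, they fall roughly in [-1,1].
std::vector<double>
scaleParameters( const std::vector<double> & params,
                 std::size_t numRotationParams,
                 double volumeExtentMM )
{
  std::vector<double> scaled( params );
  for ( std::size_t i = numRotationParams; i < scaled.size(); ++i )
    {
    scaled[i] /= volumeExtentMM;   // e.g. a 125 mm shift in a 250 mm volume -> 0.5
    }
  return scaled;
}
```

     After this scaling, the optimizer can take steps of comparable
     magnitude along every dimension of the parameter space.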



Please let us know if that helps to answer some of your questions.


   Thanks


    Luis



==============================

Jon Harald Kaspersen wrote:

 > Hi again Luis,
 >
 > We have been working on registration of US and CT as I talked about some
 > time ago. We (I) have some additional questions that you may be able to
 > help us with. In general, we have found it difficult to register US to
 > CT. The data we have used are from abdominal acquisitions, that is, CT/US
 > of Abdominal Aortic Aneurysms.
 > Here are the questions :
 >
 > 0) We have not yet succeeded with the CT(moving) to US(fixed) registration.
 > (simulations using the same CT (CT to tailedCT) and US volume, as well as
 > T2-MRI to T1-MRI works well)
 > We have used resampled versions of the CT and US volumes. Using small
 > learning rates (LR) the registration sometimes moves in the "right
 > direction", but it doesn't reach the correct position (very small
 > steps (0.1-0.01), and often fluctuations as seen from the iteration
 > observer). Increasing the number of levels, or the number of
 > iterations in each level
 > doesn't help.  This is often the case when the LR is increased up to
 > a certain threshold (around 5e-3 to 1e-4), and the moving image moves
 > too much when the LR increases above this value. It could be that the
 > quality of the US-images is too bad, and we are not sure how Mutual
 > Information handles the registration of an image (CT with uniform
 > regions) to its edge map (a blurred and noisy edge is often a good
 > approximation of what can be seen in the US-images).
 >
 > Also, from what you said in an earlier mail, (all) the pixels in the
 > fixed image (Pf) were mapped to the moving image (Pm) using the
 > current registration transform T, and the metric is then computed
 > ( Metric ( all Pf, corresponding Pm = T(all Pf) ) ).
 > This can be seen for example in the implementation of the
 > MeanSquaresI2IMetric, but reading the .h I was a bit confused:
 >
 > Pixel values are taken from the Moving image. Their positions are mapped
 > to the Fixed image and result in general in non-grid position on it.
 > Values at these non-grid position of the Fixed image are interpolated
 > using a user-selected interpolator
 >
 >
 > 1) Setting an arbitrary initial transform between the moving and the
 > fixed image in the MultiResMIRegistration example: Can the
 > "registration" method SetInitialTransformParameters
 > be used to set an initial transform (translation and rotation) between
 > the corner-origins of the two volumes, when the preprocessor in the
 > example first makes the center the origin in the two volumes (in
 > addition to normalizing)? If this is a problem, how is it best solved?
 > Can the pre and post transformation steps be dropped altogether? (I
 > understand that it's more convenient to register (rotate) relative to
 > the center of the volumes when both modalities cover approx. the same
 > region, but is there a more fundamental reason?)
 >
 > We have:
 > rfMf = rfMus = transformation matrix telling where the fixed/US-volume
 > is relative to the patient reference frame. (maps points in the
 > fixed/US-space to the rf-space: Prf = rfMf * Pf)
 > rfMm = rfMct = transformation matrix telling where the moving/CT-volume
 > is relative to the patient reference frame. (maps points in the
 > moving/CT-space to the rf-space: Prf = rfMm * Pm)
 > We can then find:
 > mMf = ctMus = transformation matrix telling where the fixed/US-volume
 > is relative to the moving/CT-volume. (maps points in the fixed/US-space
 > to the moving/CT-space: Pm = mMf * Pf)
 > And we want to initiate the registration with the mMf matrix.
 >
 > Where:
 > Origin is in the corner of the first voxel (in the col, row, slice
 > order; x-axis = direction of the first two voxels = col-direction)
 > Fixed = f = 3D Ultrasound
 > Moving = m = CT
 > Reference Frame = rf = Patient reference frame used by the tracking
 > system.
 >
 >
 > 2) Is there a scientific way to determine the scale applied to the
 > translation parameters when doing a CT to US reg.? And what is the
 > purpose of this parameter?
 >
 >
 > ========================================================
 > Jon Harald Kaspersen              Tel:   +47 73 59 75 89
 > Ph.D. Mechanical Engineering      Mob:   +47 93 03 65 90
 > Senior Scientist                  Pager: +47 96 84 29 94
 > SINTEF Unimed - Ultralyd          Fax:   +47 73 59 78 73
 > N-7465 Trondheim
 > NORWAY   e-mail: Jon.H.Kaspersen@unimed.sintef.no
 > WEB: http://www.us.unimed.sintef.no/
 > ========================================================