[Insight-users] Registration - Mutual Information + Affine - Fine tuning of parameters for optimization

Sharath Venkatesha sharath20284 at yahoo.com
Thu Jul 2 15:01:43 EDT 2009


Hi,

Thanks for all the help and information provided in the earlier mails.

I have ensured that I am using correct initialization (translation only) and correct values for the optimizer scales.

*** Is it sufficient if I provide initialization parameters for translation (approximate values) only? It is difficult for me to estimate the scale and rotation parameters.

I am using MutualInformationHistogramImageToImageMetric +
MultiResolutionImageRegistrationMethod + AffineTransform +
RegularStepGradientDescentOptimizer/GradientDescentOptimizer

I am having problems tuning the parameters (MaximumStepLength and MinimumStepLength for RegularStepGradientDescentOptimizer, and LearningRate for GradientDescentOptimizer). My results start diverging from the correct solution within the first few iterations.

I have looked into
(1) Plotting of joint histograms (section 8.5.3 of the ITK Software Guide)
(2) vtkRegistrationMonitor


I am having trouble interpreting the results of the joint histograms. I understand that with correct registration parameters, I should get a histogram whose highest density lies along the diagonal. I have the outputs of the joint histograms after every iteration at every level, and I find it hard to follow the changes.

*** Is there a defined procedure for interpreting these results?

Can you please point me to where I can get more information on (1) and (2)?

Thanks,
Sharath Venkatesha


------------------------------
Luis wrote:


Hi Sharath,

0) Whenever you get the exception saying that too many points
    mapped outside of the moving image, it means that the
    current Transform is such that when mapping the moving image
    into the Fixed image coordinate system the overlap between
    the two images is so small that it is unlikely that the
    registration will recover in further iterations.

    This is typically due to:


        A) Poor initialization of the Transform

        B) Poor selection of Scaling parameters
           (the array that normalizes the dynamic
            range of the different Transform parameters,
            e.g. radians versus millimeters)

        C) Optimizers that are set to perform jumps
           that are too large, and bring the Transform
           out of the range of the image.

1) You want to check these potential suspects in order.

    That is.

    First, verify that the initial transform
    is reasonably placing the Moving image on top of the
    Fixed image. You can do this by using the Resample
    image filter, passing the moving image as input,
    using the initial transform as Transform, and using
    the Fixed image as reference. Then compare the
    fixed image to the resampled moving image.
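
    A minimal sketch of that check (a sketch only: it assumes your
    FixedImageType/MovingImageType typedefs, the two image objects,
    and an "initialTransform" that has already been set up):

       #include "itkResampleImageFilter.h"

       typedef itk::ResampleImageFilter< MovingImageType,
                                         FixedImageType > ResampleFilterType;
       ResampleFilterType::Pointer resampler = ResampleFilterType::New();
       resampler->SetInput( movingImage );
       resampler->SetTransform( initialTransform );
       // Use the fixed image as the reference grid.
       resampler->SetSize( fixedImage->GetLargestPossibleRegion().GetSize() );
       resampler->SetOutputOrigin( fixedImage->GetOrigin() );
       resampler->SetOutputSpacing( fixedImage->GetSpacing() );
       resampler->SetOutputDirection( fixedImage->GetDirection() );
       resampler->SetDefaultPixelValue( 0 );
       resampler->Update();
       // Write out or display resampler->GetOutput() and compare it
       // against the fixed image.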

    If the initial resampled image looks OK,
    then you want to check the values of the optimizer Scales array.
    They should be such that when you look at the Transform
    parameters at every iteration (using an Observer), the
    values change from iteration to iteration according
    to the expected dynamic range.

    For example, transform parameters that correspond to rotations
    should change by increments smaller than 0.01 (since they are
    measured in radians), while transform parameters that correspond
    to translations should change in increments of about 1 ~ 10
    (since they are measured in millimeters).

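    A minimal sketch of such a Scales array and of an iteration Observer
    (a sketch only: it assumes a 2D AffineTransform, whose 6 parameters
    are 4 matrix coefficients followed by 2 translations, the
    RegularStepGradientDescentOptimizer, and that "optimizer",
    "transform" and "OptimizerType" come from your registration setup;
    the 1/1000 factor is just a starting point to tune):

       #include <iostream>
       #include "itkCommand.h"
       #include "itkRegularStepGradientDescentOptimizer.h"

       // Observer that prints the Transform parameters at every iteration.
       class CommandIterationUpdate : public itk::Command
       {
       public:
         typedef CommandIterationUpdate   Self;
         typedef itk::Command             Superclass;
         typedef itk::SmartPointer<Self>  Pointer;
         itkNewMacro( Self );
         typedef itk::RegularStepGradientDescentOptimizer OptimizerType;
         void Execute( itk::Object *caller, const itk::EventObject & event )
           { Execute( (const itk::Object *)caller, event ); }
         void Execute( const itk::Object *object, const itk::EventObject & event )
           {
           const OptimizerType * opt =
             dynamic_cast< const OptimizerType * >( object );
           if( !itk::IterationEvent().CheckEvent( &event ) ) { return; }
           std::cout << opt->GetCurrentIteration() << " : "
                     << opt->GetCurrentPosition() << std::endl;
           }
       protected:
         CommandIterationUpdate() {}
       };

       // ... where the optimizer is configured: down-weight the translation
       // parameters so that one optimizer step changes the matrix terms in
       // small increments and the translations by a few millimeters.
       OptimizerType::ScalesType optimizerScales( transform->GetNumberOfParameters() );
       optimizerScales.Fill( 1.0 );
       const double translationScale = 1.0 / 1000.0;
       optimizerScales[4] = translationScale;
       optimizerScales[5] = translationScale;
       optimizer->SetScales( optimizerScales );

       CommandIterationUpdate::Pointer observer = CommandIterationUpdate::New();
       optimizer->AddObserver( itk::IterationEvent(), observer );
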

    Finally, you should identify the parameter of the optimizer
    that is responsible for selecting the size of the jumps that
    are performed in the parametric space (e.g. as you have
    done for the learning rate in the GradientDescent optimizer).

    You want to reduce the size of that jump, until you get the
    Transform to have small increments at every iteration.
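
    A minimal sketch of those knobs (a sketch only; the values below
    are just starting points to adjust for your own images):

       // RegularStepGradientDescentOptimizer: the maximum step length caps
       // the jump taken at every iteration, and the minimum step length
       // acts as the stopping criterion.
       optimizer->SetMaximumStepLength( 0.1 );
       optimizer->SetMinimumStepLength( 0.001 );
       optimizer->SetNumberOfIterations( 200 );

       // GradientDescentOptimizer: reduce the learning rate instead.
       // optimizer->SetLearningRate( 0.001 );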


2) These parameters must be set up for every "family" of
    registration problems.  That is, the parameters that may be
    good for registering T1 to T2 MRI brain images may not be
    appropriate for registering one confocal microscopy image to
    another.

    However, once you fine tune the parameters for a pair of
    T1-T2 images, it is likely that the same set of parameters
    will work for another pair of the same type.

    There is a need for a "smart layer" above the registration
    framework, that could take away from the user the burden
    of finding proper parameter settings....

    any ideas are welcome  :-)


3) Visual monitoring of the registration process
    will help to make the fine-tuning process less
    frustrating.

    You may want to give the VTK helper
    classes a try:

       InsightApplications/Auxiliary/vtk/
                      vtkRegistrationMonitor.h
                      vtkRegistrationMonitor.cxx

     They will display renderings of iso-surfaces
     at every iteration of the registration process.

     This is usually very informative...



   Regards,


      Luis


--------------
sharath v wrote:
> Hi,
> 
> Thanks for the help. Changing the learning parameter worked...
> 
> For Viola MI + Affine, with a learning rate of 0.01 and 100
> iterations, I get good results on the BrainProtonDensitySliceR10X13Y17
> image, whereas the BrainProtonDensitySliceR10X13Y17S12 image requires
> at least 200 iterations to give correct results.
> 
> I want to use an optimizer which has a stopping criterion (i.e. not a
> fixed number of iterations), like Amoeba/Evolutionary/RegularStepGradientDescent.
> 
> I tried using the Amoeba optimizer with the following
> 
>     OptimizerType::ParametersType simplexDelta( transform->GetNumberOfParameters() );
>     simplexDelta.Fill( 5.0 );
>     optimizer->AutomaticInitialSimplexOff();
>     optimizer->SetInitialSimplexDelta( simplexDelta );
>     optimizer->SetParametersConvergenceTolerance( 0.01 ); // quarter pixel
>     optimizer->SetFunctionConvergenceTolerance(0.001); // 0.1%
>     optimizer->SetMaximumNumberOfIterations( 200 );
> 
> but I get an exception that a sampled point mapped to outside of the
> moving image, after 6-7 iterations. A similar issue happens for the
> OnePlusOne optimizer with
> 
>     typedef itk::Statistics::NormalVariateGenerator  GeneratorType;
>     GeneratorType::Pointer generator = GeneratorType::New();
>     generator->Initialize(12345);
>     optimizer->SetNormalVariateGenerator( generator );
>     optimizer->Initialize( 10 );
>     optimizer->SetEpsilon( 1.0 );
>     optimizer->SetMaximumIteration( 4000 );
> 
> Can you please let me know what values need to be used? 
> 
> And is there a way to make the registration process partially independent of these parameters?
> 
> Thanks,
> Sharath



      


