[Insight-users] About registration and the SoftwareGuide
BATY Xavier
xavier.baty at univ-angers.fr
Fri Mar 16 02:05:01 EST 2007
Hi all,
As I'm working on a registration process, I have the following suggestion.
The two messages (cited below) that Luis sent recently, about registration
parameter tuning and the "Too many samples map outside..." exception,
should be added to the registration chapter of the Software Guide.
They contain answers to many of the questions asked by people who are
starting to work on registration.
XB
> ------------------------------
>
> Message: 3
> Date: Thu, 15 Mar 2007 21:26:39 -0400
> From: "Luis Ibanez" <luis.ibanez at kitware.com>
> Subject: Re: [Insight-users] Too many samples map outside moving image
> buffer
> To: Goo <gtshowtime at gmail.com>
> Cc: insight-users at itk.org
> Message-ID:
> <f7abd23c0703151826j6b7c23cfwb6ffe0c9583039dd at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Hi Goo,
>
> This is a common error message in image registration.
>
> It means that at the current iteration of the optimization,
> the two images are so far out of registration that their spatial
> overlap is not large enough to bring them back into
> registration.
>
> The common causes of this problem are:
>
> 1) Poor initialization: You must initialize the transform
> properly. Please read the ITK Software Guide
> http://www.itk.org/ItkSoftwareGuide.pdf for a description
> of the use of the CenteredTransformInitializer class.
>
>
> 2) Optimizer steps too large. If your optimizer takes steps
> that are too large, it risks becoming unstable and sending
> the images too far apart.
>
> You may want to start the optimizer with a maximum
> step length of 1.0, and only increase it once you have
> managed to fine tune all other registration parameters.
>
> Increasing the step length makes your program faster,
> but it also makes it more unstable.
>
>
> 3) Poor setup of the transform parameter scaling.
>
> This is extremely critical in registration. You must make
> sure that you balance the relative difference of scale between
> the rotation parameters and the translation parameters.
>
> In typical medical datasets such as CT and MR, translations
> are measured in millimeters, and therefore are in the range
> of -100:100, while rotations are measured in radians, and
> therefore they tend to be in the range of -1:1.
>
>
> A rotation of 3 radians is catastrophic, while a translation
> of 3 millimeters is rather harmless. It is this difference in
> scale that must be accounted for.
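
A minimal, untested C++ sketch of points (1) and (2) above (the function
name, the 2D float image type, and the variable names are only
illustrative; the fixed and moving images are assumed to have been read
elsewhere):

  #include "itkImage.h"
  #include "itkAffineTransform.h"
  #include "itkCenteredTransformInitializer.h"
  #include "itkRegularStepGradientDescentOptimizer.h"

  typedef itk::Image< float, 2 >                     ImageType;
  typedef itk::AffineTransform< double, 2 >          TransformType;
  typedef itk::RegularStepGradientDescentOptimizer   OptimizerType;
  typedef itk::CenteredTransformInitializer<
            TransformType, ImageType, ImageType >    InitializerType;

  void InitializeTransformAndSteps( ImageType::Pointer     fixedImage,
                                    ImageType::Pointer     movingImage,
                                    TransformType::Pointer transform,
                                    OptimizerType::Pointer optimizer )
  {
    // (1) Center the transform: MomentsOn() aligns the centers of mass,
    //     GeometryOn() would align the geometrical centers instead.
    InitializerType::Pointer initializer = InitializerType::New();
    initializer->SetTransform( transform );
    initializer->SetFixedImage( fixedImage );
    initializer->SetMovingImage( movingImage );
    initializer->MomentsOn();
    initializer->InitializeTransform();

    // (2) Start with a maximum step length of 1.0 and only increase it
    //     once the other registration parameters have been tuned.
    optimizer->SetMaximumStepLength( 1.0 );
    optimizer->SetMinimumStepLength( 0.001 );
  }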
>
>
>
>
> Regards,
>
>
> Luis
>
>
>
>
> --------------------------------------------------------------------
> On 3/13/07, Goo <gtshowtime at gmail.com> wrote:
>
>> Hi All :
>>
>> I got an error message when using Mattes MI and an Affine Transform.
>> The message says "Too many samples map outside moving image
>> buffer".
>>
>> I guess this may be caused by TransformPoint in
>> itkMattesMutualInformationImageToImageMetric.txx
>>
>> this->TransformPoint( nFixedImageSamples, parameters, mappedPoint,
>> sampleOk, movingImageValue );
>>
>> but I don't know how to solve this problem.
>>
>> My test data are two images of different modalities, each of size 512*512.
>>
>> The Metric setting :
>> unsigned int numberOfBins = 50;
>> unsigned int numberOfSamples = 10000;
>>
>> The Transform setting :
>> initialParameters[0] = 0.0; // Initial offset in mm along X
>> initialParameters[1] = 0.0; // Initial offset in mm along Y
>> registration->SetInitialTransformParameters( initialParameters );
>>
>> The optimizer setting :
>> optimizer->MinimizeOn();
>> optimizer->SetMaximumStepLength( 5.00 );
>> optimizer->SetMinimumStepLength( 0.001 );
>> optimizer->SetNumberOfIterations( 200 );
>> optimizer->SetRelaxationFactor( 0.8 );
>>
>> Can anybody help me to solve this problem?
>>
>> Regards.
>>
>
>
> ------------------------------
>
> Message: 4
> Date: Thu, 15 Mar 2007 21:05:18 -0400
> From: "Luis Ibanez" <luis.ibanez at kitware.com>
> Subject: [Insight-users] Re: How to choose user-defined parameter for
> combining Mattes MI metric and Affine transform?
> To: Goo <gtshowtime at gmail.com>
> Cc: insight-users at itk.org
> Message-ID:
> <f7abd23c0703151805p62fd964dr2db34e0a09bdf394 at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Hi Goo
>
> Here is some advice on how to fine tune the parameters
> of the registration process.
>
>
> 1) Set Maximum step length to 0.1 and do not change it
> until all other parameters are stable.
>
> 2) Set Minimum step length to 0.001 and do not change it.
>
> You could interpret these two parameters as if their
> units were radians. So, 0.1 radian = 5.7 degrees.
>
>
> 3) Number of histogram bins:
>
> First plot the histogram of your image using the
> example program in
>
> Insight/Examples/Statistics/ImageHistogram2.cxx
>
> In that program, first use a large number of bins
> (for example 2000) and identify the different
> populations of intensity levels and the anatomical
> structures to which they correspond.
>
> Once you have identified the anatomical structures in the
> histogram, rerun the same program with fewer and
> fewer bins, until you reach the minimum number of
> bins for which all the tissues that are important
> for your application are still distinctly differentiated in the
> histogram. At that point, take that number of bins and
> use it for your Mutual Information metric.
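
A minimal, untested sketch of this bin-counting exercise, in the spirit of
ImageHistogram2.cxx (the 2D float image type and the command-line handling
are only illustrative):

  #include "itkImage.h"
  #include "itkImageFileReader.h"
  #include "itkScalarImageToHistogramGenerator.h"
  #include <cstdlib>
  #include <iostream>

  // Print "bin frequency" pairs so the histogram can be plotted (e.g. with
  // gnuplot), first with many bins, then with fewer and fewer.
  int main( int argc, char * argv[] )
  {
    if( argc < 3 )
      {
      std::cerr << "Usage: " << argv[0]
                << " inputImage numberOfBins" << std::endl;
      return 1;
      }

    typedef itk::Image< float, 2 >            ImageType;
    typedef itk::ImageFileReader< ImageType > ReaderType;
    typedef itk::Statistics::ScalarImageToHistogramGenerator< ImageType >
                                              GeneratorType;

    ReaderType::Pointer reader = ReaderType::New();
    reader->SetFileName( argv[1] );
    reader->Update();

    GeneratorType::Pointer generator = GeneratorType::New();
    generator->SetInput( reader->GetOutput() );
    generator->SetNumberOfBins( atoi( argv[2] ) ); // start large, e.g. 2000
    generator->SetMarginalScale( 10.0 );
    generator->Compute();

    typedef GeneratorType::HistogramType HistogramType;
    const HistogramType * histogram = generator->GetOutput();

    for( unsigned int bin = 0; bin < histogram->Size(); bin++ )
      {
      std::cout << bin << " " << histogram->GetFrequency( bin, 0 ) << std::endl;
      }

    return 0;
  }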
>
>
> 4) Number of Samples:
>
> The trade-off with the number of samples is the following:
>
> a) computation time of registration is linearly proportional
> to the number of samples
> b) the samples must be enough to significantly populate
> the joint histogram.
> c) Once the histogram is populated, there is not much
> use in adding more samples.
>
> Therefore do the following:
>
> Plot the joint histogram of both images, using the number
> of bins that you selected in item (3). You can do this by
> modifying the code of the example:
>
> Insight/Examples/Statistics/ImageMutualInformation1.cxx
>
> You have to change the code to print out the values
> of the bins. Then use a plotting program such as gnuplot,
> or Matlab, or even Excel and look at the distribution.
>
> The number of samples must be large enough
> to produce the same "appearance" of the joint histogram.
>
> As an arbitrary rule of thumb you may want to start using
> a high number of samples (80% ~ 100%). And do not
> change it until you have mastered the other parameters
> of the registration. Once you get your registration to converge
> you can revisit the number of samples and reduce it in order
> to make the registration run faster. You can simply reduce it
> until you find that the registration becomes unstable. That's
> your critical bound for the minimum number of samples.
> Take that number and multiply it by the magic number 1.5
> to send it back into a stable region, or, if your application is
> really critical, use an even higher magic factor of 2.0.
>
> This is just engineering: you figure out what is the minimal
> size of a piece of steel that will support a bridge, and then
> you enlarge it to keep it away from the critical value.
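
A minimal, untested sketch of this starting point (the function name, the
2D float image type and the 0.8 fraction are only illustrative; the fixed
image is assumed to have been read elsewhere):

  #include "itkImage.h"
  #include "itkMattesMutualInformationImageToImageMetric.h"

  typedef itk::Image< float, 2 >                          ImageType;
  typedef itk::MattesMutualInformationImageToImageMetric<
                            ImageType, ImageType >        MetricType;

  // Start with ~80% of the pixels as samples and only lower the number
  // once the registration converges reliably.
  void ConfigureMetric( ImageType::Pointer  fixedImage,
                        MetricType::Pointer metric,
                        unsigned int        numberOfBins )
  {
    const unsigned long numberOfPixels =
      fixedImage->GetBufferedRegion().GetNumberOfPixels();

    metric->SetNumberOfHistogramBins( numberOfBins );
    metric->SetNumberOfSpatialSamples(
      static_cast< unsigned long >( 0.8 * numberOfPixels ) );
  }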
>
>
>
> 5) The MOST critical values of the registration process are the
> scaling parameters that define the proportions between
> the parameters of the transform. In your case, for an Affine
> Transform in 2D, you have 6 parameters. The first four are
> the matrix entries, and the last two are the translation.
>
> The rotation matrix values are expressed in radians and are
> typically in the range [ -1, 1 ], while the translation values are
> in the range of millimeters (your image size units).
>
> You want to start by setting the scaling of the matrix
> parameters to 1.0, and the scaling of the Translation
> parameters to the holy esoteric values:
>
> 1.0 / ( 10.0 * pixelspacing[0] * imagesize[0] )
> 1.0 / ( 10.0 * pixelspacing[1] * imagesize[1] )
>
> This is telling the optimizer that you consider that rotating
> the image by 57 degrees is as "significant" as translating
> the image by half its physical extent.
>
> Note that the esoteric value includes the arbitrary number
> 10.0 in the denominator, for no other reason than that we have
> been lucky when using that factor. This of course is just a
> superstition, so you should feel free to experiment with
> different values of this number.
>
> Just keep in mind that what the optimizer will do is to
> "jump" in a parametric space of 6 dimensions, and that the
> component of the jump along every dimension will be proportional
> to (1 / scaling factor) * OptimizerStepLength. Since you set
> the optimizer step length to 0.1, the optimizer will start
> by exploring the rotations in jumps of about 5 degrees, which
> is a conservative rotation for most medical applications.
>
> If you have reasons to think that your rotations are larger or
> smaller, then you should modify the scaling factor of the matrix
> parameters accordingly.
>
> In the same way, if you think that 1/10 of the image size is too
> large as the first step for exploring the translations, then you
> should modify the scaling of translation parameters accordingly.
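
Putting this concretely, a minimal, untested sketch for the 2D affine case
(the function name and the 2D float image type are only illustrative; the
fixed image is assumed to have been read elsewhere):

  #include "itkImage.h"
  #include "itkRegularStepGradientDescentOptimizer.h"

  typedef itk::Image< float, 2 >                    ImageType;
  typedef itk::RegularStepGradientDescentOptimizer  OptimizerType;

  // Scales for a 2D AffineTransform: 6 parameters, where the first 4 are
  // the matrix entries (treated as being on the order of radians) and the
  // last 2 are the translations in millimeters.
  void SetAffineScales( ImageType::Pointer     fixedImage,
                        OptimizerType::Pointer optimizer )
  {
    ImageType::SpacingType spacing = fixedImage->GetSpacing();
    ImageType::SizeType    size    =
      fixedImage->GetLargestPossibleRegion().GetSize();

    OptimizerType::ScalesType scales( 6 );
    scales[0] = scales[1] = scales[2] = scales[3] = 1.0;
    scales[4] = 1.0 / ( 10.0 * spacing[0] * size[0] );  // translation X
    scales[5] = 1.0 / ( 10.0 * spacing[1] * size[1] );  // translation Y
    optimizer->SetScales( scales );

    // A maximum step length of 0.1 then explores rotations in jumps of
    // roughly 0.1 radian (about 5.7 degrees).
    optimizer->SetMaximumStepLength( 0.1 );
    optimizer->SetMinimumStepLength( 0.001 );
  }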
>
>
>
> In order to drive all of this you need to analyze the feedback that
> the observer is providing you. For example, plot the metric values,
> and plot the translation coordinates, so that you can get a feeling
> for how the registration is behaving.
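
A minimal, untested sketch of such an observer, which writes one line per
iteration so the metric value and the translations can be plotted with
gnuplot (the class name, the output file name registration_trace.txt, and
the assumption that parameters 4 and 5 hold the 2D affine translations are
only illustrative):

  #include "itkCommand.h"
  #include "itkRegularStepGradientDescentOptimizer.h"
  #include <fstream>

  class PlotObserver : public itk::Command
  {
  public:
    typedef PlotObserver            Self;
    typedef itk::Command            Superclass;
    typedef itk::SmartPointer<Self> Pointer;
    itkNewMacro( Self );

    typedef itk::RegularStepGradientDescentOptimizer OptimizerType;
    typedef const OptimizerType *                    OptimizerPointer;

    void Execute( itk::Object * caller, const itk::EventObject & event )
      {
      Execute( (const itk::Object *) caller, event );
      }

    void Execute( const itk::Object * object, const itk::EventObject & event )
      {
      OptimizerPointer optimizer = dynamic_cast< OptimizerPointer >( object );
      if( !optimizer || !itk::IterationEvent().CheckEvent( &event ) )
        {
        return;
        }
      // One line per iteration: "iteration  metricValue  tX  tY"
      m_Log << optimizer->GetCurrentIteration() << " "
            << optimizer->GetValue() << " "
            << optimizer->GetCurrentPosition()[4] << " "
            << optimizer->GetCurrentPosition()[5] << std::endl;
      }

  protected:
    PlotObserver() : m_Log( "registration_trace.txt" ) {}

  private:
    std::ofstream m_Log;
  };

It would be attached with
optimizer->AddObserver( itk::IterationEvent(), PlotObserver::New() ).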
>
>
> Note also that image registration is not a science; it is a pure
> engineering practice, and therefore there are no correct answers,
> nor "truths" to be found. It is all about how much quality you want,
> and how much computation time and development time you are
> willing to pay for that quality. The "satisfying" answer for your
> specific application must be found by exploring the trade-offs
> between the different parameters that regulate the image
> registration process.
>
> If you are proficient in VTK you may want to consider attaching
> some visualization to the Event observer, so that you can have
> visual feedback on the progress of the registration. This is a
> lot more productive than trying to interpret the values printed
> out on the console by the observer.
>
>
> Regards,
>
>
> Luis
>
>
>
> =================================
> On 3/14/07, Goo <gtshowtime at gmail.com> wrote:
>
>> > Hi, All :
>> >
>> > I want to combine an Affine transform and the Mattes MI metric in my
>> > registration process.
>> > The process runs fine, but the registration result is incorrect.
>> > I think this is caused by some parameter settings that are not suitable
>> > for my application,
>> > but determining applicable parameters has been very difficult for
>> > me.
>> > So, I hope some experienced friends can give me some indication of
>> > how to set:
>> > OptimizerScales (especially the translation scale) <---How to set ???
>> > MaximumStepLength <---How to set ???
>> > MinimumStepLength <---How to set ???
>> > NumberOfHistogramBins (now I typically set 20~50)
>> > NumberOfSpatialSamples (now I set 20% of the image pixels)
>> >
>> >
>> > My code is posted below:
>> > Regards.
>> >
>> > // Headers required by this example:
>> > #include "itkImageRegistrationMethod.h"
>> > #include "itkAffineTransform.h"
>> > #include "itkMattesMutualInformationImageToImageMetric.h"
>> > #include "itkLinearInterpolateImageFunction.h"
>> > #include "itkRegularStepGradientDescentOptimizer.h"
>> > #include "itkCenteredTransformInitializer.h"
>> > #include "itkImage.h"
>> > #include "itkImageFileReader.h"
>> > #include "itkCommand.h"
>> > #include "vnl/vnl_matrix.h"
>> > #include "vnl/vnl_transpose.h"
>> > #include "vnl/algo/vnl_svd.h"
>> > #include <cmath>
>> > #include <iostream>
>> >
>> > class CommandIterationUpdate : public itk::Command
>> > {
>> > public:
>> > typedef CommandIterationUpdate Self;
>> > typedef itk::Command Superclass;
>> > typedef itk::SmartPointer<Self> Pointer;
>> > itkNewMacro( Self );
>> > protected:
>> > CommandIterationUpdate() {};
>> > public:
>> > typedef itk::RegularStepGradientDescentOptimizer OptimizerType;
>> > typedef const OptimizerType * OptimizerPointer;
>> >
>> > void Execute(itk::Object *caller, const itk::EventObject & event)
>> > {
>> > Execute( (const itk::Object *)caller, event);
>> > }
>> >
>> > void Execute(const itk::Object * object, const itk::EventObject & event)
>> > {
>> > OptimizerPointer optimizer =
>> > dynamic_cast< OptimizerPointer >( object );
>> > if( ! itk::IterationEvent().CheckEvent( &event ) )
>> > {
>> > return;
>> > }
>> > std::cout << optimizer->GetCurrentIteration() << " ";
>> > std::cout << optimizer->GetValue() << " ";
>> > std::cout << optimizer->GetCurrentPosition();
>> >
>> > // Print the angle for the trace plot
>> > vnl_matrix<double> p(2, 2);
>> > p[0][0] = (double) optimizer->GetCurrentPosition()[0];
>> > p[0][1] = (double) optimizer->GetCurrentPosition()[1];
>> > p[1][0] = (double) optimizer->GetCurrentPosition()[2];
>> > p[1][1] = (double) optimizer->GetCurrentPosition()[3];
>> > vnl_svd<double> svd(p);
>> > vnl_matrix<double> r(2, 2);
>> > r = svd.U() * vnl_transpose(svd.V());
>> > double angle = asin(r[1][0]);
>> > std::cout << " AffineAngle: " << angle * 45.0 / atan(1.0) <<
>> > std::endl;
>> > }
>> > };
>> >
>> > int main( int argc, char *argv[] )
>> > {
>> > if( argc < 4 )
>> > {
>> > std::cerr << "Missing Parameters " << std::endl;
>> > std::cerr << "Usage: " << argv[0];
>> > std::cerr << " fixedImageFile movingImageFile outputImagefile" <<
>> > std::endl;
>> > return 1;
>> > }
>> >
>> > const unsigned int Dimension = 2;
>> > typedef float PixelType;
>> >
>> > typedef itk::Image< PixelType, Dimension > FixedImageType;
>> > typedef itk::Image< PixelType, Dimension > MovingImageType;
>> >
>> > typedef itk::AffineTransform<
>> > double,
>> > Dimension > TransformType;
>> >
>> > typedef itk::RegularStepGradientDescentOptimizer OptimizerType;
>> >
>> > typedef itk::MattesMutualInformationImageToImageMetric<
>> > FixedImageType,
>> > MovingImageType > MetricType;
>> >
>> > typedef itk::LinearInterpolateImageFunction<
>> > MovingImageType,
>> > double > InterpolatorType;
>> > typedef itk::ImageRegistrationMethod<
>> > FixedImageType,
>> > MovingImageType > RegistrationType;
>> >
>> > MetricType::Pointer metric = MetricType::New();
>> > OptimizerType::Pointer optimizer = OptimizerType::New();
>> > InterpolatorType::Pointer interpolator = InterpolatorType::New();
>> > RegistrationType::Pointer registration = RegistrationType::New();
>> >
>> > registration->SetMetric( metric );
>> > registration->SetOptimizer( optimizer );
>> > registration->SetInterpolator( interpolator );
>> >
>> > TransformType::Pointer transform = TransformType::New();
>> > registration->SetTransform( transform );
>> >
>> > typedef itk::ImageFileReader< FixedImageType > FixedImageReaderType;
>> > typedef itk::ImageFileReader< MovingImageType > MovingImageReaderType;
>> > FixedImageReaderType::Pointer fixedImageReader =
>> > FixedImageReaderType::New();
>> > MovingImageReaderType::Pointer movingImageReader =
>> > MovingImageReaderType::New();
>> > fixedImageReader->SetFileName( argv[1] );
>> > movingImageReader->SetFileName( argv[2] );
>> >
>> >
>> > registration->SetFixedImage( fixedImageReader->GetOutput() );
>> > registration->SetMovingImage( movingImageReader->GetOutput() );
>> > fixedImageReader->Update();
>> >
>> > registration->SetFixedImageRegion(
>> > fixedImageReader->GetOutput()->GetBufferedRegion() );
>> >
>> > typedef itk::CenteredTransformInitializer<
>> > TransformType,
>> > FixedImageType,
>> > MovingImageType >
>> > TransformInitializerType;
>> > TransformInitializerType::Pointer initializer =
>> > TransformInitializerType::New();
>> > initializer->SetTransform( transform );
>> > initializer->SetFixedImage( fixedImageReader->GetOutput() );
>> > initializer->SetMovingImage( movingImageReader->GetOutput() );
>> > initializer->MomentsOn();
>> > initializer->InitializeTransform();
>> >
>> > registration->SetInitialTransformParameters(
>> > transform->GetParameters() );
>> >
>> > typedef OptimizerType::ScalesType OptimizerScalesType;
>> > OptimizerScalesType optimizerScales( transform->GetNumberOfParameters() );
>> >
>> > //-----------------------------------------------------------------------------
>> > // How to determine these ?
>> > metric->SetNumberOfHistogramBins( 50 );
>> > metric->SetNumberOfSpatialSamples( 10000 );
>> > metric->ReinitializeSeed( 76926294 );
>> >
>> > double translationScale = 1.0 / 1000.0;
>> > optimizerScales[0] = 1.0;
>> > optimizerScales[1] = 1.0;
>> > optimizerScales[2] = 1.0;
>> > optimizerScales[3] = 1.0;
>> > optimizerScales[4] = translationScale;
>> > optimizerScales[5] = translationScale;
>> > optimizer->SetScales( optimizerScales );
>> >
>> > double steplength = 1; //0.1
>> > unsigned int maxNumberOfIterations = 200;
>> > optimizer->SetMaximumStepLength( steplength );
>> > optimizer->SetMinimumStepLength( 0.0001 ); //0.0001
>> > optimizer->SetNumberOfIterations( maxNumberOfIterations );
>> > //-----------------------------------------------------------------------------
>> >
>> >
>> > optimizer->MinimizeOn();
>> >
>> >
>> > CommandIterationUpdate::Pointer observer = CommandIterationUpdate::New();
>> > optimizer->AddObserver( itk::IterationEvent(), observer );
>> >
>> > try
>> > {
>> > registration->StartRegistration();
>> > }
>> > catch( itk::ExceptionObject & err )
>> > {
>> > std::cerr << "ExceptionObject caught !" << std::endl;
>> > std::cerr << err << std::endl;
>> > return -1;
>> > }
>> >
>> >
>> >
>> > //-----------------------------------------------------------
>> > OptimizerType::ParametersType finalParameters =
>> > registration->GetLastTransformParameters();
>> >
>> > const double finalRotationCenterX = transform->GetCenter()[0];
>> > const double finalRotationCenterY = transform->GetCenter()[1];
>> > const double finalTranslationX = finalParameters[4];
>> > const double finalTranslationY = finalParameters[5];
>> >
>> > const unsigned int numberOfIterations = optimizer->GetCurrentIteration();
>> >
>> > const double bestValue = optimizer->GetValue();
>> > std::cout << "Result = " << std::endl;
>> > std::cout << " Center X = " << finalRotationCenterX << std::endl;
>> > std::cout << " Center Y = " << finalRotationCenterY << std::endl;
>> > std::cout << " Translation X = " << finalTranslationX << std::endl;
>> > std::cout << " Translation Y = " << finalTranslationY << std::endl;
>> > std::cout << " Iterations = " << numberOfIterations << std::endl;
>> > std::cout << " Metric value = " << bestValue << std::endl;
>> >
>> > return 0;
>> > }
>>
> ------------------------------