[Insight-users] itkSYN itkBsplineSYN Image Registration- Optimizer Documentation??
Nicholas Tustison
ntustison at gmail.com
Fri Jan 24 11:10:24 EST 2014
Hi Emma,
This is a good example of a difficult registration problem and
illustrates why we often refer people to ANTs instead of the
simpler registration tests within ITK which demonstrate usage.
There are some preprocessing issues you might want to
consider such as bias correction and histogram matching.
More directly related to the image registration part, though,
is the relatively extreme anisotropy and the deformation
required to register lungs, particularly at the base, during
different inspiration phases.
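For the bias correction and histogram matching, something along these
lines is what I would try first (an untested sketch using the N4 and
histogram matching filters in ITK with mostly default parameters; the
helper name is just a placeholder, and in practice you would tune the
N4 convergence settings and probably run it on a downsampled copy of
the image):

#include "itkImage.h"
#include "itkN4BiasFieldCorrectionImageFilter.h"
#include "itkHistogramMatchingImageFilter.h"

typedef itk::Image<float, 3>          ImageType;
typedef itk::Image<unsigned char, 3>  MaskImageType;

// Bias-correct 'input' and match its histogram to 'reference'.
ImageType::Pointer PreprocessForRegistration( ImageType::Pointer input,
                                              ImageType::Pointer reference )
{
  // Mask covering the whole image (N4's default mask label is 1).
  MaskImageType::Pointer mask = MaskImageType::New();
  mask->CopyInformation( input );
  mask->SetRegions( input->GetBufferedRegion() );
  mask->Allocate();
  mask->FillBuffer( 1 );

  typedef itk::N4BiasFieldCorrectionImageFilter<ImageType, MaskImageType, ImageType> N4FilterType;
  N4FilterType::Pointer n4 = N4FilterType::New();
  n4->SetInput( input );
  n4->SetMaskImage( mask );
  n4->Update();

  typedef itk::HistogramMatchingImageFilter<ImageType, ImageType> MatcherType;
  MatcherType::Pointer matcher = MatcherType::New();
  matcher->SetInput( n4->GetOutput() );
  matcher->SetReferenceImage( reference );
  matcher->SetNumberOfHistogramLevels( 256 );
  matcher->SetNumberOfMatchPoints( 12 );
  matcher->ThresholdAtMeanIntensityOn();   // ignore background when matching
  matcher->Update();

  ImageType::Pointer output = matcher->GetOutput();
  output->DisconnectPipeline();
  return output;
}

You would preprocess the moving image against the fixed image (or both
against a common reference) before handing them to the registration.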
Obviously, you should check that the affine part completed
in an expected way. If the affine registration step failed,
you can’t expect the deformable part to recover. Assuming
that the affine registration looks reasonable, I would use the
neighborhood cross-correlation, which is our go-to metric
for deformable registration:
http://www.itk.org/Doxygen/html/classitk_1_1ANTSNeighborhoodCorrelationImageToImageMetricv4.html
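Something like the following is how I would set it up (an untested
sketch; the helper name is just a placeholder, and the radius of 4
matches what I describe below):

#include "itkImage.h"
#include "itkANTSNeighborhoodCorrelationImageToImageMetricv4.h"

typedef itk::Image<float, 3> ImageType;
typedef itk::ANTSNeighborhoodCorrelationImageToImageMetricv4<ImageType, ImageType> CorrelationMetricType;

CorrelationMetricType::Pointer MakeCorrelationMetric( unsigned int radiusValue )
{
  CorrelationMetricType::Pointer ccMetric = CorrelationMetricType::New();
  CorrelationMetricType::RadiusType radius;
  radius.Fill( radiusValue );                    // e.g. 4 for the lung case below
  ccMetric->SetRadius( radius );
  ccMetric->SetUseFixedImageGradientFilter( false );
  ccMetric->SetUseMovingImageGradientFilter( false );
  return ccMetric;
}

It drops in wherever you would otherwise assign the Mattes MI metric,
via SetMetric().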
When I ran your example, I used a multi-resolution
deformable approach with 4 levels and the CC metric
with a radius of 4. The radius of 4 is necessary to capture
the basal pulmonary motion. However, with the relatively
large anisotropy that doesn't capture the finer motion at the
highest resolution, so I ran another B-spline SyN image
registration stage using the Demons metric for 200 iterations
at the finest level, which gave a pretty good result.
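That last stage is set up just like the B-spline SyN stage in your
code below, swapping in the Demons metric, a single level, and more
iterations; a rough, untested sketch (the function name is a
placeholder, and the stage still needs the images, the initial
composite transform, and a transform adaptor for its one level,
exactly as in your code):

#include "itkImage.h"
#include "itkDemonsImageToImageMetricv4.h"
#include "itkBSplineSmoothingOnUpdateDisplacementFieldTransform.h"
#include "itkBSplineSyNImageRegistrationMethod.h"

typedef itk::Image<float, 3> ImageType;
typedef itk::BSplineSmoothingOnUpdateDisplacementFieldTransform<double, 3> BSplineFieldTransformType;
typedef itk::BSplineSyNImageRegistrationMethod<ImageType, ImageType, BSplineFieldTransformType> SyNStageType;

SyNStageType::Pointer ConfigureDemonsStage()
{
  typedef itk::DemonsImageToImageMetricv4<ImageType, ImageType> DemonsMetricType;
  DemonsMetricType::Pointer demonsMetric = DemonsMetricType::New();

  SyNStageType::Pointer demonsStage = SyNStageType::New();
  demonsStage->SetMetric( demonsMetric );
  demonsStage->SetNumberOfLevels( 1 );            // finest resolution only

  SyNStageType::NumberOfIterationsArrayType iterations;
  iterations.SetSize( 1 );
  iterations[0] = 200;
  demonsStage->SetNumberOfIterationsPerLevel( iterations );
  demonsStage->SetLearningRate( 0.25 );

  return demonsStage;
}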
So all put together, I needed the following registration stages:
1. Initialize with a translation transform by aligning the centers
of intensity mass in both images (see the sketch just after this list).
2. Follow-up with a rigid registration using the MI metric, sampled
at a rate of 25%, for 4 multi-resolution levels (1000,500,250,100
iterations), learning rate = 0.1.
3. Compose the previous transforms with an affine registration
using the MI metric, sampled at a rate of 25%, for 4 multi-resolution
levels (1000,500,250,100 iterations) and a learning rate = 0.1.
4. Compose the previous transforms with a B-spline SyN registration
using the CC metric with a radius of 4, with 4 multi-resolution
levels (100,70,50,20 iterations) and a learning rate of 0.1.
5. Finally, compose the previous transforms with a B-spline SyN
registration using the Demons metric at the finest resolution level
with 200 iterations and a learning rate of 0.25.
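For stage 1, the center-of-intensity-mass alignment can be computed
with the image moments calculator; a minimal, untested sketch of the
plain translation version (the function name is a placeholder):

#include "itkImage.h"
#include "itkImageMomentsCalculator.h"
#include "itkTranslationTransform.h"

typedef itk::Image<float, 3>                  ImageType;
typedef itk::TranslationTransform<double, 3>  TranslationTransformType;

TranslationTransformType::Pointer InitializeByCenterOfMass( ImageType::Pointer fixedImage,
                                                            ImageType::Pointer movingImage )
{
  typedef itk::ImageMomentsCalculator<ImageType> MomentsCalculatorType;

  MomentsCalculatorType::Pointer fixedMoments = MomentsCalculatorType::New();
  fixedMoments->SetImage( fixedImage );
  fixedMoments->Compute();

  MomentsCalculatorType::Pointer movingMoments = MomentsCalculatorType::New();
  movingMoments->SetImage( movingImage );
  movingMoments->Compute();

  // The transform maps fixed-space points into moving space, so the offset is
  // the moving center of mass minus the fixed center of mass.
  TranslationTransformType::OutputVectorType offset;
  for( unsigned int d = 0; d < 3; ++d )
    {
    offset[d] = movingMoments->GetCenterOfGravity()[d] - fixedMoments->GetCenterOfGravity()[d];
    }

  TranslationTransformType::Pointer translation = TranslationTransformType::New();
  translation->SetOffset( offset );
  return translation;
}

For the 25% sampling in stages 2 and 3, ImageRegistrationMethodv4
provides SetMetricSamplingStrategy() and SetMetricSamplingPercentage().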
The first three transforms are all linear, so I collapsed them into
a single affine transform file. The last two transforms are displacement
field transforms with inverses, which I also composed into a single
invertible displacement field transform.
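Collapsing the linear stages is just matrix/offset composition; below
is a rough, untested sketch assuming the rigid stage is an
Euler3DTransform (the function name is a placeholder, and you should
double-check the composition order against how your composite
transform applies its queue):

#include "itkAffineTransform.h"
#include "itkEuler3DTransform.h"
#include "itkTranslationTransform.h"
#include "itkTransformFileWriter.h"

// Fold the translation, rigid, and affine stages into one AffineTransform and write it out.
void CollapseLinearStages( const itk::TranslationTransform<double, 3> * translationStage,
                           const itk::Euler3DTransform<double> * rigidStage,
                           const itk::AffineTransform<double, 3> * affineStage,
                           const char * outputFileName )
{
  typedef itk::AffineTransform<double, 3> AffineTransformType;

  AffineTransformType::Pointer collapsed = AffineTransformType::New();
  collapsed->SetIdentity();
  collapsed->Translate( translationStage->GetOffset() );

  AffineTransformType::Pointer rigidAsAffine = AffineTransformType::New();
  rigidAsAffine->SetMatrix( rigidStage->GetMatrix() );
  rigidAsAffine->SetOffset( rigidStage->GetOffset() );

  // Compose( t, true ): t is applied to the input point first, then the current transform.
  collapsed->Compose( rigidAsAffine, true );
  collapsed->Compose( affineStage, true );

  itk::TransformFileWriter::Pointer writer = itk::TransformFileWriter::New();
  writer->SetInput( collapsed );
  writer->SetFileName( outputFileName );
  writer->Update();
}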
There might be other ways of getting a similar (or even better)
solution, but this is something that seems to work, and it's something
you can put together using classes from the new ITKv4 framework, which
has the flexibility to permit complex registration solutions such as this.
Putting together an application-level program which takes advantage
of this flexibility can be quite involved, which is why we wrote
antsRegistration.cxx as part of ANTs and often point people to it on
the ITK list. antsRegistration takes full advantage of the flexibility
we built into the new ITKv4 registration framework, which you just would
not put into test code or simple registration examples.
Nick
On Jan 23, 2014, at 8:41 AM, Emma Saunders <emmasaunders123 at gmail.com> wrote:
> Hi Brian,
>
> Thanks for your help,
>
> Below is the code I am using. The images have size [217, 45, 336] and spacing [1.48, 5.5, 1.48] (mm). I am using a 3-step multi-resolution framework, initialized with an affine registration followed by BSplineSyN. The metric is Mattes mutual information with 40 bins. No pre-processing of the images is performed, though I do intend to add some pre-filtering. Using the code below, the metric value takes a big jump at around 100 iterations in the first B-spline stage, and the result is a mess.
>
> If you can see anything obvious from my code that would be great.
>
> Thanks for your time I really appreciate it, Kind regards
>
> Emma
>
>
> #include "itkImageFileReader.h"
> #include "itkImageFileWriter.h"
> #include "itkImageRegistrationMethodv4.h"
> #include "itkAffineTransform.h"
> #include "itkMattesMutualInformationImageToImageMetricv4.h"
> #include "itkBSplineSyNImageRegistrationMethod.h"
> #include "itkShrinkImageFilter.h"
> #include "itkBSplineSmoothingOnUpdateDisplacementFieldTransform.h"
> #include "itkBSplineSmoothingOnUpdateDisplacementFieldTransformParametersAdaptor.h"
> #include "itkCompositeTransform.h"
> #include "itkVector.h"
> #include <itkRegion.h>
>
> template<class TFilter>
> class CommandIterationUpdate : public itk::Command
> {
> public:
> typedef CommandIterationUpdate Self;
> typedef itk::Command Superclass;
> typedef itk::SmartPointer<Self> Pointer;
> itkNewMacro( Self );
> protected:
> CommandIterationUpdate() {};
>
> public:
>
>
> void Execute(itk::Object *caller, const itk::EventObject & event)
> {
> Execute( (const itk::Object *) caller, event);
> }
>
>
> void Execute(const itk::Object * object, const itk::EventObject & event )
> {
> TFilter const * const filter = dynamic_cast<const TFilter *>( object );
>
> if( typeid( event ) == typeid( itk::InitializeEvent ) )
> {
> // const unsigned int currentLevel = filter->GetCurrentLevel();
> }
> else if( typeid( event ) == typeid( itk::IterationEvent ) )
> {
>
> std::cout << filter->GetCurrentMetricValue() << " " <<std::endl;
>
>
> }
> }
> };
>
>
> template <unsigned int VImageDimension>
> int PerformSimpleImageRegistration( int argc, char *argv[] )
> {
> if( argc < 4 )
> {
> std::cout << argv[0] << " fixedImage movingImage outputImage" << std::endl;
> exit( 1 );
> }
>
> typedef float PixelType;
> typedef itk::Image<PixelType, VImageDimension> FixedImageType;
> typedef itk::Image<PixelType, VImageDimension> MovingImageType;
>
> typedef itk::ImageFileReader<FixedImageType> ImageReaderType;
>
> typename ImageReaderType::Pointer fixedImageReader = ImageReaderType::New();
> fixedImageReader->SetFileName( argv[1] );
> fixedImageReader->Update();
> typename FixedImageType::Pointer fixedImage = fixedImageReader->GetOutput();
> fixedImage->Update();
> fixedImage->DisconnectPipeline();
>
> typename ImageReaderType::Pointer movingImageReader = ImageReaderType::New();
> movingImageReader->SetFileName( argv[2] );
> movingImageReader->Update();
> typename MovingImageType::Pointer movingImage = movingImageReader->GetOutput();
> movingImage->Update();
> movingImage->DisconnectPipeline();
>
> typedef itk::AffineTransform<double, VImageDimension> AffineTransformType;
> typedef itk::ImageRegistrationMethodv4<FixedImageType, MovingImageType, AffineTransformType> AffineRegistrationType;
> typedef itk::GradientDescentOptimizerv4 GradientDescentOptimizerv4Type;
> typename AffineRegistrationType::Pointer affineSimple = AffineRegistrationType::New();
> affineSimple->SetFixedImage( fixedImage );
> affineSimple->SetMovingImage( movingImage );
>
> // Shrink the virtual domain by specified factors for each level
> //ORIGINAL Image size [217, 45, 336]
> //ORIGINAL Image spacing 1.48 5.5 1.48 (mm)
>
> typename AffineRegistrationType::ShrinkFactorsPerDimensionContainerType ShrinkFactorsa;
> typename AffineRegistrationType::ShrinkFactorsPerDimensionContainerType ShrinkFactorsb;
> typename AffineRegistrationType::ShrinkFactorsPerDimensionContainerType ShrinkFactorsc;
>
> //First level
> ShrinkFactorsa[0] = 3;
> ShrinkFactorsa[1] = 1;
> ShrinkFactorsa[2] = 3;
>
> //Second Level
> ShrinkFactorsb[0] = 2;
> ShrinkFactorsb[1] = 1;
> ShrinkFactorsb[2] = 2;
>
> //Third Level
> ShrinkFactorsc[0] = 1;
> ShrinkFactorsc[1] = 1;
> ShrinkFactorsc[2] = 1;
>
> unsigned int la=0;
> unsigned int lb=1;
> unsigned int lc=2;
>
> affineSimple->SetNumberOfLevels(3);
> affineSimple->SetShrinkFactorsPerDimension( la, ShrinkFactorsa );
> affineSimple->SetShrinkFactorsPerDimension( lb, ShrinkFactorsb );
> affineSimple->SetShrinkFactorsPerDimension( lc, ShrinkFactorsc );
>
>
> //Define the Metric, must use V4 metric with V4 registration
>
> typedef itk::MattesMutualInformationImageToImageMetricv4<FixedImageType,
> MovingImageType> MutualInformationMetricType;
>
>
> typename MutualInformationMetricType::Pointer mutualInformationMetric = MutualInformationMetricType::New();
> mutualInformationMetric->SetNumberOfHistogramBins( 40 );
> mutualInformationMetric->SetFixedImage( fixedImage );
> mutualInformationMetric->SetMovingImage( movingImage );
> mutualInformationMetric->SetUseMovingImageGradientFilter( false );
> mutualInformationMetric->SetUseFixedImageGradientFilter( false );
> affineSimple->SetMetric( mutualInformationMetric );
>
>
> typedef itk::RegistrationParameterScalesFromPhysicalShift<MutualInformationMetricType> AffineScalesEstimatorType;
> typename AffineScalesEstimatorType::Pointer scalesEstimator1 = AffineScalesEstimatorType::New();
> scalesEstimator1->SetMetric( mutualInformationMetric );
> scalesEstimator1->SetTransformForward( true );
>
> affineSimple->SmoothingSigmasAreSpecifiedInPhysicalUnitsOn();
> affineSimple->SetSmoothingSigmasAreSpecifiedInPhysicalUnits( true ); //image spacing is used to smooth in physical units
> if( affineSimple->GetSmoothingSigmasAreSpecifiedInPhysicalUnits() != true )
> {
> std::cerr << "Returned unexpected value of FALSE." << std::endl;
> return EXIT_FAILURE;
> }
>
> // Smooth by specified gaussian sigmas for each level. These values are specified in
> // physical units. Sigmas of zero cause inconsistency between some platforms.
> {
> typename AffineRegistrationType::SmoothingSigmasArrayType smoothingSigmasPerLevel;
> smoothingSigmasPerLevel.SetSize( 3 );
> smoothingSigmasPerLevel[0] = 2;
> smoothingSigmasPerLevel[1] = 1;
> smoothingSigmasPerLevel[2] = 0.3; //0;
> affineSimple->SetSmoothingSigmasPerLevel( smoothingSigmasPerLevel );
> }
>
> typedef itk::GradientDescentOptimizerv4 GradientDescentOptimizerv4Type;
> typename GradientDescentOptimizerv4Type::Pointer affineOptimizer =
> dynamic_cast<GradientDescentOptimizerv4Type * >( affineSimple->GetOptimizer() );
> if( !affineOptimizer )
> {
> itkGenericExceptionMacro( "Error dynamic_cast failed" );
> }
> affineOptimizer->SetNumberOfIterations( 75); //need 75
> //When a ScalesEstimator is assigned, the optimizer is enabled by default to estimate learning rate only once, during the first iteration
> affineOptimizer->SetDoEstimateLearningRateOnce( false ); //true by default
> affineOptimizer->SetDoEstimateLearningRateAtEachIteration( true );
> affineOptimizer->SetScalesEstimator( scalesEstimator1 );
>
> {
> typedef itk::ImageToImageMetricv4<FixedImageType, MovingImageType> ImageMetricType;
> typename ImageMetricType::Pointer imageMetric = dynamic_cast<ImageMetricType*>( affineSimple->GetMetric() );
> //imageMetric->SetUseFloatingPointCorrection(true);
> imageMetric->SetFloatingPointCorrectionResolution(1e4);
> }
>
> typedef CommandIterationUpdate< GradientDescentOptimizerv4Type > AffineCommandType;
> AffineCommandType::Pointer affineobserver = AffineCommandType::New();
> affineOptimizer->AddObserver( itk::IterationEvent(), affineobserver );
>
> try
> {
> std::cout << "Performing Affine:" << std::endl;
> affineSimple->Update();
>
> std::cout << "Last affine iteration is " << affineOptimizer->GetCurrentIteration();
>
> }
> catch( itk::ExceptionObject &e )
> {
> std::cerr << "Exception caught: " << e << std::endl;
> return EXIT_FAILURE;
> }
>
> {
> typedef itk::ImageToImageMetricv4<FixedImageType, MovingImageType> ImageMetricType;
> typename ImageMetricType::Pointer imageMetric = dynamic_cast<ImageMetricType*>( affineOptimizer->GetMetric() );
> std::cout << "Affine parameters after registration: " << std::endl
> << affineOptimizer->GetCurrentPosition() << std::endl
> << "Last LearningRate: " << affineOptimizer->GetLearningRate() << std::endl
> << "Use FltPtCorrex: " << imageMetric->GetUseFloatingPointCorrection() << std::endl
> << "FltPtCorrexRes: " << imageMetric->GetFloatingPointCorrectionResolution() << std::endl
> << "Number of threads used: metric: " << imageMetric->GetNumberOfThreadsUsed()
> << std::endl << " optimizer: " << affineOptimizer->GetNumberOfThreads() << std::endl;
> }
>
>
> //Now create a composite transform holding the affine result
>
> typedef typename AffineRegistrationType::RealType RealType;
>
> typedef itk::CompositeTransform<RealType, VImageDimension> CompositeTransformType;
> typename CompositeTransformType::Pointer compositeTransform = CompositeTransformType::New();
> compositeTransform->AddTransform( const_cast<typename AffineRegistrationType::OutputTransformType *>( affineSimple->GetOutput()->Get() ) );
>
> ////////////////
> //Now for the BsplineSYN method
>
> //DEFINE THE DISPLACEMENT FIELD
> typedef itk::Vector<RealType, VImageDimension> VectorType;
> VectorType zeroVector( 0.0 );
> typedef itk::Image<VectorType, VImageDimension> DisplacementFieldType;
> typename DisplacementFieldType::Pointer displacementField = DisplacementFieldType::New();
> displacementField->CopyInformation( fixedImage );
> displacementField->SetRegions( fixedImage->GetBufferedRegion() );
> displacementField->Allocate();
> displacementField->FillBuffer( zeroVector );
>
> //DEFINE THE INVERSE DISPLACEMENT FIELD
> typename DisplacementFieldType::Pointer inverseDisplacementField = DisplacementFieldType::New();
> inverseDisplacementField->CopyInformation( fixedImage );
> inverseDisplacementField->SetRegions( fixedImage->GetBufferedRegion() );
> inverseDisplacementField->Allocate();
> inverseDisplacementField->FillBuffer( zeroVector );
>
> //DEFINE THE B-SPLINE SMOOTHING-ON-UPDATE TRANSFORM WHICH PARAMETERIZES THE DISPLACEMENT FIELD
> typedef itk::BSplineSmoothingOnUpdateDisplacementFieldTransform<RealType, VImageDimension>
> BSplineDisplacementFieldTransformType;
>
>
> //DEFINE THE SYN IMAGE REGISTRATION METHOD
> typedef itk::BSplineSyNImageRegistrationMethod<FixedImageType, MovingImageType, BSplineDisplacementFieldTransformType> DisplacementFieldRegistrationType;
> typename DisplacementFieldRegistrationType::Pointer displacementFieldRegistration = DisplacementFieldRegistrationType::New();
>
> //OPTIMIZER VALUES
>
> //DEFINE THE OUTPUT TRANSFORM - parameterized by a Bspline///
> typename BSplineDisplacementFieldTransformType::Pointer outputDisplacementFieldTransform =
> const_cast<BSplineDisplacementFieldTransformType *>( displacementFieldRegistration->GetOutput()->Get() );
>
> //CREATE THE TRANSFORM ADAPTOR
> typedef itk::BSplineSmoothingOnUpdateDisplacementFieldTransformParametersAdaptor< BSplineDisplacementFieldTransformType > DisplacementFieldTransformAdaptorType;
> typename DisplacementFieldRegistrationType::TransformParametersAdaptorsContainerType adaptors;
>
>
> //SET THE NUMBER OF LEVELS AND SHRINK FACTORS AND SMOOTHING VALUES FOR THE VIRTUAL DOMAIN
> unsigned int numberOfLevels = 3;
>
> typename DisplacementFieldRegistrationType::NumberOfIterationsArrayType numberOfIterationsPerLevel;
> numberOfIterationsPerLevel.SetSize( 3 );
> numberOfIterationsPerLevel[0] = 500;
> numberOfIterationsPerLevel[1] = 300;
> numberOfIterationsPerLevel[2] = 300;
>
> typename DisplacementFieldRegistrationType::ShrinkFactorsPerDimensionContainerType ShrinkFactorsaa;
> typename DisplacementFieldRegistrationType::ShrinkFactorsPerDimensionContainerType ShrinkFactorsbb;
> typename DisplacementFieldRegistrationType::ShrinkFactorsPerDimensionContainerType ShrinkFactorscc;
> typename DisplacementFieldRegistrationType::ShrinkFactorsPerDimensionContainerType ShrinkFactorstemp;
>
> //First level
> ShrinkFactorsaa[0] = 3;
> ShrinkFactorsaa[1] = 1;
> ShrinkFactorsaa[2] = 3;
>
> //Second Level
> ShrinkFactorsbb[0] = 2;
> ShrinkFactorsbb[1] = 1;
> ShrinkFactorsbb[2] = 2;
>
> //Third Level
> ShrinkFactorscc[0] = 1;
> ShrinkFactorscc[1] = 1;
> ShrinkFactorscc[2] = 1;
>
>
> displacementFieldRegistration->SetNumberOfLevels(3);
> displacementFieldRegistration->SetShrinkFactorsPerDimension( la, ShrinkFactorsaa );
> displacementFieldRegistration->SetShrinkFactorsPerDimension( lb, ShrinkFactorsbb );
> displacementFieldRegistration->SetShrinkFactorsPerDimension( lc, ShrinkFactorscc );
>
> typename DisplacementFieldRegistrationType::SmoothingSigmasArrayType smoothingSigmasPerLevel;
> smoothingSigmasPerLevel.SetSize( 3 );
> smoothingSigmasPerLevel[0] = 2;
> smoothingSigmasPerLevel[1] = 1;
> smoothingSigmasPerLevel[2] = 0.3;
>
>
> for( unsigned int level = 0; level < numberOfLevels; level++ )
> {
> // We use the shrink image filter to calculate the fixed parameters of the virtual
> // domain at each level.
>
> if (level==0)
>
> {
> ShrinkFactorstemp=ShrinkFactorsaa;
> }
>
> if (level==1)
>
> {
> ShrinkFactorstemp=ShrinkFactorsbb;
> }
>
> if (level==2)
>
> {
> ShrinkFactorstemp=ShrinkFactorscc;
> }
>
> typedef itk::ShrinkImageFilter<DisplacementFieldType, DisplacementFieldType> ShrinkFilterType;
> typename ShrinkFilterType::Pointer shrinkFilter = ShrinkFilterType::New();
>
> std::cout << "using shrink Factor" << ShrinkFactorstemp <<std::endl;
> shrinkFilter->SetShrinkFactors( ShrinkFactorstemp);
> shrinkFilter->SetInput( displacementField );
> shrinkFilter->Update();
> std::cout << "New size " << shrinkFilter->GetOutput()->GetBufferedRegion().GetSize() <<std::endl;
>
> typename FixedImageType::SizeType VirtSize = shrinkFilter->GetOutput()->GetBufferedRegion().GetSize();
>
>
> typename DisplacementFieldTransformAdaptorType::Pointer fieldTransformAdaptor = DisplacementFieldTransformAdaptorType::New();
> fieldTransformAdaptor->SetRequiredSpacing( shrinkFilter->GetOutput()->GetSpacing() );
> fieldTransformAdaptor->SetRequiredSize( shrinkFilter->GetOutput()->GetBufferedRegion().GetSize() );
> fieldTransformAdaptor->SetRequiredDirection( shrinkFilter->GetOutput()->GetDirection() );
> fieldTransformAdaptor->SetRequiredOrigin( shrinkFilter->GetOutput()->GetOrigin() );
> fieldTransformAdaptor->SetTransform( outputDisplacementFieldTransform );
>
> typename BSplineDisplacementFieldTransformType::ArrayType newUpdateMeshSize;
> typename BSplineDisplacementFieldTransformType::ArrayType newTotalMeshSize;
>
> //DEFINE THE MESH SIZE FOR THE UPDATE FIELD AND TOTAL FIELD
> //(the same mesh sizes are used at every level)
>
> //First dimension
> newUpdateMeshSize[0] = 0; //Lat
> newTotalMeshSize[0] = VirtSize[0]/12;
> //Second dimension
> newUpdateMeshSize[1] = 0; //Ant-post
> newTotalMeshSize[1] = VirtSize[0]/12; //works better
> //Third dimension
> newUpdateMeshSize[2] = 0; //Sup-Inf
> newTotalMeshSize[2] = VirtSize[2]/12;
>
>
> std::cout << "level is " << level <<std::endl;
> std::cout << "newUpdateMeshSize" << newUpdateMeshSize << std::endl;
> std::cout << "newTotalMeshSize" << newTotalMeshSize << std::endl;
>
>
> fieldTransformAdaptor->SetMeshSizeForTheUpdateField( newUpdateMeshSize );
> fieldTransformAdaptor->SetMeshSizeForTheTotalField( newTotalMeshSize );
>
> adaptors.push_back( fieldTransformAdaptor.GetPointer() );
> }
>
>
> typename MutualInformationMetricType::Pointer mutualInformationMetricBspline = MutualInformationMetricType::New();
> mutualInformationMetricBspline->SetNumberOfHistogramBins( 40 );
> mutualInformationMetricBspline->SetFixedImage( fixedImage );
> mutualInformationMetricBspline->SetMovingImage( movingImage );
> mutualInformationMetricBspline->SetUseMovingImageGradientFilter( false );
> mutualInformationMetricBspline->SetUseFixedImageGradientFilter( false );
>
> displacementFieldRegistration->SetFixedImage( fixedImage );
> displacementFieldRegistration->SetMovingImage( movingImage );
> displacementFieldRegistration->SetNumberOfLevels( 3 );
> displacementFieldRegistration->SetMovingInitialTransform( compositeTransform );
> displacementFieldRegistration->SmoothingSigmasAreSpecifiedInPhysicalUnitsOn();
> displacementFieldRegistration->SetSmoothingSigmasPerLevel( smoothingSigmasPerLevel );
> displacementFieldRegistration->SetMetric( mutualInformationMetricBspline );
> displacementFieldRegistration->SetLearningRate( 0.25 );
> //in SYN methods learning rate is set initially and modified at each iteration
> //no need for scale as displacement in x,y,z has equal scale
>
> displacementFieldRegistration->SetTransformParametersAdaptorsPerLevel( adaptors );
> displacementFieldRegistration->SetNumberOfIterationsPerLevel( numberOfIterationsPerLevel );
> outputDisplacementFieldTransform->SetDisplacementField( displacementField );
> outputDisplacementFieldTransform->SetInverseDisplacementField( inverseDisplacementField );
> outputDisplacementFieldTransform->SetSplineOrder( 3 );
>
> typedef CommandIterationUpdate <DisplacementFieldRegistrationType> DisplacementFieldCommandType;
> typename DisplacementFieldCommandType::Pointer DisplacementFieldObserver = DisplacementFieldCommandType::New();
> displacementFieldRegistration->AddObserver( itk::IterationEvent(), DisplacementFieldObserver );
>
>
> try
> {
> std::cout << " " << std::endl;
> std::cout << "BSpline SyN registration" << std::endl;
> displacementFieldRegistration->Update();
> }
> catch( itk::ExceptionObject &e )
> {
> std::cerr << "Exception caught: " << e << std::endl;
> return EXIT_FAILURE;
> }
>
> compositeTransform->AddTransform( outputDisplacementFieldTransform );
>
>
> typedef itk::ResampleImageFilter<MovingImageType, FixedImageType> ResampleFilterType;
> typename ResampleFilterType::Pointer resampler = ResampleFilterType::New();
> resampler->SetTransform( compositeTransform );
> resampler->SetInput( movingImage );
> resampler->SetSize( fixedImage->GetLargestPossibleRegion().GetSize() );
> resampler->SetOutputOrigin( fixedImage->GetOrigin() );
> resampler->SetOutputSpacing( fixedImage->GetSpacing() );
> resampler->SetOutputDirection( fixedImage->GetDirection() );
> resampler->SetDefaultPixelValue( 0 );
> resampler->Update();
>
> typedef itk::ImageFileWriter<FixedImageType> WriterType;
> typename WriterType::Pointer writer = WriterType::New();
> writer->SetFileName( argv[3] );
> writer->SetInput( resampler->GetOutput() );
> writer->Update();
>
> return EXIT_SUCCESS;
> }
>
> int main( int argc, char *argv[] )
> {
> if( argc < 4 )
> {
> std::cout << argv[0] << " fixedImage movingImage outputImage" << std::endl;
> exit( 1 );
> }
>
>
> const int imgDim=3;
>
> PerformSimpleImageRegistration<imgDim>( argc, argv );
>
>
> return EXIT_SUCCESS;
> }
>
>
>
>
> On Wed, Jan 22, 2014 at 5:01 PM, brian avants <stnava at gmail.com> wrote:
> multi-resolution set up is usually the key to such issues.
>
> but greater detail in your question / code is needed.
>
> syn & variants are applicable for a broad range of images
>
> as you can see in the literature & documentation.
>
>
> brian
>
>
>
>
> On Wed, Jan 22, 2014 at 11:57 AM, Emma Saunders <emmasaunders123 at gmail.com> wrote:
> Thanks Nick and Brian for your input.
>
> I was attempting to change the learning rate as I am having poor results using both SyN and itkBSplineSyN when registering two MR images of the entire thorax during respiration. Are these methods specifically designed for certain types of images (e.g. brain)? I initialize my framework with an affine transform, using Mattes MI; the metric value first falls off and then increases. The result is thus only a small amount of movement, around 1/3 of what I would hope for.
>
> I can successfully register the images with a regular B-spline method using ITK, but was hoping to get a SyN method working in order to have a valid inverse.
>
> I suppose the question is, if I can get the regular B-spline to give a decent result, should the same hold for the SyN methods, or am I missing something?
>
> Thanks for your help
>
>
>
>
> On Wed, Jan 22, 2014 at 3:30 PM, Emma Saunders <emmasaunders123 at gmail.com> wrote:
> Hi all
>
> I am trying to use the SyN methods mentioned above.
>
> I would simply like to set my framework up to estimate the learning rate at each iteration. So far this has proved problematic, as the optimization process is contained within the method itself, and whenever I try to find a solution I need to look through ANTs header files, and I still have no luck.
>
> Is there a simple way to set the learning rate given a scales estimator for these methods (without using the antsRegistration helper), or must one simply choose one at the beginning and hope for the best?
>
> I am also finding it difficult to determine which optimizer these methods use, as in the ANTs headers there are two options, a gradient descent and a conjugate gradient descent. Any ideas?
>
> I initially used the ITK libraries as they provided a way to begin using state-of-the-art image processing algorithms without a steep learning curve. Is there any pending documentation or software guide for the new ITKv4 framework?
>
> Thanks
>
> Emma
>
>
>