[Insight-users] Parameter scales for registration (second try)

brian avants stnava at gmail.com
Tue May 7 10:23:23 EDT 2013


brad

did this issue ever go up on jira?  i do remember discussing it with you at
a meeting.  our solution is in the v4 optimizers.

the trivial additive parameter update doesn't work in more general cases,
e.g. when you need to compose parameters with parameter updates.

to resolve this limitation, the v4 optimizers pass the update step to the
transforms.

this implements the idea that "the transforms know how to update themselves".

there are several other differences, as nick pointed out, that reduce the
need for users to experiment with scales.

for basic scenarios like the one joel is discussing, i prefer the conjugate
gradient optimizer with line search.

itkConjugateGradientLineSearchOptimizerv4.h

when combined with the scale estimators, this leads to registration
algorithms with very few parameters to tune: just one, if you don't count
the multi-resolution settings.
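
as a rough sketch of the whole setup (it borrows the class names and setters
from nick's snippet further down the thread; MakeOptimizer, metric,
iterations and the 0.5 step size are just illustrative):

    #include "itkConjugateGradientLineSearchOptimizerv4.h"
    #include "itkRegistrationParameterScalesFromPhysicalShift.h"

    // MetricType is whatever configured v4 metric is in use.
    template <typename MetricType>
    itk::ConjugateGradientLineSearchOptimizerv4::Pointer
    MakeOptimizer( MetricType * metric, unsigned int iterations )
    {
      typedef itk::RegistrationParameterScalesFromPhysicalShift<MetricType> ScalesEstimatorType;
      typename ScalesEstimatorType::Pointer scalesEstimator = ScalesEstimatorType::New();
      scalesEstimator->SetMetric( metric );

      typedef itk::ConjugateGradientLineSearchOptimizerv4 OptimizerType;
      OptimizerType::Pointer optimizer = OptimizerType::New();
      optimizer->SetMetric( metric );
      optimizer->SetNumberOfIterations( iterations );
      optimizer->SetScalesEstimator( scalesEstimator );
      // the physical-unit step size is the single knob left to tune;
      // the learning rate itself can be estimated once from the scales.
      optimizer->SetDoEstimateLearningRateOnce( true );
      optimizer->SetMaximumStepSizeInPhysicalUnits( 0.5 );  // illustrative value
      return optimizer;
    }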


brian




On Tue, May 7, 2013 at 9:27 AM, Nick Tustison <ntustison at gmail.com> wrote:

> Hi Brad,
>
> I certainly don't disagree with Joel's findings.  It seems like a
> good fix which should be put up on gerrit.  There were several
> components that we kept when upgrading the registration framework;
> the optimizers weren't one of them.
>
> Also, could you elaborate a bit more on the "convoluted" aspects
> of parameter advancement?  There's probably a reason for it, and
> we could explain why.
>
> Nick
>
>
>
> On May 7, 2013, at 8:58 AM, Bradley Lowekamp <blowekamp at mail.nih.gov>
> wrote:
>
> > Nick,
> >
> > What we are observing is an algorithmic bug in the
> > RegularStepGradientDescentOptimizer. The ITKv4 optimizers have quite a
> > convoluted way to advance the parameters, and likely don't contain this bug.
> >
> >
> > I think the figure Joel put together does a good job of illustrating the
> > issue:
> >
> > http://i.imgur.com/DE6xqQ5.png
> >
> >
> > I just think the math here:
> >
> > https://github.com/Kitware/ITK/blob/master/Modules/Numerics/Optimizers/src/itkRegularStepGradientDescentOptimizer.cxx#L44
> >
> > newPosition[j] = currentPosition[j] + transformedGradient[j] * factor;
> >
> > should be:
> >
> > newPosition[j] = currentPosition[j] + transformedGradient[j] * factor / scales[j];
> >
> > Brad
> >
> >
> > On May 7, 2013, at 8:07 AM, Nick Tustison <ntustison at gmail.com> wrote:
> >
> >> Not quite.  See below for a relevant block of code.
> >> The optimizer can take an optional scales estimator.
> >>
> >>
> >>   typedef itk::RegistrationParameterScalesFromPhysicalShift<MetricType> ScalesEstimatorType;
> >>   typename ScalesEstimatorType::Pointer scalesEstimator = ScalesEstimatorType::New();
> >>   scalesEstimator->SetMetric( singleMetric );
> >>   scalesEstimator->SetTransformForward( true );
> >>
> >>   typedef itk::ConjugateGradientLineSearchOptimizerv4 ConjugateGradientDescentOptimizerType;
> >>   typename ConjugateGradientDescentOptimizerType::Pointer optimizer = ConjugateGradientDescentOptimizerType::New();
> >>   optimizer->SetLowerLimit( 0 );
> >>   optimizer->SetUpperLimit( 2 );
> >>   optimizer->SetEpsilon( 0.2 );
> >>   //    optimizer->SetMaximumLineSearchIterations( 20 );
> >>   optimizer->SetLearningRate( learningRate );
> >>   optimizer->SetMaximumStepSizeInPhysicalUnits( learningRate );
> >>   optimizer->SetNumberOfIterations( currentStageIterations[0] );
> >>   optimizer->SetScalesEstimator( scalesEstimator );
> >>   optimizer->SetMinimumConvergenceValue( convergenceThreshold );
> >>   optimizer->SetConvergenceWindowSize( convergenceWindowSize );
> >>   optimizer->SetDoEstimateLearningRateAtEachIteration( this->m_DoEstimateLearningRateAtEachIteration );
> >>   optimizer->SetDoEstimateLearningRateOnce( !this->m_DoEstimateLearningRateAtEachIteration );
> >>
> >>
> >>
> >>
> >>
> >>
> >> On May 7, 2013, at 8:01 AM, Joël Schaerer <joel.schaerer at gmail.com>
> wrote:
> >>
> >>> Hi Nick,
> >>>
> >>> I did indeed have a look at these new classes (not a very thorough
> >>> one, I must confess).  However, if I understand correctly, they allow
> >>> estimating the parameter scales but don't change the way the scales
> >>> are used by the optimizer?
> >>>
> >>> joel
> >>>
> >>> On 07/05/2013 13:52, Nick Tustison wrote:
> >>>> Hi Brad,
> >>>>
> >>>> Have you seen the work we did with the class
> >>>>
> >>>> http://www.itk.org/Doxygen/html/classitk_1_1RegistrationParameterScalesEstimator.html
> >>>>
> >>>> and its derived classes for the v4 framework?  They describe
> >>>> a couple of different approaches to scaling the gradient for use
> >>>> with the v4 optimizers.
> >>>>
> >>>> Nick
> >>>>
> >>>>
> >>>> On May 7, 2013, at 6:59 AM, Bradley Lowekamp <blowekamp at mail.nih.gov>
> wrote:
> >>>>
> >>>>> Hello Joel,
> >>>>>
> >>>>> I have encountered the same issue. I ended up creating my own
> >>>>> "ScaledRegularStepGradientDescentOptimizer" derived from the ITK one.
> >>>>> Please find it attached. Please note, I don't think I have migrated
> >>>>> this code to ITKv4... but I'm not certain.
> >>>>>
> >>>>> I reported this issue to the ITKv4 registration team, but I am not
> >>>>> sure what happened to it.
> >>>>>
> >>>>> I also tried to make the change in ITK a while ago, and a large
> >>>>> number of the registration tests failed... not sure if the results
> >>>>> were better or worse, they were just different.
> >>>>>
> >>>>> Brad
> >>>>>
> >>>>> <itkScaledRegularStepGradientDescentOptimizer.h>
> >>>>>
> >>>>> On Apr 25, 2013, at 11:10 AM, Joël Schaerer <joel.schaerer at gmail.com>
> wrote:
> >>>>>
> >>>>>
> >>>
> >>
> >
>
>