<div dir="ltr">Brad,<div><br></div><div style>Did this issue ever go up on JIRA? I do remember discussing it with you at a meeting. Our solution is in the v4 optimizers.</div><div style><br></div><div style>The trivial additive parameter update doesn't work in more general cases, e.g. when you need to compose parameters with parameter updates.</div>
<div style><br></div><div style>To resolve this limitation, the v4 optimizers pass the update step to the transforms.</div><div style><br></div><div style>This implements the idea that "the transforms know how to update themselves."</div>
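<div style><br></div><div style>A minimal standalone sketch of that idea (plain C++, not the actual ITK classes; the real hook is itk::Transform::UpdateTransformParameters, and the toy composition below is only a stand-in for, e.g., composing displacement fields):</div>

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// "The transforms know how to update themselves": the optimizer hands the
// update step to the transform instead of adding it to the parameters itself.
// Hypothetical stand-in classes for illustration only.
struct Transform
{
  std::vector<double> params;
  virtual ~Transform() = default;

  // Default behavior: the plain additive update, which covers simple cases.
  virtual void UpdateTransformParameters(const std::vector<double>& update,
                                         double factor)
  {
    for (std::size_t i = 0; i < params.size(); ++i)
      params[i] += factor * update[i];
  }
};

// A transform whose parameters cannot simply be added overrides the hook
// and composes the update in whatever way is appropriate for it.
struct ComposingTransform : Transform
{
  void UpdateTransformParameters(const std::vector<double>& update,
                                 double factor) override
  {
    for (std::size_t i = 0; i < params.size(); ++i)
      params[i] *= (1.0 + factor * update[i]);
  }
};
```

The optimizer only ever calls UpdateTransformParameters, so it no longer needs to know whether the transform updates additively or by composition.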
<div style><br></div><div style>There are several other differences, as Nick pointed out, that reduce the need for users to experiment with scales.</div><div style><br></div><div style>For basic scenarios like the one Joel is discussing, I prefer the conjugate gradient optimizer with line search.</div>
<div style><br></div><div style><div>itkConjugateGradientLineSearchOptimizerv4.h</div><div><br></div><div style>When combined with the scale estimators, this leads to registration algorithms with very few parameters to tune: one parameter, if you don't consider multi-resolution.</div>
</div></div><div class="gmail_extra"><br clear="all"><div><div><br></div>Brian<br><div><br></div><div><br></div></div>
<br><br><div class="gmail_quote">On Tue, May 7, 2013 at 9:27 AM, Nick Tustison <span dir="ltr"><<a href="mailto:ntustison@gmail.com" target="_blank">ntustison@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi Brad,<br>
<br>
I certainly don't disagree with Joel's findings. It seems like a<br>
good fix that should be put up on Gerrit. There were several<br>
components that we kept when upgrading the registration framework;<br>
the optimizers weren't one of them.<br>
<br>
Also, could you elaborate a bit more on the "convoluted" aspects<br>
of parameter advancement? There's probably a reason for it and<br>
we could explain why.<br>
<span class="HOEnZb"><font color="#888888"><br>
Nick<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
<br>
On May 7, 2013, at 8:58 AM, Bradley Lowekamp <<a href="mailto:blowekamp@mail.nih.gov">blowekamp@mail.nih.gov</a>> wrote:<br>
<br>
> Nick,<br>
><br>
> What we are observing is an algorithmic bug in the RegularStepGradientDescentOptimizer. The ITKv4 optimizers have quite a convoluted way of advancing the parameters, and likely don't contain this bug.<br>
><br>
><br>
> I think the figure Joel put together does a good job of illustrating the issue:<br>
><br>
> <a href="http://i.imgur.com/DE6xqQ5.png" target="_blank">http://i.imgur.com/DE6xqQ5.png</a><br>
><br>
><br>
> I think the math here:<br>
> <a href="https://github.com/Kitware/ITK/blob/master/Modules/Numerics/Optimizers/src/itkRegularStepGradientDescentOptimizer.cxx#L44" target="_blank">https://github.com/Kitware/ITK/blob/master/Modules/Numerics/Optimizers/src/itkRegularStepGradientDescentOptimizer.cxx#L44</a><br>
><br>
> newPosition[j] = currentPosition[j] + transformedGradient[j] * factor;<br>
><br>
> should be:<br>
><br>
> newPosition[j] = currentPosition[j] + transformedGradient[j] * factor / scales[j];<br>
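><br>
> To make the difference concrete, here is a minimal standalone sketch of the corrected update rule (plain C++, not the actual ITK class; the function name and types are hypothetical):<br>

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of the corrected regular-step gradient descent update: each
// parameter moves along the transformed gradient scaled by the common
// step factor, then divided by that parameter's scale, so that the step
// is taken in scaled parameter space.
std::vector<double> AdvanceOneStep(const std::vector<double>& currentPosition,
                                   const std::vector<double>& transformedGradient,
                                   double factor,
                                   const std::vector<double>& scales)
{
  std::vector<double> newPosition(currentPosition.size());
  for (std::size_t j = 0; j < currentPosition.size(); ++j)
  {
    // The proposed fix: divide by scales[j] so heavily-scaled parameters
    // (e.g. rotations vs. translations) take proportionally smaller steps.
    newPosition[j] = currentPosition[j]
                     + transformedGradient[j] * factor / scales[j];
  }
  return newPosition;
}
```

> Without the division, a parameter with a large scale still takes a full-size step, which is the behavior Joel's figure illustrates.<br>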
><br>
> Brad<br>
><br>
><br>
> On May 7, 2013, at 8:07 AM, Nick Tustison <<a href="mailto:ntustison@gmail.com">ntustison@gmail.com</a>> wrote:<br>
><br>
>> Not quite. See below for a relevant block of code.<br>
>> The optimizer can take an optional scales estimator.<br>
>><br>
>><br>
>> typedef itk::RegistrationParameterScalesFromPhysicalShift<MetricType> ScalesEstimatorType;<br>
>> typename ScalesEstimatorType::Pointer scalesEstimator = ScalesEstimatorType::New();<br>
>> scalesEstimator->SetMetric( singleMetric );<br>
>> scalesEstimator->SetTransformForward( true );<br>
>><br>
>> typedef itk::ConjugateGradientLineSearchOptimizerv4 ConjugateGradientDescentOptimizerType;<br>
>> typename ConjugateGradientDescentOptimizerType::Pointer optimizer = ConjugateGradientDescentOptimizerType::New();<br>
>> optimizer->SetLowerLimit( 0 );<br>
>> optimizer->SetUpperLimit( 2 );<br>
>> optimizer->SetEpsilon( 0.2 );<br>
>> // optimizer->SetMaximumLineSearchIterations( 20 );<br>
>> optimizer->SetLearningRate( learningRate );<br>
>> optimizer->SetMaximumStepSizeInPhysicalUnits( learningRate );<br>
>> optimizer->SetNumberOfIterations( currentStageIterations[0] );<br>
>> optimizer->SetScalesEstimator( scalesEstimator );<br>
>> optimizer->SetMinimumConvergenceValue( convergenceThreshold );<br>
>> optimizer->SetConvergenceWindowSize( convergenceWindowSize );<br>
>> optimizer->SetDoEstimateLearningRateAtEachIteration( this->m_DoEstimateLearningRateAtEachIteration );<br>
>> optimizer->SetDoEstimateLearningRateOnce( !this->m_DoEstimateLearningRateAtEachIteration );<br>
>><br>
>><br>
>><br>
>><br>
>><br>
>><br>
>> On May 7, 2013, at 8:01 AM, Joël Schaerer <<a href="mailto:joel.schaerer@gmail.com">joel.schaerer@gmail.com</a>> wrote:<br>
>><br>
>>> Hi Nick,<br>
>>><br>
>>> I did indeed have a look at these new classes (not a very thorough one, I must confess). However, if I understand correctly, they allow estimating the parameter scales but don't change the way the scales are used by the optimizer?<br>
>>><br>
>>> joel<br>
>>><br>
>>> On 07/05/2013 13:52, Nick Tustison wrote:<br>
>>>> Hi Brad,<br>
>>>><br>
>>>> Have you seen the work we did with the class<br>
>>>><br>
>>>> <a href="http://www.itk.org/Doxygen/html/classitk_1_1RegistrationParameterScalesEstimator.html" target="_blank">http://www.itk.org/Doxygen/html/classitk_1_1RegistrationParameterScalesEstimator.html</a><br>
>>>><br>
>>>> and its derived classes for the v4 framework? They describe<br>
>>>> a couple of different approaches to scaling the gradient for use<br>
>>>> with the v4 optimizers.<br>
>>>><br>
>>>> Nick<br>
>>>><br>
>>>><br>
>>>> On May 7, 2013, at 6:59 AM, Bradley Lowekamp <<a href="mailto:blowekamp@mail.nih.gov">blowekamp@mail.nih.gov</a>> wrote:<br>
>>>><br>
>>>>> Hello Joel,<br>
>>>>><br>
>>>>> I have encountered the same issue. I ended up creating my own "ScaledRegularStepGradientDescentOptimizer" derived from the ITK one. Please find it attached. Please note, I don't think I have migrated this code to ITKv4... but I'm not certain.<br>
>>>>><br>
>>>>> I reported this issue to the ITKv4 registration team, but I am not sure what happened to it.<br>
>>>>><br>
>>>>> I also tried to make the change in ITK a while ago, and a large number of the registration tests failed... I'm not sure if the results were better or worse; they were just different.<br>
>>>>><br>
>>>>> Brad<br>
>>>>><br>
>>>>> <itkScaledRegularStepGradientDescentOptimizer.h><br>
>>>>><br>
>>>>> On Apr 25, 2013, at 11:10 AM, Joël Schaerer <<a href="mailto:joel.schaerer@gmail.com">joel.schaerer@gmail.com</a>> wrote:<br>
>>>>><br>
>>>>><br>
>>><br>
>><br>
><br>
<br>
</div></div></blockquote></div><br></div>