[Insight-users] Parameter scales for registration (second try)

Joël Schaerer joel.schaerer at gmail.com
Tue May 7 04:27:40 EDT 2013


Hi Matt,

Thanks for your answer!

Short answer: no, not really. Suppose you have only two parameters, one 
that varies between 1e-6 and 2e-6, and one that varies between 1000 and 
2000. You can make the step size small enough that the first parameter 
doesn't change too fast, but then the optimizer moves extremely slowly 
along the second parameter, and it takes millions of iterations to get 
anywhere.
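
To put rough numbers on it, here is a back-of-envelope sketch (hypothetical
values, not taken from any real registration run):

   // If the step length must stay around 1e-7 so that the first parameter
   // (range ~1e-6) is not overshot, then even a step pointing entirely
   // along the second parameter (range ~1000) needs ~1e10 iterations to
   // cross that range.
   #include <iostream>

   int main()
   {
     const double range1 = 1e-6;              // spread of parameter 1
     const double range2 = 1000.0;            // spread of parameter 2
     const double stepLength = range1 / 10.0; // small enough for parameter 1
     std::cout << "iterations to cross parameter 2's range: "
               << range2 / stepLength << std::endl;  // ~1e10
     return 0;
   }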

You could cheat by tweaking the scales so that the gradient vector points 
almost entirely along the second parameter, and then increasing the step 
size. However, that means the step is no longer controlled along the first 
parameter: even when the gradient gets "big" in that direction, its 
component is still negligible compared to the second one, so nothing keeps 
the step within the first parameter's tiny range. That can lead (and 
actually does, I've tried it) to instability in the optimization process.
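
A concrete (again hypothetical) instance of the problem:

   // The "cheat": multiply the gradient by hand-tuned scales, renormalize
   // the scaled vector to a large step length, and apply it. The step along
   // parameter 1 is then only small *relative to parameter 2*, not relative
   // to parameter 1's own admissible range.
   #include <cmath>
   #include <iostream>

   int main()
   {
     const double g1 = 1e3, g2 = 1.0;   // metric gradient components
     const double s1 = 1e-9, s2 = 1.0;  // scales chosen to suppress parameter 1
     const double stepLength = 100.0;   // large, so parameter 2 moves quickly

     const double h1 = s1 * g1, h2 = s2 * g2;      // scaled gradient
     const double norm = std::sqrt(h1 * h1 + h2 * h2);
     const double step1 = stepLength * h1 / norm;  // ~1e-4
     const double step2 = stepLength * h2 / norm;  // ~100

     // step1 is negligible next to step2, yet it is ~100x the whole
     // 1e-6 range of the first parameter: the optimizer overshoots there.
     std::cout << "step1 = " << step1 << ", step2 = " << step2 << std::endl;
     return 0;
   }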

joel

On 06/05/2013 17:46, Matt McCormick wrote:
> Hi Joel,
>
> Can you get the same effect by tweaking the scales and/or increasing
> the step size?
>
> Thanks,
> Matt
>
> On Thu, Apr 25, 2013 at 3:10 PM, Joël Schaerer <joel.schaerer at gmail.com> wrote:
>> Hi all,
>>
>> Certain registration transforms have parameters with very different ranges
>> of acceptable values. If uncorrected, this leads to serious problems with
>> simple optimizers such as ITK's regular step gradient optimizer.
>> Fortunately, ITK provides a parameter scale scheme to cope with this
>> problem.
>>
>> Currently, this scheme is implemented by multiplying components of the
>> parameter gradient by the parameter scales. The gradient vector is then
>> uniformly scaled so that its norm is equal to the current step size.
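>>
>> Roughly, the behaviour described above amounts to something like the
>> following self-contained sketch (hypothetical names, not the actual ITK
>> source):
>>
>>    #include <cmath>
>>    #include <cstddef>
>>    #include <vector>
>>
>>    // Scale each gradient component, then rescale the whole vector so its
>>    // norm equals the current step length. newPosition is assumed to be
>>    // pre-sized, and the gradient is assumed non-zero.
>>    void StepAlongGradientSketch(std::vector<double>&       newPosition,
>>                                 const std::vector<double>& currentPosition,
>>                                 const std::vector<double>& gradient,
>>                                 const std::vector<double>& scales,
>>                                 double                     stepLength)
>>    {
>>      const std::size_t spaceDimension = gradient.size();
>>      std::vector<double> transformedGradient(spaceDimension);
>>      double norm = 0.0;
>>      for ( std::size_t j = 0; j < spaceDimension; j++ )
>>        {
>>        transformedGradient[j] = gradient[j] * scales[j];
>>        norm += transformedGradient[j] * transformedGradient[j];
>>        }
>>      const double factor = stepLength / std::sqrt(norm);
>>      for ( std::size_t j = 0; j < spaceDimension; j++ )
>>        {
>>        newPosition[j] = currentPosition[j] + transformedGradient[j] * factor;
>>        }
>>    }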
>>
>> The problem is that it would make more sense to take larger steps along
>> the directions in which the metric varies slowly (e.g. the translations
>> of an affine transform), and this scheme doesn't do that.
>>
>> My solution so far is to re-use the scale parameters to re-scale the
>> resulting vector in the StepAlongGradient method of the
>> itkRegularStepGradientDescentOptimizer class:
>>
>>    for ( unsigned int j = 0; j < spaceDimension; j++ )
>>      {
>>      newPosition[j] = currentPosition[j]
>>                       + transformedGradient[j] * factor / scales[j];
>>      }
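>>
>> In the notation of the sketch further up, the only change is the extra
>> division by scales[j] in the final update loop (again just a sketch of
>> the idea, not a patch against the real class):
>>
>>    for ( std::size_t j = 0; j < spaceDimension; j++ )
>>      {
>>      // Rescale each component by 1/scales[j] before applying the step.
>>      newPosition[j] = currentPosition[j]
>>                       + transformedGradient[j] * factor / scales[j];
>>      }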
>>
>> I've made a little graph to explain the situation:
>> http://i.imgur.com/DE6xqQ5.png
>>
>> Does this sound reasonable? I have good results on the particular
>> (unfortunately confidential) transform I am currently using. However, if
>> there is interest, I could test the effect on affine registration.
>>
>> Joel
>>
>> PS: I've looked briefly at the v4 optimizers. There is now a way to set the
>> scales automatically, but the way they are used doesn't seem to have
>> changed.


