[Insight-users] Parameter scales for registration (second try)

Nick Tustison ntustison at gmail.com
Tue May 7 07:52:51 EDT 2013


Hi Brad,

Have you seen the work we did with the class

http://www.itk.org/Doxygen/html/classitk_1_1RegistrationParameterScalesEstimator.html

and its derived classes for the v4 framework?  They describe
a couple of different approaches to scaling the gradient for use
with the v4 optimizers.

Nick


On May 7, 2013, at 6:59 AM, Bradley Lowekamp <blowekamp at mail.nih.gov> wrote:

> Hello Joel,
> 
> I have encountered the same issue. I ended up creating my own "ScaledRegularStepGradientDescentOptimizer", derived from an ITK one. Please find it attached. Please note, I don't think I have migrated this code to ITKv4... but I'm not certain.
> 
> I reported this issue to the ITKv4 registration team, but I am not sure what happened to it.
> 
> I also tried to make the change in ITK a while ago, and a large number of the registration tests failed... not sure if the results were better or worse, they were just different.
> 
> Brad
> 
> <itkScaledRegularStepGradientDescentOptimizer.h>
> 
> On Apr 25, 2013, at 11:10 AM, Joël Schaerer <joel.schaerer at gmail.com> wrote:
> 
>> Hi all,
>> 
>> Certain registration transforms have parameters with very different ranges of acceptable values. If uncorrected, this leads to serious problems with simple optimizers such as ITK's regular step gradient optimizer. Fortunately, ITK provides a parameter scale scheme to cope with this problem.
>> 
>> Currently, this scheme is implemented by multiplying components of the parameter gradient by the parameter scales. The gradient vector is then uniformly scaled so that its norm is equal to the current step size.
>> 
>> The problem is that it would often make more sense to take larger steps along the directions in which the metric varies slowly (e.g., the translations of an affine transform).
>> 
>> My solution so far is to re-use the scale parameters to re-scale the resulting step in the StepAlongGradient method of the itkRegularStepGradientDescentOptimizer class:
>> 
>> for ( unsigned int j = 0; j < spaceDimension; j++ )
>>   {
>>   newPosition[j] = currentPosition[j] + transformedGradient[j] * factor / scales[j];
>>   }
>> 
>> I've made a little graph to explain the situation: http://i.imgur.com/DE6xqQ5.png
>> 
>> Does this sound reasonable? I have good results on the particular (unfortunately confidential) transform I am currently using. However if there is interest I could test the effect on affine registration.
>> 
>> Joel
>> 
>> PS: I've looked briefly at the v4 optimizers. There is now a way to set the scales automatically, but the way they are used doesn't seem to have changed.
>> _____________________________________
>> Powered by www.kitware.com
>> 
>> Visit other Kitware open-source projects at
>> http://www.kitware.com/opensource/opensource.html
>> 
>> Kitware offers ITK Training Courses, for more information visit:
>> http://www.kitware.com/products/protraining.php
>> 
>> Please keep messages on-topic and check the ITK FAQ at:
>> http://www.itk.org/Wiki/ITK_FAQ
>> 
>> Follow this link to subscribe/unsubscribe:
>> http://www.itk.org/mailman/listinfo/insight-users
> 


