[Insight-users] behavior of RegularStepGradientDescentOptimizer

lpaul l.paul at uclouvain.be
Tue Nov 29 05:04:18 EST 2011


Hi all,

I have been using ITK for registration tasks for 7 years now (mainly with the mean
squares and mutual information metrics), but I still don't understand the behavior
of the optimizer.
I use a specialized version of the RegularStepGradientDescentOptimizer, the
VersorRigid3DTransformOptimizer, since the transform I use is a
VersorRigid3DTransform.
My question is about the metric value.

Here are the starting parameters:
200 maximum iterations
4.0 as the starting step length
0.01 as the minimum step length
Stop when either the minimum step length or the maximum number of iterations is reached.
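For reference, these settings translate into something like the following optimizer
calls (a minimal sketch, assuming the ITK 3.x API; the rest of the registration
pipeline is omitted):

#include "itkVersorRigid3DTransformOptimizer.h"

int main()
{
  typedef itk::VersorRigid3DTransformOptimizer OptimizerType;
  OptimizerType::Pointer optimizer = OptimizerType::New();

  optimizer->SetNumberOfIterations( 200 );   // 200 max iterations
  optimizer->SetMaximumStepLength( 4.0 );    // starting step length
  optimizer->SetMinimumStepLength( 0.01 );   // stop once the step shrinks below this
  optimizer->MinimizeOn();                   // the metric value is supposed to go down

  return 0;
}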

The optimizer produces the lowest metric value (296585) at iteration n°10, with a
step length of 0.25... but it keeps going up to iteration 200, where the metric
value is 306925 and the step length is 0.0625.
The result is good at iteration 200, while it is crappy at iteration 10.
So I could conclude that the optimizer finds the best positioning by maximizing the
metric value, even though it is set up with MinimizeOn!
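(The per-iteration values quoted above can be read with a standard itk::Command
observer attached to the optimizer; what follows is a minimal sketch rather than my
actual code:)

#include "itkCommand.h"
#include "itkVersorRigid3DTransformOptimizer.h"
#include <iostream>

class IterationObserver : public itk::Command
{
public:
  typedef IterationObserver        Self;
  typedef itk::Command             Superclass;
  typedef itk::SmartPointer<Self>  Pointer;
  itkNewMacro( Self );

  typedef itk::VersorRigid3DTransformOptimizer OptimizerType;

  void Execute( itk::Object * caller, const itk::EventObject & event )
  {
    Execute( (const itk::Object *) caller, event );
  }

  void Execute( const itk::Object * object, const itk::EventObject & event )
  {
    const OptimizerType * optimizer =
      dynamic_cast< const OptimizerType * >( object );
    if( !optimizer || !itk::IterationEvent().CheckEvent( &event ) )
      {
      return;
      }
    // iteration number, current metric value, current step length
    std::cout << optimizer->GetCurrentIteration() << "   "
              << optimizer->GetValue() << "   "
              << optimizer->GetCurrentStepLength() << std::endl;
  }

protected:
  IterationObserver() {}
};

// attached with:
//   optimizer->AddObserver( itk::IterationEvent(), IterationObserver::New() );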

My application is to match an entire object with a smaller part of a similar
object. The key point is that at iteration n°10, part of the entire object lies
outside the image, which produces a lower metric value. This is logical.

But how does the optimizer manage to reach a correct result while the metric value
is increasing and seems to diverge?
Is the metric value actually minimized by the optimizer, or is it only a similarity
measure that doesn't really mean anything on its own?
Thanks,
Laurent.


