[Insight-users] Another question about the optimizer.

Raghavendra Chandrashekara rc3 at doc.ic.ac.uk
Mon, 23 Jun 2003 16:22:00 +0100


Hi Luis,

But what would happen if there are two minima which are very close 
together and I set the initial step length too large? Isn't it 
possible to jump from one minimum to the other when stepping along 
the gradient direction? Then, when the step length is halved because 
of the change in direction, wouldn't the optimizer get stuck in the 
second minimum, when the first minimum is the one we really want?
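
To make the worry concrete, here is a tiny stand-alone sketch (plain 
C++, not ITK; the step rule is my paraphrase of the optimizer's 
behaviour). The function f(x) = (x^2 - 1)^2 has two minima, at x = -1 
and x = +1. Starting in the left basin with a too-large step:

  #include <cmath>
  #include <cstdio>

  int main()
  {
    // derivative of f(x) = (x^2 - 1)^2
    auto df = [](double x) { return 4.0 * x * (x * x - 1.0); };

    double x = -1.2;       // start in the basin of the minimum at x = -1
    double step = 2.5;     // deliberately too large
    double prevDir = 0.0;

    for (int i = 0; i < 15; ++i)
    {
      double dir = (df(x) > 0.0) ? -1.0 : 1.0;   // unit step down the gradient
      if (dir * prevDir < 0.0) { step *= 0.5; }  // direction flipped: halve step
      x += dir * step;
      prevDir = dir;
      std::printf("iter %2d  x = % .4f  step = %.4f\n", i, x, step);
    }
    return 0;
  }

In this toy run the iterates end up at +1 even though the walk started 
right next to -1, which is exactly the situation I am asking about.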

Thanks,

Raghavendra

Luis Ibanez wrote:
> 
> Hi Raghavendra
> 
> The RegularStepGradientDescentOptimizer is
> already doing all this for you.
> 
> Please look at the code in
> 
> Insight/Code/Numerics/
>    itkRegularStepGradientDescentOptimizer.cxx
> 
> in particular look at lines 203 to 224.
> 
> The optimizer checks whether the new gradient
> vector makes an angle of more than 90 degrees
> with the previous gradient and, if so, halves
> m_CurrentStepLength.
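> 
> Roughly, the check amounts to the following (a paraphrase, not a
> copy of the actual source; the function name here is only
> illustrative):
> 
>   #include <vector>
>   #include <cstddef>
> 
>   // Halve the step when the new gradient points more than 90
>   // degrees away from the previous one, i.e. the last step
>   // overshot the minimum along that direction.
>   double UpdateStepLength( const std::vector<double> & gradient,
>                            const std::vector<double> & previousGradient,
>                            double currentStepLength )
>   {
>     double scalarProduct = 0.0;
>     for ( std::size_t i = 0; i < gradient.size(); ++i )
>     {
>       scalarProduct += gradient[i] * previousGradient[i];
>     }
>     // A negative dot product means the angle between the two
>     // gradients exceeds 90 degrees.
>     return ( scalarProduct < 0.0 ) ? currentStepLength / 2.0
>                                    : currentStepLength;
>   }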
> 
> You may want to look at this code and verify
> that it does what you want.  If, after that,
> you find that it is worth creating a variant
> of the optimizer, we will be happy to add it
> to the set of optimizers in the toolkit.
> 
> Please let us know what you find.
> 
> 
> Thanks
> 
> 
>    Luis
> 
> 
> ------------------------------------
> Raghavendra Chandrashekara wrote:
> 
>> Hi Luis,
>>
>> I am trying to control the itk::RegularStepGradientDescentOptimizer so 
>> that it doesn't move in the gradient direction if there is no 
>> improvement. What I am doing is storing the metric value from the 
>> previous iteration and from the current iteration. After the optimizer 
>> has moved in the gradient direction, I check to see if there is any 
>> improvement. If not, I would like to halve the step length and try 
>> again.
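>>
>> Roughly, this is the kind of observer I am using to compare the two 
>> values (a trimmed sketch; the class name and the initial value are 
>> my own):
>>
>>   #include <iostream>
>>   #include "itkCommand.h"
>>   #include "itkRegularStepGradientDescentOptimizer.h"
>>
>>   class MetricWatcher : public itk::Command
>>   {
>>   public:
>>     typedef MetricWatcher            Self;
>>     typedef itk::Command             Superclass;
>>     typedef itk::SmartPointer<Self>  Pointer;
>>     itkNewMacro( Self );
>>
>>     typedef itk::RegularStepGradientDescentOptimizer OptimizerType;
>>
>>     void Execute( itk::Object * caller, const itk::EventObject & event )
>>       { Execute( (const itk::Object *) caller, event ); }
>>
>>     void Execute( const itk::Object * object, const itk::EventObject & event )
>>       {
>>       if( !itk::IterationEvent().CheckEvent( &event ) ) { return; }
>>       const OptimizerType * optimizer =
>>         dynamic_cast< const OptimizerType * >( object );
>>       if( !optimizer ) { return; }
>>       const double value = optimizer->GetValue();
>>       if( value > m_PreviousValue )
>>         {
>>         // No improvement: this is where I would like to step back
>>         // and halve the step length, but there is no API for it.
>>         std::cout << "Metric increased to " << value << std::endl;
>>         }
>>       m_PreviousValue = value;
>>       }
>>
>>   protected:
>>     MetricWatcher() : m_PreviousValue( 1e30 ) {}
>>
>>   private:
>>     double m_PreviousValue;
>>   };
>>
>> I attach it with
>>
>>   optimizer->AddObserver( itk::IterationEvent(), MetricWatcher::New() );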
>>
>> But I've come across two problems, and I'm not sure of the best way 
>> to solve them:
>>
>> (1) There is no function to set the current step length. So I am 
>> reducing the maximum step length by a factor of 2.
>>
>> (2) Because the optimizer has already moved in the gradient direction, 
>> I would like to move back one step, but there is no function that 
>> allows me to do this.
>>
>> Please can you tell me if what I am trying to do is sensible or if 
>> there is another way of achieving the same thing?
>>
>> Thanks,
>>
>> Raghavendra
>>
>> Luis Ibanez wrote:
>>
>>>
>>> Hi Raghavendra,
>>>
>>> Your intuition is correct. In principle, a gradient descent optimizer
>>> shouldn't take two consecutive steps that increase the cost function.
>>> That is, if it finds one increase, it has to change direction or stop.
>>>
>>> This is very clear in a one-dimensional parametric space (e.g. when
>>> you are optimizing a single parameter). But if you think about
>>> multidimensional optimization, you can imagine many cases in which
>>> the optimizer is caught in multiple increases.
>>>
>>> For example: Imagine the optimizer in a 2D setting with the simple cost
>>> function
>>>
>>>                   f(x,y) = x^2 + y^2
>>>
>>> If the step length is not small enough, it is possible for the
>>> optimizer to start jumping from one wall to the opposite one, passing
>>> over the minimum value. Even though the optimizer is trying to do
>>> the right thing, moving in the direction in which the gradient
>>> indicates that the function decreases, it may still end up in a
>>> location where the value of the cost function increases, simply
>>> because the jump was too long.
>>>
>>> The likelihood of such behavior increases as you add more dimensions
>>> to the parametric space.
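>>>
>>> To see this numerically, here is a small stand-alone sketch (plain
>>> C++, not ITK) that mimics a fixed-length step along the normalized
>>> gradient:
>>>
>>>   #include <cmath>
>>>   #include <cstdio>
>>>
>>>   int main()
>>>   {
>>>     double x = 1.0, y = 0.0;   // start at f = 1.0
>>>     const double step = 2.5;   // longer than the distance to the minimum
>>>     for ( int i = 0; i < 4; ++i )
>>>     {
>>>       // gradient of f(x,y) = x^2 + y^2 is (2x, 2y);
>>>       // take a fixed-length step along its normalized direction
>>>       double gx = 2.0 * x, gy = 2.0 * y;
>>>       double norm = std::sqrt( gx * gx + gy * gy );
>>>       x -= step * gx / norm;
>>>       y -= step * gy / norm;
>>>       std::printf( "iter %d  f = %.2f\n", i, x * x + y * y );
>>>     }
>>>     return 0;
>>>   }
>>>
>>> The very first move descends the local gradient, yet it lands at
>>> f = 2.25, above the starting value of 1.0: the iterate has jumped
>>> over the minimum to the opposite wall, and it keeps bouncing between
>>> the two walls on every subsequent step.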
>>>
>>> You may want to reduce the step length of your optimization in order
>>> to make sure that successive steps decrease the cost function
>>> monotonically.
>>>
>>> Note that the criteria for stopping the optimizer are:
>>>
>>> 1) The step length reaches a minimum value (user-selected).
>>>    The step length is divided by 2 each time an increase in the
>>>    cost function is found (the direction of advance is also
>>>    reversed).
>>>
>>> 2) The magnitude of the gradient falls below a minimum value
>>>    (also user-selected).
>>>
>>> 3) The maximum number of iterations is reached (the user selects
>>>    the maximum number of iterations).
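>>>
>>> For reference, the three criteria map onto setters like these (the
>>> numeric values are only placeholders):
>>>
>>>   #include "itkRegularStepGradientDescentOptimizer.h"
>>>
>>>   int main()
>>>   {
>>>     typedef itk::RegularStepGradientDescentOptimizer OptimizerType;
>>>     OptimizerType::Pointer optimizer = OptimizerType::New();
>>>
>>>     optimizer->SetMaximumStepLength( 4.00 );           // initial step length
>>>     optimizer->SetMinimumStepLength( 0.01 );           // criterion 1
>>>     optimizer->SetGradientMagnitudeTolerance( 1e-4 );  // criterion 2
>>>     optimizer->SetNumberOfIterations( 200 );           // criterion 3
>>>     return 0;
>>>   }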
>>>
>>>
>>> Regards,
>>>
>>>
>>>    Luis
>>>
>>>
>>>
>>> -------------------------------
>>> Raghavendra Chandrashekara wrote:
>>>
>>>> Dear All,
>>>>
>>>> I've written a simple program which registers a sphere to a cube 
>>>> using the itk::MeanSquaresImageToImageMetric, 
>>>> itk::RegularStepGradientDescentOptimizer, and the 
>>>> itk::BSplineDeformableTransform classes. At the end of each 
>>>> iteration I've printed out the value of the metric and these are the 
>>>> values I get:
>>>>
>>>> Iteration 0 = 12499.2
>>>> Iteration 1 = 10638.3
>>>> Iteration 2 = 9153.67
>>>> Iteration 3 = 7949.65
>>>> Iteration 4 = 7119.97
>>>> Iteration 5 = 6549.05
>>>> Iteration 6 = 6009.38
>>>> Iteration 7 = 5607.8
>>>> Iteration 8 = 5330.65
>>>> Iteration 9 = 4985.47
>>>> Iteration 10 = 4669.74
>>>> Iteration 11 = 4395.71
>>>> Iteration 12 = 4067.56
>>>> Iteration 13 = 3753.94
>>>> Iteration 14 = 3608.51
>>>> Iteration 15 = 3327.5
>>>> Iteration 16 = 3067.84
>>>> Iteration 17 = 2744.56
>>>> Iteration 18 = 2453.56
>>>> Iteration 19 = 2269.56
>>>> Iteration 20 = 2045.05
>>>> Iteration 21 = 1795.79
>>>> Iteration 22 = 1685.94
>>>> Iteration 23 = 1544.54
>>>> Iteration 24 = 1510.56
>>>> Iteration 25 = 1377.79
>>>> Iteration 26 = 1465.86
>>>> Iteration 27 = 1566.17
>>>> Iteration 28 = 1198.16
>>>> Iteration 29 = 1420.57
>>>> Iteration 30 = 1043.68
>>>> Iteration 31 = 1392.55
>>>> Iteration 32 = 1011.74
>>>> Iteration 33 = 1285.96
>>>> Iteration 34 = 897.961
>>>> Iteration 35 = 1304.93
>>>> Iteration 36 = 1022.35
>>>> Iteration 37 = 946.78
>>>> Iteration 38 = 918.44
>>>> Iteration 39 = 950.961
>>>> Iteration 40 = 915.875
>>>> Iteration 41 = 964.306
>>>> Iteration 42 = 807.392
>>>> Iteration 43 = 951.763
>>>> Iteration 44 = 731.597
>>>> Iteration 45 = 824.336
>>>> Iteration 46 = 828.187
>>>> Iteration 47 = 792.121
>>>> Iteration 48 = 781.585
>>>> Iteration 49 = 808.127
>>>>
>>>> Everything seems okay until iteration 26, when the metric value 
>>>> increases. Shouldn't the optimizer stop the registration at this 
>>>> point, since it can't improve the metric anymore?
>>>>
>>>> Thanks,
>>>>
>>>> Raghavendra
>>>>
>
> _______________________________________________
> Insight-users mailing list
> Insight-users at itk.org
> http://www.itk.org/mailman/listinfo/insight-users