[Insight-developers] Re: itk optimizers : VNL optimizers & Iteration updates

Luis Ibanez luis.ibanez at kitware.com
Wed Mar 30 17:31:29 EST 2005



Hi Stefan,


You are right,

We should also call the ReportIteration() method from gradf()
and from compute().

The only concern is that optimizers which invoke both the f()
and the gradf() methods will generate double updates... but...
in any case, we are already sending more updates than necessary,
so we can probably live with this as a 'feature' of how the
iteration reporting is done.

Your suggestion about adding more events is the right solution for
dealing with this potential conflict, so we just added such new
events in the itkEventObject.h file. We changed the naming a bit
in order to follow the style of the other events.

As you correctly pointed out, users can still catch all of the
iteration events if they set their Command observers to listen
for IterationEvent(). They can also listen only for the more
specific FunctionEvaluationIterationEvent() and so on...


The ReportIteration() method now takes an Event as an argument
so you can specify from where the method is being called.  The
invocation was added to both gradf() and compute().


The tests in Numerics are still passing   :-)


If you have a chance, please update your CVS checkout, give the
new methods a try, and let us know if you find any problems.


Thanks a lot for your useful suggestions!


    Luis



-----------------
Stefan Klein wrote:

> Hi Luis,
> 
> Thanks for your answer and the modifications! It's indeed a kind of 
> hack, but it looks like a very good way to solve the problem!
> 
> I'm afraid though that it will not work with the LBFGS optimizer. This 
> optimizer seems to call only the compute(x,f,g) method, which means that 
> the iteration event will never be generated. Would it be an idea to 
> generate an IterationEvent in all three methods?:
> 
> f( const InternalParametersType & inparameters )
> 
> gradf(  const InternalParametersType   & inparameters,
>           InternalDerivativeType   & gradient       )
> 
> compute( const InternalParametersType   & x,
>            InternalMeasureType      * f,
>            InternalDerivativeType   * g   )
> 
> Also, for the ConjugateGradientOptimizer (which calls all three 
> methods) you can then be sure that an IterationEvent is generated 
> after every function or gradient evaluation.
> 
> What would be nice then, is to know which method generated the 
> IterationEvent. With this information users could for example decide to 
> ignore function evaluations, and only consider gradient evaluations as 
> an iteration. Or they may only print the CachedDerivative if it actually 
> was just updated. You may implement this by defining three extra itkEvents:
> 
> itkEventMacro( IterationEventFunctionEvaluation, IterationEvent );
> itkEventMacro( IterationEventGradientEvaluation, IterationEvent );
> itkEventMacro( IterationEventFunctionAndGradientEvaluation, 
> IterationEvent );
> 
> and generate the appropriate one depending on which function (f, gradf, 
> or compute) was called.
> 
> Any users that want to react to all events in the same way can still 
> add an observer for IterationEvent. Because the three newly defined 
> events all inherit from IterationEvent, they will all trigger the 
> observer, right?
> 
> Please let me know what you think about it!
> 
> Thanks again!
> Stefan.
> 
> 
> 
> 
> 
> 
> At 14:39 26/03/05, Luis Ibanez wrote:
> 
> 
>> Hi Stefan,
>>
>> Thanks for your interest in these modifications.
>>
>> Your question is right on point. In fact it seems that I forgot
>> to introduce the call for ReportIteration() in the f() method of the
>> Vnl cost function adaptor.  I just fixed this in the cvs repository.
>>
>> Note that this is still somewhat of a hack, since we are going to get
>> Events per *evaluation* of the metric instead of per *iteration* of
>> the optimizer. For example, an optimizer that computes derivatives
>> by using finite differences on an AffineTransform in 3D will report
>> something like 30 Evaluations of the Metric per Iteration of the
>> optimizer. It also has the drawback that you will not know for certain
>> which one of those evaluations is the one that the optimizer takes
>> to the next step.  So any plotting of these metric values is going
>> to be noisier than what you would get if we had the option of reporting
>> iterations directly from the VNL optimizers.
>>
>> Your suggestion of caching the Value and Derivatives of the Metric
>> so it can be read by the IterationCommand is excellent. I added
>> therefore the methods
>>
>>     GetCachedValue()
>>     GetCachedDerivative()
>>     GetCachedCurrentPosition()
>>
>> to the optimizer adaptor.
>>
>> and the methods
>>
>>     GetCachedValue()
>>     GetCachedDerivative()
>>     GetCachedCurrentParameters()
>>
>> to the cost function adaptor. Note the use of "Position" versus
>> "Parameters": this is to maintain consistency with the existing
>> method "GetCurrentPosition()" in the optimizers.
>>
>> The Examples ImageRegistration10 and ImageRegistration16 were modified
>> in order to take advantage of these new iteration updates.
>>
>>
>> If you have a chance, please update your CVS checkout, and let me
>> know if the current implementation works fine or whether it requires
>> some modifications.
>>
>>
>>   Thanks a lot for your feedback,
>>
>>
>>      Luis
>>
>>
>>
>>
>> --------------------
>> Stefan Klein wrote:
>>
>>> Hi Luis,
>>> Watching the itk-cvs repository changes I noticed that you are 
>>> currently working on iteration reports in the vnl-based 
>>> ITK-optimizers. This is good news, since I was just thinking of 
>>> hacking them myself, in order to make them produce IterationEvents! 
>>> For sure my hacks would have been ugly, since I want to avoid 
>>> changing the ITK-classes directly. So, your timing is perfect! :)
>>> My first question is: when and from where will the ReportIteration 
>>> method be called? Will it be called after every evaluation of the 
>>> function value or its derivative (or both)? Or am I missing something, 
>>> and is the ReportIteration method already invoked somewhere? I 
>>> couldn't find in the code where that would be.
>>> Then, my next question is: will it be possible to read the current 
>>> position, function value and, if computed, the derivative? In my 
>>> IterationCommand I would like to store the intermediate positions and 
>>> give feedback about the function value.
>>> Last question: When will this be implemented completely? (not meant 
>>> to hurry you, just to know if I should make a fast workaround 
>>> myself, or if I'd better wait for your implementation)
>>> Sorry for bothering you with these questions!
>>> Stefan.
>>>
>>
>>
> 




