ITK  4.1.0
Insight Segmentation and Registration Toolkit
itk::GradientDescentOptimizerv4 Class Reference

#include <itkGradientDescentOptimizerv4.h>

Inheritance diagram for itk::GradientDescentOptimizerv4.
Collaboration diagram for itk::GradientDescentOptimizerv4.


Public Types

typedef SmartPointer< const Self > ConstPointer
typedef itk::Function::WindowConvergenceMonitoringFunction< double > ConvergenceMonitoringType
typedef Superclass::DerivativeType DerivativeType
typedef Superclass::InternalComputationValueType InternalComputationValueType
typedef Superclass::MeasureType MeasureType
typedef SmartPointer< Self > Pointer
typedef GradientDescentOptimizerv4 Self
typedef GradientDescentOptimizerBasev4 Superclass

Public Member Functions

virtual ::itk::LightObject::Pointer CreateAnother (void) const
virtual const InternalComputationValueType & GetLearningRate ()
virtual const char * GetNameOfClass () const
virtual void ResumeOptimization ()
virtual void SetConvergenceWindowSize (SizeValueType _arg)
virtual void SetLearningRate (InternalComputationValueType _arg)
virtual void SetMaximumStepSizeInPhysicalUnits (InternalComputationValueType _arg)
virtual void SetMinimumConvergenceValue (InternalComputationValueType _arg)
virtual void SetScalesEstimator (OptimizerParameterScalesEstimator *_arg)
virtual void StartOptimization ()
virtual void SetDoEstimateScales (bool _arg)
virtual const bool & GetDoEstimateScales ()
virtual void DoEstimateScalesOn ()
virtual void DoEstimateScalesOff ()
virtual void SetDoEstimateLearningRateAtEachIteration (bool _arg)
virtual const bool & GetDoEstimateLearningRateAtEachIteration ()
virtual void DoEstimateLearningRateAtEachIterationOn ()
virtual void DoEstimateLearningRateAtEachIterationOff ()
virtual void SetDoEstimateLearningRateOnce (bool _arg)
virtual const bool & GetDoEstimateLearningRateOnce ()
virtual void DoEstimateLearningRateOnceOn ()
virtual void DoEstimateLearningRateOnceOff ()

Static Public Member Functions

static Pointer New ()

Protected Member Functions

virtual void AdvanceOneStep (void)
virtual void EstimateLearningRate ()
 GradientDescentOptimizerv4 ()
virtual void PrintSelf (std::ostream &os, Indent indent) const
virtual ~GradientDescentOptimizerv4 ()
virtual void ModifyGradientByScalesOverSubRange (const IndexRangeType &subrange)
virtual void ModifyGradientByLearningRateOverSubRange (const IndexRangeType &subrange)

Protected Attributes

ConvergenceMonitoringType::Pointer m_ConvergenceMonitoring
SizeValueType m_ConvergenceWindowSize
InternalComputationValueType m_LearningRate
InternalComputationValueType m_MaximumStepSizeInPhysicalUnits
InternalComputationValueType m_MinimumConvergenceValue
OptimizerParameterScalesEstimator::Pointer m_ScalesEstimator

Private Member Functions

 GradientDescentOptimizerv4 (const Self &)
void operator= (const Self &)

Private Attributes

bool m_DoEstimateLearningRateAtEachIteration
bool m_DoEstimateLearningRateOnce
bool m_DoEstimateScales

Detailed Description

Gradient descent optimizer.

GradientDescentOptimizerv4 implements a simple gradient descent optimizer. At each iteration the current position is updated according to

\[ p_{n+1} = p_n + \mbox{learningRate} \, \frac{\partial f(p_n) }{\partial p_n} \]
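In plain C++ the update amounts to the following loop (a conceptual sketch only; the real AdvanceOneStep() also applies parameter scales and delegates the transform update to the metric, as described in the Note below):

    #include <cstddef>
    #include <vector>

    // Conceptual gradient descent update: p <- p + learningRate * dF/dp.
    // Scales handling and threading are omitted for brevity.
    void AdvanceOneStepSketch(std::vector<double> & parameters,
                              const std::vector<double> & gradient,
                              double learningRate)
    {
      for (std::size_t i = 0; i < parameters.size(); ++i)
        {
        parameters[i] += learningRate * gradient[i];
        }
    }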

The user can scale each component of the derivative df/dp in two ways: 1) manually, by setting a scaling vector via SetScales(); or 2) automatically, by assigning a ScalesEstimator via SetScalesEstimator(). When a ScalesEstimator is assigned, the optimizer defaults to estimating scales; this can be changed via SetDoEstimateScales(). The scales are estimated and assigned once, during the call to StartOptimization(), and override any manually assigned scales. A minimal configuration sketch follows.
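In the sketch below, 'nParameters' and the scales-estimator instance 'scalesEstimator' are placeholders assumed to be configured elsewhere; SetScales() and the ScalesType array type are assumed to come from the inherited optimizer base class:

    #include "itkGradientDescentOptimizerv4.h"
    #include "itkOptimizerParameterScalesEstimator.h"

    typedef itk::GradientDescentOptimizerv4 OptimizerType;
    OptimizerType::Pointer optimizer = OptimizerType::New();

    // 1) Manual scales: one entry per transform parameter.
    OptimizerType::ScalesType scales( nParameters );
    scales.Fill( 1.0 );
    optimizer->SetScales( scales );

    // 2) Automatic estimation: overrides the manual scales above.
    //    'scalesEstimator' is any configured subclass of
    //    itk::OptimizerParameterScalesEstimator.
    optimizer->SetScalesEstimator( scalesEstimator );
    optimizer->SetDoEstimateScales( true ); // already the default once an estimator is set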

The learning rate defaults to 1.0 and can be set in two ways: 1) manually, via SetLearningRate(); or 2) automatically, either at each iteration or only at the first iteration, by assigning a ScalesEstimator via SetScalesEstimator(). When a ScalesEstimator is assigned, the optimizer defaults to estimating the learning rate only once, during the first iteration. This behavior can be changed via SetDoEstimateLearningRateAtEachIteration() and SetDoEstimateLearningRateOnce(). For the learning rate to be estimated at each iteration, the user must call SetDoEstimateLearningRateAtEachIteration(true) and SetDoEstimateLearningRateOnce(false). When enabled, the optimizer computes the learning rate such that at each step, each voxel's change in physical space is less than m_MaximumStepSizeInPhysicalUnits:

m_LearningRate = m_MaximumStepSizeInPhysicalUnits / m_ScalesEstimator->EstimateStepScale(scaledGradient)

Here m_MaximumStepSizeInPhysicalUnits defaults to the voxel spacing returned by m_ScalesEstimator->EstimateMaximumStepSize() (typically 1 voxel), and can be set by the user via SetMaximumStepSizeInPhysicalUnits(). When only SetDoEstimateLearningRateOnce is enabled, the voxel change may become greater than m_MaximumStepSizeInPhysicalUnits in later iterations. The sketch below shows how these modes map onto the API.
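Continuing the configuration sketch above (values are illustrative only):

    // Manual learning rate; used as-is when estimation is disabled.
    optimizer->SetLearningRate( 0.25 );

    // With a ScalesEstimator assigned, estimate-once is the default:
    optimizer->SetDoEstimateLearningRateOnce( true );

    // Or re-estimate at every iteration instead:
    optimizer->SetDoEstimateLearningRateAtEachIteration( true );
    optimizer->SetDoEstimateLearningRateOnce( false );

    // Optionally bound each iteration's change in physical space (e.g. mm):
    optimizer->SetMaximumStepSizeInPhysicalUnits( 0.5 );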

Note:
Unlike the previous version of GradientDescentOptimizer, this version does not have a "maximize/minimize" option to modify the effect of the metric derivative. The assigned metric is assumed to return a parameter derivative result that "improves" the optimization when *added* to the current parameters via the metric::UpdateTransformParameters method, after the optimizer applies scales and a learning rate.

Definition at line 79 of file itkGradientDescentOptimizerv4.h.


Member Typedef Documentation

typedef itk::Function::WindowConvergenceMonitoringFunction< double > itk::GradientDescentOptimizerv4::ConvergenceMonitoringType

Type for the convergence checker.

Definition at line 104 of file itkGradientDescentOptimizerv4.h.

typedef Superclass::DerivativeType itk::GradientDescentOptimizerv4::DerivativeType

Derivative type.

Reimplemented from itk::GradientDescentOptimizerBasev4.

Reimplemented in itk::MultiGradientOptimizerv4.

Definition at line 93 of file itkGradientDescentOptimizerv4.h.

typedef Superclass::InternalComputationValueType itk::GradientDescentOptimizerv4::InternalComputationValueType

Internal computation type, for maintaining a desired precision.

Reimplemented from itk::GradientDescentOptimizerBasev4.

Reimplemented in itk::MultiGradientOptimizerv4, and itk::QuasiNewtonOptimizerv4.

Definition at line 100 of file itkGradientDescentOptimizerv4.h.

typedef Superclass::MeasureType itk::GradientDescentOptimizerv4::MeasureType

Measure type of the metric over which this class is templated.

Reimplemented from itk::GradientDescentOptimizerBasev4.

Reimplemented in itk::MultiGradientOptimizerv4.

Definition at line 99 of file itkGradientDescentOptimizerv4.h.

typedef GradientDescentOptimizerv4 itk::GradientDescentOptimizerv4::Self

Standard class typedefs.

Reimplemented from itk::GradientDescentOptimizerBasev4.

Reimplemented in itk::QuasiNewtonOptimizerv4, and itk::MultiGradientOptimizerv4.

Definition at line 84 of file itkGradientDescentOptimizerv4.h.


Constructor & Destructor Documentation

itk::GradientDescentOptimizerv4::GradientDescentOptimizerv4 ( ) [protected]

Default constructor.

virtual itk::GradientDescentOptimizerv4::~GradientDescentOptimizerv4 ( ) [protected, virtual]

Destructor.


Member Function Documentation

virtual void itk::GradientDescentOptimizerv4::AdvanceOneStep ( void  ) [protected, virtual]

Advance one Step following the gradient direction. Includes transform update.

Reimplemented in itk::QuasiNewtonOptimizerv4.

virtual ::itk::LightObject::Pointer itk::GradientDescentOptimizerv4::CreateAnother ( void  ) const [virtual]

Create an object from an instance, potentially deferring to a factory. This method allows you to create an instance of an object that is exactly the same type as the referring object. This is useful in cases where an object has been cast back to a base class.

Reimplemented from itk::Object.

Reimplemented in itk::QuasiNewtonOptimizerv4, and itk::MultiGradientOptimizerv4.

virtual void itk::GradientDescentOptimizerv4::DoEstimateLearningRateAtEachIterationOff ( ) [virtual]

Option to use ScalesEstimator for learning rate estimation at *each* iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is false.

See also:
SetDoEstimateLearningRateOnce()
SetScalesEstimator()

virtual void itk::GradientDescentOptimizerv4::DoEstimateLearningRateAtEachIterationOn ( ) [virtual]

Option to use ScalesEstimator for learning rate estimation at *each* iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is false.

See also:
SetDoEstimateLearningRateOnce()
SetScalesEstimator()

virtual void itk::GradientDescentOptimizerv4::DoEstimateLearningRateOnceOff ( ) [virtual]

Option to use ScalesEstimator for learning rate estimation only *once*, during the first iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is true.

See also:
SetDoEstimateLearningRateAtEachIteration()
SetScalesEstimator()

virtual void itk::GradientDescentOptimizerv4::DoEstimateLearningRateOnceOn ( ) [virtual]

Option to use ScalesEstimator for learning rate estimation only *once*, during the first iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is true.

See also:
SetDoEstimateLearningRateAtEachIteration()
SetScalesEstimator()

virtual void itk::GradientDescentOptimizerv4::DoEstimateScalesOff ( ) [virtual]

Option to use ScalesEstimator for scales estimation. The estimation is performed once at the beginning of optimization, and overrides any scales set using SetScales(). Default is true.

virtual void itk::GradientDescentOptimizerv4::DoEstimateScalesOn ( ) [virtual]

Option to use ScalesEstimator for scales estimation. The estimation is performed once at the beginning of optimization, and overrides any scales set using SetScales(). Default is true.

virtual void itk::GradientDescentOptimizerv4::EstimateLearningRate ( ) [protected, virtual]

Estimate the learning rate

Implements itk::GradientDescentOptimizerBasev4.

virtual const bool & itk::GradientDescentOptimizerv4::GetDoEstimateLearningRateAtEachIteration ( ) [virtual]

Option to use ScalesEstimator for learning rate estimation at *each* iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is false.

See also:
SetDoEstimateLearningRateOnce()
SetScalesEstimator()

virtual const bool & itk::GradientDescentOptimizerv4::GetDoEstimateLearningRateOnce ( ) [virtual]

Option to use ScalesEstimator for learning rate estimation only *once*, during the first iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is true.

See also:
SetDoEstimateLearningRateAtEachIteration()
SetScalesEstimator()

virtual const bool& itk::GradientDescentOptimizerv4::GetDoEstimateScales ( ) [virtual]

Option to use ScalesEstimator for scales estimation. The estimation is performed once at the beginning of optimization, and overrides any scales set using SetScales(). Default is true.

virtual const InternalComputationValueType & itk::GradientDescentOptimizerv4::GetLearningRate ( ) [virtual]

Get the learning rate.

virtual const char* itk::GradientDescentOptimizerv4::GetNameOfClass ( ) const [virtual]

Run-time type information (and related methods).

Reimplemented from itk::GradientDescentOptimizerBasev4.

Reimplemented in itk::QuasiNewtonOptimizerv4, and itk::MultiGradientOptimizerv4.

virtual void itk::GradientDescentOptimizerv4::ModifyGradientByLearningRateOverSubRange ( const IndexRangeType &  subrange ) [protected, virtual]

Modify the gradient by the learning rate over the given index range.

Implements itk::GradientDescentOptimizerBasev4.

virtual void itk::GradientDescentOptimizerv4::ModifyGradientByScalesOverSubRange ( const IndexRangeType &  subrange ) [protected, virtual]

Modify the gradient by the scales over the given index range.

Implements itk::GradientDescentOptimizerBasev4.

static Pointer itk::GradientDescentOptimizerv4::New ( ) [static]

New macro for creation through a SmartPointer.

Reimplemented from itk::Object.

Reimplemented in itk::QuasiNewtonOptimizerv4, and itk::MultiGradientOptimizerv4.

void itk::GradientDescentOptimizerv4::operator= ( const Self &  ) [private]

Declared private and intentionally not implemented, to disallow assignment.

Reimplemented from itk::GradientDescentOptimizerBasev4.

Reimplemented in itk::QuasiNewtonOptimizerv4, and itk::MultiGradientOptimizerv4.

virtual void itk::GradientDescentOptimizerv4::PrintSelf ( std::ostream &  os, Indent  indent ) const [protected, virtual]

Methods invoked by Print() to print information about the object including superclasses. Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes.

Reimplemented from itk::GradientDescentOptimizerBasev4.

Reimplemented in itk::QuasiNewtonOptimizerv4, and itk::MultiGradientOptimizerv4.

virtual void itk::GradientDescentOptimizerv4::ResumeOptimization ( ) [virtual]

Resume the optimization. Can be called after StopOptimization() to resume. The bulk of the optimization work loop is here.

Implements itk::GradientDescentOptimizerBasev4.

Reimplemented in itk::MultiGradientOptimizerv4.
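A typical stop/resume round trip might look like this (a sketch, reusing the 'optimizer' instance from the Detailed Description; StopOptimization() is inherited from the base class):

    optimizer->StartOptimization();   // enters the work loop in ResumeOptimization()
    // ... StopOptimization() invoked, e.g. from an iteration observer ...
    optimizer->ResumeOptimization();  // continues from the current position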

virtual void itk::GradientDescentOptimizerv4::SetConvergenceWindowSize ( SizeValueType  _arg ) [virtual]

Window size for the convergence checker. The convergence checker calculates the convergence value by fitting to a window of the energy (metric value) profile.

The default m_ConvergenceWindowSize is set to 50 to pass all tests. A value of 10 is suggested for less stringent convergence checking.

virtual void itk::GradientDescentOptimizerv4::SetDoEstimateLearningRateAtEachIteration ( bool  _arg ) [virtual]

Option to use ScalesEstimator for learning rate estimation at *each* iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is false.

See also:
SetDoEstimateLearningRateOnce()
SetScalesEstimator()

virtual void itk::GradientDescentOptimizerv4::SetDoEstimateLearningRateOnce ( bool  _arg ) [virtual]

Option to use ScalesEstimator for learning rate estimation only *once*, during the first iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is true.

See also:
SetDoEstimateLearningRateAtEachIteration()
SetScalesEstimator()

virtual void itk::GradientDescentOptimizerv4::SetDoEstimateScales ( bool  _arg) [virtual]

Option to use ScalesEstimator for scales estimation. The estimation is performed once at the beginning of optimization, and overrides any scales set using SetScales(). Default is true.

virtual void itk::GradientDescentOptimizerv4::SetLearningRate ( InternalComputationValueType  _arg ) [virtual]

Set the learning rate.

virtual void itk::GradientDescentOptimizerv4::SetMaximumStepSizeInPhysicalUnits ( InternalComputationValueType  _arg ) [virtual]

Set the maximum step size, in physical space units.

Only relevant when m_ScalesEstimator is set by the user, and automatic learning rate estimation is enabled. See the main documentation.

virtual void itk::GradientDescentOptimizerv4::SetMinimumConvergenceValue ( InternalComputationValueType  _arg ) [virtual]

Minimum convergence value for convergence checking. The convergence checker calculates the convergence value by fitting to a window of the energy profile. When the convergence value falls below this threshold, the optimization is treated as converged.

The default m_MinimumConvergenceValue is set to 1e-8 to pass all tests. A value of 1e-6 is suggested for less stringent convergence checking.
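For example, the less stringent settings suggested here and under SetConvergenceWindowSize() above can be applied together (a sketch, reusing the 'optimizer' instance from the Detailed Description):

    optimizer->SetConvergenceWindowSize( 10 );      // default: 50
    optimizer->SetMinimumConvergenceValue( 1e-6 );  // default: 1e-8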

virtual void itk::GradientDescentOptimizerv4::SetScalesEstimator ( OptimizerParameterScalesEstimator *  _arg ) [virtual]

Set the scales estimator.

A ScalesEstimator is required for the scales and learning rate estimation options to work. See the main documentation.

See also:
SetDoEstimateScales()
SetDoEstimateLearningRateAtEachIteration()
SetDoEstimateLearningRateOnce()

virtual void itk::GradientDescentOptimizerv4::StartOptimization ( ) [virtual]

Start and run the optimization.

Reimplemented from itk::ObjectToObjectOptimizerBase.

Reimplemented in itk::MultiGradientOptimizerv4, and itk::QuasiNewtonOptimizerv4.


Member Data Documentation

ConvergenceMonitoringType::Pointer itk::GradientDescentOptimizerv4::m_ConvergenceMonitoring [protected]

The convergence checker.

Definition at line 240 of file itkGradientDescentOptimizerv4.h.

SizeValueType itk::GradientDescentOptimizerv4::m_ConvergenceWindowSize [protected]

Window size for the convergence checker. The convergence checker calculates the convergence value by fitting to a window of the energy (metric value) profile.

Definition at line 237 of file itkGradientDescentOptimizerv4.h.

bool itk::GradientDescentOptimizerv4::m_DoEstimateLearningRateAtEachIteration [private]

Flag to control use of the ScalesEstimator (if set) for automatic learning rate estimation at *each* iteration.

Definition at line 251 of file itkGradientDescentOptimizerv4.h.

bool itk::GradientDescentOptimizerv4::m_DoEstimateLearningRateOnce [private]

Flag to control use of the ScalesEstimator (if set) for automatic learning rate estimation only *once*, during the first iteration.

Definition at line 256 of file itkGradientDescentOptimizerv4.h.

bool itk::GradientDescentOptimizerv4::m_DoEstimateScales [private]

Flag to control use of the ScalesEstimator (if set) for automatic scale estimation during StartOptimization().

Definition at line 246 of file itkGradientDescentOptimizerv4.h.

InternalComputationValueType itk::GradientDescentOptimizerv4::m_LearningRate [protected]

Manual learning rate to apply. It is overridden by automatic learning rate estimation if enabled. See the main documentation.

Definition at line 206 of file itkGradientDescentOptimizerv4.h.

InternalComputationValueType itk::GradientDescentOptimizerv4::m_MaximumStepSizeInPhysicalUnits [protected]

The maximum step size in physical units, used to restrict learning rates. Only used with automatic learning rate estimation. See the main documentation.

Definition at line 211 of file itkGradientDescentOptimizerv4.h.

InternalComputationValueType itk::GradientDescentOptimizerv4::m_MinimumConvergenceValue [protected]

Minimum convergence value for convergence checking. The convergence checker calculates the convergence value by fitting to a window of the energy profile. When the convergence value falls below a small threshold, such as 1e-8, the optimization is treated as converged.

Definition at line 231 of file itkGradientDescentOptimizerv4.h.

OptimizerParameterScalesEstimator::Pointer itk::GradientDescentOptimizerv4::m_ScalesEstimator [protected]

The scales estimator.

Definition at line 224 of file itkGradientDescentOptimizerv4.h.


The documentation for this class was generated from the following file:

itkGradientDescentOptimizerv4.h