ITK
4.2.0
Insight Segmentation and Registration Toolkit
|
#include <itkGradientDescentOptimizerv4.h>
Static Public Member Functions | |
static Pointer | New () |
Protected Member Functions | |
virtual void | AdvanceOneStep (void) |
GradientDescentOptimizerv4 () | |
virtual void | PrintSelf (std::ostream &os, Indent indent) const |
virtual | ~GradientDescentOptimizerv4 () |
virtual void | ModifyGradientByScalesOverSubRange (const IndexRangeType &subrange) |
virtual void | ModifyGradientByLearningRateOverSubRange (const IndexRangeType &subrange) |
Protected Member Functions inherited from itk::GradientDescentOptimizerBasev4 | |
GradientDescentOptimizerBasev4 () | |
virtual | ~GradientDescentOptimizerBasev4 () |
Protected Member Functions inherited from itk::ObjectToObjectOptimizerBase | |
ObjectToObjectOptimizerBase () | |
virtual | ~ObjectToObjectOptimizerBase () |
Protected Member Functions inherited from itk::Object | |
Object () | |
bool | PrintObservers (std::ostream &os, Indent indent) const |
virtual void | SetTimeStamp (const TimeStamp &time) |
virtual | ~Object () |
Protected Member Functions inherited from itk::LightObject | |
virtual LightObject::Pointer | InternalClone () const |
LightObject () | |
virtual void | PrintHeader (std::ostream &os, Indent indent) const |
virtual void | PrintTrailer (std::ostream &os, Indent indent) const |
virtual | ~LightObject () |
Private Member Functions | |
GradientDescentOptimizerv4 (const Self &) | |
void | operator= (const Self &) |
Private Attributes | |
bool | m_DoEstimateLearningRateAtEachIteration |
bool | m_DoEstimateLearningRateOnce |
bool | m_DoEstimateScales |
Additional Inherited Members | |
Protected Types inherited from itk::GradientDescentOptimizerBasev4 | |
typedef GradientDescentOptimizerBasev4ModifyGradientByScalesThreader::IndexRangeType | IndexRangeType |
Gradient descent optimizer.
GradientDescentOptimizerv4 implements a simple gradient descent optimizer. At each iteration the current position is updated according to
p_{n+1} = p_n + learningRate · ∂f(p_n)/∂p_n
Optionally, the best metric value and matching parameters can be stored and retrieved via GetValue() and GetCurrentPosition(). See SetReturnBestParametersAndValue().
The user can scale each component of df/dp in two ways: 1) manually, by setting a scaling vector using SetScales(); or 2) automatically, by assigning a ScalesEstimator using SetScalesEstimator(). When a ScalesEstimator is assigned, the optimizer is enabled by default to estimate scales; this can be changed via SetDoEstimateScales(). The scales are estimated and assigned once, during the call to StartOptimization(), and override any manually assigned scales.
The learning rate defaults to 1.0, and can be set in two ways: 1) manually, via SetLearningRate(); or 2) automatically, either at each iteration or only at the first iteration, by assigning a ScalesEstimator via SetScalesEstimator(). When a ScalesEstimator is assigned, the optimizer is enabled by default to estimate the learning rate only once, during the first iteration. This behavior can be changed via SetDoEstimateLearningRateAtEveryIteration() and SetDoEstimateLearningRateOnce(). For the learning rate to be estimated at each iteration, the user must call SetDoEstimateLearningRateAtEveryIteration(true) and SetDoEstimateLearningRateOnce(false). When enabled, the optimizer computes the learning rate such that at each step, each voxel's change in physical space will be less than m_MaximumStepSizeInPhysicalUnits:
m_LearningRate = m_MaximumStepSizeInPhysicalUnits / m_ScalesEstimator->EstimateStepScale(scaledGradient)
where m_MaximumStepSizeInPhysicalUnits defaults to the voxel spacing returned by m_ScalesEstimator->EstimateMaximumStepSize() (which is typically 1 voxel), and can be set by the user via SetMaximumStepSizeInPhysicalUnits(). When only SetDoEstimateLearningRateOnce is enabled, the voxel change may become greater than m_MaximumStepSizeInPhysicalUnits in later iterations.
Definition at line 83 of file itkGradientDescentOptimizerv4.h.
typedef SmartPointer< const Self > itk::GradientDescentOptimizerv4::ConstPointer |
Reimplemented from itk::GradientDescentOptimizerBasev4.
Reimplemented in itk::QuasiNewtonOptimizerv4, itk::GradientDescentLineSearchOptimizerv4, itk::ConjugateGradientLineSearchOptimizerv4, and itk::MultiGradientOptimizerv4.
Definition at line 91 of file itkGradientDescentOptimizerv4.h.
typedef itk::Function::WindowConvergenceMonitoringFunction<double> itk::GradientDescentOptimizerv4::ConvergenceMonitoringType |
Type for the convergence checker
Reimplemented in itk::GradientDescentLineSearchOptimizerv4, and itk::ConjugateGradientLineSearchOptimizerv4.
Definition at line 108 of file itkGradientDescentOptimizerv4.h.
typedef Superclass::DerivativeType itk::GradientDescentOptimizerv4::DerivativeType |
Derivative type
Reimplemented from itk::GradientDescentOptimizerBasev4.
Reimplemented in itk::MultiGradientOptimizerv4, itk::GradientDescentLineSearchOptimizerv4, and itk::ConjugateGradientLineSearchOptimizerv4.
Definition at line 97 of file itkGradientDescentOptimizerv4.h.
typedef Superclass::InternalComputationValueType itk::GradientDescentOptimizerv4::InternalComputationValueType |
Internal computation type, for maintaining a desired precision
Reimplemented from itk::GradientDescentOptimizerBasev4.
Reimplemented in itk::MultiGradientOptimizerv4, itk::GradientDescentLineSearchOptimizerv4, itk::QuasiNewtonOptimizerv4, and itk::ConjugateGradientLineSearchOptimizerv4.
Definition at line 104 of file itkGradientDescentOptimizerv4.h.
typedef Superclass::MeasureType itk::GradientDescentOptimizerv4::MeasureType |
Metric type over which this class is templated
Reimplemented from itk::GradientDescentOptimizerBasev4.
Reimplemented in itk::MultiGradientOptimizerv4, itk::GradientDescentLineSearchOptimizerv4, and itk::ConjugateGradientLineSearchOptimizerv4.
Definition at line 103 of file itkGradientDescentOptimizerv4.h.
Reimplemented from itk::GradientDescentOptimizerBasev4.
Reimplemented in itk::QuasiNewtonOptimizerv4, itk::GradientDescentLineSearchOptimizerv4, itk::ConjugateGradientLineSearchOptimizerv4, and itk::MultiGradientOptimizerv4.
Definition at line 90 of file itkGradientDescentOptimizerv4.h.
Standard class typedefs.
Reimplemented from itk::GradientDescentOptimizerBasev4.
Reimplemented in itk::QuasiNewtonOptimizerv4, itk::GradientDescentLineSearchOptimizerv4, itk::ConjugateGradientLineSearchOptimizerv4, and itk::MultiGradientOptimizerv4.
Definition at line 88 of file itkGradientDescentOptimizerv4.h.
Reimplemented from itk::GradientDescentOptimizerBasev4.
Reimplemented in itk::QuasiNewtonOptimizerv4, itk::GradientDescentLineSearchOptimizerv4, itk::ConjugateGradientLineSearchOptimizerv4, and itk::MultiGradientOptimizerv4.
Definition at line 89 of file itkGradientDescentOptimizerv4.h.
|
protected |
Default constructor
|
protectedvirtual |
Destructor
|
private |
|
protectedvirtual |
Advance one step following the gradient direction. Includes the transform update.
Reimplemented in itk::QuasiNewtonOptimizerv4, itk::GradientDescentLineSearchOptimizerv4, and itk::ConjugateGradientLineSearchOptimizerv4.
|
virtual |
Create an object from an instance, potentially deferring to a factory. This method allows you to create an instance of an object that is exactly the same type as the referring object. This is useful in cases where an object has been cast back to a base class.
Reimplemented from itk::Object.
Reimplemented in itk::GradientDescentLineSearchOptimizerv4, itk::QuasiNewtonOptimizerv4, itk::ConjugateGradientLineSearchOptimizerv4, and itk::MultiGradientOptimizerv4.
|
virtual |
Option to use ScalesEstimator for learning rate estimation at each iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is false.
|
virtual |
Option to use ScalesEstimator for learning rate estimation at each iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is false.
|
virtual |
Option to use ScalesEstimator for learning rate estimation only once, during the first iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is true.
|
virtual |
Option to use ScalesEstimator for learning rate estimation only once, during the first iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is true.
|
virtual |
Option to use ScalesEstimator for scales estimation. The estimation is performed once, at the beginning of optimization, and overrides any scales set using SetScales(). Default is true.
|
virtual |
Option to use ScalesEstimator for scales estimation. The estimation is performed once, at the beginning of optimization, and overrides any scales set using SetScales(). Default is true.
|
virtual |
Estimate the learning rate based on the current gradient.
|
virtual |
Get current convergence value
|
virtual |
Option to use ScalesEstimator for learning rate estimation at each iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is false.
|
virtual |
Option to use ScalesEstimator for learning rate estimation only once, during the first iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is true.
|
virtual |
Option to use ScalesEstimator for scales estimation. The estimation is performed once, at the beginning of optimization, and overrides any scales set using SetScales(). Default is true.
|
virtual |
Get the learning rate.
|
virtual |
Get the maximum step size, in physical space units.
|
virtual |
Run-time type information (and related methods).
Reimplemented from itk::GradientDescentOptimizerBasev4.
Reimplemented in itk::QuasiNewtonOptimizerv4, itk::GradientDescentLineSearchOptimizerv4, itk::ConjugateGradientLineSearchOptimizerv4, and itk::MultiGradientOptimizerv4.
|
virtual |
Flag. Set to have the optimizer track and return the best metric value and corresponding best parameters that were calculated during the optimization. This captures the best solution when the optimizer oversteps or oscillates near the end of an optimization. Results are stored in m_CurrentMetricValue and in the assigned metric's parameters, retrievable via optimizer->GetCurrentPosition(). This option requires additional memory to store the best parameters, which can be large when working with high-dimensional transforms such as DisplacementFieldTransform.
|
protectedvirtual |
Modify the gradient over a given index range.
Implements itk::GradientDescentOptimizerBasev4.
|
protectedvirtual |
Modify the gradient over a given index range.
Implements itk::GradientDescentOptimizerBasev4.
|
static |
New macro for creation of an object through a SmartPointer.
Reimplemented from itk::Object.
Reimplemented in itk::GradientDescentLineSearchOptimizerv4, itk::QuasiNewtonOptimizerv4, itk::ConjugateGradientLineSearchOptimizerv4, and itk::MultiGradientOptimizerv4.
|
private |
Mutex lock to protect modification to the reference count
Reimplemented from itk::GradientDescentOptimizerBasev4.
Reimplemented in itk::QuasiNewtonOptimizerv4, itk::MultiGradientOptimizerv4, itk::GradientDescentLineSearchOptimizerv4, and itk::ConjugateGradientLineSearchOptimizerv4.
|
protectedvirtual |
Methods invoked by Print() to print information about the object including superclasses. Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes.
Reimplemented from itk::GradientDescentOptimizerBasev4.
Reimplemented in itk::QuasiNewtonOptimizerv4, itk::MultiGradientOptimizerv4, itk::GradientDescentLineSearchOptimizerv4, and itk::ConjugateGradientLineSearchOptimizerv4.
|
virtual |
Resume optimization. This runs the optimization loop, and allows continuation of a stopped optimization.
Implements itk::GradientDescentOptimizerBasev4.
Reimplemented in itk::MultiGradientOptimizerv4.
|
virtual |
Flag. Set to have the optimizer track and return the best metric value and corresponding best parameters that were calculated during the optimization. This captures the best solution when the optimizer oversteps or oscillates near the end of an optimization. Results are stored in m_CurrentMetricValue and in the assigned metric's parameters, retrievable via optimizer->GetCurrentPosition(). This option requires additional memory to store the best parameters, which can be large when working with high-dimensional transforms such as DisplacementFieldTransform.
|
virtual |
Flag. Set to have the optimizer track and return the best metric value and corresponding best parameters that were calculated during the optimization. This captures the best solution when the optimizer oversteps or oscillates near the end of an optimization. Results are stored in m_CurrentMetricValue and in the assigned metric's parameters, retrievable via optimizer->GetCurrentPosition(). This option requires additional memory to store the best parameters, which can be large when working with high-dimensional transforms such as DisplacementFieldTransform.
|
virtual |
Window size for the convergence checker. The convergence checker calculates convergence value by fitting to a window of the energy (metric value) profile.
The default m_ConvergenceWindowSize is set to 50 to pass all tests. It is suggested to use 10 for less stringent convergence checking.
|
virtual |
Option to use ScalesEstimator for learning rate estimation at each iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is false.
|
virtual |
Option to use ScalesEstimator for learning rate estimation only once, during the first iteration. The estimation overrides the learning rate set by SetLearningRate(). Default is true.
|
virtual |
Option to use ScalesEstimator for scales estimation. The estimation is performed once, at the beginning of optimization, and overrides any scales set using SetScales(). Default is true.
|
virtual |
Set the learning rate.
|
virtual |
Set the maximum step size, in physical space units.
Only relevant when m_ScalesEstimator is set by user, and automatic learning rate estimation is enabled. See main documentation.
|
virtual |
Minimum convergence value for convergence checking. The convergence checker calculates the convergence value by fitting to a window of the energy profile. When the convergence value reaches a small value, the optimization is treated as converged.
The default m_MinimumConvergenceValue is set to 1e-8 to pass all tests. It is suggested to use 1e-6 for less stringent convergence checking.
|
virtual |
Flag. Set to have the optimizer track and return the best metric value and corresponding best parameters that were calculated during the optimization. This captures the best solution when the optimizer oversteps or oscillates near the end of an optimization. Results are stored in m_CurrentMetricValue and in the assigned metric's parameters, retrievable via optimizer->GetCurrentPosition(). This option requires additional memory to store the best parameters, which can be large when working with high-dimensional transforms such as DisplacementFieldTransform.
|
virtual |
Set the scales estimator.
A ScalesEstimator is required for the scales and learning rate estimation options to work. See the main documentation.
|
virtual |
Start and run the optimization
Reimplemented from itk::ObjectToObjectOptimizerBase.
Reimplemented in itk::MultiGradientOptimizerv4, itk::QuasiNewtonOptimizerv4, and itk::ConjugateGradientLineSearchOptimizerv4.
|
virtual |
Stop optimization. The object is left in a state so the optimization can be resumed by calling ResumeOptimization.
Reimplemented from itk::GradientDescentOptimizerBasev4.
Reimplemented in itk::MultiGradientOptimizerv4.
|
protected |
Definition at line 273 of file itkGradientDescentOptimizerv4.h.
|
protected |
The convergence checker.
Definition at line 269 of file itkGradientDescentOptimizerv4.h.
|
protected |
Current convergence value.
Definition at line 266 of file itkGradientDescentOptimizerv4.h.
|
protected |
Window size for the convergence checker. The convergence checker calculates convergence value by fitting to a window of the energy (metric value) profile.
Definition at line 263 of file itkGradientDescentOptimizerv4.h.
|
protected |
Store the best value and related parameters.
Definition at line 272 of file itkGradientDescentOptimizerv4.h.
|
private |
Flag to control use of the ScalesEstimator (if set) for automatic learning step estimation at each iteration.
Definition at line 287 of file itkGradientDescentOptimizerv4.h.
|
private |
Flag to control use of the ScalesEstimator (if set) for automatic learning step estimation only once, during the first iteration.
Definition at line 292 of file itkGradientDescentOptimizerv4.h.
|
private |
Flag to control use of the ScalesEstimator (if set) for automatic scale estimation during StartOptimization()
Definition at line 282 of file itkGradientDescentOptimizerv4.h.
|
protected |
Manual learning rate to apply. It is overridden by automatic learning rate estimation if enabled. See main documentation.
Definition at line 235 of file itkGradientDescentOptimizerv4.h.
|
protected |
The maximum step size in physical units, to restrict learning rates. Only used with automatic learning rate estimation. See main documentation.
Definition at line 240 of file itkGradientDescentOptimizerv4.h.
|
protected |
Minimum convergence value for convergence checking. The convergence checker calculates the convergence value by fitting to a window of the energy profile. When the convergence value reaches a small value, such as 1e-8, the optimization is treated as converged.
Definition at line 257 of file itkGradientDescentOptimizerv4.h.
|
protected |
Flag to control returning of best value and parameters.
Definition at line 276 of file itkGradientDescentOptimizerv4.h.
|
protected |
Definition at line 250 of file itkGradientDescentOptimizerv4.h.