ITK 5.4.0
Insight Toolkit
itk::LBFGS2Optimizerv4Template< TInternalComputationValueType > Class Template Reference

#include <itkLBFGS2Optimizerv4.h>

Detailed Description

template<typename TInternalComputationValueType>
class itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >

Wrapper around the libLBFGS [1] algorithm for use in the ITKv4 registration framework. libLBFGS is a translation of the L-BFGS code by Nocedal [2] and adds the orthantwise limited-memory quasi-Newton method [3] for optimization with an L1 norm on the parameters.

LBFGS is a quasi-Newton method that uses an approximate estimate of the inverse Hessian \( (\nabla^2 f(x))^{-1} \) to scale the gradient step:

\[ x_{n+1} = x_n - s (\nabla^2 f(x_n))^{-1} \nabla f(x_n) \]

with \( s \) the step size.

The inverse Hessian is approximated from the gradients of previous iterations, so only the gradient of the objective function is required.

The step size \( s \) is determined through a line search, which defaults to the approach of Moré and Thuente [6]. This line search finds a step size such that

\[ \lVert \nabla f(x + s (\nabla^2 f(x_n) )^{-1} \nabla f(x) ) \rVert \le \nu \lVert \nabla f(x) \rVert \]

The parameter \(\nu\) is set through SetLineSearchGradientAccuracy() (default 0.9); the companion sufficient-decrease tolerance of the line search is set through SetLineSearchAccuracy() (default 1e-4).

Instead of the Moré-Thuente method, backtracking line searches with three different conditions [7] are available and can be selected through SetLineSearch(), as in the sketch below.
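
For example, the backtracking variant with the strong Wolfe condition (the LINESEARCH_BACKTRACKING_STRONG_WOLFE enumerator documented below) can be selected as follows. This is only a sketch: it assumes an optimizer instance created as in the configuration example further down, and 0.9 is simply the documented default coefficient.

  // Sketch: switch from the default Moré-Thuente search to backtracking with the strong Wolfe condition.
  using LineSearchMethodEnum = itk::LBFGS2Optimizerv4Enums::LineSearchMethod;
  optimizer->SetLineSearch(LineSearchMethodEnum::LINESEARCH_BACKTRACKING_STRONG_WOLFE);
  optimizer->SetWolfeCoefficient(0.9); // should lie between the line-search ftol and 1.0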

The optimization stops when either the gradient satisfies the condition

\[ \lVert \nabla f(x) \rVert \le \epsilon \max(1, \lVert x \rVert) \]

or a maximum number of iterations has been reached. The tolerance \(\epsilon\) is set through SetSolutionAccuracy() (default 1e-5) and the maximum number of iterations is set through SetMaximumIterations() (default 0 = no maximum).
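
For orientation, here is a minimal configuration sketch. It assumes the double-precision alias itk::LBFGS2Optimizerv4 provided by the header and an already configured v4 metric object named metric; the metric and the rest of the registration setup are omitted.

  #include "itkLBFGS2Optimizerv4.h"
  #include <iostream>

  // Sketch: configure and run the optimizer on an existing metric.
  using OptimizerType = itk::LBFGS2Optimizerv4; // double-precision instantiation of this template
  auto optimizer = OptimizerType::New();
  optimizer->SetMetric(metric);                  // "metric" configured elsewhere
  optimizer->SetSolutionAccuracy(1e-5);          // epsilon in ||g|| <= epsilon * max(1, ||x||)
  optimizer->SetMaximumIterations(200);          // 0 (the default) means no iteration limit
  optimizer->SetLineSearchAccuracy(1e-4);        // ftol of the line search
  optimizer->SetLineSearchGradientAccuracy(0.9); // nu (gtol) of the Moré-Thuente line search
  optimizer->StartOptimization();
  std::cout << optimizer->GetStopConditionDescription() << std::endl;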

References:

[1] libLBFGS

[2] NETLIB lbfgs

[3] Galen Andrew and Jianfeng Gao. Scalable training of L1-regularized log-linear models. 24th International Conference on Machine Learning, pp. 33-40, 2007.

[4] Jorge Nocedal. Updating Quasi-Newton Matrices with Limited Storage. Mathematics of Computation, Vol. 35, No. 151, pp. 773-782, 1980.

[5] Dong C. Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming B, Vol. 45, No. 3, pp. 503-528, 1989.

[6] Moré, J. J. and D. J. Thuente. Line Search Algorithms with Guaranteed Sufficient Decrease. ACM Transactions on Mathematical Software, Vol. 20, No. 3, pp. 286-307, 1994.

[7] John E. Dennis and Robert B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Englewood Cliffs, 1983.

Definition at line 165 of file itkLBFGS2Optimizerv4.h.

Inheritance diagram for itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >:
Collaboration diagram for itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >:

Public Types

using ConstPointer = SmartPointer< const Self >
 
using LineSearchMethodEnum = LBFGS2Optimizerv4Enums::LineSearchMethod
 
using MetricType = typename Superclass::MetricType
 
using ParametersType = typename Superclass::ParametersType
 
using Pointer = SmartPointer< Self >
 
using PrecisionType = double
 
using ScalesType = typename Superclass::ScalesType
 
using Self = LBFGS2Optimizerv4Template
 
using Superclass = GradientDescentOptimizerv4Template< TInternalComputationValueType >
 
- Public Types inherited from itk::GradientDescentOptimizerv4Template< TInternalComputationValueType >
using ConstPointer = SmartPointer< const Self >
 
using InternalComputationValueType = TInternalComputationValueType
 
using Pointer = SmartPointer< Self >
 
using Self = GradientDescentOptimizerv4Template
 
using Superclass = GradientDescentOptimizerBasev4Template< TInternalComputationValueType >
 
- Public Types inherited from itk::GradientDescentOptimizerBasev4Template< TInternalComputationValueType >
using ConstPointer = SmartPointer< const Self >
 
using ConvergenceMonitoringType = itk::Function::WindowConvergenceMonitoringFunction< TInternalComputationValueType >
 
using IndexRangeType = ThreadedIndexedContainerPartitioner::IndexRangeType
 
using InternalComputationValueType = TInternalComputationValueType
 
using MetricTypePointer = typename MetricType::Pointer
 
using Pointer = SmartPointer< Self >
 
using Self = GradientDescentOptimizerBasev4Template
 
using Superclass = ObjectToObjectOptimizerBaseTemplate< TInternalComputationValueType >
 
- Public Types inherited from itk::ObjectToObjectOptimizerBaseTemplate< TInternalComputationValueType >
using ConstPointer = SmartPointer< const Self >
 
using DerivativeType = typename MetricType::DerivativeType
 
using MeasureType = typename MetricType::MeasureType
 
using MetricType = ObjectToObjectMetricBaseTemplate< TInternalComputationValueType >
 
using MetricTypePointer = typename MetricType::Pointer
 
using NumberOfParametersType = typename MetricType::NumberOfParametersType
 
using ParametersType = OptimizerParameters< TInternalComputationValueType >
 
using Pointer = SmartPointer< Self >
 
using ScalesEstimatorType = OptimizerParameterScalesEstimatorTemplate< TInternalComputationValueType >
 
using ScalesType = OptimizerParameters< TInternalComputationValueType >
 
using Self = ObjectToObjectOptimizerBaseTemplate
 
using StopConditionDescriptionType = std::ostringstream
 
using StopConditionReturnStringType = std::string
 
using Superclass = Object
 
- Public Types inherited from itk::Object
using ConstPointer = SmartPointer< const Self >
 
using Pointer = SmartPointer< Self >
 
using Self = Object
 
using Superclass = LightObject
 
- Public Types inherited from itk::LightObject
using ConstPointer = SmartPointer< const Self >
 
using Pointer = SmartPointer< Self >
 
using Self = LightObject
 

Public Member Functions

virtual PrecisionType GetCurrentGradientNorm () const
 
virtual PrecisionType GetCurrentNumberOfEvaluations () const
 
virtual PrecisionType GetCurrentParameterNorm () const
 
virtual PrecisionType GetCurrentStepSize () const
 
const char * GetNameOfClass () const override
 
virtual const StopConditionReturnStringType GetStopConditionDescription () const override
 
void ResumeOptimization () override
 
void StartOptimization (bool doOnlyInitialization=false) override
 
void SetHessianApproximationAccuracy (int m)
 
int GetHessianApproximationAccuracy () const
 
void SetSolutionAccuracy (PrecisionType epsilon)
 
PrecisionType GetSolutionAccuracy () const
 
void SetDeltaConvergenceDistance (int nPast)
 
int GetDeltaConvergenceDistance () const
 
void SetDeltaConvergenceTolerance (PrecisionType tol)
 
PrecisionType GetDeltaConvergenceTolerance () const
 
void SetMaximumIterations (int maxIterations)
 
int GetMaximumIterations () const
 
SizeValueType GetNumberOfIterations () const override
 
void SetNumberOfIterations (const SizeValueType _arg) override
 
void SetLineSearch (const LineSearchMethodEnum &linesearch)
 
LineSearchMethodEnum GetLineSearch () const
 
void SetMaximumLineSearchEvaluations (int n)
 
int GetMaximumLineSearchEvaluations () const
 
void SetMinimumLineSearchStep (PrecisionType step)
 
PrecisionType GetMinimumLineSearchStep () const
 
void SetMaximumLineSearchStep (PrecisionType step)
 
PrecisionType GetMaximumLineSearchStep () const
 
void SetLineSearchAccuracy (PrecisionType ftol)
 
PrecisionType GetLineSearchAccuracy () const
 
void SetWolfeCoefficient (PrecisionType wc)
 
PrecisionType GetWolfeCoefficient () const
 
void SetLineSearchGradientAccuracy (PrecisionType gtol)
 
PrecisionType GetLineSearchGradientAccuracy () const
 
void SetMachinePrecisionTolerance (PrecisionType xtol)
 
PrecisionType GetMachinePrecisionTolerance () const
 
void SetOrthantwiseCoefficient (PrecisionType orthant_c)
 
PrecisionType GetOrthantwiseCoefficient () const
 
void SetOrthantwiseStart (int start)
 
int GetOrthantwiseStart () const
 
void SetOrthantwiseEnd (int end)
 
int GetOrthantwiseEnd () const
 
virtual void SetEstimateScalesAtEachIteration (bool _arg)
 
virtual const bool & GetEstimateScalesAtEachIteration () const
 
virtual void EstimateScalesAtEachIterationOn ()
 
- Public Member Functions inherited from itk::GradientDescentOptimizerv4Template< TInternalComputationValueType >
virtual void EstimateLearningRate ()
 
virtual void SetMinimumConvergenceValue (TInternalComputationValueType _arg)
 
void StopOptimization () override
 
virtual void SetLearningRate (TInternalComputationValueType _arg)
 
virtual const TInternalComputationValueType & GetLearningRate () const
 
virtual void SetMaximumStepSizeInPhysicalUnits (TInternalComputationValueType _arg)
 
virtual const TInternalComputationValueType & GetMaximumStepSizeInPhysicalUnits () const
 
virtual void SetDoEstimateLearningRateAtEachIteration (bool _arg)
 
virtual const bool & GetDoEstimateLearningRateAtEachIteration () const
 
virtual void DoEstimateLearningRateAtEachIterationOn ()
 
virtual void SetDoEstimateLearningRateOnce (bool _arg)
 
virtual const bool & GetDoEstimateLearningRateOnce () const
 
virtual void DoEstimateLearningRateOnceOn ()
 
virtual void SetReturnBestParametersAndValue (bool _arg)
 
virtual const bool & GetReturnBestParametersAndValue () const
 
virtual void ReturnBestParametersAndValueOn ()
 
- Public Member Functions inherited from itk::GradientDescentOptimizerBasev4Template< TInternalComputationValueType >
virtual const DerivativeType & GetGradient () const
 
virtual const StopConditionObjectToObjectOptimizerEnum & GetStopCondition () const
 
virtual void ModifyGradientByScales ()
 
virtual void ModifyGradientByLearningRate ()
 
- Public Member Functions inherited from itk::ObjectToObjectOptimizerBaseTemplate< TInternalComputationValueType >
virtual bool CanUseScales () const
 
virtual SizeValueType GetCurrentIteration () const
 
virtual const MeasureType & GetCurrentMetricValue () const
 
virtual const ParametersType & GetCurrentPosition () const
 
const char * GetNameOfClass () const override
 
virtual const ThreadIdType & GetNumberOfWorkUnits () const
 
virtual const ScalesType & GetScales () const
 
virtual const bool & GetScalesAreIdentity () const
 
bool GetScalesInitialized () const
 
virtual const MeasureType & GetValue () const
 
virtual const ScalesType & GetWeights () const
 
virtual const bool & GetWeightsAreIdentity () const
 
virtual void SetNumberOfWorkUnits (ThreadIdType number)
 
virtual void SetScalesEstimator (ScalesEstimatorType *_arg)
 
virtual void SetWeights (ScalesType _arg)
 
virtual void SetMetric (MetricType *_arg)
 
virtual MetricType * GetModifiableMetric ()
 
virtual void SetScales (const ScalesType &scales)
 
virtual void SetDoEstimateScales (bool _arg)
 
virtual const bool & GetDoEstimateScales () const
 
virtual void DoEstimateScalesOn ()
 
- Public Member Functions inherited from itk::Object
unsigned long AddObserver (const EventObject &event, Command *)
 
unsigned long AddObserver (const EventObject &event, Command *) const
 
unsigned long AddObserver (const EventObject &event, std::function< void(const EventObject &)> function) const
 
LightObject::Pointer CreateAnother () const override
 
virtual void DebugOff () const
 
virtual void DebugOn () const
 
Command * GetCommand (unsigned long tag)
 
bool GetDebug () const
 
MetaDataDictionary & GetMetaDataDictionary ()
 
const MetaDataDictionary & GetMetaDataDictionary () const
 
virtual ModifiedTimeType GetMTime () const
 
virtual const TimeStamp & GetTimeStamp () const
 
bool HasObserver (const EventObject &event) const
 
void InvokeEvent (const EventObject &)
 
void InvokeEvent (const EventObject &) const
 
virtual void Modified () const
 
void Register () const override
 
void RemoveAllObservers ()
 
void RemoveObserver (unsigned long tag)
 
void SetDebug (bool debugFlag) const
 
void SetReferenceCount (int) override
 
void UnRegister () const noexcept override
 
void SetMetaDataDictionary (const MetaDataDictionary &rhs)
 
void SetMetaDataDictionary (MetaDataDictionary &&rrhs)
 
virtual void SetObjectName (std::string _arg)
 
virtual const std::string & GetObjectName () const
 
- Public Member Functions inherited from itk::LightObject
Pointer Clone () const
 
virtual void Delete ()
 
virtual int GetReferenceCount () const
 
void Print (std::ostream &os, Indent indent=0) const
 

Static Public Member Functions

static Pointer New ()
 
- Static Public Member Functions inherited from itk::GradientDescentOptimizerv4Template< TInternalComputationValueType >
static Pointer New ()
 
- Static Public Member Functions inherited from itk::Object
static bool GetGlobalWarningDisplay ()
 
static void GlobalWarningDisplayOff ()
 
static void GlobalWarningDisplayOn ()
 
static Pointer New ()
 
static void SetGlobalWarningDisplay (bool val)
 
- Static Public Member Functions inherited from itk::LightObject
static void BreakOnError ()
 
static Pointer New ()
 

Protected Member Functions

PrecisionType EvaluateCost (const PrecisionType *x, PrecisionType *g, const int n, const PrecisionType step)
 
 LBFGS2Optimizerv4Template ()
 
void PrintSelf (std::ostream &os, Indent indent) const override
 
int UpdateProgress (const PrecisionType *x, const PrecisionType *g, const PrecisionType fx, const PrecisionType xnorm, const PrecisionType gnorm, const PrecisionType step, int n, int k, int ls)
 
 ~LBFGS2Optimizerv4Template () override
 
- Protected Member Functions inherited from itk::GradientDescentOptimizerv4Template< TInternalComputationValueType >
 GradientDescentOptimizerv4Template ()
 
void ModifyGradientByLearningRateOverSubRange (const IndexRangeType &subrange) override
 
void ModifyGradientByScalesOverSubRange (const IndexRangeType &subrange) override
 
 ~GradientDescentOptimizerv4Template () override=default
 
- Protected Member Functions inherited from itk::GradientDescentOptimizerBasev4Template< TInternalComputationValueType >
 GradientDescentOptimizerBasev4Template ()
 
 ~GradientDescentOptimizerBasev4Template () override=default
 
- Protected Member Functions inherited from itk::ObjectToObjectOptimizerBaseTemplate< TInternalComputationValueType >
void PrintSelf (std::ostream &os, Indent indent) const override
 
 ObjectToObjectOptimizerBaseTemplate ()
 
 ~ObjectToObjectOptimizerBaseTemplate () override
 
- Protected Member Functions inherited from itk::Object
 Object ()
 
bool PrintObservers (std::ostream &os, Indent indent) const
 
virtual void SetTimeStamp (const TimeStamp &timeStamp)
 
 ~Object () override
 
- Protected Member Functions inherited from itk::LightObject
virtual LightObject::Pointer InternalClone () const
 
 LightObject ()
 
virtual void PrintHeader (std::ostream &os, Indent indent) const
 
virtual void PrintTrailer (std::ostream &os, Indent indent) const
 
virtual ~LightObject ()
 

Static Protected Member Functions

static PrecisionType EvaluateCostCallback (void *instance, const PrecisionType *x, PrecisionType *g, const int n, const PrecisionType step)
 
static int UpdateProgressCallback (void *instance, const PrecisionType *x, const PrecisionType *g, const PrecisionType fx, const PrecisionType xnorm, const PrecisionType gnorm, const PrecisionType step, int n, int k, int ls)
 

Private Member Functions

void AdvanceOneStep () override
 
void SetMinimumConvergenceValue (PrecisionType) override
 
void SetConvergenceWindowSize (SizeValueType) override
 
const PrecisionType & GetConvergenceValue () const override
 

Private Attributes

double m_CurrentGradientNorm {}
 
int m_CurrentNumberOfEvaluations {}
 
double m_CurrentParameterNorm {}
 
double m_CurrentStepSize {}
 
bool m_EstimateScalesAtEachIteration {}
 
lbfgs_parameter_t m_Parameters {}
 
int m_StatusCode {}
 

Additional Inherited Members

- Protected Attributes inherited from itk::GradientDescentOptimizerv4Template< TInternalComputationValueType >
ParametersType m_BestParameters {}
 
TInternalComputationValueType m_ConvergenceValue {}
 
MeasureType m_CurrentBestValue {}
 
TInternalComputationValueType m_LearningRate {}
 
TInternalComputationValueType m_MinimumConvergenceValue {}
 
DerivativeType m_PreviousGradient {}
 
bool m_ReturnBestParametersAndValue { false }
 
- Protected Attributes inherited from itk::GradientDescentOptimizerBasev4Template< TInternalComputationValueType >
ConvergenceMonitoringType::Pointer m_ConvergenceMonitoring {}
 
SizeValueType m_ConvergenceWindowSize {}
 
bool m_DoEstimateLearningRateAtEachIteration {}
 
bool m_DoEstimateLearningRateOnce {}
 
DerivativeType m_Gradient {}
 
TInternalComputationValueType m_MaximumStepSizeInPhysicalUnits {}
 
DomainThreader< ThreadedIndexedContainerPartitioner, Self >::Pointer m_ModifyGradientByLearningRateThreader {}
 
DomainThreader< ThreadedIndexedContainerPartitioner, Self >::Pointer m_ModifyGradientByScalesThreader {}
 
bool m_Stop { false }
 
StopConditionObjectToObjectOptimizerEnum m_StopCondition {}
 
StopConditionDescriptionType m_StopConditionDescription {}
 
bool m_UseConvergenceMonitoring {}
 
- Protected Attributes inherited from itk::ObjectToObjectOptimizerBaseTemplate< TInternalComputationValueType >
SizeValueType m_CurrentIteration {}
 
MeasureType m_CurrentMetricValue {}
 
bool m_DoEstimateScales {}
 
MetricTypePointer m_Metric {}
 
SizeValueType m_NumberOfIterations {}
 
ThreadIdType m_NumberOfWorkUnits {}
 
ScalesType m_Scales {}
 
bool m_ScalesAreIdentity {}
 
ScalesEstimatorType::Pointer m_ScalesEstimator {}
 
ScalesType m_Weights {}
 
bool m_WeightsAreIdentity {}
 
- Protected Attributes inherited from itk::LightObject
std::atomic< int > m_ReferenceCount {}
 

Member Typedef Documentation

◆ ConstPointer

template<typename TInternalComputationValueType >
using itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::ConstPointer = SmartPointer<const Self>

Definition at line 200 of file itkLBFGS2Optimizerv4.h.

◆ LineSearchMethodEnum

template<typename TInternalComputationValueType >
using itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::LineSearchMethodEnum = LBFGS2Optimizerv4Enums::LineSearchMethod

Definition at line 173 of file itkLBFGS2Optimizerv4.h.

◆ MetricType

template<typename TInternalComputationValueType >
using itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::MetricType = typename Superclass::MetricType

Definition at line 202 of file itkLBFGS2Optimizerv4.h.

◆ ParametersType

template<typename TInternalComputationValueType >
using itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::ParametersType = typename Superclass::ParametersType

Definition at line 203 of file itkLBFGS2Optimizerv4.h.

◆ Pointer

template<typename TInternalComputationValueType >
using itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::Pointer = SmartPointer<Self>

Definition at line 199 of file itkLBFGS2Optimizerv4.h.

◆ PrecisionType

template<typename TInternalComputationValueType >
using itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::PrecisionType = double

TODO: currently only double is used in lbfgs; we need to figure out how to make it a template parameter and set the required define so that lbfgs.h uses the correct version.

Definition at line 192 of file itkLBFGS2Optimizerv4.h.

◆ ScalesType

template<typename TInternalComputationValueType >
using itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::ScalesType = typename Superclass::ScalesType

Definition at line 204 of file itkLBFGS2Optimizerv4.h.

◆ Self

template<typename TInternalComputationValueType >
using itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::Self = LBFGS2Optimizerv4Template

Standard "Self" type alias.

Definition at line 197 of file itkLBFGS2Optimizerv4.h.

◆ Superclass

template<typename TInternalComputationValueType >
using itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::Superclass = GradientDescentOptimizerv4Template<TInternalComputationValueType>

Definition at line 198 of file itkLBFGS2Optimizerv4.h.

Constructor & Destructor Documentation

◆ LBFGS2Optimizerv4Template()

template<typename TInternalComputationValueType >
itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::LBFGS2Optimizerv4Template ( )
protected

◆ ~LBFGS2Optimizerv4Template()

template<typename TInternalComputationValueType >
itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::~LBFGS2Optimizerv4Template ( )
overrideprotected

Member Function Documentation

◆ AdvanceOneStep()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::AdvanceOneStep ( )
inlineoverrideprivatevirtual

Advance one step following the gradient direction. Includes transform update.

Reimplemented from itk::GradientDescentOptimizerv4Template< TInternalComputationValueType >.

Definition at line 567 of file itkLBFGS2Optimizerv4.h.

◆ EstimateScalesAtEachIterationOn()

template<typename TInternalComputationValueType >
virtual void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::EstimateScalesAtEachIterationOn ( )
virtual

Option to use ScalesEstimator for estimating scales at each iteration. The estimation overrides the scales set by SetScales(). Default is true.
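
A sketch of how this option is typically combined with the inherited SetScalesEstimator(). The RegistrationParameterScalesFromPhysicalShift estimator is one common choice, not the only one, and MetricType / metric are assumed to come from the surrounding registration setup.

  #include "itkRegistrationParameterScalesFromPhysicalShift.h"

  // Sketch: re-estimate parameter scales at every iteration instead of using fixed scales.
  using ScalesEstimatorType = itk::RegistrationParameterScalesFromPhysicalShift<MetricType>;
  auto scalesEstimator = ScalesEstimatorType::New();
  scalesEstimator->SetMetric(metric);
  optimizer->SetScalesEstimator(scalesEstimator);
  optimizer->EstimateScalesAtEachIterationOn(); // overrides any scales passed to SetScales()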

◆ EvaluateCost()

template<typename TInternalComputationValueType >
PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::EvaluateCost ( const PrecisionType *  x,
PrecisionType *  g,
const int  n,
const PrecisionType  step 
)
protected

◆ EvaluateCostCallback()

template<typename TInternalComputationValueType >
static PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::EvaluateCostCallback ( void *  instance,
const PrecisionType *  x,
PrecisionType *  g,
const int  n,
const PrecisionType  step 
)
staticprotected

Function evaluation callback from libLBFGS; forwarded to the instance.

◆ GetConvergenceValue()

template<typename TInternalComputationValueType >
const PrecisionType& itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetConvergenceValue ( ) const
inlineoverrideprivatevirtual

Methods specific to itk::GradientDescentOptimizerv4Template that are not supported by this optimizer.

Reimplemented from itk::GradientDescentOptimizerv4Template< TInternalComputationValueType >.

Definition at line 558 of file itkLBFGS2Optimizerv4.h.

◆ GetCurrentGradientNorm()

template<typename TInternalComputationValueType >
virtual PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetCurrentGradientNorm ( ) const
virtual

Get gradient norm of current iteration

◆ GetCurrentNumberOfEvaluations()

template<typename TInternalComputationValueType >
virtual PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetCurrentNumberOfEvaluations ( ) const
virtual

Get number of evaluations for current iteration

◆ GetCurrentParameterNorm()

template<typename TInternalComputationValueType >
virtual PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetCurrentParameterNorm ( ) const
virtual

Get parameter norm of current iteration

◆ GetCurrentStepSize()

template<typename TInternalComputationValueType >
virtual PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetCurrentStepSize ( ) const
virtual

Get step size of current iteration
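
These per-iteration quantities are convenient to log from an observer. Below is a sketch using the std::function AddObserver overload listed above; it assumes the optimizer emits itk::IterationEvent() from its progress callback, which the UpdateProgress() documentation suggests.

  // Sketch: print per-iteration diagnostics ("optimizer" created as in the earlier example).
  optimizer->AddObserver(itk::IterationEvent(),
                         [&optimizer](const itk::EventObject &) {
                           std::cout << " |x| = " << optimizer->GetCurrentParameterNorm()
                                     << " |g| = " << optimizer->GetCurrentGradientNorm()
                                     << " step = " << optimizer->GetCurrentStepSize()
                                     << " evaluations = " << optimizer->GetCurrentNumberOfEvaluations()
                                     << std::endl;
                         });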

◆ GetDeltaConvergenceDistance()

template<typename TInternalComputationValueType >
int itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetDeltaConvergenceDistance ( ) const

Set/Get distance for delta-based convergence test. This parameter determines the distance, in iterations, to compute the rate of decrease of the objective function. If the value of this parameter is zero, the library does not perform the delta-based convergence test. The default value is 0.

◆ GetDeltaConvergenceTolerance()

template<typename TInternalComputationValueType >
PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetDeltaConvergenceTolerance ( ) const

Delta for the convergence test. This parameter determines the minimum rate of decrease of the objective function. The library stops iterating when the following condition is met: \((f' - f) / f < \delta\), where f' is the objective value from the number of iterations ago given by the delta convergence distance, and f is the objective value of the current iteration. The default value is 0.
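
A sketch enabling this test together with the distance parameter above (the values 5 and 1e-6 are illustrative):

  // Sketch: stop once the relative decrease over the last 5 iterations falls below 1e-6.
  optimizer->SetDeltaConvergenceDistance(5);     // compare against the objective value 5 iterations ago
  optimizer->SetDeltaConvergenceTolerance(1e-6); // stop when (f' - f) / f < 1e-6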

◆ GetEstimateScalesAtEachIteration()

template<typename TInternalComputationValueType >
virtual const bool& itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetEstimateScalesAtEachIteration ( ) const
virtual

Option to use ScalesEstimator for estimating scales at each iteration. The estimation overrides the scales set by SetScales(). Default is true.

◆ GetHessianApproximationAccuracy()

template<typename TInternalComputationValueType >
int itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetHessianApproximationAccuracy ( ) const

Set/Get the number of corrections used to approximate the inverse Hessian matrix. The L-BFGS routine stores the computation results of the previous m iterations to approximate the inverse Hessian matrix of the current iteration. This parameter controls the size of the limited memory (number of corrections). The default value is 6. Values less than 3 are not recommended. Large values will result in excessive computing time.

◆ GetLineSearch()

template<typename TInternalComputationValueType >
LineSearchMethodEnum itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetLineSearch ( ) const

The line search algorithm. This parameter specifies the line search algorithm to be used by the L-BFGS routine. See lbfgs.h for the enumeration of line search types. Defaults to the Moré-Thuente method.

◆ GetLineSearchAccuracy()

template<typename TInternalComputationValueType >
PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetLineSearchAccuracy ( ) const

A parameter to control the accuracy of the line search routine. The default value is 1e-4. This parameter should be greater than zero and smaller than 0.5.

◆ GetLineSearchGradientAccuracy()

template<typename TInternalComputationValueType >
PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetLineSearchGradientAccuracy ( ) const

A parameter to control the gradient accuracy of the More-Thuente line search routine. The default value is 0.9. If the function and gradient evaluations are inexpensive with respect to the cost of the iteration (which is sometimes the case when solving very large problems) it may be advantageous to set this parameter to a small value. A typical small value is 0.1. This parameter should be greater than the ftol parameter (1e-4) and smaller than 1.0.

◆ GetMachinePrecisionTolerance()

template<typename TInternalComputationValueType >
PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetMachinePrecisionTolerance ( ) const

The machine precision for floating-point values. This parameter must be a positive value set by a client program to estimate the machine precision. The line search routine will terminate with the status code (LBFGSERR_ROUNDING_ERROR) if the relative width of the interval of uncertainty is less than this parameter.

◆ GetMaximumIterations()

template<typename TInternalComputationValueType >
int itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetMaximumIterations ( ) const

The maximum number of iterations. The lbfgs() function terminates the optimization process with the LBFGSERR_MAXIMUMITERATION status code when the iteration count exceeds this parameter. Setting this parameter to zero continues the optimization process until convergence or an error. The default value is 0.

◆ GetMaximumLineSearchEvaluations()

template<typename TInternalComputationValueType >
int itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetMaximumLineSearchEvaluations ( ) const

The maximum number of trials for the line search. This parameter controls the number of function and gradient evaluations per iteration for the line search routine. The default value is 20.

◆ GetMaximumLineSearchStep()

template<typename TInternalComputationValueType >
PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetMaximumLineSearchStep ( ) const

The maximum step of the line search. The default value is 1e+20. This value need not be modified unless the exponents are too large for the machine being used, or unless the problem is extremely badly scaled (in which case the exponents should be increased).

◆ GetMinimumLineSearchStep()

template<typename TInternalComputationValueType >
PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetMinimumLineSearchStep ( ) const

The minimum step of the line search routine. The default value is 1e-20. This value need not be modified unless the exponents are too large for the machine being used, or unless the problem is extremely badly scaled (in which case the exponents should be increased).

◆ GetNameOfClass()

template<typename TInternalComputationValueType >
const char* itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetNameOfClass ( ) const
overridevirtual

◆ GetNumberOfIterations()

template<typename TInternalComputationValueType >
SizeValueType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetNumberOfIterations ( ) const
inlineoverridevirtual

Aliased to Set/Get MaximumIterations to match base class interface.

Reimplemented from itk::ObjectToObjectOptimizerBaseTemplate< TInternalComputationValueType >.

Definition at line 302 of file itkLBFGS2Optimizerv4.h.

◆ GetOrthantwiseCoefficient()

template<typename TInternalComputationValueType >
PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetOrthantwiseCoefficient ( ) const

Coefficient for the L1 norm of variables. This parameter should be set to zero for standard minimization problems. Setting this parameter to a positive value activates the Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method, which minimizes the objective function F(x) combined with the L1 norm |x| of the variables, \(F(x) + C |x|\). This parameter is the coefficient C of the |x| term. As the L1 norm |x| is not differentiable at zero, the library modifies the function and gradient evaluations from a client program suitably; a client program thus only has to return the function value F(x) and gradient G(x) as usual. The default value is zero.
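
A sketch activating OWL-QN; the coefficient value and start index are illustrative:

  // Sketch: minimize F(x) + 0.01 * |x| (OWL-QN mode); a zero coefficient disables OWL-QN.
  optimizer->SetOrthantwiseCoefficient(0.01);
  // Start the L1 norm at index 1 so the first parameter (e.g. a bias term) is not regularized:
  optimizer->SetOrthantwiseStart(1);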

◆ GetOrthantwiseEnd()

template<typename TInternalComputationValueType >
int itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetOrthantwiseEnd ( ) const

End index for computing the L1 norm of the variables. This parameter is valid only for the OWL-QN method (i.e., \( orthantwise_c != 0 \)). This parameter e (0 < e <= N) specifies the index number at which the library stops computing the L1 norm of the variables x.

◆ GetOrthantwiseStart()

template<typename TInternalComputationValueType >
int itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetOrthantwiseStart ( ) const

Start index for computing the L1 norm of the variables. This parameter is valid only for the OWL-QN method (i.e., \( orthantwise_c != 0 \)). This parameter b (0 <= b < N) specifies the index number from which the library computes the L1 norm of the variables x,

\[ |x| := |x_{b}| + |x_{b+1}| + \cdots + |x_{N}| . \]

In other words, variables \(x_1, \ldots, x_{b-1}\) are not used for computing the L1 norm. By setting b (0 < b < N), one can protect variables \(x_1, \ldots, x_{b-1}\) (e.g., a bias term of logistic regression) from being regularized. The default value is zero.

◆ GetSolutionAccuracy()

template<typename TInternalComputationValueType >
PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetSolutionAccuracy ( ) const

Set/Get epsilon for convergence test. This parameter determines the accuracy with which the solution is to be found. A minimization terminates when \(||g|| < \epsilon * max(1, ||x||)\), where ||.|| denotes the Euclidean (L2) norm. The default value is 1e-5.

◆ GetStopConditionDescription()

template<typename TInternalComputationValueType >
virtual const StopConditionReturnStringType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetStopConditionDescription ( ) const
overridevirtual

Get the reason for termination

Reimplemented from itk::GradientDescentOptimizerBasev4Template< TInternalComputationValueType >.

◆ GetWolfeCoefficient()

template<typename TInternalComputationValueType >
PrecisionType itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::GetWolfeCoefficient ( ) const

A coefficient for the Wolfe condition. This parameter is valid only when the backtracking line-search algorithm is used with the Wolfe condition, LINESEARCH_BACKTRACKING_STRONG_WOLFE or LINESEARCH_BACKTRACKING_WOLFE. The default value is 0.9. This parameter should be greater than the ftol parameter and smaller than 1.0.

◆ New()

template<typename TInternalComputationValueType >
static Pointer itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::New ( )
static

Method for creation through the object factory.

◆ PrintSelf()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::PrintSelf ( std::ostream &  os,
Indent  indent 
) const
overrideprotectedvirtual

Methods invoked by Print() to print information about the object including superclasses. Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes.

Reimplemented from itk::GradientDescentOptimizerv4Template< TInternalComputationValueType >.
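
In client code this is normally reached through Print(); for example:

  // Sketch: dump the optimizer's state, including superclass members, to a stream.
  optimizer->Print(std::cout);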

◆ ResumeOptimization()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::ResumeOptimization ( )
overridevirtual

Resume optimization. This runs the optimization loop and allows continuation of a stopped optimization.

Reimplemented from itk::GradientDescentOptimizerv4Template< TInternalComputationValueType >.

◆ SetConvergenceWindowSize()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetConvergenceWindowSize ( SizeValueType  )
inlineoverrideprivatevirtual

Methods specific to itk::GradientDescentOptimizerv4Template that are not supported by this optimizer.

Reimplemented from itk::GradientDescentOptimizerv4Template< TInternalComputationValueType >.

Definition at line 553 of file itkLBFGS2Optimizerv4.h.

◆ SetDeltaConvergenceDistance()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetDeltaConvergenceDistance ( int  nPast)

Set/Get distance for delta-based convergence test. This parameter determines the distance, in iterations, to compute the rate of decrease of the objective function. If the value of this parameter is zero, the library does not perform the delta-based convergence test. The default value is 0.

◆ SetDeltaConvergenceTolerance()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetDeltaConvergenceTolerance ( PrecisionType  tol)

Delta for the convergence test. This parameter determines the minimum rate of decrease of the objective function. The library stops iterating when the following condition is met: \((f' - f) / f < \delta\), where f' is the objective value from the number of iterations ago given by the delta convergence distance, and f is the objective value of the current iteration. The default value is 0.

◆ SetEstimateScalesAtEachIteration()

template<typename TInternalComputationValueType >
virtual void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetEstimateScalesAtEachIteration ( bool  _arg)
virtual

Option to use ScalesEstimator for estimating scales at each iteration. The estimation overrides the scales set by SetScales(). Default is true.

◆ SetHessianApproximationAccuracy()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetHessianApproximationAccuracy ( int  m)

Set/Get the number of corrections used to approximate the inverse Hessian matrix. The L-BFGS routine stores the computation results of the previous m iterations to approximate the inverse Hessian matrix of the current iteration. This parameter controls the size of the limited memory (number of corrections). The default value is 6. Values less than 3 are not recommended. Large values will result in excessive computing time.

◆ SetLineSearch()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetLineSearch ( const LineSearchMethodEnum &  linesearch)

The line search algorithm. This parameter specifies the line search algorithm to be used by the L-BFGS routine. See lbfgs.h for the enumeration of line search types. Defaults to the Moré-Thuente method.

◆ SetLineSearchAccuracy()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetLineSearchAccuracy ( PrecisionType  ftol)

A parameter to control the accuracy of the line search routine. The default value is 1e-4. This parameter should be greater than zero and smaller than 0.5.

◆ SetLineSearchGradientAccuracy()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetLineSearchGradientAccuracy ( PrecisionType  gtol)

A parameter to control the gradient accuracy of the More-Thuente line search routine. The default value is 0.9. If the function and gradient evaluations are inexpensive with respect to the cost of the iteration (which is sometimes the case when solving very large problems) it may be advantageous to set this parameter to a small value. A typical small value is 0.1. This parameter should be greater than the ftol parameter (1e-4) and smaller than 1.0.

◆ SetMachinePrecisionTolerance()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetMachinePrecisionTolerance ( PrecisionType  xtol)

The machine precision for floating-point values. This parameter must be a positive value set by a client program to estimate the machine precision. The line search routine will terminate with the status code (LBFGSERR_ROUNDING_ERROR) if the relative width of the interval of uncertainty is less than this parameter.

◆ SetMaximumIterations()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetMaximumIterations ( int  maxIterations)

The maximum number of iterations. The lbfgs() function terminates the optimization process with the LBFGSERR_MAXIMUMITERATION status code when the iteration count exceeds this parameter. Setting this parameter to zero continues the optimization process until convergence or an error. The default value is 0.

◆ SetMaximumLineSearchEvaluations()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetMaximumLineSearchEvaluations ( int  n)

The maximum number of trials for the line search. This parameter controls the number of function and gradient evaluations per iteration for the line search routine. The default value is 20.

◆ SetMaximumLineSearchStep()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetMaximumLineSearchStep ( PrecisionType  step)

The maximum step of the line search. The default value is 1e+20. This value need not be modified unless the exponents are too large for the machine being used, or unless the problem is extremely badly scaled (in which case the exponents should be increased).

◆ SetMinimumConvergenceValue()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetMinimumConvergenceValue ( PrecisionType  )
inlineoverrideprivate

Methods specific to itk::GradientDescentOptimizerv4Template that are not supported by this optimizer.

Definition at line 549 of file itkLBFGS2Optimizerv4.h.

◆ SetMinimumLineSearchStep()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetMinimumLineSearchStep ( PrecisionType  step)

The minimum step of the line search routine. The default value is 1e-20. This value need not be modified unless the exponents are too large for the machine being used, or unless the problem is extremely badly scaled (in which case the exponents should be increased).

◆ SetNumberOfIterations()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetNumberOfIterations ( const SizeValueType  _arg)
inlineoverridevirtual

Aliased to Set/Get MaximumIterations to match base class interface.

Reimplemented from itk::ObjectToObjectOptimizerBaseTemplate< TInternalComputationValueType >.

Definition at line 307 of file itkLBFGS2Optimizerv4.h.

◆ SetOrthantwiseCoefficient()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetOrthantwiseCoefficient ( PrecisionType  orthant_c)

Coefficient for the L1 norm of variables. This parameter should be set to zero for standard minimization problems. Setting this parameter to a positive value activates the Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method, which minimizes the objective function F(x) combined with the L1 norm |x| of the variables, \(F(x) + C |x|\). This parameter is the coefficient C of the |x| term. As the L1 norm |x| is not differentiable at zero, the library modifies the function and gradient evaluations from a client program suitably; a client program thus only has to return the function value F(x) and gradient G(x) as usual. The default value is zero.

◆ SetOrthantwiseEnd()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetOrthantwiseEnd ( int  end)

End index for computing the L1 norm of the variables. This parameter is valid only for the OWL-QN method (i.e., \( orthantwise_c != 0 \)). This parameter e (0 < e <= N) specifies the index number at which the library stops computing the L1 norm of the variables x.

◆ SetOrthantwiseStart()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetOrthantwiseStart ( int  start)

Start index for computing the L1 norm of the variables. This parameter is valid only for the OWL-QN method (i.e., \( orthantwise_c != 0 \)). This parameter b (0 <= b < N) specifies the index number from which the library computes the L1 norm of the variables x,

\[ |x| := |x_{b}| + |x_{b+1}| + \cdots + |x_{N}| . \]

In other words, variables \(x_1, \ldots, x_{b-1}\) are not used for computing the L1 norm. By setting b (0 < b < N), one can protect variables \(x_1, \ldots, x_{b-1}\) (e.g., a bias term of logistic regression) from being regularized. The default value is zero.

◆ SetSolutionAccuracy()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetSolutionAccuracy ( PrecisionType  epsilon)

Set/Get epsilon for convergence test. This parameter determines the accuracy with which the solution is to be found. A minimization terminates when \(||g|| < \epsilon * max(1, ||x||)\), where ||.|| denotes the Euclidean (L2) norm. The default value is 1e-5.

◆ SetWolfeCoefficient()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::SetWolfeCoefficient ( PrecisionType  wc)

A coefficient for the Wolfe condition. This parameter is valid only when the backtracking line-search algorithm is used with the Wolfe condition, LINESEARCH_BACKTRACKING_STRONG_WOLFE or LINESEARCH_BACKTRACKING_WOLFE. The default value is 0.9. This parameter should be greater than the ftol parameter and smaller than 1.0.

◆ StartOptimization()

template<typename TInternalComputationValueType >
void itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::StartOptimization ( bool  doOnlyInitialization = false)
overridevirtual

Start optimization with an initial value.

Reimplemented from itk::GradientDescentOptimizerv4Template< TInternalComputationValueType >.

◆ UpdateProgress()

template<typename TInternalComputationValueType >
int itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::UpdateProgress ( const PrecisionType *  x,
const PrecisionType *  g,
const PrecisionType  fx,
const PrecisionType  xnorm,
const PrecisionType  gnorm,
const PrecisionType  step,
int  n,
int  k,
int  ls 
)
protected

Update the progress as reported from libLBFGS and notify itk::Object observers

◆ UpdateProgressCallback()

template<typename TInternalComputationValueType >
static int itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::UpdateProgressCallback ( void *  instance,
const PrecisionType *  x,
const PrecisionType *  g,
const PrecisionType  fx,
const PrecisionType  xnorm,
const PrecisionType  gnorm,
const PrecisionType  step,
int  n,
int  k,
int  ls 
)
staticprotected

Progress callback from libLBFGS; forwarded to the specific instance

Member Data Documentation

◆ m_CurrentGradientNorm

template<typename TInternalComputationValueType >
double itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::m_CurrentGradientNorm {}
private

Definition at line 542 of file itkLBFGS2Optimizerv4.h.

◆ m_CurrentNumberOfEvaluations

template<typename TInternalComputationValueType >
int itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::m_CurrentNumberOfEvaluations {}
private

Definition at line 543 of file itkLBFGS2Optimizerv4.h.

◆ m_CurrentParameterNorm

template<typename TInternalComputationValueType >
double itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::m_CurrentParameterNorm {}
private

Definition at line 541 of file itkLBFGS2Optimizerv4.h.

◆ m_CurrentStepSize

template<typename TInternalComputationValueType >
double itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::m_CurrentStepSize {}
private

Definition at line 540 of file itkLBFGS2Optimizerv4.h.

◆ m_EstimateScalesAtEachIteration

template<typename TInternalComputationValueType >
bool itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::m_EstimateScalesAtEachIteration {}
private

Definition at line 539 of file itkLBFGS2Optimizerv4.h.

◆ m_Parameters

template<typename TInternalComputationValueType >
lbfgs_parameter_t itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::m_Parameters {}
private

Definition at line 537 of file itkLBFGS2Optimizerv4.h.

◆ m_StatusCode

template<typename TInternalComputationValueType >
int itk::LBFGS2Optimizerv4Template< TInternalComputationValueType >::m_StatusCode {}
private

Definition at line 544 of file itkLBFGS2Optimizerv4.h.


The documentation for this class was generated from the following file:
itkLBFGS2Optimizerv4.h