[Insight-users] Re: vnl_powell: three parameter fit (Mathieu Malaterre)

Mathieu Malaterre mathieu.malaterre at gmail.com
Fri May 18 05:47:08 EDT 2007


Hi Luis,

  Thanks again for your comments. I believe I haven't described my
problem properly: I simply need to compute a T1 map from the signal
(original image: a 256x256 matrix). We are using a particular
acquisition protocol over time, and the blood concentration should
follow a particular equation. Thus I only need to run LM over a
particular ROI (small compared to the full 256x256 matrix).

  Anyway, I finally found out how to use the vnl LM implementation
(not as obvious as the one in gnuplot, for instance). It should look
like (*) below.

  As far as I can tell, this is the best optimizer for my particular
equation: there is no linear relation between my parameters, so I
cannot use a Kalman filter, and I know the algebraic form of my
derivatives, so I can express the Jacobian for the descent.
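
  Concretely (writing out the model used in the code in (*)), the
function being fitted and the partial derivatives that make up the
Jacobian are:

```latex
y(t) = a\left(1 - b\,e^{-t/c}\right), \qquad
\frac{\partial y}{\partial a} = 1 - b\,e^{-t/c}, \qquad
\frac{\partial y}{\partial b} = -a\,e^{-t/c}, \qquad
\frac{\partial y}{\partial c} = -\frac{t\,a\,b}{c^{2}}\,e^{-t/c}.
```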

-Mathieu
PS: Thanks to Neil Muller for fixing my code:
http://sourceforge.net/mailarchive/forum.php?thread_name=bf0c3b3f0705160909m18d7126fj778ab6b0f241722a%40mail.gmail.com&forum_name=vxl-users

(*)
// Required headers (assuming a standard vxl install layout):
#include <cmath>
#include <iostream>
#include <vnl/vnl_vector.h>
#include <vnl/vnl_matrix.h>
#include <vnl/vnl_double_3.h>
#include <vnl/vnl_least_squares_function.h>
#include <vnl/algo/vnl_levenberg_marquardt.h>

// Observed signal, sampled at t = 0..7.
static const double signal[] = {
  -13.5942,
  -10.2358,
  -7.54658,
  -5.39323,
  -3.66897,
  -2.28828,
  -1.18272,
  -0.297451,
};

struct vnl_rosenbrock : public vnl_least_squares_function
{
  // (The name is a leftover from vnl's Rosenbrock example; the model
  // fitted here is y(t) = a * (1 - b * exp(-t/c)).)
  vnl_rosenbrock(bool with_grad)
    : vnl_least_squares_function(3, 8, with_grad ? use_gradient : no_gradient) {}

  static const unsigned int nsignals = 8;

  // Model value at time t.
  static double compute(double t, double a, double b, double c) {
    return a * ( 1. - b * exp( -t / c ) );
  }
  // Partial derivatives with respect to a, b and c.
  static double compute_a(double t, double a, double b, double c) {
    return 1. - b * exp( -t / c );
  }
  static double compute_b(double t, double a, double b, double c) {
    return -a * exp( -t / c );
  }
  static double compute_c(double t, double a, double b, double c) {
    return -t * a * b / (c * c) * exp( -t / c );
  }

  // Residuals: model prediction minus observed signal.
  void f(vnl_vector<double> const& x, vnl_vector<double>& y) {
    for (unsigned int i = 0; i < nsignals; ++i) {
      y[i] = compute(i, x(0), x(1), x(2)) - signal[i];
    }
  }

  // Jacobian of the residuals, one row per sample.
  void gradf(vnl_vector<double> const& x, vnl_matrix<double>& J) {
    for (unsigned int i = 0; i < nsignals; ++i) {
      J(i, 0) = compute_a(i, x(0), x(1), x(2));
      J(i, 1) = compute_b(i, x(0), x(1), x(2));
      J(i, 2) = compute_c(i, x(0), x(1), x(2));
    }
  }
};

int main(int, char *[])
{
  vnl_rosenbrock f(true); // true: supply the analytic Jacobian

  // Initial guess for (a, b, c).
  vnl_double_3 x0(1., 1., 1.);
  vnl_levenberg_marquardt lm(f);

  vnl_vector<double> x1 = x0.as_vector();
  if (f.has_gradient())
    lm.minimize_using_gradient(x1);
  else
    lm.minimize_without_gradient(x1);
  lm.diagnose_outcome(std::cout);

  // Distance to the expected solution.
  vnl_double_3 solution(3.26, 5.17, 4.5);
  double err = (x1 - solution).magnitude();
  std::cout << "err = " << err << std::endl;

  return 0;
}



On 5/16/07, Luis Ibanez <luis.ibanez at kitware.com> wrote:
>
> Hi Mathieu,
>
> The Levenberg Marquardt Optimizer in ITK:
>
>      itkLevenbergMarquardtOptimizer
>
> was built on top of the VNL one.
>
>       vnl/algo/vnl_levenberg_marquardt.h
>
>
>
> You can consider this optimizer to be a form of gradient descent
> in which a weighted approach determines how big the step in
> parametric space will be at every iteration.
>
> However,...
> LM may be overkill for what you are doing...
>
>
> If I remember correctly, your original problem is that
> you are trying to adjust an equation of 2 parameters to
> all the pixels values of an image.
>
> If you use LM for this, your "array" will have the same
> number of elements as pixels in the image. That may bring
> some memory issues...
>
> If memory is not a problem, then LM is a suitable mechanism
> for adjusting the parameters of your non-linear equation
> by performing incremental linear steps.
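
[Editor's aside, not part of Luis's message: the weighted step he
describes is the textbook Levenberg-Marquardt update. Each iteration
solves a damped normal-equations system for the step:]

```latex
\left(J^\top J + \lambda\,\operatorname{diag}(J^\top J)\right)\delta
  = -J^\top r,
\qquad x \leftarrow x + \delta
```

Here $r$ is the residual vector, $J$ its Jacobian, and $\lambda$ the
damping factor: large $\lambda$ gives small, gradient-descent-like
steps, while small $\lambda$ gives Gauss-Newton steps.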
>
>
> For an example on the use of this optimizer you may
> want to take a look at:
>
>
>       Insight/Testing/Code/Numerics
>           itkLevenbergMarquardtOptimizerTest.cxx
>
>
>
>     Regards,
>
>
>         Luis
>
>
> -------------------------------
> m.weigert at fz-juelich.de wrote:
> > Hi Mathieu,
> >
> > this matches what I know so far about LM:
> > it is a method for performing least-squares fits.
> > I don't think (I don't know, to be honest) that it can be used for general unconstrained optimization the way
> > Powell or other optimization algorithms can
> > (although there are optimization schemes that seem to combine LM and gradient descent, for example).
> > However, if you can express your target function
> > as a sum of differences, you can use it to fit your model parameters.
> > For example, it should be possible to do image registration with a sum-of-squared-differences target function and LM if you look at the pixelwise differences instead of the sum, and you can
> > introduce further cost terms, since LM is multivalued.
> >
> >
> > Best regards,
> > Markus
> >
> >
> >
> >
> >
> > ----- Original Message -----
> > From: Mathieu Malaterre <mathieu.malaterre at gmail.com>
> > Date: Tuesday, May 15, 2007 3:19 pm
> > Subject: Re: vnl_powell: three parameter fit (Mathieu Malaterre)
> >
> >
> >>Hi Markus,
> >>
> >> Thanks for the comments. I am also currently investigating LM,
> >>as it
> >>seems very promising. Unfortunately I cannot figure out how to use it.
> >>According to:
> >>
> >>http://paine.wiau.man.ac.uk/pub/doc_vxl/books/core/book_6.html#SEC44
> >>
> >> I think the vnl implementation lacks a general interface and only
> >>provides a least-squares API (I need to decompose my function as a
> >>sum of squared terms...). See my post:
> >>
> >>http://sourceforge.net/mailarchive/forum.php?thread_name=bf0c3b3f0705150506q527466a4jc530bf42455f540c%40mail.gmail.com&forum_name=vxl-users
> >>
> >> I am waiting for a vnl guru to confirm that the vnl LM optimizer
> >>cannot do general cost function minimization.
> >>
> >> In that case I will move toward an LBFGS approach. I can compute
> >>the first derivative of my function, and it would be dumb not to use
> >>this information (the Powell method is too generic).
> >>
> >> A final note: I could get Powell to converge to the correct solution
> >>when giving the optimizer a reasonably close initial guess. I was not
> >>able (read: I didn't have the time to dig into Powell theory) to
> >>figure out how good my initial solution needs to be.
> >>
> >>I'll let you know how it goes,
> >>
> >>Thanks for your interest,
> >>-Mathieu
> >>
> >>
> >>On 5/15/07, m.weigert at fz-juelich.de <m.weigert at fz-juelich.de> wrote:
> >>
> >>
> >>>Hi Mathieu,
> >>>
> >>>I think there is also a Levenberg-Marquardt implementation in
> >>>ITK which may be better suited to
> >>>perform non-linear least squares fits.
> >>>I don't know how to do this with vnl_powell; I am currently
> >>>investigating Levenberg-Marquardt for
> >>>parameter fitting.
> >>>
> >>>Kind regards,
> >>>Markus
> >>>
> >>>
> >>
> >>
> >>--
> >>Mathieu
> >>
> >>
> >
> >
> > _______________________________________________
> > Insight-users mailing list
> > Insight-users at itk.org
> > http://www.itk.org/mailman/listinfo/insight-users
> >
>


-- 
Mathieu

