[Insight-users] Re: Using KalmanLinearEstimator

Mathieu Malaterre mathieu.malaterre at gmail.com
Mon May 7 13:02:17 EDT 2007


Ok guys,

  I gave up on Kalman; I really have no clue what is going on or why the
system is so slow to converge to anything decent. I used a different
solution to my linear regression problem: least squares (much more
conventional):


#include <vnl/algo/vnl_lsqr.h>
#include <vnl/vnl_sparse_matrix_linear_system.h>
#include <vnl/vnl_least_squares_function.h>
#include <iostream>

int main(int, char *[])
{
  static const double points[][2] = {
    {-16.6817,167.095},
    {-44.4257,158.118},
    {-68.0884,150.462},
    {-87.7176,144.111},
    {-103.447,139.021},
    {-115.437,135.142},
  };
  const unsigned int N = sizeof(points) / ( 2 * sizeof( double ));
  //---------------------------------------
  // Build the overdetermined system A * [slope intercept]^T = b,
  // with one row [x_i, 1] per point and b_i = y_i.
  vnl_sparse_matrix<double> A(N,2);
  vnl_vector<double> b(N);
  for(unsigned int i = 0; i < N; ++i)
    {
    A(i,0) = points[i][0];
    A(i,1) = 1.0;
    b[i]   = points[i][1];
    }
  // Solve the system in the least-squares sense with LSQR,
  // starting from the zero vector.
  vnl_sparse_matrix_linear_system<double> ls(A,b);
  vnl_vector<double> x(2); x[0]=x[1]=0.0;
  vnl_lsqr lsqr(ls);
  lsqr.minimize(x);

  //lsqr.diagnose_outcome(std::cerr);
  std::cerr << x << std::endl;

  return 0;
}
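
With the six points above, the solver should print a solution close to
[0.323 172.4], i.e. the slope A and intercept B that the Kalman estimator
was expected to converge to.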

2 cents,
-Mathieu


On 5/7/07, Mathieu Malaterre <mathieu.malaterre at gmail.com> wrote:
> As usual, Luis, thanks a bunch for your help. I will look into your suggestion.
>
> Just for reference, I was able to get a correct approximate solution by doing:
>
>   for(unsigned int duplicate = 0; duplicate < 100000; ++duplicate)
>     for(unsigned int ax=0; ax < N; ax++)
>       {
>       predictor(0) = points[ax][0];
>       measure = points[ax][1];
>       //std::cout << predictor(0) << "," << measure << std::endl;
>       filter.UpdateWithNewMeasure(measure,predictor);
>       }
>
> If you plot the values using Excel or gnuplot you can see the linear
> regression is pretty obvious. Anyway, I do not know enough about the Kalman
> algorithm.
>
> Thanks
> -Mathieu
>
> On 5/7/07, Luis Ibanez <luis.ibanez at kitware.com> wrote:
> >
> > Hi Mathieu,
> >
> >
> > 1) Do you have only 6 data points in your real problem?
> >
> >     That may not be enough data to bring the
> >     Kalman Estimator to convergence on a good
> >     estimation.
> >
> >     I was assuming that you had on the order
> >     of hundreds to thousands of values.
> >
> >
> > 2) You may find it useful to print out the estimations
> >     as you add new points. One of the interesting
> >     advantages of the Kalman estimator is that it
> >     provides an estimation using the currently provided
> >     samples.
> >
> >     E.g., move the line:
> >
> >  >  VectorType estimation = filter.GetEstimator();
> >
> >     inside the for loop and print out the value of
> >     the estimation. Then you will see how it gets
> >     close to the [A,B] values.
> >
> >     Note that the estimator is initialized with [0,0]
> >     so it may take a while (many measurements) to shift
> >     the estimation to your real A,B values.
> >
> >     Maybe we should add a method to initialize the
> >     Estimator to a known value. In this way, the
> >     Kalman estimator would focus on refining the
> >     estimation.
> >
> >
> >
> >     Regards,
> >
> >
> >       Luis
> >
> >
> > -------------------------
> > Mathieu Malaterre wrote:
> > > Hello,
> > >
> > > I must be doing something obviously wrong when using the
> > > KalmanLinearEstimator class, because the results are way off. Could
> > > someone please help me find the obvious mistake I must have made.
> > >
> > > thanks !
> > > -Mathieu
> > >
> > > Code is:
> > >
> > > #include "itkKalmanLinearEstimator.h"
> > > #include <iostream>
> > >
> > > int main(int, char* [] )
> > > {
> > >  typedef itk::KalmanLinearEstimator<double,2> KalmanFilterType;
> > >
> > >  typedef KalmanFilterType::VectorType    VectorType;
> > >  typedef KalmanFilterType::MatrixType    MatrixType;
> > >  typedef KalmanFilterType::ValueType     ValueType;
> > >
> > >  KalmanFilterType filter;
> > >
> > >  filter.ClearEstimation();
> > >  filter.SetVariance(1.0);
> > >
> > >  ValueType     measure;
> > >  VectorType    predictor;
> > >
> > >  // A ~ 0.323
> > >  // B ~ 172.4
> > >  const double points[][2] = {
> > >    {-16.6817,167.095},
> > >    {-44.4257,158.118},
> > >    {-68.0884,150.462},
> > >    {-87.7176,144.111},
> > >    {-103.447,139.021},
> > >    {-115.437,135.142},
> > >  };
> > >  const unsigned int N = sizeof(points) / ( 2 * sizeof( double ));
> > >
> > >  predictor(1)  =  1.0;
> > >  for(unsigned int ax=0; ax < N; ax++)
> > >    {
> > >    predictor(0) = points[ax][0];
> > >    measure = points[ax][1];
> > >    //std::cout << predictor(0) << "," << measure << std::endl;
> > >    filter.UpdateWithNewMeasure(measure,predictor);
> > >    }
> > >
> > >  VectorType estimation = filter.GetEstimator();
> > >
> > >  std::cout << std::endl << "The Estimation is : " << std::endl;
> > >  std::cout << estimation;
> > >
> > >  std::cout << std::endl << std::endl;
> > >  return EXIT_SUCCESS;
> > > }
> > >
> > >
> > > On 5/6/07, Luis Ibanez <luis.ibanez at kitware.com> wrote:
> > >
> > >>
> > >> Hi Mathieu,
> > >>
> > >> You may want to try the KalmanLinearEstimator.
> > >>
> > >> http://www.itk.org/Insight/Doxygen/html/classitk_1_1KalmanLinearEstimator.html
> > >>
> > >>
> > >>
> > >> Your estimator will be the vector [a,b],
> > >>
> > >> your measure will be X, and your predictor the vector [1,Y]
> > >>
> > >>
> > >> The Kalman Linear Estimator should converge to the vector [a,b] that
> > >> minimizes the sum of squared errors between X and (a+bY).
> > >>
> > >> For an example, you may want to look at:
> > >>
> > >>
> > >>
> > >>      Insight/Testing/Code/Algorithms/
> > >>           itkKalmanLinearEstimatorTest.cxx
> > >>
> > >>
> > >>
> > >>     Regards,
> > >>
> > >>
> > >>         Luis
> > >>
> > >>
> > >> ---------------------------
> > >> Mathieu Malaterre wrote:
> > >> > Hello,
> > >> >
> > >> >  I have a pretty simple problem to solve and I was wondering if I
> > >> > could reuse any brick already existing in ITK.
> > >> >
> > >> >  I am working with a 4D dataset (x, y, z and theta), where I can
> > >> > express for each pixel that:
> > >> >
> > >> > I(x,y,z)/sin(theta) = F(x,y,z) + b*I(x,y,z)/tan(theta)
> > >> >
> > >> >  My goal here is to find out the b constant; I do not need the
> > >> > F(x,y,z) part. Obviously all I need to do is draw the line
> > >> >
> > >> >   X = a + b Y
> > >> >
> > >> > where X = I(x,y,z)/sin(theta) and Y = I(x,y,z)/tan(theta) and the
> > >> > slope will give me 'b'.
> > >> >
> > >> > Any suggestions on which class to reuse ?
> > >> > thanks !
> > >>
> > >
> > >
> >
>
>
> --
> Mathieu
> Tel: +33 6 32 13 33 40
>
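
PS: for reference, here is a minimal, unverified sketch that combines Luis's
suggestion (printing the running estimate inside the loop) with the
data-duplication workaround above. It only reuses the calls already shown in
this thread; the number of passes and the print interval are arbitrary:

#include "itkKalmanLinearEstimator.h"
#include <iostream>

int main(int, char *[])
{
  typedef itk::KalmanLinearEstimator<double,2> KalmanFilterType;
  typedef KalmanFilterType::VectorType         VectorType;
  typedef KalmanFilterType::ValueType          ValueType;

  KalmanFilterType filter;
  filter.ClearEstimation();
  filter.SetVariance(1.0);

  // Same six (x, y) samples as above; expected fit is y ~ A*x + B
  // with A ~ 0.323 and B ~ 172.4.
  const double points[][2] = {
    {-16.6817,167.095},
    {-44.4257,158.118},
    {-68.0884,150.462},
    {-87.7176,144.111},
    {-103.447,139.021},
    {-115.437,135.142},
  };
  const unsigned int N = sizeof(points) / ( 2 * sizeof( double ));

  VectorType predictor;
  ValueType  measure;
  predictor(1) = 1.0;   // constant term, multiplies the intercept B

  // The estimator starts at [0,0], so feed the same points many times
  // and watch the estimate drift towards [A,B].
  for(unsigned int pass = 0; pass < 1000; ++pass)
    {
    for(unsigned int ax = 0; ax < N; ax++)
      {
      predictor(0) = points[ax][0];
      measure      = points[ax][1];
      filter.UpdateWithNewMeasure(measure,predictor);
      }
    if( pass % 100 == 0 )
      {
      std::cout << "pass " << pass << " : " << filter.GetEstimator() << std::endl;
      }
    }

  std::cout << "Final estimation : " << filter.GetEstimator() << std::endl;
  return 0;
}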


-- 
Mathieu
Tel: +33 6 32 13 33 40

