[ITK-users] Inverse of Versor Rigid Transform and RIRE

Gabriel A. Giménez gabrielgimenez85 at gmail.com
Thu Apr 9 21:17:21 EDT 2015


Hi Matt!

If you are working with a current version of ITK, when you get the
> inverse transform, it should transfer the Center for you. The Center
> point locations are the FixedParameters for the parent class,
> MatrixOffsetTransformBase. The relationship with parent classes can be
> found by examining the Doxygen page for the class [1].


I took a few days to read the RIRE documentation, and I came to the
conclusion that the direction of registration is not really
relevant... taking CT as the fixed image and MR as the moving image, the
transform that ITK provides can be used directly, without having to compute
the inverse.


Do you get the same result by applying the TransformPoint() method of
> the inverse transform? This is the API call that should be applied to
> transform a point.


Yes Matt, I get the same results using the API.

But... I made some changes to my optimizers and got very good results,
like these (with RegularStepGradientDescentOptimizerv4 and
MattesMutualInformationImageToImageMetricv4):

Iterations    = 200
Metric value  = -0.709388
versor X      = 0.0155583
versor Y      = 0.00642035
versor Z      = -0.0487144
Translation X = 7.82977
Translation Y = -60.1034
Translation Z = -23.6258
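(As an aside, in case it helps anyone reading: ITK's VersorRigid3DTransform reports only the vector part (x, y, z) of the unit quaternion, and the scalar part is recovered as w = sqrt(1 - x^2 - y^2 - z^2). A minimal, ITK-free sketch of how the rotation matrix follows from the three versor values above:)

```cpp
#include <array>
#include <cmath>

using Matrix3 = std::array<std::array<double, 3>, 3>;

// Build the rotation matrix from a versor's vector part (x, y, z).
// The scalar part w is recovered from the unit-quaternion constraint
// w^2 + x^2 + y^2 + z^2 = 1.
Matrix3 VersorToMatrix(double x, double y, double z)
{
    const double w = std::sqrt(1.0 - (x * x + y * y + z * z));
    return Matrix3{{
        {1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)},
        {2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)},
        {2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)}
    }};
}
```

For the run above this would be `VersorToMatrix(0.0155583, 0.00642035, -0.0487144)`.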
+---------------+---------------+---------------+---------------+---------------+---------------+
|           X GT|           Y GT|           Z GT|            X R|            Y R|            Z R|
+---------------+---------------+---------------+---------------+---------------+---------------+
|      -7.573100|     -41.253400|     -27.309300|      -7.661395|     -40.915138|     -26.044441|
|     324.872200|     -72.815900|     -32.906300|     324.712907|     -73.345104|     -30.833635|
|      24.160700|     291.039300|     -16.272700|      24.902019|     291.325008|     -15.874605|
|     356.606000|     259.476800|     -21.869700|     357.276322|     258.895042|     -20.663798|
|      -6.055400|     -45.115700|      84.613700|      -6.394922|     -44.465633|      85.892103|
|     326.389900|     -76.678200|      79.016800|     325.979381|     -76.895599|      81.102910|
|      25.678400|     287.176900|      95.650300|      26.168493|     287.774513|      96.061940|
|     358.123700|     255.614500|      90.053400|     358.542796|     255.344547|      91.272747|
+---------------+---------------+---------------+---------------+---------------+---------------+
[X, Y, Z]GT are the "ground truth" values and [X, Y, Z]R are my results.
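(To put a number on the agreement, one can compute the per-point Euclidean distance between ground truth and result, which is essentially the per-corner error RIRE evaluates. A minimal sketch, independent of ITK:)

```cpp
#include <array>
#include <cmath>

using Point3 = std::array<double, 3>;

// Euclidean distance between a ground-truth point and a registered point.
double PointError(const Point3 &gt, const Point3 &r)
{
    double sum = 0.0;
    for (int i = 0; i < 3; ++i) {
        const double d = gt[i] - r[i];
        sum += d * d;
    }
    return std::sqrt(sum);
}
```

For the first corner in the table above this gives roughly 1.31 mm.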

Now, something I find strange is that when I increase the number of
iterations, the improvement in the metric value is very small but the
result gets slightly worse... for example:

Iterations    = 334
Metric value  = -0.710918
versor X      = 0.0216566
versor Y      = 0.00700629
versor Z      = -0.0508766
Translation X = 7.80722
Translation Y = -60.5124
Translation Z = -24.1047

+---------------+---------------+---------------+---------------+---------------+---------------+
|           X GT|           Y GT|           Z GT|            X R|            Y R|            Z R|
+---------------+---------------+---------------+---------------+---------------+---------------+
|      -7.573100|     -41.253400|     -27.309300|      -8.342271|     -39.764911|     -28.121895|
|     324.872200|     -72.815900|     -32.906300|     323.882938|     -73.594962|     -33.530625|
|      24.160700|     291.039300|     -16.272700|      25.690487|     292.179801|     -13.916412|
|     356.606000|     259.476800|     -21.869700|     357.915696|     258.349750|     -19.325143|
|      -6.055400|     -45.115700|      84.613700|      -7.022108|     -44.688304|      83.762051|
|     326.389900|     -76.678200|      79.016800|     325.203101|     -78.518355|      78.353321|
|      25.678400|     287.176900|      95.650300|      27.010650|     287.256408|      97.967534|
|     358.123700|     255.614500|      90.053400|     359.235859|     253.426357|      92.558803|
+---------------+---------------+---------------+---------------+---------------+---------------+

This pattern repeats with other optimizers (like OnePlusOne and a GA
approach that I am implementing). What do you think about it?

Other questions, Matt:

How does multithreading work in the metrics? Is it customizable? Does it
improve performance? Specifically, in the case of Mattes Mutual Information...
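(For context on my own question: my understanding is that ITKv4 metrics split the sample points across threads, each thread accumulating partial results, e.g. contributions to the Mattes joint histogram, that are merged afterwards, and that the thread count can be set globally. The following is only a self-contained illustration of that per-thread partial-accumulator pattern, not ITK's actual implementation; the plain histogram stands in for the joint PDF:)

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Accumulate a histogram over `data` using `numThreads` workers.
// Each worker fills its own partial histogram, so the hot loop needs
// no locking; the partials are merged once all workers have joined.
std::vector<long> ParallelHistogram(const std::vector<int> &data,
                                    int numBins, int numThreads)
{
    std::vector<std::vector<long>> partials(
        numThreads, std::vector<long>(numBins, 0));
    std::vector<std::thread> workers;
    const std::size_t chunk = (data.size() + numThreads - 1) / numThreads;

    for (int t = 0; t < numThreads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = t * chunk;
            const std::size_t end = std::min(begin + chunk, data.size());
            for (std::size_t i = begin; i < end; ++i)
                ++partials[t][data[i]];  // bin index assumed in range
        });
    }
    for (auto &w : workers) w.join();

    // Merge the per-thread partial histograms into the final one.
    std::vector<long> total(numBins, 0);
    for (const auto &p : partials)
        for (int b = 0; b < numBins; ++b)
            total[b] += p[b];
    return total;
}
```

The merge step is cheap compared to the accumulation, which is why this pattern tends to scale well when the number of samples is large.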

I tried using the helper CenteredVersorTransformInitializer... but,
strangely enough, with the transform it generates the optimizers do not
advance at all... Using CenteredTransformInitializer this does not
happen...

Thanks in advance, Matt!
Regards,



2015-04-07 15:03 GMT-04:00 Matt McCormick <matt.mccormick at kitware.com>:

> Hi Gabriel!
>
> > I am using the RIRE project, specifically CT (moving) and MR_PD (fixed)
> > images. Basically, I have a set of points (in millimeters) from the CT
> > image to which I apply the transform resulting from the registration, and
> > upload the results to the web for evaluation. Example of a set of points
> > and their "ground truth":
> >
> > Point      x          y          z        new_x       new_y       new_z
> >
> >   1       0.0000     0.0000     0.0000   -7.5731    -41.2534    -27.3093
> >   2     333.9870     0.0000     0.0000  324.8722    -72.8159    -32.9063
> >   3       0.0000   333.9870     0.0000   24.1607    291.0393    -16.2727
> >   4     333.9870   333.9870     0.0000  356.6060    259.4768    -21.8697
> >   5       0.0000     0.0000   112.0000   -6.0554    -45.1157     84.6137
> >   6     333.9870     0.0000   112.0000  326.3899    -76.6782     79.0168
> >   7       0.0000   333.9870   112.0000   25.6784    287.1769     95.6503
> >   8     333.9870   333.9870   112.0000  358.1237    255.6145     90.0534
>
> Trying to reproduce previous results is a good path forward.
>
> > So, the first thing I need is the transform to apply; for that I do the
> > following:
> >
> >    //get the inverse transform
> >    TransformType::Pointer inverseTransform = TransformType::New();
> >    inverseTransform->SetCenter( finalTransform->GetCenter() );
> >    bool response = finalTransform->GetInverse(inverseTransform);
> >
> >
> > Does it make sense to use the same center in the inverse transform? A
> > "quaternion" defines an "axis" of rotation (its vector part) and an angle
> > by which to rotate the image about this axis... why use a center of
> > rotation...?
>
> If you are working with a current version of ITK, when you get the
> inverse transform, it should transfer the Center for you. The Center
> point locations are the FixedParameters for the parent class,
> MatrixOffsetTransformBase. The relationship with parent classes can be
> found by examining the Doxygen page for the class [1].
>
>
> > Second, I apply this transform... as follows:
> >
> > NewPoint = RotationMatrix * OriginalPoint + Offset
> >
> > The rotation matrix and the offset are obtained from the inverse
> > transform object. Do you see anything wrong? Something I am not taking
> > into account? The results do not appear to be correct... the calculated
> > error is too large and does not correspond with the visual result.
>
> Do you get the same result by applying the TransformPoint() method of
> the inverse transform? This is the API call that should be applied to
> transform a point.
>
> Thanks,
> Matt
>
>
> [1]
> http://www.itk.org/Doxygen/html/classitk_1_1VersorRigid3DTransform.html
>



-- 
*Gabriel Alberto Giménez.*
