<br>Hi Peter,<br><br>Please take a look at the ITK Software Guide:<br><br> <a href="http://www.itk.org/ItkSoftwareGuide.pdf">http://www.itk.org/ItkSoftwareGuide.pdf</a><br><br>in particular the "Image Registration" chapter, and pay special attention to Section 8.3.1, "Direction of the Transform Mapping", on page 358 of the PDF.<br><br>You will see that the transform computed by the registration framework is one that maps points from the physical space of the Fixed image to the physical space of the Moving image.<br><br>In your case, it seems that you want to map points from the physical space of the Moving image into the physical space of the Fixed image. Therefore you should use the inverse of the transform that you get from the image registration process.<br><br>Also, please make sure that the points actually represent physical coordinates, and not image indexes.<br><br><br> Regards,<br><br><br> Luis<br><br><br>-------------------------------------------------------------------<br><div class="gmail_quote">On Tue, May 4, 2010 at 7:56 AM, Peter Varga <span dir="ltr"><<a href="mailto:vpeter@ilsb.tuwien.ac.at">vpeter@ilsb.tuwien.ac.at</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Dear Dr. Ibanez,<br>
<br>
I have a question regarding the VersorRigid3DTransform in ITK.<br>
Let's say I registered two images and I'd like to apply the evaluated transformation to a point cloud (which is prepared from the movingImage outside of ITK before the registration task and read in as an input).<br>
<br>
So, the usual thing. The relevant parts of the code are the following:<br>
<br>
typedef itk::VersorRigid3DTransform< double > TransformType;<br>
TransformType::Pointer transform = TransformType::New();<br>
registration->SetTransform( transform );<br>
<br>
then the transform is initialized,<br>
<br>
registration->SetInitialTransformParameters( transform->GetParameters() );<br>
<br>
then the registration is executed,<br>
<br>
OptimizerType::ParametersType finalParameters = registration->GetLastTransformParameters();<br>
transform->SetParameters( finalParameters );<br>
<br>
then a resampler is applied to the movingImage, with<br>
<br>
finalResampler->SetTransform( transform );<br>
<br>
then I read in a pointSet from an ASCII file (movingMeshPointSet) and would like to apply the transformation evaluated above to this pointSet:<br>
<br>
TransformType::Pointer pointTransform = TransformType::New();<br>
pointTransform->SetCenter( transform->GetCenter() );<br>
pointTransform->SetParameters( transform->GetParameters() );<br>
<br>
PointSetType::Pointer outputMeshPointSet = PointSetType::New();<br>
PointsContainer::Pointer outputMeshPointContainer = PointsContainer::New();<br>
PointType movingMeshPoint;<br>
PointType outputMeshPoint;<br>
<br>
for(unsigned int i = 0; i < movingMeshPointSet->GetNumberOfPoints(); i++)<br>
{<br>
movingMeshPointSet->GetPoint( i, &movingMeshPoint );<br>
outputMeshPoint = pointTransform->TransformPoint( movingMeshPoint );<br>
outputMeshPointContainer->InsertElement( i, outputMeshPoint );<br>
}<br>
outputMeshPointSet->SetPoints(outputMeshPointContainer);<br>
<br>
<br>
OK, sorry for the long intro, the question is the following:<br>
If I understand it right, "transform" and "pointTransform" should be the same transformation.<br>
When running this code on several examples, I experienced something really weird: in some cases it worked really nicely, and the mesh (point cloud) was transformed exactly like the movingImage. BUT in other cases (and I cannot find a rule for this) I had to invert (multiply by -1) the rotation or the translation (or both) parts of the transformation to get the correct result.<br>
I guess the "transform->TransformPoint()" command should do the same thing as a resampling filter that receives the same "transform" through "->SetTransform()"....<br>
Or is it related to the fact that the registration evaluates the inverse transformation (the one that maps fixedImage => movingImage)? Still, it should at least be consistent.<br>
<br>
Just one more sentence: I have similar problems in the inverse case. There is an application where I estimate the transformation using PC2PC registration of meshes and apply it to the corresponding movingImage. In some cases, copying the parameters of the PC2PC transformation gave the correct results; in other cases I had to use -1 * parameters to get it right... strange...<br>
<br>
Maybe I did something wrong... but I don't get how I could get the correct results in some cases and not in the others.<br>
<br>
Thank you very much for your answer in advance!<br>
<br>
Best regards,<br>
Peter<br>
<br>
-- <br>
Peter Varga<br>
Institute for Lightweight Design and Structural Biomechanics<br>
Vienna University of Technology (TU-Wien)<br>
Gusshausstrasse 27-29<br>
A-1040 Vienna<br>
Austria<br>
Tel. +43 1 58801 317 32<br>
Fax +43 1 58801 317 99<br>
E-mail: <a href="mailto:vpeter@ilsb.tuwien.ac.at" target="_blank">vpeter@ilsb.tuwien.ac.at</a><br>
DVR: 0005886<br>
<br>
</blockquote></div><br>