[Insight-users] Rigid registration testing and reproducibility

Jakub Bican jakub.bican at matfyz.cz
Thu Jan 29 08:50:53 EST 2009


Hello,

A variation of the target registration error seems very suitable to me:

a) take the input image

b) generate a (regular) grid of points that uniformly samples the full
volume (or a region) of the input image

c) generate a transform

d) transform the input image and the generated points

>>Now you've got everything to set up the test (i.e. input data and baseline).<<

e) register the input image with the transformed image so that the
resulting transform acts in the same direction as the generated one

f) transform the points with the new transform

g) measure the Euclidean distance from each original transformed point
(the baseline from step d) to its counterpart obtained in step f), and
compute the RMSE of these distances (errors).

The result can be called an "estimate of the mean registration error
over every point of the input volume". A suitable tolerance for it has
been mentioned elsewhere in the thread.
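
In case it is useful, here is a minimal C++ sketch of steps b), c), f)
and g). The transform type (itk::Euler3DTransform), the grid extent and
the "recovered" transform are placeholders of mine - the registration of
step e) is left out, so "recovered" is just a copy of the ground truth
and the printed RMSE will be zero:

#include "itkEuler3DTransform.h"
#include "itkPoint.h"
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <vector>

int main()
{
  using TransformType = itk::Euler3DTransform<double>;
  using PointType = itk::Point<double, 3>;

  // c) the known transform that was applied to the input image
  TransformType::Pointer groundTruth = TransformType::New();
  groundTruth->SetRotation(0.05, -0.02, 0.10); // radians
  TransformType::OutputVectorType translation;
  translation[0] = 2.0; translation[1] = -1.5; translation[2] = 0.7; // mm
  groundTruth->SetTranslation(translation);

  // e) placeholder: in a real test this is the transform produced by the
  //    registration; here it is just a copy of the ground truth
  TransformType::Pointer recovered = TransformType::New();
  recovered->SetParameters(groundTruth->GetParameters());

  // b) regular grid of points covering the image volume
  //    (origin, extent and grid size are made-up values, not read from a
  //     real image)
  const double       origin[3]   = { 0.0, 0.0, 0.0 };
  const double       extent[3]   = { 200.0, 200.0, 160.0 }; // physical size in mm
  const unsigned int gridSize[3] = { 5, 5, 5 };

  std::vector<PointType> gridPoints;
  for (unsigned int k = 0; k < gridSize[2]; ++k)
    for (unsigned int j = 0; j < gridSize[1]; ++j)
      for (unsigned int i = 0; i < gridSize[0]; ++i)
      {
        PointType p;
        p[0] = origin[0] + extent[0] * i / (gridSize[0] - 1);
        p[1] = origin[1] + extent[1] * j / (gridSize[1] - 1);
        p[2] = origin[2] + extent[2] * k / (gridSize[2] - 1);
        gridPoints.push_back(p);
      }

  // d) baseline: grid points mapped by the ground-truth transform
  // f) the same grid points mapped by the recovered transform
  // g) RMSE of the point-to-point distances
  double sumOfSquares = 0.0;
  for (const PointType & p : gridPoints)
  {
    const PointType baseline = groundTruth->TransformPoint(p);
    const PointType test = recovered->TransformPoint(p);
    sumOfSquares += baseline.SquaredEuclideanDistanceTo(test);
  }
  const double rmse = std::sqrt(sumOfSquares / gridPoints.size());
  std::cout << "Point-grid RMSE: " << rmse << " mm" << std::endl;

  return EXIT_SUCCESS;
}

In a real test you would plug the registration output into "recovered"
and compare the RMSE against the tolerance discussed in this thread.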

You can experiment with the number of points in the grid - it has to
correspond to the complexity of the transform. It would even be
possible to compute the minimum number of grid points for a given
family of transforms and dimension.
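
As a rough, back-of-the-envelope count of that minimum (my own
estimate, not a rigorous bound): a family with p free parameters in
dimension d gets d equations from every point correspondence, so at
least ceil(p/d) grid points are needed. For a 3D rigid-body transform
(p = 6, d = 3) that gives two points, although in practice three
non-collinear points are needed, because a rotation about the axis
through two points would be left unconstrained; a 3D affine transform
(p = 12) needs at least four non-coplanar points.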

I use this kind of approach to measure the registration performance of
a new 3D rigid-body method.

It is very simple and similar to the approaches mentioned here, but the
advantage is that you do not need to invert the transform.

Jakub

2009/1/28 Blezek, Daniel J. <Blezek.Daniel at mayo.edu>:
> Just 2 cents here.
>
> 1) For a linear registration, you could expect the registration results
> to agree to within ~0.1 degrees and ~0.1 mm across platforms.  The major
> source of this variability is differences in floating point
> representations under different compilers/hardware.  I wouldn't worry
> about this, it'd be difficult for a human to see.  For something like
> BSplines, you might have a bit more error.
>
> 2) A very simple way to do this is to apply a forward transformation
> using one transform (the ground truth), then the inverse of the newly
> calculated transform.  I think this is called Target Registration Error
> by Fitzpatrick and Co., but you should look it up.  This is where you
> need to decide on your tolerance.  As Karthik mentioned, only simple ITK
> transforms have an inverse, which is really a shame.  They are so
> useful, even if they are numeric.
>
> 2a) I suppose you could forward transform the same point using two
> different transforms and see how far apart they are.  This seems
> reasonable, but you'd have to sample a bunch of points to account for
> rotation, and transform centers, etc.  And you'd only get a distance
> measure, not a rotational measure.
>
> For transforms with an inverse, you can do what you are asking, and it
> would be a valuable contribution to ITK, but it's not general, as not
> all transforms support an inverse.  And you could always test the
> transform you care about...
>
> Incoherent as usual,
> -dan
>
> -----Original Message-----
> From: insight-users-bounces at itk.org
> [mailto:insight-users-bounces at itk.org] On Behalf Of Andriy Fedorov
> Sent: Tuesday, January 27, 2009 6:18 PM
> To: ITK Users
> Cc: Miller, James V (GE, Research)
> Subject: [Insight-users] Rigid registration testing and reproducibility
>
> Hi,
>
> I would like to do regression tests of rigid registration with real
> images, and compare the result with the baseline transform. Here are two
> related questions.
>
> 1) ITK registration experts, could you speculate on what the normal
> variability is in rigid registration results run with the same
> parameters and the metric initialized with the same seed, across
> different platforms? What could be the sources of this variability,
> given the same initialization and the same parameters?
>
> 2) If I understand correctly, the current registration testing that
> comes with ITK generates a synthetic image with known transform
> parameters, which are then compared with the registration-derived
> transform parameters.
>
> The testing framework implemented in itkTestMain.h makes it possible
> to compare two images pixelwise, but this does not seem to be practical for
> registration regression testing, because of the variability in the
> registration results I mentioned earlier. Accounting for this
> variability using tolerance values of just 1 pixel hugely increases the
> test runtime, but in my experience, the comparison may still fail.
>
> Would it not make sense to augment itkTestMain.h with the capability to
> compare not only images, but transforms? Is this a valid feature
> request, or am I missing something?
>
> Thanks
>
> Andriy Fedorov
> _______________________________________________
> Insight-users mailing list
> Insight-users at itk.org
> http://www.itk.org/mailman/listinfo/insight-users

