[Insight-users] Rigid registration testing and reproducibility

Bill Lorensen bill.lorensen at gmail.com
Wed Jan 28 19:41:28 EST 2009


Typically we don't worry about Experimental builds.

If you need another baseline for a machine you don't have, you need to
contact the owner of that machine and ask them to contribute a
baseline.

As for changing platforms, that is the beauty of regression testing.
If the platform changes and the regression fails, then there may be a
problem with the platform, or with the old baseline.

Even if you add custom test code, I think it is still useful to do
image regression testing.
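
For what it's worth, the image comparison in itkTestMain.h boils down
to something like this (just a sketch -- the pixel type, command-line
arguments, and thresholds are placeholders, not the actual test
defaults):

#include "itkDifferenceImageFilter.h"
#include "itkImage.h"
#include "itkImageFileReader.h"

// Compare a freshly generated image against a stored baseline,
// tolerating small intensity differences and one-pixel shifts.
int main(int argc, char *argv[])
{
  typedef itk::Image<float, 3>                             ImageType;
  typedef itk::ImageFileReader<ImageType>                  ReaderType;
  typedef itk::DifferenceImageFilter<ImageType, ImageType> DiffType;

  ReaderType::Pointer baseline = ReaderType::New();
  baseline->SetFileName(argv[1]);
  ReaderType::Pointer test = ReaderType::New();
  test->SetFileName(argv[2]);

  DiffType::Pointer diff = DiffType::New();
  diff->SetValidInput(baseline->GetOutput());
  diff->SetTestInput(test->GetOutput());
  diff->SetDifferenceThreshold(2.0); // intensity tolerance
  diff->SetToleranceRadius(1);       // allow off-by-one pixel shifts
  diff->UpdateLargestPossibleRegion();

  // Fail if too many pixels still differ beyond the tolerances.
  return (diff->GetNumberOfPixelsWithDifferences() > 10) ? 1 : 0;
}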

Bill

On Wed, Jan 28, 2009 at 4:00 PM, Andriy Fedorov <fedorov at bwh.harvard.edu> wrote:
> On Wed, Jan 28, 2009 at 2:29 PM, Bill Lorensen <bill.lorensen at gmail.com> wrote:
>> Baselines are generated on each platform when the results are deemed
>> to be correct but do not exactly match an existing baseline.
>>
>> Regression testing is meant to detect changes in code and/or
>> platforms.
>
> This last item is exactly my objective.
>
> But how exactly can one generate baseline images from different
> platforms, if those platforms are not under one's control and change
> dynamically (e.g., on a real dashboard with Experimental builds)?
>
> I don't see how the currently implemented testing mechanisms can help
> resolve this. If I am missing something, I am ready to provide
> detailed test cases of what I am trying to achieve and learn, since so
> far I have failed to find similar examples of regression testing of
> rigid registration with baselines in either ITK or Slicer.
>
> My current solution is to implement custom testing within the
> registration module, and compare transforms, as Jim, Karthik and Dan
> previously suggested.
>
>
>>
>> Bill
>>
>>
>> On Wed, Jan 28, 2009 at 12:17 PM, Andriy Fedorov
>> <fedorov at bwh.harvard.edu> wrote:
>>> Daniel, Bill -- thanks for replies.
>>>
>>> I wonder how these multiple baselines are generated. Are they
>>> collected from running the test on different platforms? Or by
>>> adjusting the rotation and translation components in all possible
>>> combinations by some (0.1?) tolerance and applying the transform to
>>> the input? That seems like a daunting task in 3D...
>>>
>>> From what I see, ImageRegistration13Test doesn't use baselines; it
>>> checks that the recovered transform parameters stay within fixed
>>> thresholds of the true ones -- 0.025 for the scale and 1.0 for the
>>> translations.
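>>>
>>> In code, that check amounts to something like this (just a sketch;
>>> the names are hypothetical, and I am assuming the last two
>>> parameters are the translation components):
>>>
>>> #include "itkArray.h"
>>> #include "vnl/vnl_math.h"
>>>
>>> // Sketch: do the recovered parameters stay within fixed thresholds
>>> // of the known ground truth?
>>> bool ParametersWithinTolerance(const itk::Array<double> &trueParams,
>>>                                const itk::Array<double> &finalParams)
>>> {
>>>   const double scaleTolerance       = 0.025;
>>>   const double translationTolerance = 1.0; // millimeters
>>>   const unsigned int n = finalParams.Size();
>>>   for (unsigned int i = 0; i < n; ++i)
>>>     {
>>>     const double tol =
>>>       (i < n - 2) ? scaleTolerance : translationTolerance;
>>>     if (vnl_math_abs(finalParams[i] - trueParams[i]) > tol)
>>>       {
>>>       return false;
>>>       }
>>>     }
>>>   return true;
>>> }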
>>>
>>>
>>> On Wed, Jan 28, 2009 at 11:30 AM, Bill Lorensen <bill.lorensen at gmail.com> wrote:
>>>> Also, there can be multiple baselines for a given test. If you look at:
>>>>
>>>> http://public.kitware.com/cgi-bin/viewcvs.cgi/Testing/Data/Baseline/Registration/?root=Insight
>>>>
>>>> you will see that some tests have multiple baselines because of the
>>>> type of variability discussed in this thread.
>>>>
>>>> For example, ImageRegistration13Test has 5 baselines.
>>>>
>>>> Bill
>>>>
>>>> On Wed, Jan 28, 2009 at 11:14 AM, Blezek, Daniel J.
>>>> <Blezek.Daniel at mayo.edu> wrote:
>>>>> Just 2 cents here.
>>>>>
>>>>> 1) For a linear registration, you could expect the results to agree
>>>>> to within ~0.1 degrees and ~0.1 mm across platforms.  The major
>>>>> source of this variability is differences in floating point
>>>>> representations under different compilers/hardware.  I wouldn't
>>>>> worry about this; it'd be difficult for a human to see.  For
>>>>> something like BSplines, you might have a bit more error.
>>>>>
>>>>> 2) A very simple way to do this is to map a point forward using one
>>>>> transform (the ground truth), then back through the inverse of the
>>>>> newly calculated transform.  I think this is called Target
>>>>> Registration Error by Fitzpatrick and Co., but you should look it
>>>>> up.  This is where you need to decide on your tolerance.  As Karthik
>>>>> mentioned, only simple ITK transforms have an inverse, which is
>>>>> really a shame.  Inverses are so useful, even when they are only
>>>>> numerical.
>>>>>
>>>>> 2a) I suppose you could forward-transform the same point using two
>>>>> different transforms and see how far apart the results are.  This
>>>>> seems reasonable, but you'd have to sample a bunch of points to
>>>>> account for rotation, transform centers, etc.  And you'd only get a
>>>>> distance measure, not a rotational measure.
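>>>>>
>>>>> Both checks might look something like this (just a sketch with
>>>>> hypothetical names; it assumes a transform type whose GetInverse()
>>>>> succeeds):
>>>>>
>>>>> #include "itkPoint.h"
>>>>> #include "itkVersorRigid3DTransform.h"
>>>>>
>>>>> typedef itk::VersorRigid3DTransform<double> TransformType;
>>>>> typedef itk::Point<double, 3>               PointType;
>>>>>
>>>>> // (2) Map a sample point through the ground-truth transform, then
>>>>> // back through the inverse of the recovered transform; the
>>>>> // residual displacement is the registration error at that point.
>>>>> double ResidualError(const TransformType *groundTruth,
>>>>>                      const TransformType *recovered,
>>>>>                      const PointType &p)
>>>>> {
>>>>>   TransformType::Pointer inverse = TransformType::New();
>>>>>   if (!recovered->GetInverse(inverse))
>>>>>     {
>>>>>     return -1.0; // no inverse available
>>>>>     }
>>>>>   PointType roundTrip =
>>>>>     inverse->TransformPoint(groundTruth->TransformPoint(p));
>>>>>   return roundTrip.EuclideanDistanceTo(p);
>>>>> }
>>>>>
>>>>> // (2a) No inverse needed: map the same point through both
>>>>> // transforms and measure how far apart the results land.
>>>>> double ForwardDiscrepancy(const TransformType *groundTruth,
>>>>>                           const TransformType *recovered,
>>>>>                           const PointType &p)
>>>>> {
>>>>>   PointType a = groundTruth->TransformPoint(p);
>>>>>   PointType b = recovered->TransformPoint(p);
>>>>>   return a.EuclideanDistanceTo(b);
>>>>> }
>>>>>
>>>>> You would sample points over the image bounding box and take the
>>>>> maximum (or the mean) of either measure as the test statistic.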
>>>>>
>>>>> For transforms with an inverse, you can do what you are asking, and it
>>>>> would be a valuable contribution to ITK, but it's not general, as not
>>>>> all transforms support an inverse.  And you could always test the
>>>>> transform you care about...
>>>>>
>>>>> Incoherent as usual,
>>>>> -dan
>>>>>
>>>>> -----Original Message-----
>>>>> From: insight-users-bounces at itk.org
>>>>> [mailto:insight-users-bounces at itk.org] On Behalf Of Andriy Fedorov
>>>>> Sent: Tuesday, January 27, 2009 6:18 PM
>>>>> To: ITK Users
>>>>> Cc: Miller, James V (GE, Research)
>>>>> Subject: [Insight-users] Rigid registration testing and reproducibility
>>>>>
>>>>> Hi,
>>>>>
>>>>> I would like to do regression tests of rigid registration with real
>>>>> images, and compare the result with the baseline transform. Here are two
>>>>> related questions.
>>>>>
>>>>> 1) ITK registration experts, could you speculate on the normal
>>>>> variability in rigid registration results across different
>>>>> platforms, when run with the same parameters and the metric
>>>>> initialized with the same seed? What could be the sources of this
>>>>> variability, given the same initialization and the same parameters?
>>>>>
>>>>> 2) If I understand correctly, the registration tests that currently
>>>>> come with ITK generate a synthetic image with known transform
>>>>> parameters, which are then compared with the registration-derived
>>>>> transforms.
>>>>>
>>>>> The testing framework implemented in itkTestMain.h allows two
>>>>> images to be compared pixelwise, but this does not seem practical
>>>>> for registration regression testing, because of the variability in
>>>>> the registration results I mentioned earlier. Accounting for this
>>>>> variability with a tolerance of just 1 pixel hugely increases the
>>>>> test runtime, and in my experience the comparison may still fail.
>>>>>
>>>>> Would it not make sense to augment itkTestMain.h with the capability to
>>>>> compare not only images, but transforms? Is this a valid feature
>>>>> request, or am I missing something?
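>>>>>
>>>>> For instance (just a sketch -- I am assuming a single transform per
>>>>> baseline file; the file name, the tolerances array, and the helper
>>>>> name are hypothetical):
>>>>>
>>>>> #include "itkArray.h"
>>>>> #include "itkTransformBase.h"
>>>>> #include "itkTransformFileReader.h"
>>>>> #include "vnl/vnl_math.h"
>>>>>
>>>>> // Read the baseline transform saved by a previous, verified run
>>>>> // and compare it parameter-by-parameter against the computed one.
>>>>> bool TransformMatchesBaseline(const itk::TransformBase *computed,
>>>>>                               const itk::Array<double> &tolerances)
>>>>> {
>>>>>   itk::TransformFileReader::Pointer reader =
>>>>>     itk::TransformFileReader::New();
>>>>>   reader->SetFileName("baselineTransform.txt");
>>>>>   reader->Update();
>>>>>
>>>>>   // assume a single transform per baseline file
>>>>>   itk::TransformBase::Pointer baseline =
>>>>>     reader->GetTransformList()->front();
>>>>>   const itk::TransformBase::ParametersType &baselineParams =
>>>>>     baseline->GetParameters();
>>>>>   const itk::TransformBase::ParametersType &testParams =
>>>>>     computed->GetParameters();
>>>>>
>>>>>   if (baselineParams.Size() != testParams.Size())
>>>>>     {
>>>>>     return false;
>>>>>     }
>>>>>   for (unsigned int i = 0; i < baselineParams.Size(); ++i)
>>>>>     {
>>>>>     // rotations and translations need different tolerance units
>>>>>     if (vnl_math_abs(baselineParams[i] - testParams[i]) >
>>>>>         tolerances[i])
>>>>>       {
>>>>>       return false;
>>>>>       }
>>>>>     }
>>>>>   return true;
>>>>> }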
>>>>>
>>>>> Thanks
>>>>>
>>>>> Andriy Fedorov
>>>>> _______________________________________________
>>>>> Insight-users mailing list
>>>>> Insight-users at itk.org
>>>>> http://www.itk.org/mailman/listinfo/insight-users
>>>>>
>>>>
>>>
>>
>

