[Insight-users] Testing Robustness of Registration

Luis Ibanez luis.ibanez at kitware.com
Mon May 7 12:58:31 EDT 2007


Hi Andriy,


In short:


                      There are none.


What gets closest to being a good validation/evaluation framework is
the Retrospective Registration project that Fitzpatrick led at
Vanderbilt, but even that had many issues...

Note however that their goal was not the evaluation of robustness
itself, but just a comparison of multiple methods.



As you already pointed out, a decent evaluation system *MUST* be
open for verification by any third party. In the case of the
Retrospective Registration project, the source code of the
participants was not available. The results of the registration
were submitted to the evaluators, and registration metrics were
computed on them.


This doesn't tell us much about:

   1) The stability of each of those registration
      methods to perturbations in their parameters.

   2) The stability of the registration to variations in the
      input images (bias, noise, artifacts, resolutions).




An honest attempt to evaluate the robustness of registration methods
will require a community-wide collaboration, mostly because the
cost will be very high, and because it is very unlikely that a
single institution will have enough know-how and financial
resources to do it on their own.


Such an effort will involve:

1) Gathering a large collection of M images and making them
    publicly available. Such a collection should have on the
    order of hundreds to thousands of images, and should be
    categorized into similar groups: e.g. 500 head CTs,
    600 abdominal MRIs, etc.

    Without a representative image database, any attempt to
    evaluate a registration or segmentation method will turn
    out to be purely *ANECDOTAL*.


2) Exposing the source code and parameters used by N registration
    methods. This is needed because what we usually call "a method"
    is actually a "cluster" of algorithms, due to the myriad of
    small variations found in the implementation, and the large
    number of different combinations of parameters. We must insist
    on the fact that "parameters" must be considered an integral
    part of an algorithm.


3) Recruiting a set of build + test machines that will run the
    N registration methods on all the combinations of pairs from
    the M images in the collection.


4) Selecting a methodology for generating perturbations of
    registration parameters:

      * Initial transforms,
      * Optimizer parameters (step, convergence values, # of iterations),
      * Metric parameters (e.g. # of bins, # of samples in MI),

    then providing these collections of perturbed parameters to the
    build + test machines, and collecting the results on a public
    site that can summarize and categorize them.
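To make item (4) concrete, here is a minimal sketch in plain C++ of how
such collections of perturbed parameter sets could be generated. The
struct and function names here are my own illustration, not ITK API; a
real study would perturb the full parameter set of each method:

```cpp
#include <random>
#include <vector>

// Illustrative parameter set (hypothetical names, not ITK types).
struct RegistrationParams {
  double stepLength;       // optimizer step length
  int    numberOfBins;     // e.g. histogram bins for Mutual Information
  int    numberOfSamples;  // number of metric samples
};

// Draw n perturbed copies of a nominal parameter set, scaling each
// parameter by a factor drawn uniformly from [1 - r, 1 + r].
std::vector<RegistrationParams>
PerturbParams(const RegistrationParams& nominal, double r, int n,
              unsigned seed = 42)
{
  std::mt19937 gen(seed);
  std::uniform_real_distribution<double> u(1.0 - r, 1.0 + r);
  std::vector<RegistrationParams> out;
  out.reserve(n);
  for (int i = 0; i < n; ++i) {
    RegistrationParams p = nominal;
    p.stepLength      *= u(gen);
    p.numberOfBins     = static_cast<int>(p.numberOfBins * u(gen));
    p.numberOfSamples  = static_cast<int>(p.numberOfSamples * u(gen));
    out.push_back(p);
  }
  return out;
}
```

Each perturbed set would then be shipped to the build + test machines,
and the convergence outcome recorded per set.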


5) Since different algorithms will have different parameters, with
    different dynamic ranges, a large effort will have to be invested
    in normalizing the parameters *IF* we want to dare to compare
    method A to method B.  This is because we cannot claim that
    method A is more robust than method B based merely on the
    observation that in method A I can change the step length by 10
    units and still converge, while in method B a perturbation of 0.1
    units already makes it fail.

    When you think about it, the entire point of "robustness" comes
    down to evaluating the magnitude of the perturbations that you
    can apply to an algorithm and still get results that fall within
    a given distance from a target.

    Measuring the magnitude of the perturbations requires normalizing
    their scales among different methods.  This effort is far from
    trivial, and it will require digging deep into the code and the
    mathematical concepts behind every participating algorithm.
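A toy sketch of that normalization idea, in plain C++ (my own
illustration, not an established protocol): express every perturbation
as a fraction of the parameter's admissible range, and define robustness
as the largest normalized radius at which the method still lands within
tolerance of the target. The bisection below assumes the convergent set
along the probed direction is an interval, which a real study would
itself have to verify:

```cpp
#include <cmath>
#include <functional>

// Size of a perturbation delta relative to the parameter's admissible
// range [lo, hi], so that perturbations of different methods live on
// a common [0, 1] scale.
double NormalizedPerturbation(double delta, double lo, double hi)
{
  return std::fabs(delta) / (hi - lo);
}

// Largest normalized perturbation radius in [0, 1] for which a
// caller-supplied predicate reports convergence-within-tolerance,
// found by bisection (assumes the convergent set is an interval
// starting at radius 0).
double CaptureRadius(const std::function<bool(double)>& convergesAt,
                     int iterations = 40)
{
  double lo = 0.0, hi = 1.0;
  if (!convergesAt(lo)) return 0.0;
  if (convergesAt(hi))  return 1.0;
  for (int i = 0; i < iterations; ++i) {
    const double mid = 0.5 * (lo + hi);
    (convergesAt(mid) ? lo : hi) = mid;
  }
  return lo;
}
```

Comparing method A to method B would then mean comparing their
capture radii on this common scale, instead of comparing raw step
lengths or bin counts.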




As you can see, any serious attempt at evaluating the robustness
of a method is a very large scale undertaking.

What we see in papers today is simply anecdotal:

      "I ran my method on N images and it still worked..."

The fact that you *DON'T* see in journals any papers describing
*NEGATIVE RESULTS* of the sort:

      "I ran my method on the following K cases and it failed
       by an error larger than X."

is very disappointing.



It is one more piece of evidence that the publishing system in our
community doesn't deserve to be called "scientific" at all.  A
community that does not publish negative results, and that does not
publish papers reporting the reproducibility of results, is not
performing scientific work. They are just *playing at pretending* to
be scientists. These are the journals of a community that is not
interested in finding any truth; we are just interested in being
"cited" and in filling up annual quotas of intellectual productivity.



At least we should start by introducing more honesty into our
published papers, and admit that, *as an anecdotal fact*, we are
reporting having run the same method with "this X combination of
parameters, and found this Y variability of results".  Such testing
is still worth performing; however, it shouldn't be taken for more
than what it actually is. Such a report doesn't give you any
guarantee of what will happen when you apply the same method to a
different pair of images without first adjusting the parameters for
the new data.
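For what it is worth, the honest "anecdotal" report suggested above is
cheap to produce. A minimal sketch in plain C++ (illustrative only; the
error values would come from running the registration K times under the
perturbed parameter sets):

```cpp
#include <cmath>
#include <vector>

struct VariabilityReport {
  double meanError;    // mean final registration error over the runs
  double stdDevError;  // spread of the final error (population std dev)
};

// Summarize the variability of final registration errors obtained
// from repeated runs under perturbed parameters.
// Precondition: finalErrors is non-empty.
VariabilityReport ReportVariability(const std::vector<double>& finalErrors)
{
  double sum = 0.0;
  for (double e : finalErrors) sum += e;
  const double mean = sum / finalErrors.size();

  double var = 0.0;
  for (double e : finalErrors) var += (e - mean) * (e - mean);
  return { mean, std::sqrt(var / finalErrors.size()) };
}
```

Reporting the mean and spread, together with the exact parameter sets
used, is precisely the "X combination, Y variability" statement: no
more, no less.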




      Regards,



          Luis



--------------------
Andriy Fedorov wrote:
> Hi Luis,
> 
> Following on the topic, can you give some examples of what you
> consider *good* evaluations/validations of robustness in the
> literature?
> 
> I understand that important components of such evaluations should be
> open code and public data, but still, in addition to that, are there
> any exemplary studies/methods one should try to follow?
> 
> Thank you
> 
> Andriy Fedorov
> 
> 
> 
> 
>> Message: 1
>> Date: Sun, 06 May 2007 12:01:41 -0400
>> From: Luis Ibanez <luis.ibanez at kitware.com>
>> Subject: Re: [Insight-users] Testing Robustness of Registration
>> To: IsabelleNg <isabelleNg at homeworking.org>
>> Cc: insight-users at itk.org
>> Message-ID: <463DFBE5.8020306 at kitware.com>
>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>
>>
>> Hi Isabelle,
>>
>> The CenteredTransformInitializer will set up the
>>
>>
>>          * Center
>>          * Translation
>>
>>
>> of your transform.
>>
>> The Center is not part of the set of Parameters,
>> while the Translation is. This means that when you
>> set the parameters of the transform, after getting
>> it from the initializer, you will modify its Translation
>> but not its center of rotation.
>>
>>
>>
>> That being said...
>>
>>
>>
>>      The test of robustness that you are referring to is
>>      *purely anecdotal* and it doesn't serve any useful
>>      purpose, other than *to fool the reviewers* who still
>>      participate in the masquerade of the peer-review system
>>      that is still practiced in our domain.
>>
>>
>>
>> Here are some arguments on why such test of robustness
>> is useless:
>>
>>
>>
>> 1) The rigid transform has a parametric space of 6 Dimensions
>>
>>     If the point of convergence of the registration is a point Q
>>     in that six-dimensional space, you are trying to illustrate
>>     how large the capture region around that 6D point is.
>>
>>     The assumption is that you can find all the points P such
>>     that by starting at P in the parametric space, the optimization
>>     will converge to a very close neighborhood of Q.
>>
>>     The locus of all points P could then be considered to be the
>>     capture region of the registration. The larger this region is,
>>     the more resilient the registration process will be to variations
>>     in the initialization conditions of the transform.
>>
>>     If we look at the Metric as a cost function defined in that 6D
>>     space, you are looking for the watershed associated with the
>>     point of convergence Q.  The ideal way of finding the region
>>     of capture would be to find the watershed associated with the
>>     scalar value of the image metric in that 6D space.
>>
>>     The assumption above has the flaw that optimizers are not continuous.
>>
>>     They do not follow smooth continuous paths on the parametric space.
>>     Instead, most of the optimizers perform sequences of discrete steps
>>     that create broken paths on the parametric space.  Even when the
>>     optimizer is initialized in a point that is inside the capture
>>     region, the optimizer settings such as step length, and number
>>     of iterations, may lead to early termination of the optimization
>>     before the path reaches a neighborhood of the point Q.
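A toy 1D illustration of this early-termination effect (plain C++, my
own sketch, not ITK code): gradient descent on f(p) = p^2, whose
minimum sits at q = 0. Even starting well inside the capture basin, the
outcome depends entirely on the step length and the iteration cap:

```cpp
#include <cmath>

// Fixed-step gradient descent on f(p) = p*p, starting at p0.
// Returns the final parameter value after at most maxIterations steps.
double Descend(double p0, double step, int maxIterations)
{
  double p = p0;
  for (int i = 0; i < maxIterations; ++i) {
    p -= step * 2.0 * p;  // the gradient of p*p is 2*p
  }
  return p;
}
```

With step 0.4 and 100 iterations, the start point p0 = 5 reaches the
minimum; with the same start but only 2 iterations the optimizer stops
far from it, and with step 1.1 it diverges outright, even though p0
never changed.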
>>
>>     You may find it interesting to look at the diagram in the ITK
>>     Software Guide,
>>
>>                http://www.itk.org/ItkSoftwareGuide.pdf
>>
>>     that shows how the optimizer parameters and the metric parameters
>>     result in very different paths on the transform parametric space.
>>
>>     See for example figure 8.15 on pdf-page 376.  See also the metric
>>     plots, and the plots of translations in the parametric space,
>>     that are shown for most of the registrations in that same chapter.
>>
>>
>>     For an illustration of the notion of the capture region in the
>>     metric cost function, you may want to look at figure 8.46.
>>
>>     As opposed to all the "peer-reviewed" publications, in the ITK
>>     Software Guide you will find the actual code, images and full set
>>     of parameters that were used for generating these plots.
>>
>>     You will also find the instructions for downloading the scripts
>>     that were used for generating the GNUPlot diagrams shown in this
>>     chapter.
>>
>>
>>     Note that the plot in figure 8.46 only explores 2 out of the 6
>>     dimensions that you will have to deal with in a 3D rigid transform.
>>
>>     This plot is a discretization of that space (e.g. about 100 x 100).
>>     The equivalent plot in a 6D space would require you to take 100^6
>>     samples, which, if stored as float numbers, would make a 6D image
>>     of about 4 Terabytes. In order to find the value for each pixel of
>>     such a 6D image you will need to compute the metric of the two 3D
>>     images for the transform parameters associated to the 6D pixel.
>>
>>
>>     Even if you compute such an image, and then run inside it a
>>     watershed from the point Q, that still doesn't guarantee that
>>     starting from any given point P inside the watershed will result
>>     in convergence to the point Q.  The specific settings of the
>>     optimizer parameters may or may not be appropriate for producing
>>     such a discretized path.
>>
>>
>>
>> 2) In practice you are suggesting to take a very coarse sampling of
>>     the capture region by providing *some* paths, and based on the
>>     length of those paths, to *infer* that the registration has a
>>     certain "degree of robustness".
>>
>>
>>     This is an interesting but still *ANECDOTAL* piece of information.
>>
>>
>>     It is as useful as being told that person X has traveled through
>>     the Amazonian forest 27 times and has never been bitten by a snake.
>>
>>     That doesn't say much about the chances of person Y making a new
>>     trip through the Amazonian forest and being bitten by a snake.
>>
>>     It certainly provides some degree of *psychological comfort* to be
>>     able to report that you have run the registration under a variety
>>     of perturbed conditions and yet achieved convergence. But that's
>>     the only thing it is... "psychological comfort", because the number
>>     of perturbed conditions that you will be able to explore will be
>>     infinitesimally small compared to the number of potential paths
>>     in the 6D space that surrounds point Q in the parametric space.
>>
>>
>>     It is still a piece of information that will look nice in a typical
>>     decadent journal, and it will be comical to see reviewers buying
>>     into it. Since they never reproduce the experiments, they won't
>>     realize how useless and insignificant such a measure of robustness
>>     is.
>>
>>
>>     The measure may only start being significant if you manage to
>>     perform a dense-enough sampling of the 6D parametric space, and
>>     to mathematically evaluate the coverage of such a sampling.
>>
>>     E.g.
>>     A sampling that covers the equivalent of 60% of the parametric
>>     space over a range of +/- 100mm of translation and +/- 10 degrees
>>     of rotation.
>>
>>
>>      Now,... even if you manage to cover a large fraction of the
>>      parametric space, you would have done so with only
>>
>>                     *ONE SPECIFIC PAIR of IMAGES*
>>
>>      as the input of the image metric.  The result can hardly be
>>      extrapolated to *other images*, e.g. images with bias fields,
>>      with other dynamic ranges of intensity, with other pixel spacings,
>>      or with different levels of noise.
>>
>>
>>
>>
>> 3) As Richard Feynman put it:
>>
>>
>>            "It is very hard to actually *know* something"
>>
>>
>>     Our imaging community is *very lax* when it comes to differentiating
>>     the *appearance* of knowledge from the actual *possession* of
>>     knowledge.
>>
>>
>>
>>
>>      Please don't be fooled by the many things that you see
>>      in Journals and Conference papers. Keep in mind that
>>      most of it has been published with the sole purpose of
>>      providing material for academic promotions and of filling
>>      up established yearly quotas of intellectual production.
>>
>>
>>      As a reader, and an actual practitioner of the imaging arts,
>>      you ought to be *more critical* and exacting when it comes
>>      to deducing / inducing information from what others report
>>      in venues that do not require demonstrating reproducibility.
>>
>>
>>
>>
>>
>>     Regards,
>>
>>
>>
>>        Luis
>>
>>
>>
>> --------------------
>> IsabelleNg wrote:
>> > Thank you Karthik for your reply.
>> >
>> > I just realized that I omitted some info. I am using a
>> > centered-initializer to initialize the transform (rigid 3D). In this
>> > case, would the following be a valid sequence?
>> >
>> > rigidTransform->SetIdentity();
>> > resampler->SetTransform( randomXform );
>> > initializer->SetMovingImage( resampler->GetOutput() );
>> > initializer->SetTransform( rigidTransform );
>> >
>> > registration->SetInitialTransformParameters(
>> >   rigidTransform->GetParameters() );
>> >
>> > Thanks again,
>> > Isabelle
>> >
>> >
>> >
>> >
>> >
>> > Karthik Krishnan-2 wrote:
>> >
>> >>On 5/4/07, IsabelleNg <isabelleNg at homeworking.org> wrote:
>> >>
>> >>>
>> >>>ITK-users,
>> >>>
>> >>>I wish to test one of the registration algorithms by applying random
>> >>>transformations to the moving image. This is often done in papers
>> >>>that report registration results as tests of robustness and capture
>> >>>range. Is it valid to perform these tests by simply initializing
>> >>>the transform with random numbers? i.e. by calling
>> >>>
>> >>>registration->SetInitialTransformParameters( randomXform )?
>> >>
>> >>
>> >>Hi Isabelle
>> >>
>> >>Yes. It is identical.
>> >>
>> >>>Or, do we actually need to physically write out the randomly
>> >>>misaligned images and then feed them back into the algorithm?
>> >>>
>> >>>How would results differ with these 2 approaches?
>> >>
>> >>
>> >>There wouldn't be much of a difference. There could be a marginal
>> >>difference depending on the transform used to resample the moving
>> >>image before writing it to the disk. In the first case, the initial
>> >>transform is the same as the one used for registration.
>> >>
>> >>
>> >>--
>> >>Karthik Krishnan
>> >>R&D Engineer,
>> >>Kitware Inc.
>> >>
>> >>_______________________________________________
>> >>Insight-users mailing list
>> >>Insight-users at itk.org
>> >>http://www.itk.org/mailman/listinfo/insight-users
>> >>
>> >>
>> >
>> >
>>
>>
>> ------------------------------
>>
>> Message: 2
>> Date: Sun, 06 May 2007 12:15:33 -0400
>> From: Luis Ibanez <luis.ibanez at kitware.com>
>> Subject: Re: Fw: [Insight-users] 3D Registration using my own data
>>         (.raw and       .mha files)
>> To: Lars Nygard <lnygard at yahoo.com>
>> Cc: Insight Users <insight-users at itk.org>
>> Message-ID: <463DFF25.3060004 at kitware.com>
>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>
>>
>> Hi Lars,
>>
>>
>>   The ImageRegistration8.cxx example is using "float" as pixel type.
>>
>>
>> Therefore, for the purpose of this code, it shouldn't matter whether
>> your input images files are of pixel type "char" or of pixel type
>> "unsigned short".
>>
>>
>> The failure to allocate memory that you are experiencing when using
>> your own images is *NOT* due to the fact that they have different
>> numbers of pixels along the different dimensions.
>>
>>
>>
>> The problem is most likely due to an incorrect metaimage header.
>> Since you get the message:
>>
>>
>>      > MetaImage: M_ReadElements: data not read completely
>>      >    ideal = 14218274 : actual = 7109137
>>
>>
>> It seems that your metaimage header is "promising" the image to be of
>> pixel type "unsigned short", when it actually is of pixel type "char"
>>
>>
>>     OR
>>
>>
>> it may be that you are promising the image to have 128 slices when
>> it actually has only 64 slices.
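One cheap sanity check (a sketch of my own in plain C++, not part of
the MetaIO library): compute the raw-file size that the header implies
and compare it against the actual size of the .raw file on disk. A
factor-of-2 mismatch, as in the ideal/actual numbers above, points at
exactly these two suspects:

```cpp
#include <cstddef>

// Number of bytes a MetaImage header "promises": the product of the
// DimSize entries times the element size in bytes (1 for MET_CHAR,
// 2 for MET_USHORT, etc.).
std::size_t ExpectedRawBytes(std::size_t nx, std::size_t ny,
                             std::size_t nz, std::size_t bytesPerElement)
{
  return nx * ny * nz * bytesPerElement;
}
```

For example, a header declaring DimSize = 512 512 245 with
ElementType = MET_USHORT implies 512 * 512 * 245 * 2 = 128450560
bytes; if the .raw file holds half that, either the element type or
the slice count in the header is wrong.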
>>
>>
>>
>> Just to clarify this again:
>>
>>         *YOU DON'T NEED TO RESAMPLE THE IMAGES*
>>         *BEFORE PERFORMING IMAGE REGISTRATION*
>>
>>
>>
>> Please verify (and fix) the metaimage header of your images.
>>
>> You can do this by simply using the ImageViewer. You can build
>> the viewer from the code available in
>>
>>             InsightApplications/ImageViewer
>>
>> or, you can download a binary version for Windows from:
>>
>>
>>     http://public.kitware.com/pub/itk/InsightApplicationsBin/
>>
>>
>> Forget about the registration until you fix the headers of your
>> images. Make sure that you correct the MetaImage headers, that you
>> can read the images with the ImageViewer, and that you can go
>> through their slices and see a normal image.
>>
>>
>> Once you have fixed the headers, you will be able to go back
>> and test the registration again.
>>
>>
>>
>>
>>      Regards,
>>
>>
>>
>>          Luis
>>
>>
>>
>> --------------------
>> Lars Nygard wrote:
>> > Luis,
>> >
>> > Thanks for clearing that up. I managed to run ImageRegistration8
>> > with some other data and it works fine when the images have the same
>> > DimSize and Spacing. However, when I run the program with two images
>> > with different DimSize and ElementType I get the following message:
>> >
>> > MetaImage: M_ReadElements: data not read completely
>> >    ideal = 14218274 : actual = 7109137
>> >
>> > Is this because one image has ElementType short and the other
>> > unsigned char?
>> >
>> > And now what I don't understand is that when I run it with my own
>> > data I get a failed-to-allocate-memory message (see the end of this
>> > message). Do you think this has something to do with the size of the
>> > images (125 MB for the MRA image and 23 MB for the MRT1 image)?
>> > I tried to resample the images because they have different DimSize
>> > and Spacing, and then the registration works, but I don't think the
>> > results are good. And as you said, the algorithm is supposed to work
>> > without first resampling.
>> > regards,
>> > Lars Nygard
>> >
>> > the error:
>> > *************************************************
>> > itk::ExceptionObject (0130F7A0)
>> > Location: "class itk::CovariantVector<double,3> *__thiscall 
>> itk::ImportImageCont
>> > ainer<unsigned long,class itk::CovariantVector<double,3> 
>> >::AllocateElements(uns
>> > igned long) const"
>> > File: c:\documents and settings\nygard\desktop\insight 
>> toolkit\source\insighttoo
>> > lkit-3.2.0\code\common\itkImportImageContainer.txx
>> > Line: 188
>> > Description: Failed to allocate memory for image.
>> > *************************************************************
>> >
>> >
>> > ----- Original Message ----
>> > From: Luis Ibanez <luis.ibanez at kitware.com>
>> > To: Lars Nygard <lnygard at yahoo.com>
>> > Cc: Insight Users <insight-users at itk.org>
>> > Sent: Thursday, May 3, 2007 7:46:15 PM
>> > Subject: Re: Fw: [Insight-users] 3D Registration using my own data 
>> (.raw and .mha files)
>> >
>> >
>> > Hi Lars,
>> >
>> > Thanks for sending the additional information of your images.
>> >
>> > They look Ok, for the most part...
>> >
>> > The spacing of the image N492_MRT1.mha looks very suspicious,
>> > a spacing of 1x1x1 usually indicates that the real spacing was
>> > not known and the values were filled up with some "innocuous"
>> > choice.
>> >
>> >
>> > Note that you *DO NOT* need to have the two images with the
>> > same number of pixels. The ITK registration framework will
>> > manage that correctly, as long as the origin, and spacing
>> > values truly represent the physical reality of the data.
>> >
>> > So, there is no need for resampling the data before performing
>> > the registration.
>> >
>> >
>> >
>> >      Regards,
>> >
>> >
>> >
>> >         Luis
>> >
>> >
>> >
>> >
>> > --------------------
>> > Lars Nygard wrote:
>> >
>> >>Hi Luis,
>> >>
>> >>Thanks for your mails, they really cleared things up for me. I have
>> >>the registration program working with the data on the website. However,
>> >>when I use my own data I still get the same error. I think I have also
>> >>found the reason: they're of different sizes (see header files) and
>> >>element spacings. I just found out about this. Do you think that this
>> >>is the problem? I probably need to resample the images first so that
>> >>both have the same dimensions.
>> >>regards,
>> >>Lars Nygard
>> >>
>> >>
>> >>The header files for my own data are:
>> >>
>> >>N492_MRA.mha:
>> >>
>> >>NDims = 3
>> >>DimSize = 512 512 245
>> >>ElementType = MET_USHORT
>> >>ElementSpacing = 0.410156 0.410156 0.599998
>> >>ElementByteOrderMSB = True
>> >>ElementDataFile = N492_MRA.raw
>> >>-------------------------------------
>> >>
>> >>N492_MRT1.mha:
>> >>
>> >>NDims = 3
>> >>DimSize = 256 256 180
>> >>ElementType = MET_USHORT
>> >>ElementSpacing = 1.000000 1.000000 1.000000
>> >>ElementByteOrderMSB = True
>> >>ElementDataFile = N492_MRT1.raw
>> >>-----------------------------------------------------------
>> >>----- Original Message ----
>> >>From: Luis Ibanez <luis.ibanez at kitware.com>
>> >>To: Lars Nygard <lnygard at yahoo.com>
>> >>Cc: insight-users at itk.org
>> >>Sent: Wednesday, May 2, 2007 12:57:55 AM
>> >>Subject: Re: [Insight-users] 3D Registration using my own data (.raw 
>> and .mha files)
>> >>Hi Lars,
>> >>
>> >>What is the size (in pixels) of the images:
>> >>           1)  N492_MRT1.mha  ?
>> >>           2)  N492_MRA.mha   ?
>> >>
>> >>Also, note that you are passing insufficient arguments to this
>> >>program, and that two of the arguments that you are passing are
>> >>PNG image filenames, where a format capable of storing 3D image
>> >>is expected.
>> >>
>> >>        PNG can only store 2D images.
>> >>
>> >>The arguments that this program expects are:
>> >>
>> >>     1) Fixed image (in a 3D file format)
>> >>     2) Moving image (in a 3D file format)
>> >>     3) Filename for the output resampled
>> >>        registered moving image (in a 3D file format)
>> >>     4) Filename for the difference between
>> >>        fixed image and moving image Before
>> >>        registration (in a 3D file format)
>> >>     5) Filename for the difference between
>> >>        fixed image and moving image After
>> >>        registration (in a 3D file format)
>> >>     6) Filename for 1 slice of the difference
>> >>        between fixed image and moving image
>> >>        Before registration (in a 2D file format)
>> >>     7) Filename for 1 slice of the difference
>> >>        between fixed image and moving image
>> >>        After registration (in a 2D file format)
>> >>
>> >>According to your email, you are missing argument (3),
>> >>and you are passing the wrong file formats for (what should have
>> >>been) arguments (4) and (5).
>> >>A correct command line should look like:
>> >>
>> >>     ImageRegistration8
>> >>       fixedImage.mhd
>> >>       movingImage.mhd
>> >>       registeredMovingImage.mhd
>> >>       differenceBefore.mhd
>> >>       differenceAfter.mhd
>> >>       sliceDifferenceBefore.png
>> >>       sliceDifferenceAfter.png
>> >>
>> >>Please read the description of this example in the
>> >>ITK Software Guide:
>> >>
>> >>       http://www.itk.org/ItkSoftwareGuide.pdf
>> >>in the "Image Registration" chapter.
>> >>
>> >>You will also find a list of the file formats supported
>> >>by ITK in the chapter "Reading and Writing Images" of
>> >>the ITK Software Guide.
>> >>
>> >>Regards,
>> >>
>> >>    Luis
>> >>
>> >>--------------------
>> >>Lars Nygard wrote:
>> >>
>> >>
>> >>>Hallo,
>> >>>
>> >>>I'm trying to use example eight of the registration part
>> >>>(ImageRegistration8.cxx) to register a 3D MR Angiography (moving)
>> >>>to a 3D MR T1 image (fixed). When I run it I get an error message
>> >>>(see bottom). Can anybody give me a hint on what's happening?
>> >>>I haven't been able to run the ImageRegistration8.cxx example with
>> >>>the normal data because I can't connect to the ftp site
>> >>>(ftp://public.kitware.com/pub/itk/Data/BrainWeb). Does anybody know
>> >>>where I can download the files brainweb165a10f17.mha and
>> >>>brainweb165a10f17Rot10Tx15.mha?
>> >>>Ok thanks.
>> >>>greets
>> >>>Lars Nygard
>> >>>
>> >>>error message
>> >>>------------------------------------------------------------------------------------------------------------------ 
>>
>> >>>C:\Documents and Settings\nygard\Desktop\Insight 
>> Toolkit\Examples\Registration\I
>> >>>mageRegistration8\debug>ImageRegistration8 N492_MRT1.mha 
>> N492_MRA.mha diffbefore
>> >>>.png deffafter.png slicebefore.png sliceafter.png
>> >>>
>> >>>ExceptionObject caught !
>> >>>itk::ExceptionObject (0130F7A0)
>> >>>Location: "class itk::CovariantVector<double,3> *__thiscall 
>> itk::ImportImageCont
>> >>>ainer<unsigned long,class itk::CovariantVector<double,3> 
>> >::AllocateElements(uns
>> >>>igned long) const"
>> >>>File: c:\documents and settings\nygard\desktop\insight 
>> toolkit\source\insighttoo
>> >>>lkit-3.2.0\code\common\itkImportImageContainer.txx
>> >>>Line: 188
>> >>>Description: Failed to allocate memory for image.
>> >>>
>> >>>
>> >>>
>> >>>_________________________________________________________
>> >>>All in one. Get Yahoo! Mail with address book, calendar and
>> >>>notepad. http://no.mail.yahoo.com
>> >>>
>> >>
>> >>
>> >>
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>>
>>
>> ------------------------------
>>
>> Message: 3
>> Date: Sun, 06 May 2007 12:18:57 -0400
>> From: Luis Ibanez <luis.ibanez at kitware.com>
>> Subject: Re: [Insight-users] SpatialObjectToImageFilter --
>>         performance.
>> To: Julia Smith <julia.smith at ondiagnostics.com>
>> Cc: Insight Users <insight-users at itk.org>
>> Message-ID: <463DFFF1.7050409 at kitware.com>
>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>
>>
>> Hi Julia,
>>
>> Using the SpatialObjectToImageFilter is a very inefficient way of
>> rasterizing a Mesh.
>>
>> The filter that you may want to use for this rasterization is:
>>
>> http://www.itk.org/Insight/Doxygen/html/classitk_1_1TriangleMeshToBinaryImageFilter.html 
>>
>>
>>
>>
>>
>>     Regards,
>>
>>
>>        Luis
>>
>>
>> ---------------------
>> Julia Smith wrote:
>> > I am using a mesh object (3D) and trying to convert it to a masking
>> > image. The image size is not very large: 100x70x50 pixels, floating
>> > point. This conversion is taking minutes. The source of the mesh was
>> > a mask image of the same size.
>> >
>> > The exercise is to see what happens going from image->mesh->image.
>> >
>> > The operation has currently been taking a few minutes and it is
>> > still going.
>> >
>> > The version of insight I am using is 2.8.1.
>> >
>> > Suggestions? I hope I have done something rather stupid.
>> >
>> >
>> > 
>> ------------------------------------------------------------------------
>> >
>>
>>
>> ------------------------------
>>
>> Message: 4
>> Date: Sun, 06 May 2007 12:25:01 -0400
>> From: Luis Ibanez <luis.ibanez at kitware.com>
>> Subject: Re: [Insight-users] Statistics: Find out coefficient of
>>         affine  transform
>> To: Mathieu Malaterre <mathieu.malaterre at gmail.com>
>> Cc: insight-users at itk.org
>> Message-ID: <463E015D.7000002 at kitware.com>
>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>
>>
>> Hi Mathieu,
>>
>> You may want to try the KalmanLinearEstimator.
>>
>> http://www.itk.org/Insight/Doxygen/html/classitk_1_1KalmanLinearEstimator.html 
>>
>>
>>
>> Your estimator will be the vector [a,b],
>>
>> and your measures will be the vector [1,Y]
>>
>>
>> The Kalman Linear Estimator should converge to the vector [a,b] that
>> minimizes the sum of squared errors between X and (a+bY).
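For intuition, the quantity the estimator converges to can also be
written in closed form. A small sketch in plain C++ (my own
illustration, not the ITK KalmanLinearEstimator class) of the
ordinary least-squares fit for X = a + b*Y:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Line { double a; double b; };  // the fit X = a + b*Y

// Closed-form ordinary least squares for X = a + b*Y.
// Precondition: Y and X have the same size, with at least two
// distinct Y values.
Line FitLine(const std::vector<double>& Y, const std::vector<double>& X)
{
  const double n = static_cast<double>(Y.size());
  double sy = 0.0, sx = 0.0, syy = 0.0, sxy = 0.0;
  for (std::size_t i = 0; i < Y.size(); ++i) {
    sy  += Y[i];
    sx  += X[i];
    syy += Y[i] * Y[i];
    sxy += Y[i] * X[i];
  }
  const double b = (n * sxy - sy * sx) / (n * syy - sy * sy);
  const double a = (sx - b * sy) / n;
  return { a, b };
}
```

The slope b is the constant sought in the original question; the
Kalman formulation reaches the same answer incrementally, one
measurement at a time, which is useful when the pixels cannot all be
held in memory at once.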
>>
>> For an example, you may want to look at:
>>
>>
>>
>>      Insight/Testing/Code/Algorithms/
>>           itkKalmanLinearEstimatorTest.cxx
>>
>>
>>
>>     Regards,
>>
>>
>>         Luis
>>
>>
>> ---------------------------
>> Mathieu Malaterre wrote:
>> > Hello,
>> >
>> >  I have a pretty simple problem to solve and I was wondering if I
>> > could reuse any brick already existing in ITK.
>> >
>> >  I am working with a 4D dataset (x,y,z and theta), where I can
>> > express for each pixel that:
>> >
>> > I(x,y,z)/sin(theta) = F(x,y,z) + b*I(x,y,z)/tan(theta)
>> >
>> >  My goal here is to find out the b constant, I do not need the
>> > F(x,y,z) part. Obviously all I need to do is draw the line
>> >
>> >   X = a + b Y
>> >
>> > where X = I(x,y,z)/sin(theta) and Y = I(x,y,z)/tan(theta) and the
>> > slope will give me 'b'.
>> >
>> > Any suggestions on which class to reuse ?
>> > thanks !
>>
>>
>> ------------------------------
>>
>> _______________________________________________
>> Insight-users mailing list
>> Insight-users at itk.org
>> http://www.itk.org/mailman/listinfo/insight-users
>>
>>
>> End of Insight-users Digest, Vol 37, Issue 16
>> *********************************************
>>


More information about the Insight-users mailing list