[Insight-users] Deformation vectors in the BSplineDeformationTransform?

motes motes mort.motes at gmail.com
Wed Aug 19 07:37:18 EDT 2009


On Tue, Aug 18, 2009 at 3:29 PM, Charl Botha<c.p.botha at tudelft.nl> wrote:
> 2009/8/18 motes motes <mort.motes at gmail.com>:
>> "Mapping the moving image onto the fixed image is much easier when you
>> simply have to iterate over the pixels / voxels of the fixed image,
>> and for each pixel / voxel you have a vector pointing to the position
>> in the moving domain that you have to interpolate.  For example, for a
>> 2D case, you'd have a stock-standard double nested loop (rows /
>> columns) for the fixed image, at each position you'd look up the
>> corresponding vector, use that to determine the position in the moving
>> space, interpolate the moving image pixel, then put that at your
>> current position in the fixed image."

Is it correct that you are talking about resampling here and not registration?

1) Resampling

When doing resampling as described on page 228 in the itkSoftwareGuide,
every pixel in the output image is visited. Each output pixel is
converted to physical coordinates and mapped to the input image. The
intensity assigned to the output pixel is then computed by
interpolating the input image at the mapped position.

This matches your description above, with "fixed image" replaced by
"output image" and "moving image" by "input image".
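As a rough, self-contained sketch (plain Python, not ITK code) of that
output-to-input loop: the transform, image layout, and the assumption of
identity origin/spacing below are all made up for the example.

```python
import math

def bilinear(img, x, y):
    """Bilinearly interpolate img (a list of rows) at continuous index (x, y)."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    if x0 < 0 or y0 < 0 or x1 >= len(img[0]) or y1 >= len(img):
        return 0.0  # outside the input image: use a default background value
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def resample(input_img, out_rows, out_cols, transform):
    """Visit every OUTPUT pixel, map it through `transform` into the
    input image domain, and interpolate the intensity there."""
    output = []
    for r in range(out_rows):
        row = []
        for c in range(out_cols):
            # 1) output index -> physical point (identity spacing/origin assumed)
            px, py = float(c), float(r)
            # 2) map the point into the input (moving) domain
            mx, my = transform(px, py)
            # 3) physical point -> continuous index, then interpolate
            row.append(bilinear(input_img, mx, my))
        output.append(row)
    return output
```

For example, with `transform = lambda x, y: (x + 1.0, y)` each output
pixel pulls its intensity from one column to the right in the input
image; pixels that map outside the input get the background value.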



2) Registration

I found this written by Luis:

In Pseudo code, an iteration of the Optimizer in the registration
framework will do the following:

   1) Take the array of current Transform parameters
   2) Initialize the Transform using these parameters
   3) Compute the Metric (for example a MeanSquares)
       3.1) For Every pixel in the Fixed Image
         3.1.1) Compute the physical coordinates of the
                   pixel by taking into account: image origin
                   spacing and direction
         3.1.2) Use the Transform to map the point from
                    the Fixed image coordinate system to the
                    Moving image coordinate system.
         3.1.3)  Using the Moving image parameters convert
                    the physical coordinates of the mapped point
                    into an image continuous index
                    (in the Moving image grid)
         3.1.4)  Interpolate the intensity of the Moving image
                    at that continuous index position.
          3.1.5) Compute the difference between the moving
                    image intensity at that point and the fixed
                    image intensity at the pixel that we took in
                    (3.1.1).  Compute the square, add it to the
                    accumulator.
       3.2) Divide the sum in the accumulator by the number
               of visited pixels.

At this point the optimizer will analyze the metric value
and will decide what parameters of the Transform to test
next.
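Steps 3.1-3.2 above can be sketched in a few lines of plain Python (this
is only an illustration, not ITK code; the images are 2D lists, the
transform is a simple translation standing in for the parameter array,
and nearest-neighbour interpolation with identity origin/spacing is
assumed to keep it short):

```python
def mean_squares_metric(fixed, moving, params):
    """Accumulate squared fixed/moving intensity differences over every
    fixed-image pixel (steps 3.1.1-3.1.5), then average (step 3.2)."""
    tx, ty = params            # step 2: transform built from the parameters
    accum, count = 0.0, 0
    for r in range(len(fixed)):
        for c in range(len(fixed[0])):
            # 3.1.1/3.1.2: map the fixed pixel into moving space
            # (identity origin/spacing, so index == physical point here)
            mx, my = c + tx, r + ty
            # 3.1.3: continuous index -> nearest grid index
            mc, mr = int(round(mx)), int(round(my))
            if 0 <= mr < len(moving) and 0 <= mc < len(moving[0]):
                # 3.1.4/3.1.5: look up the moving intensity, square the
                # difference against the fixed intensity, accumulate
                diff = moving[mr][mc] - fixed[r][c]
                accum += diff * diff
                count += 1
    return accum / count       # 3.2: mean over the visited pixels
```

Note that nothing in either image is modified; the loop only reads
intensities and returns a single scalar for the optimizer to judge.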


So basically what he says is that the squared differences between the
pixel values of the fixed and moving image are accumulated. No pixel
values in the fixed or moving images are updated (unlike when doing
resampling); only the metric value is computed after applying the
transform.
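To make the outer optimizer loop concrete, here is a deliberately naive
sketch (plain Python, not ITK code): a brute-force "optimizer" that
tests every candidate integer translation against a sum-of-squared-
differences metric and keeps the best one. Real ITK optimizers use
gradients or adaptive step sizes instead of an exhaustive search, and
the zero-background assumption is made up for the example.

```python
def ssd(fixed, moving, tx, ty):
    """Sum of squared differences after translating by (tx, ty);
    pixels mapped outside the moving image count as zero background."""
    total = 0.0
    for r in range(len(fixed)):
        for c in range(len(fixed[0])):
            mr, mc = r + ty, c + tx
            if 0 <= mr < len(moving) and 0 <= mc < len(moving[0]):
                m = moving[mr][mc]
            else:
                m = 0.0  # assumed background value outside the moving image
            total += (m - fixed[r][c]) ** 2
    return total

def register(fixed, moving, search=2):
    """Try every integer translation in [-search, search]^2 and return
    the parameters giving the smallest metric value."""
    candidates = [(tx, ty)
                  for tx in range(-search, search + 1)
                  for ty in range(-search, search + 1)]
    return min(candidates, key=lambda p: ssd(fixed, moving, *p))
```

The "analyze the metric and decide what to test next" step collapses
here into trying everything; the structure (evaluate metric per
candidate parameter set, pick the best) is the same.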

Once the registration has converged, the resulting transform can be
used in a resampling filter to map the moving image into the fixed
image space, as described earlier.

Is this how it works?

