[Insight-developers] Image::TransformPhysicalPointToIndex

Luis Ibanez luis.ibanez@kitware.com
Fri, 01 Mar 2002 09:23:14 -0500


These are good points.

So far, the treatment of image spacing and origin in filters has not been very consistent.

How about adding a method to ImageFilters that will take care of the Transform?
GenerateTransform()?  UpdateTransform()?  ComputeTransform()?

In this method, the filter will provide its own Transform representing what the filter is doing to the image. In most cases it will just be an IdentityTransform, but for a Shrink filter it will be a ScaleTransform. This FilterTransform will be handed to the Transform obtained from the input image, and we will ask the image transform to compose itself with the FilterTransform.

It seems better to delegate this "Compose()" task to the Transform itself, in order to prevent the Images from having to be aware of all the possible Transform classes.

The Transform resulting from the composition will be passed to the output image.

That sounds fine for the basic Transforms up to Affine. It starts getting complicated with polar coordinates, but it will still be a matter of defining good typing:

Affine * Affine -> Affine
Affine * Polar -> ??
Cylindrical * Polar -> ??

Some of them will be difficult to represent. Others will simply not make sense, in which case it was pointless to pass such an input image to that particular filter in the first place.

One option is to return "Composite" transforms (in the way it was discussed for the spatial object), which is usual in graphics: just concatenate the transforms in a sort of Array of Transforms indicating T[0] composed T[1] composed T[2]...

But we may not want to transform a million points through this structure...

Maybe the team at Pittsburgh/CMU could provide an example of a typical chain of processing in ultrasound images. That would help us figure out a reasonable compromise between generality and performance.


     Luis

=======================================================

Miller, James V (CRD) wrote:

>It's one thing when we talk about a user creating an image and
>then setting the transform on it that converts between physical
>and index coordinates.
>
>I would guess, however, that if said image is passed through a filter,
>the transform the user supplied for the input will not be transferred
>to the output.  Furthermore, if a filter shrunk or enlarged an image 
>(or flipped, cropped, padded, etc.), the filter would not know how to 
>appropriately modify the input transform (given that the input transform 
>could be anything from an affine to an ultrasound transform).  
>
>We probably need to think about how a filter's output image's transform can be 
>different from its input image's transform and develop an API that any filter
>can call to modify a transform.
>
>Or, develop a mechanism where a filter either says that it does not modify
>the mapping from physical to world or only knows how to modify the mapping
>if the transform is an affine.
>
>Finally, some of the filters currently use or should use the spacing for 
>proper calculation (gradients being a prime example). If a user swaps the 
>transform on an image to something other than the standard affine, then these
>filters "should" have to query something on the transform to convert the 
>derivatives in index space into derivatives in physical space.
>