ITK Release 4.0 Orientation by Kent

The other solution proposed -- making Get/SetDirection do nothing in
itk::Image, but do something in itk::OrientedImage -- breaks things for another
class of applications -- those that primarily use direction cosines to
recover the patient orientation when they read an image.  Once images are
read and oriented to a common frame, the application is free to assume that
all images so oriented are nominally consistent in their anatomical
orientation.

Programs written to that paradigm ignore the direction cosines mostly or
completely, once all the noses are pointing the same way.
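
For concreteness, here is a minimal sketch of that read-then-reorient
paradigm, using itk::OrientImageFilter to push every input into a common RAI
frame; the file name and pixel type are placeholders, not anything from the
discussion above.

 #include "itkImage.h"
 #include "itkImageFileReader.h"
 #include "itkOrientImageFilter.h"
 
 int main()
 {
   typedef itk::Image<short, 3> ImageType;
 
   // Read an image; its direction cosines carry the acquisition orientation.
   itk::ImageFileReader<ImageType>::Pointer reader =
     itk::ImageFileReader<ImageType>::New();
   reader->SetFileName("input.nii.gz");  // placeholder file name
 
   // Square everything up into one gross anatomical frame (RAI here).
   itk::OrientImageFilter<ImageType, ImageType>::Pointer orienter =
     itk::OrientImageFilter<ImageType, ImageType>::New();
   orienter->UseImageDirectionOn();  // take the input orientation from its direction matrix
   orienter->SetDesiredCoordinateOrientation(
     itk::SpatialOrientation::ITK_COORDINATE_ORIENTATION_RAI);
   orienter->SetInput(reader->GetOutput());
   orienter->Update();
 
   // From here on, downstream code can treat every volume as anatomically
   // consistent and ignore the direction cosines entirely.
   ImageType::Pointer oriented = orienter->GetOutput();
   return 0;
 }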

There are three concepts that collide when it comes to orientation; in order
to form a more perfect ITK, all three need to be considered.

1. Gross anatomical orientation -- you want to compare two volumes, one
acquired head first, and another feet first.  This is addressed by
* itk::SpatialOrientation (defines all 48 possible orientations)
* itk::SpatialOrientationAdapter (converts between spatial orientation codes
and direction cosines)
* Direction cosines (orientation info recovered from image files is
converted and stored therein)

2. Actual orientation -- the orientation info recovered can be from oblique
scans, and it is stored in direction cosines.  This conflicts (somewhat) with
concept #1, in that going from the actual orientation to the gross
anatomical orientation squares up the direction cosines and loses the
rotational information (see the sketch after this list).

3. General spatial transforms, addressed by itk::Transform and its
children.  These are what Registration and SpatialObjects use to
move anatomy or geometry around.
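
To make concepts #1 and #2 concrete, here is a hedged sketch of the adapter
round trip; the 20 degree oblique angle is invented for illustration, and the
typedef and enum spellings assume the ITK 4 headers.

 #include <cmath>
 #include "itkSpatialOrientationAdapter.h"
 
 void OrientationRoundTrip()
 {
   itk::SpatialOrientationAdapter adapter;
 
   // Concept #1: a gross anatomical code maps to axis-aligned direction
   // cosines -- RAI comes back as the identity matrix.
   itk::SpatialOrientationAdapter::DirectionType rai =
     adapter.ToDirectionCosines(
       itk::SpatialOrientation::ITK_COORDINATE_ORIENTATION_RAI);
 
   // Concept #2: an oblique acquisition, tilted 20 degrees about z
   // (an arbitrary angle, purely for illustration).
   const double angle = 20.0 * std::atan(1.0) / 45.0;  // 20 degrees in radians
   itk::SpatialOrientationAdapter::DirectionType oblique;
   oblique[0][0] =  std::cos(angle); oblique[0][1] = -std::sin(angle); oblique[0][2] = 0.0;
   oblique[1][0] =  std::sin(angle); oblique[1][1] =  std::cos(angle); oblique[1][2] = 0.0;
   oblique[2][0] =  0.0;             oblique[2][1] =  0.0;             oblique[2][2] = 1.0;
 
   // Going from the actual to the gross anatomical orientation keeps only
   // the nearest of the 48 codes ...
   itk::SpatialOrientation::ValidCoordinateOrientationFlags code =
     adapter.FromDirectionCosines(oblique);
 
   // ... so converting back squares up the matrix; the 20 degree tilt is gone.
   itk::SpatialOrientationAdapter::DirectionType squaredUp =
     adapter.ToDirectionCosines(code);
 }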

There is a head-on collision between concept #3 and concepts #1 and #2 in the
case of itk::Image and itk::OrientedImage participating in the SpatialObject
hierarchy.  SpatialObjects live in a nested set of reference frames, the
outermost of which is the traditionally (and dubiously) named 'World'
coordinate system.  Anyone who has to deal with this stuff eventually gets a
headache, because you always want to get ahold of some transform T, and have
to pace the corridor for 15 minutes to figure out which direction 'T' has to
go -- world to local, local to world? -- and in which order transforms need
to be multiplied to get the results you want.
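
To illustrate the headache rather than cure it, here is a small sketch with
itk::AffineTransform; the transforms and numbers are invented for the
example, not taken from any particular SpatialObject.

 #include "itkAffineTransform.h"
 
 void WorldLocalSketch()
 {
   typedef itk::AffineTransform<double, 3> TransformType;
 
   // A transform taking points from an object's local frame out to 'World'
   // (an arbitrary translation, purely for illustration).
   TransformType::Pointer localToWorld = TransformType::New();
   TransformType::OutputVectorType shift;
   shift[0] = 5.0; shift[1] = 0.0; shift[2] = 0.0;
   localToWorld->Translate(shift);
 
   // The other direction -- world to local -- is its inverse.
   TransformType::Pointer worldToLocal = TransformType::New();
   localToWorld->GetInverse(worldToLocal);
 
   // Nested frames: compose a child-to-parent transform with the parent's
   // local-to-world.  With pre = false, Compose() applies localToWorld after
   // this transform, so the result maps child -> parent -> world; flip the
   // flag (or the order) and you get a different answer.
   TransformType::Pointer childToParent = TransformType::New();
   TransformType::OutputVectorType axis;
   axis[0] = 0.0; axis[1] = 0.0; axis[2] = 1.0;
   childToParent->Rotate3D(axis, 0.5);  // half a radian about z
   childToParent->Compose(localToWorld, false);
 }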

The cream of the jest is that concept #3 says the world coordinate system is
represented by the identity matrix, but concepts #1 and #2 treat the identity
matrix as one of the 48 different ways to define how voxels are organized in
memory with respect to anatomy, the one labeled 'RAI.'
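
A sketch of that double reading of the identity matrix (again assuming the
ITK 4 spellings):

 #include "itkAffineTransform.h"
 #include "itkSpatialOrientationAdapter.h"
 
 void TwoReadingsOfIdentity()
 {
   // Concept #3: a freshly constructed transform is the identity -- 'no
   // motion' with respect to the World coordinate system.
   itk::AffineTransform<double, 3>::Pointer noMotion =
     itk::AffineTransform<double, 3>::New();
 
   // Concepts #1 and #2: the same identity matrix, read as direction cosines,
   // is just one of the 48 gross anatomical orientations -- the one coded RAI.
   itk::SpatialOrientationAdapter adapter;
   itk::SpatialOrientationAdapter::DirectionType identity;
   identity.SetIdentity();
   itk::SpatialOrientation::ValidCoordinateOrientationFlags code =
     adapter.FromDirectionCosines(identity);  // ITK_COORDINATE_ORIENTATION_RAI
 }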

What needs to be worked out -- and believe me I have no idea right now how to
do it -- is to find a way to keep these concepts separated in ITK.  I don't
entirely follow the problems that direction cosines are causing in Spatial
Objects, but it is obvious to me that we're making the direction cosines do
double duty in a way that isn't working.

At any rate, the state of play is this:
1. The status quo is a priori backwards compatible, but broken.
2. Both proposed solutions will break backwards compatibility.
3. The proposed solutions are mutually incompatible.

So perhaps it's right to table it for the moment and see if anyone comes up
with a bright idea to resolve things.

Kent Williams