ITK Release 4.0 Orientation by Torsten

While I won't claim I fully understand the current status quo in ITK, I
have a comment regarding Kent's list of options:
> There are three concepts that collide when it comes to orientation. In order
> to form a more perfect ITK, they need to be considered.
>
> 1. Gross anatomical orientation -- you want to compare two volumes, one
> acquired head first, and another feet first.  This is addressed by
> * itk::SpatialOrientation (defines all 48 possible orientations)
> * itk::SpatialOrientationAdapter (converts between spatial orientation codes
> and direction cosines)
> * Direction cosines (orientation info recovered from image files is
> converted and stored therein)
>
> 2. Actual orientation -- the orientation info recovered can be from oblique
> scans and it is stored in direction cosines.  This conflicts (somewhat) with
> concept #1, in that going from the actual orientation to the gross
> anatomical orientation squares up the direction cosines and loses the
> rotational information.
>
> 3. General spatial transforms, addressed by itk::Transform and its
> children.  These are what are used by Registration and SpatialObjects to
> move anatomy or geometry around.
>
>
For what it's worth, I recently had to reconcile options #1 and #2 in my
own library, and I thought I'd share my experience. To be very clear:
what I am going to say relates ONLY to my own software, which is
completely separate from ITK.

Historically, I had been following Option #1 above, then added Option
#2 for non-human images (bees don't live in RAS etc. space), but
recently needed to support direction cosines for human data (from
DICOM) as well. Obviously, I didn't want to break compatibility with
registrations computed over the past 10 years, so it was important to
come up with an implementation that preserves that compatibility.

So here's what I did, based on the space abstraction laid out by Gordon
Kindlmann in the Nrrd file format specification.

All images, as read, live in some physical space. For DICOM, that's
LPS. For Nrrd, it's what's in the Nrrd "space:" field. For Analyze, it's
whatever the Analyze orientation field is supposed to mean in my reading
of the file format documentation.

My software uses RAS coordinates for all images internally.
Historically, everything was reoriented into RAS, i.e., for (i,j,k)
pixel index, i after reorientation increases from L to R.

Now -- here's the workflow when I read an image:

1. image data is read as stored in the file, and the coordinate space is
set as described above. Direction cosines are taken from the input if
they exist (DICOM, Nrrd), or initialized as the identity matrix
(Analyze). Basically, for the Analyze case, I am faking the anatomical
orientation to be identical to the space definition.
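
In rough C++ terms, step 1 amounts to something like the sketch below
(the type and function names are made up for illustration; this is
neither my actual code nor ITK API):

  #include <array>
  #include <string>

  // Illustrative geometry record: space name, direction cosines, origin.
  struct ImageGeometry {
    std::string space;                         // e.g. "LPS", "RAS", "ARS"
    std::array<std::array<double, 3>, 3> dir;  // row i = direction of index axis i
    std::array<double, 3> origin;              // physical position of pixel (0,0,0)
  };

  // Step 1: set the native space; take direction cosines from the file
  // when the format stores them (DICOM, Nrrd), otherwise fake them as
  // the identity matrix (Analyze).
  ImageGeometry initGeometry(const std::string& nativeSpace,
                             bool fileHasDirections,
                             const std::array<std::array<double, 3>, 3>& fileDir,
                             const std::array<double, 3>& fileOrigin) {
    ImageGeometry g;
    g.space = nativeSpace;   // "LPS" for DICOM, the Nrrd "space:" field, ...
    g.origin = fileOrigin;
    if (fileHasDirections)
      g.dir = fileDir;
    else
      g.dir = {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}};  // identity
    return g;
  }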

2. coordinate space is changed to RAS. That does not affect the storage
order of image pixels, but it changes the direction cosines by
permutation and negation. As an example, "LPS" to "RAS" means x becomes
-x, y becomes -y, and z stays z. This also applies to the coordinate
space origin. A different example: "ARS" to "RAS" means that x and
y elements are swapped in all direction vectors and also in the space
origin.
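
The permutation and negation in step 2 can be derived mechanically from
the two three-letter space codes. Here is a small sketch (again with
made-up names, not my actual code):

  #include <array>
  #include <string>

  // Opposite anatomical letter: R<->L, A<->P, S<->I.
  char opposite(char c) {
    switch (c) {
      case 'R': return 'L'; case 'L': return 'R';
      case 'A': return 'P'; case 'P': return 'A';
      case 'S': return 'I'; case 'I': return 'S';
      default:  return '?';
    }
  }

  // Map a point or direction vector from 'fromSpace' (e.g. "LPS", "ARS")
  // into "RAS" by permuting and negating its components.
  std::array<double, 3> toRAS(const std::array<double, 3>& v,
                              const std::string& fromSpace) {
    const std::string target = "RAS";
    std::array<double, 3> out{};
    for (int t = 0; t < 3; ++t)       // for each output (RAS) axis...
      for (int s = 0; s < 3; ++s) {   // ...find the matching input axis
        if (fromSpace[s] == target[t])
          out[t] = v[s];              // same anatomical direction
        else if (fromSpace[s] == opposite(target[t]))
          out[t] = -v[s];             // flipped anatomical direction
      }
    return out;
  }

Applying toRAS() to each of the three direction vectors and to the
space origin reproduces the two examples above: "LPS" negates x and y,
"ARS" swaps them.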

3. determine the coarse anatomical orientation of the image from its
direction vectors and coordinate space (the latter now RAS). Based on
this, reorient the image so that pixel storage is
(i,j,k)<=>(L/R,P/A,I/S). Direction cosines and space origin are
modified accordingly so that every pixel remains in the same physical
position. That means permuting the direction vectors, negating some of
them, and, for each negated vector, adding an offset to the space
origin that corresponds to the opposite end of the image along that
index.
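
Step 3 is essentially an argmax over the direction matrix. A sketch of
just the geometry bookkeeping (the pixel buffer would be permuted and
flipped with the same perm/flip arrays; ambiguous 45-degree obliques
are ignored here, and the names are again made up):

  #include <array>
  #include <cmath>
  #include <cstddef>

  struct Geometry {   // direction rows and origin now expressed in RAS
    std::array<std::array<double, 3>, 3> dir;
    std::array<double, 3> origin;
    std::array<std::size_t, 3> dims;
  };

  // For each RAS axis a, find the index axis that dominantly runs along
  // it (perm) and whether it must be reversed (flip); then fix up the
  // direction vectors, dimensions, and origin accordingly.
  void reorientToRAS(Geometry& g, std::array<int, 3>& perm,
                     std::array<bool, 3>& flip) {
    for (int a = 0; a < 3; ++a) {
      int best = 0;   // index axis with the largest |component| along a
      for (int i = 1; i < 3; ++i)
        if (std::fabs(g.dir[i][a]) > std::fabs(g.dir[best][a])) best = i;
      perm[a] = best;
      flip[a] = g.dir[best][a] < 0.0;
    }
    Geometry out = g;
    for (int a = 0; a < 3; ++a) {
      const int i = perm[a];
      out.dims[a] = g.dims[i];
      for (int c = 0; c < 3; ++c)
        out.dir[a][c] = flip[a] ? -g.dir[i][c] : g.dir[i][c];
      if (flip[a])   // index 0 of a flipped axis is the old last pixel:
        for (int c = 0; c < 3; ++c)   // move origin to the opposite end
          out.origin[c] += (g.dims[i] - 1) * g.dir[i][c];
    }
    g = out;
  }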

After these steps, the image lives in RAS space, is coarsely stored in
RAS order, but has the same physical space coordinates for each pixel as
were stored in the original image file, i.e., all physical space
transformations between two images remain correct (when taking into
consideration that space axes may have been negated in step 2).

When images are written to a file, the procedure is repeated in reverse
order. For that, each image object has meta information that stores its
original coordinate space and storage order, so the original order and
space can be recovered. The original direction cosines are also
recovered in the process.
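
The meta information can be as small as the original space code plus
the permutation and flips applied in step 3 (field names made up for
illustration):

  #include <array>
  #include <string>

  // Kept with each image so the read-time changes can be undone at
  // write time: invert the step-3 reorientation via perm/flip, then map
  // RAS back to 'space' (the inverse of the step-2 permutation and
  // negation), which recovers the original direction cosines and origin.
  struct OriginalLayout {
    std::string space;         // native space, e.g. "LPS" for DICOM
    std::array<int, 3> perm;   // index-axis permutation from step 3
    std::array<bool, 3> flip;  // per-axis reversals from step 3
  };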

I am not saying this whole, slightly messy, procedure would be a good
idea for ITK, but it seems to reconcile Options #1 and #2 on Kent's
list, and it's working for me.

Best,
 Torsten