itk::Image should contain information regarding the orientation of the image volume.
Coordinate systems are an important part of any medical application. Medical scanners create regular, "rectangular" arrays of points and cells. The topology of the arrays is implicit in the representation. The geometric location of each point is also implicit.
vtk and itk Image
vtk and itk store meta-information that can be used to convert between image coordinates (i-j-k) and world coordinates (x-y-z). Each image has an origin that is a 3-D (n-D for itk) vector; the origin specifies the world coordinate (x-y-z) of the first point in memory. The spacing specifies the distance between points along each axis. Using the spacing and origin, the transformation between i-j-k and x-y-z is a fast, simple per-axis computation: x = origin + i * spacing; i = (x - origin) / spacing (and likewise for y-j and z-k).
Limitations of the current Image
Many image processing and segmentation algorithms do not need additional spatial information. However, registration and modeling techniques need to respect the orientation of the image arrays.
Proposed Extension to itk::Image
By adding an itk::Matrix containing direction cosines, the i-j-k -> x-y-z transformation could include the orientation in the computation. Adding the matrix will not change the existing API, since all index-to-point calculations are confined to itk::Image. The transformation in matrix form is: XYZ = To * Rc * Ss * IJK, where To is a translation to the origin, Rc is the matrix of direction cosines, and Ss is a scale matrix of spacings. There are performance considerations, so the implementation may cache some internal matrices and state.
Questions Raised by Proposal
- The coordinate frame in which the direction cosines are measured needs to be identified somehow. Should the itk::Image allow many possible spaces, and then explicitly identify which space is being used in a given Image? This flexibility would reflect the fact that different coordinate systems are used in different formats (NIfTI-1's right-anterior-superior versus DICOM's left-posterior-superior).
- Or, should ITK pick exactly one space (say, RAS) and convert orientation information from different formats into that space?
- If you pick exactly one space, how can that match the otherwise dimensional generality of ITK? fMRI volumes can "live" in 4D x-y-z-t, and other images may live in higher dimensional spaces.
- Vector and tensor data have a coordinate frame associated with them: the frame in which the vector and tensor coefficients are measured. Should there be any assumption or restriction that the measurement frame of vector/tensor data is identical to the space in which the direction cosines are expressed? It may simplify things.
- But if there is this assumption, then are operations which rotate an image (during, say, a rigid registration) responsible for performing the corresponding coordinate transform on the vector and tensor values in the image?
- If there is not this assumption, then does the orientation of the vector/tensor measurement (a third coordinate frame!) have to be identified? Relative to which space: image or world? Doesn't this require adding a second itk::Matrix?
- If the raster ordering of the axes in the itk::Image is permuted, should the columns of the itk::Matrix be correspondingly permuted?
- Current ITK SpatialOrientation header:
- How NRRD handles orientation information:
- Some info on Analyze format:
- Info from Tosa Yasunari about how Freesurfer handles coordinates:
- A lot of good documentation is included in the nifti1.h header file:
- The DICOM Standard in PDF: