[Insight-users] Excluding DICOM pixel (padding) values in pipeline, was Re: Possible bug reading DICOM MONOCHROME1 images with Pixel Padding Value != 2^bits stored-1

David Clunie dclunie at dclunie.com
Sat Jan 2 12:13:31 EST 2010


Hi guys

Sorry I didn't reply earlier but I just came across this thread
recently.

There are two fundamental issues here that any parser, pipeline,
and application needs to deal with:

1. How much to "cook" the pixel data prior to making it available to
   the next layer or step?

2. How to indicate that some values are to be "excluded", i.e., are
   not really pixels (or voxels) at all?

The form of the stored pixel data is an artifact of the modality
encoding and has no inherent significance in and of itself, and
most 3D applications would prefer that their toolkit apply things
like rescale slope and intercept first, so that they do not have
to worry about this. This simplifying assumption is all well and
good for CT Hounsfield Units, but it breaks down when the rescaling
takes one into unexpected ranges of values, e.g., PET SUV, which
can be significantly less than 1.0 (i.e., a floating point
representation or transformation is needed).
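To make that concrete, something along these lines (a minimal
sketch in plain C++, not any particular toolkit's API, and the
function name is invented) keeps the rescale output in floating
point so that sub-unity results survive:

    #include <cstdint>
    #include <vector>

    // Rescaled value = Rescale Slope * stored value + Rescale Intercept.
    // The output is kept as float because it need not be an integer.
    std::vector<float> ApplyRescale(const std::vector<int16_t>& stored,
                                    double slope, double intercept)
    {
        std::vector<float> out;
        out.reserve(stored.size());
        for (int16_t sv : stored)
        {
            out.push_back(static_cast<float>(slope * sv + intercept));
        }
        return out;
    }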

In more recent DICOM objects we have tried to address this by
separating the rescaling from the mapping to physical units by
allowing for one or more "real world value mapping" transformations
that are independent of the rendering pipeline. But one still
cannot avoid the fact that the output of the rescale operation
is not necessarily an integer value.

As for "excluded" pixels, in the early CT implementations the CT
reconstruction produced circular, not rectangular or square,
images and hence it was desired to flag the pixels in the rectangular
pixel matrix that were padding, and hence not to be windowed (and
which could be compressed during transmission or storage).

Often, the pixel padding values are not only outside the range
of what the modality can produce, but actually outside the range
of what are defined to be valid encoded pixel values; for example,
they may be beyond what is defined by Bits Stored in the case of
DICOM encoding (e.g., 12-bit CT HU data, signed or unsigned, but
with a pixel padding value that has the 16th bit set, such as
0x8000).
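A minimal sketch of what that implies (plain C++, hypothetical
helper names): the padding comparison has to be made against the
raw stored word, before any masking to Bits Stored, because the
padding value may deliberately use bits above Bits Stored:

    #include <cstdint>

    // True if the raw stored word is the Pixel Padding Value (e.g. 0x8000),
    // which may deliberately lie above the Bits Stored range.
    bool IsPaddingWord(uint16_t rawWord, uint16_t pixelPaddingValue)
    {
        return rawWord == pixelPaddingValue;
    }

    // For genuine pixels, only the low Bits Stored bits are significant.
    uint16_t MaskToBitsStored(uint16_t rawWord, unsigned bitsStored)
    {
        return static_cast<uint16_t>(rawWord & ((1u << bitsStored) - 1u));
    }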

This is somewhat distinct from, but has similar implications to,
the area that has been "shuttered" (e.g., under the collimators or
outside the bounds of, say, a circular image intensifier in XA and
XRF applications).

Now, since most 3D pipelines don't have the concept of an "invalid"
pixel value, a simplifying approach is to set the "padding value",
if encountered, to the lowest valid pixel value so that it at least
stays black, or similar, which is not ideal but better than
nothing.
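A minimal sketch of that simplifying approach (plain C++, the
helper name is invented):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Replace every padded sample with the smallest value found among the
    // non-padded samples, so padding at least renders as black.
    void ReplacePaddingWithMinimum(std::vector<int16_t>& pixels,
                                   int16_t paddingValue)
    {
        int16_t minValid = INT16_MAX;
        for (int16_t p : pixels)
            if (p != paddingValue)
                minValid = std::min(minValid, p);

        if (minValid == INT16_MAX)
            return;  // every sample was padding; nothing sensible to do

        for (int16_t& p : pixels)
            if (p == paddingValue)
                p = minValid;
    }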

Another option is to extract the pixels or voxels that are stored
with padding values into a separate graphics plane (e.g., as a
bitmap) and then apply it as a clip plane to the volume; this is a
nice approach since it works for pixel padding values as well as
for shutters, in both 2D and 3D applications, and it extends
logically to extracting overlays that are embedded in the high
bits of the pixel data or stored in the separate overlay data
attribute.
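A minimal sketch of that extraction step (plain C++, again with an
invented helper name), producing a one-byte-per-sample plane that
can later be applied as a clip mask:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Mark each padded sample with 1 and each valid sample with 0; the
    // resulting plane can be applied as a clip mask in 2D or 3D, and the
    // same pattern works for shutters and embedded overlays.
    std::vector<uint8_t> ExtractPaddingMask(const std::vector<int16_t>& pixels,
                                            int16_t paddingValue)
    {
        std::vector<uint8_t> mask(pixels.size(), 0);
        for (std::size_t i = 0; i < pixels.size(); ++i)
        {
            if (pixels[i] == paddingValue)
                mask[i] = 1;
        }
        return mask;
    }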

But I completely agree with Iván that applications need to be
"protected" from this complexity (and the variation between vendors
and modalities) to the maximum extent possible, so that the
toolkits take care of it. That means the toolkits need to be able
to "cook" the pixels in a completely predictable way, and also to
separate two concerns: making "invalid" pixels behave as valid ones
(by cooking the padded pixels to have minimum values), and
signaling which pixels are "invalid" and which are valid (e.g., by
extracting them and providing them as separate bit planes that a
more sophisticated application can use).

We have recently faced this issue in defining what the DICOM
WG23 application hosting interface should do in this respect,
so you might want to review a recent version of Sup 118, in
which section A.2, Abstract Multi-dimensional Image Model,
discusses the subject of replacing and stripping out the padding
values and provides a solution. See:

"http://www.dclunie.com/dicom-status/status.html#Supplement118"

for a link to the most recent version.

WG 23 is trying to do exactly what Iván is expecting: protecting
the application from this nonsense wherever possible, yet
providing access to the raw data if needed (hence the three
interfaces: file, native model and abstract model). Does the
ITK community have any plans to directly incorporate support
for the WG 23 (web service based) interfaces?

David

PS. Note that recently DICOM has extended the pixel padding concept
to include a pixel padding range of values, not just a single value,
to deal with noisy air encountered outside objects, e.g., on
mammograms, so the DICOM parser making the decision to suppress
or replace such values and/or extract a clip plane needs to account
for this. See CP 692 and subsequent revisions.
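So with a declared range the padding test in the sketches above
becomes an interval check rather than an equality check (again a
plain C++ sketch with an invented helper name):

    #include <algorithm>
    #include <cstdint>

    // With a padding range declared, a sample is padding if it falls
    // anywhere in [Pixel Padding Value, Pixel Padding Range Limit],
    // inclusive; the single-value case is the degenerate range where
    // both ends coincide.
    bool IsPadded(int16_t value, int16_t paddingValue, int16_t paddingRangeLimit)
    {
        const int16_t lo = std::min(paddingValue, paddingRangeLimit);
        const int16_t hi = std::max(paddingValue, paddingRangeLimit);
        return value >= lo && value <= hi;
    }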

PPS. Another question is when and how to apply window values in
the image header and whether they are relative to the stored
pixel values (which vendors assume they are for PET, contrary
to the standard) or whether they apply to the rescaled values
(which they generally do except for a Philips MR "feature"),
but that is a subject for another day.
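For what it is worth, a minimal sketch of a linear window mapping
applied to the rescaled values, which is what the standard intends
(simplified, and assuming a non-zero width; the standard's exact
formula includes half-unit adjustments to the center and width):

    #include <algorithm>

    // Map a rescaled value through Window Center/Width to [0.0, 1.0].
    // Simplified linear mapping; intended to operate on rescaled values,
    // not the raw stored values.
    double ApplyLinearWindow(double rescaled, double center, double width)
    {
        const double lower = center - width / 2.0;
        const double upper = center + width / 2.0;
        const double t = (rescaled - lower) / (upper - lower);
        return std::clamp(t, 0.0, 1.0);
    }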

Iván Macía wrote:
> Hi Mathieu, all,
> 
> I understand your point and probably from a technical point of view you may
> be right but I don't totally agree. Sorry if this is a long post, just take
> a breath and read :) 
> 
> My knowledge of DICOM is limited, but, if I understand you well, you mean
> that the final application should do all the work of interpreting things
> like the Photometric Interpretation, Rescale Slope/Intercept and other pixel
> interpretation operations, which are needed not only at a presentation level
> but also at an image processing level. This means I would need to introduce
> this code in every application that uses DICOM.
>  
> IMHO, interpreting some important DICOM fields is repetitive and bug-prone
> work that users of the library try to avoid. Otherwise I would have to
> program my own library or layer on top of GDCM or ITK to do exactly this
> (which would generate buggy code, etc.). If you ask me, I would introduce
> part of this code as part of the library (maybe as some auxiliary classes).
> In fact, this is what itk::GDCMImageIO does (i.e. applying Rescale Slope /
> Intercept) and then, following this philosophy, the code for interpreting
> MONOCHROME1 should be introduced in ITK. I agree with this, and as you
> mention, maybe Rescale Slope / Intercept should be applied not after
> ConvertFixGrayLevels() but before it. Also, maybe the rescale slope/intercept
> values could be modified at the ITK level if images are MONOCHROME1, knowing
> that this transformation has been applied (as a temporary patch).
> 
> Following your point of view, transformations such as Rescale Slope /
> Intercept shouldn't be applied by the library, in this case ITK, but some of
> the operations are needed not only at a presentation level, but at an image
> processing level too. I have seen several forum threads where researchers
> don't know how to interpret the Rescale Slope / Intercept tags and don't
> know whether they are handling, let's say, the real CT Hounsfield values,
> when they should be focusing on developing algorithms and methods that work
> with those images, regardless of how they are stored and how to interpret
> the DICOM format. This is what ITK does right now with Rescale Slope /
> Intercept and I think this is a very nice feature. If someone really, really
> needs raw pixel data, options could be provided for this, or the libraries
> could be used at a lower level.
> 
> Not to mention that there are already some other operations that are applied
> at a presentation level, such as the W/L transformation, pixel transfer to
> textures, etc. I would prefer not to have to focus on interpreting
> DICOM-specific things, and to work with the image as if I had loaded a
> MetaImage.
> 
> We have already introduced quite a lot of code to further interpret some
> DICOM peculiarities (such as series with intermixed volumes, with different
> orientations, pixel types and sizes in their images, time series vs volumes
> vs 3D+t series vs multi-frame images...). I would prefer to rely on a
> public, well-tested DICOM library to do this but unfortunately none fulfills
> all our requirements. The closest thing we have found so far is GDCM for
> DICOM and ITK for medical image processing. Currently they are being used
> extensively in our applications and we would like to use them as much as
> possible, report bugs, contribute bug fixes/code if necessary, etc.
>  
> Of course this is my idea of what I would like the libraries to be with
> respect to DICOM, and since I am not the original developer nor do I pay
> anything for it, it is up to the developers/maintainers to decide what goes
> and what does not go into it.
> 
> Regarding the solution you propose mixing 1.x and 2.x, it is an option, but
> I don't see it as very practical for our current development, since several
> developers will have to follow the same process and some of them are not
> in our organization. I would prefer to provide a patch myself, for
> internal use if necessary, at least as a temporary solution. Thanks anyway,
> I will try and see what works and let you know.
> 
> Buf, that was long and quite philosophical. But I would like to know what
> the future plans for DICOM in ITK are, in order to make some decisions in
> time.
> 
> On the other hand, let me finish by thanking all of you developers for the
> nice work you are doing on these nice libraries that make our life as
> developers/researchers a little easier ;)
> 
> Best regards
> 
> Ivan
> 
> 
> -----Original Message-----
> From: Mathieu Malaterre [mailto:mathieu.malaterre at gmail.com]
> Sent: Tuesday, 17 February 2009 12:10
> To: Iván Macía
> CC: Bill Lorensen; insight-users at itk.org
> Subject: Re: [Insight-users] Possible bug reading DICOM MONOCHROME1 images
> with Pixel Padding Value != 2^bits stored-1
> 
> 'lo
> 
> On Tue, Feb 17, 2009 at 12:00 PM, Iván Macía <imacia at vicomtech.org> wrote:
>> Hi,
>>
>> If ConvertFixGreyLevels() is removed, all MONOCHROME1 images will fail to
>> display correctly.
> 
> Presentation is one thing, pixel data is another. I really do believe
> that what was done in GDCM 1.x is simply a bug, and that instead the
> Pixel Data should be loaded as is: untouched. In that case the Pixel
> Padding becomes valid again, as well as Largest Pixel Value (and any
> other attribute associated with the pixel data).
> I have not touched GDCM 1.x in years but I do believe that Rescale
> Slope/Intercept cannot be applied after this 'ConvertFixGreyLevels'
> operation (which AFAIK is what is happening).
> So I am deeply convinced the right thing to do is simply to remove this
> function completely from the pipeline and have *presentation*
> applications handle the Photometric Interpretation as expected. But
> again, ITK is not presentation software.
> 
>> On the other hand, right now we cannot use gdcm 2.x,
>> since we use gdcm 1.x via ITK and directly in our DICOM/PACS viewer.
> 
> Well there is always a solution :)
> Set ITK_USE_SYSTEM_GDCM to your gdcm 2.x installation and have another
> gdcm 1.x installation where the namespace would be mangled to
> something like 'gdcm_legacy'. Then in your code it is just a matter of
> recompiling with 'using gdcm_legacy;'.
> 
> 2cts
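
To illustrate the ordering point in the quoted exchange above: a
MONOCHROME1 inversion, if performed at all, belongs at presentation
time, after the rescale and window transformations, rather than
being baked into the stored pixel data. A minimal sketch (plain
C++, invented name, not GDCM's ConvertFixGreyLevels; assumes a
non-zero window width):

    #include <algorithm>

    // Presentation-time pipeline sketch: rescale, window to [0,1], then
    // invert for MONOCHROME1. The stored Pixel Data (and with it Pixel
    // Padding Value and Largest Pixel Value) is never modified.
    double PresentSample(double storedValue, double slope, double intercept,
                         double center, double width, bool monochrome1)
    {
        const double rescaled = slope * storedValue + intercept;
        const double lower = center - width / 2.0;
        const double upper = center + width / 2.0;
        double t = std::clamp((rescaled - lower) / (upper - lower), 0.0, 1.0);
        if (monochrome1)
            t = 1.0 - t;  // invert for display only; pixel data stays as-is
        return t;
    }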
