[Insight-users] difference between 2D and 3D anisotropic filter?

Miller, James V (Research) millerjv at crd.ge.com
Mon, 12 Apr 2004 09:41:23 -0400


Rodrigo, 

When an algorithm relies on pixel neighborhoods, we have to check
whether each pixel is subject to a boundary condition (where some of
its neighbors are outside the image) or not. ITK has a very nice
framework for analyzing an image to determine which pixels are subject
to boundary conditions; it divides the work into several work packets.
One work packet holds all the pixels that have no boundary conditions
(usually the vast majority of the pixels) and the rest of the work
packets cover pixels subject to one or more boundary conditions. In
most algorithms, processing the boundary-condition pixels takes about
the same amount of time as processing the interior (which may contain
90% of the pixels).

But when an algorithm is given a 3D image with just a single slice (a
degenerate volume), then EVERY pixel is subject to boundary conditions:
the neighbors of a pixel on the "adjacent" slices lie outside the
degenerate volume, and hence every pixel is a boundary pixel.
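To make the "work packet" split concrete, here is a minimal sketch
(not the exact code used inside the filters) based on
itk::NeighborhoodAlgorithm::ImageBoundaryFacesCalculator, which is the
helper ITK's neighborhood filters typically use to partition a region
into one interior face plus several boundary faces. The image type,
size and radius below are illustrative assumptions.

#include <iostream>
#include "itkImage.h"
#include "itkNeighborhoodAlgorithm.h"

int main()
{
  typedef itk::Image<float, 3> ImageType;

  // Illustrative degenerate volume: a single 256x256 slice stored as 3D.
  ImageType::Pointer image = ImageType::New();
  ImageType::SizeType  size  = {{256, 256, 1}};
  ImageType::IndexType start = {{0, 0, 0}};
  ImageType::RegionType region;
  region.SetSize(size);
  region.SetIndex(start);
  image->SetRegions(region);
  image->Allocate();
  image->FillBuffer(0.0f);

  // Neighborhood radius a filter might use (1 pixel in each direction).
  ImageType::SizeType radius;
  radius.Fill(1);

  // Split the region into faces: the first face is the interior (free of
  // boundary conditions), the remaining faces are the boundary work packets.
  typedef itk::NeighborhoodAlgorithm::ImageBoundaryFacesCalculator<ImageType>
    FaceCalculatorType;
  FaceCalculatorType faceCalculator;
  FaceCalculatorType::FaceListType faceList =
    faceCalculator(image, region, radius);

  unsigned int faceNumber = 0;
  for (FaceCalculatorType::FaceListType::iterator fit = faceList.begin();
       fit != faceList.end(); ++fit, ++faceNumber)
    {
    std::cout << "Face " << faceNumber << ": "
              << fit->GetNumberOfPixels() << " pixels" << std::endl;
    }

  // For a (256,256,1) volume and a radius of 1 along Z, no pixel qualifies
  // for the interior face: every pixel is missing neighbors on the adjacent
  // slices, so all of the work lands in boundary faces.
  return 0;
}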

Jim



-----Original Message-----
From: Luis Ibanez [mailto:luis.ibanez at kitware.com]
Sent: Sunday, April 11, 2004 2:47 AM
To: Rodrigo Vivanco
Cc: Insight-users at itk.org
Subject: Re: [Insight-users] difference between 2D and 3D anisotropic filter?



Hi Rodrigo,

The reason why it takes longer to compute the filtering on a
degenerate 3D image (a single-slice image) is that you are asking
the filter to explore thousands of pixel neighborhoods that do not
exist.

If you have a 3D image and would like to filter the slices
independently, then you must use the ExtractImageFilter in
order to get one slice at a time, and represent this slice
as a 2D image.
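
A minimal sketch of that extraction, following the usual
ExtractImageFilter pattern; the float pixel type and the ExtractSlice
helper name are illustrative assumptions:

#include "itkImage.h"
#include "itkExtractImageFilter.h"

typedef itk::Image<float, 3> VolumeImageType;
typedef itk::Image<float, 2> SliceImageType;

// Extract slice 'sliceNumber' of 'volume' as a true 2D image.
SliceImageType::Pointer
ExtractSlice(VolumeImageType::Pointer volume, unsigned int sliceNumber)
{
  typedef itk::ExtractImageFilter<VolumeImageType, SliceImageType>
    ExtractFilterType;
  ExtractFilterType::Pointer extractor = ExtractFilterType::New();

  // Start from the full volume region...
  VolumeImageType::RegionType inputRegion =
    volume->GetLargestPossibleRegion();
  VolumeImageType::SizeType  size  = inputRegion.GetSize();
  VolumeImageType::IndexType start = inputRegion.GetIndex();

  // ...and collapse the Z dimension: a size of 0 tells ExtractImageFilter
  // to drop that dimension, so the output is 2D instead of a degenerate
  // (256,256,1) 3D image.
  size[2]  = 0;
  start[2] = sliceNumber;

  VolumeImageType::RegionType desiredRegion;
  desiredRegion.SetSize(size);
  desiredRegion.SetIndex(start);

  extractor->SetExtractionRegion(desiredRegion);
  extractor->SetInput(volume);
  extractor->Update();

  return extractor->GetOutput();
}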

Note that running 2D anisotropic diffusion on all the slices of a 3D
volume *is not* equivalent to running anisotropic diffusion on the
original full 3D image.

However, it is common in applications that involve user supervision
to present the user with previews of the effects of 3D filtering by
running the 2D versions of the same filters only on the slice being
presented to the user. This should be done with caution, since slice
extraction and filtering rarely commute.
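
For completeness, a sketch of that per-slice preview pattern, reusing
the illustrative ExtractSlice helper above; the choice of
GradientAnisotropicDiffusionImageFilter and the parameter values are
assumptions, not a prescription:

#include "itkImage.h"
#include "itkGradientAnisotropicDiffusionImageFilter.h"

typedef itk::Image<float, 2> SliceImageType;

// Smooth one extracted slice with 2D anisotropic diffusion, e.g. to
// preview the effect of a 3D smoothing on the currently displayed slice.
SliceImageType::Pointer
SmoothSlice(SliceImageType::Pointer slice)
{
  typedef itk::GradientAnisotropicDiffusionImageFilter<
    SliceImageType, SliceImageType> DiffusionFilterType;
  DiffusionFilterType::Pointer diffusion = DiffusionFilterType::New();

  diffusion->SetInput(slice);
  diffusion->SetNumberOfIterations(5);      // illustrative values
  diffusion->SetConductanceParameter(3.0);

  // The conventional stable time step is larger in 2D (about 0.125 for
  // unit spacing) than in 3D (about 0.0625); feeding a 2D-sized step to
  // the 3D filter is what typically triggers the instability warning
  // Rodrigo mentions below.
  diffusion->SetTimeStep(0.125);

  diffusion->Update();
  return diffusion->GetOutput();
}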



   Regards,


      Luis



-----------------------
Rodrigo Vivanco wrote:

> Hi:
> 
> Perhaps someone can answer this. Why does it take so much longer to
> process a single-slice image stored as 3D (x,y,z)=(256,256,1) using
> the 3D anisotropic filter versus the same image stored as 2D? Also,
> it displays a warning about using a time-step that may introduce
> instability in the solution...
> 
> What if I have a 3D image but would like to filter the slices
> independently of each other, what would be the best way to do this,
> that is, calling the 2D version of the filter for each slice? Should
> I make a new 2D Image for every slice? Can I use shallow copies using
> Regions to do this?
> 
> thanks,
> 
> rodrigo
> 



_______________________________________________
Insight-users mailing list
Insight-users at itk.org
http://www.itk.org/mailman/listinfo/insight-users