[Insight-users] Object extraction using Neighbourhood connected filter

Luis Ibanez luis . ibanez at kitware . com
Wed, 07 May 2003 13:24:39 -0400


Hi Valli,

From the definition of the algorithm behind this filter, it is to be
expected that it will behave differently in 2D and in 3D.

That is, if you apply it to a 2D slice taken from a 3D volume, the
result is different from applying the algorithm to the 3D volume and
then extracting the same 2D slice.

In other words, the operations:

               SliceExtraction

and

               NeighborhoodConnectedness


*do not* commute.
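
To make the distinction concrete, here is a minimal sketch of the two
pipelines, assuming a 3D image already read into a variable "volume",
a slice index "sliceNumber", seeds "seed3D" / "seed2D" placed at
corresponding positions, a 2D image "slice" extracted from the volume,
and the same "low" / "high" thresholds in both cases (all of these
names are placeholders, not part of your code):

#include "itkImage.h"
#include "itkNeighborhoodConnectedImageFilter.h"
#include "itkExtractImageFilter.h"

typedef itk::Image< unsigned char, 3 >  VolumeType;
typedef itk::Image< unsigned char, 2 >  SliceType;

// Pipeline A: 3D region growing first, then slice extraction.
typedef itk::NeighborhoodConnectedImageFilter< VolumeType, VolumeType >
  Connected3DType;
Connected3DType::Pointer connected3D = Connected3DType::New();
connected3D->SetInput( volume );          // the whole volume
connected3D->SetSeed( seed3D );
connected3D->SetLower( low );
connected3D->SetUpper( high );
VolumeType::SizeType radius3D;
radius3D.Fill( 1 );                       // 26 neighbors per pixel
connected3D->SetRadius( radius3D );

typedef itk::ExtractImageFilter< VolumeType, SliceType > ExtractType;
ExtractType::Pointer extract = ExtractType::New();
extract->SetInput( connected3D->GetOutput() );
VolumeType::RegionType region = volume->GetLargestPossibleRegion();
VolumeType::SizeType   size   = region.GetSize();
VolumeType::IndexType  start  = region.GetIndex();
size[2]  = 0;                             // collapse the third dimension
start[2] = sliceNumber;                   // the slice to keep
region.SetSize( size );
region.SetIndex( start );
extract->SetExtractionRegion( region );
// extract->SetDirectionCollapseToIdentity(); // uncomment for recent ITK
extract->Update();                        // 3D result restricted to one slice

// Pipeline B: slice extraction first, then 2D region growing.
typedef itk::NeighborhoodConnectedImageFilter< SliceType, SliceType >
  Connected2DType;
Connected2DType::Pointer connected2D = Connected2DType::New();
connected2D->SetInput( slice );           // the same slice, taken from "volume"
connected2D->SetSeed( seed2D );
connected2D->SetLower( low );
connected2D->SetUpper( high );
SliceType::SizeType radius2D;
radius2D.Fill( 1 );                       // only 8 neighbors per pixel
connected2D->SetRadius( radius2D );
connected2D->Update();
// In general connected2D->GetOutput() differs from extract->GetOutput().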



The reason is simply that the criterion used by this filter to accept
a pixel into the region is that all of the pixel's neighbors must have
intensities in the range defined by the thresholds.

(See: http://www.itk.org/ItkSoftwareGuide.pdf, Section 8.1.2
  pdf-page 250)
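
Conceptually, the test applied at each candidate pixel looks like the
following (a sketch of the idea only, not the actual ITK source; the
iterator is assumed to be a neighborhood iterator centered on the
candidate pixel):

#include "itkConstNeighborhoodIterator.h"

// Conceptual sketch of the inclusion test: every sample in the
// neighborhood, not just the center, must fall inside [lower, upper].
template< typename TNeighborhoodIterator, typename TPixel >
bool AllNeighborsInRange( const TNeighborhoodIterator & it,
                          TPixel lower, TPixel upper )
{
  for ( unsigned int i = 0; i < it.Size(); ++i )
  {
    const TPixel value = it.GetPixel( i );
    if ( value < lower || value > upper )
    {
      return false;  // one sample outside the range rejects the pixel
    }
  }
  return true;       // the pixel can join the region (if connected to a seed)
}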


For a given pixel, the 3D neighborhood contains more pixels than the
2D neighborhood used by a purely 2D operation. This means that in 2D
it is easier for a pixel to be accepted into the region, since only
the 8 in-slice neighbors (for a radius of 1) need to satisfy the
intensity test, while in 3D all 26 neighbors must satisfy it. You can
thus expect the 2D output of this filter to contain more pixels in the
accepted region than its 3D counterpart.
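
(In general, for a rectangular neighborhood of radius r in N
dimensions the number of neighbors is (2r+1)^N - 1; with the radius
of 1 used in your code this gives 3^2 - 1 = 8 in 2D and
3^3 - 1 = 26 in 3D.)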

This is not only the case for the NeighborhoodConnected filter.
Any filter based on neighborhood operations *will not* commute
with slice extraction (see the sketch after the list). This includes:

   - Mean filter
   - Median filter
   - Mathematical morphology filters
   - Any kernel based convolution filter
   - Any of the anisotropic diffusion filters
   - Gaussian smoothing and its derivatives
   - Any region growing filter
   - Any of the level set filters

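To take the first item on the list as a concrete case, here is a
sketch reusing the VolumeType / SliceType typedefs from the sketch
above: a radius-1 median filter computes, at each pixel, the median of
9 samples in 2D but of 27 samples in 3D, so the two results differ
wherever the out-of-slice samples change the median.

#include "itkMedianImageFilter.h"

typedef itk::MedianImageFilter< VolumeType, VolumeType > Median3DType;
typedef itk::MedianImageFilter< SliceType,  SliceType  > Median2DType;

Median3DType::Pointer median3D = Median3DType::New();
Median2DType::Pointer median2D = Median2DType::New();

VolumeType::SizeType medianRadius3D;
medianRadius3D.Fill( 1 );           // 3x3x3 neighborhood: median of 27 samples
median3D->SetRadius( medianRadius3D );

SliceType::SizeType medianRadius2D;
medianRadius2D.Fill( 1 );           // 3x3 neighborhood: median of 9 samples
median2D->SetRadius( medianRadius2D );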

We can probably prove that only pixel-wise filters commute with
slice extraction, simply by applying the argument above to slices
extracted along any of the dimensions.
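
For instance, a pixel-wise filter such as
itk::BinaryThresholdImageFilter writes at each pixel a value that
depends only on that same input pixel, so thresholding the volume and
then extracting a slice gives exactly the same pixels as extracting
the slice first and thresholding it in 2D (again a sketch, reusing the
typedefs and placeholder names from above):

#include "itkBinaryThresholdImageFilter.h"

typedef itk::BinaryThresholdImageFilter< VolumeType, VolumeType >
  ThresholdType;
ThresholdType::Pointer threshold = ThresholdType::New();
threshold->SetInput( volume );
threshold->SetLowerThreshold( low );
threshold->SetUpperThreshold( high );
threshold->SetInsideValue( 1 );
threshold->SetOutsideValue( 0 );
// Each output pixel depends only on the corresponding input pixel,
// so this operation commutes with slice extraction.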

---

ITK was built N-D with the conviction that Slice-by-Slice
processing is a bad approach. Slice-wise processing is a
remnant of a time when only one slice fit in your computer's
memory, and a time when algorithm developers didn't make the
additional effort to generalize their methods to N-D.

This time is gone...
                             Flatland is over...


   Enjoy 3D, 4D, ND !



      Luis



-------------
cspl wrote:
> Dear Mr. Luis,
>  
>  We are working on itkNeighborhoodConnectedImageFilter. Our requirement 
> is to separate the brain from the skull in MR images, i.e. to extract 
> the largest connected region.
> Our input dataset is of size 256x256x120. When we give the entire volume 
> to the filter, it extracts the object as expected. But when we process 
> the volume slice by slice, with the same seed point, threshold and radius 
> as given to the volume, the output for some slices (about 6 - 10 slices 
> out of 120) is different from that of the volume.
> We are a bit confused about this behaviour. Could you please explain and 
> suggest how to get better results?
>  
> Enclosing the code for verification,
>  
> typedef itk::NeighborhoodConnectedImageFilter<ConverterType::ImageType, 
> ConverterType::ImageType> FilterType;
>  
>  //check threshold choices
>  if (low <= 0) return ;
>  if (high <= 0 || high <= low) low = 255; 
>  
>  FilterType::IndexType seed; 
>  seed[0] = seedPointX; seed[1] = seedPointY;
>  //seed[2]=slicenumber; //this is set if volume is given.
>  
>  FilterType::InputImageSizeType radius;
>  FilterType::Pointer filter = FilterType::New();
>  filter->SetInput(meanFilter->GetOutput()); 
>  filter->SetSeed(seed);
>  
>  radius.Fill(1);
>  filter->SetRadius(radius);
>  filter->SetLower (low);
>  filter->SetUpper (high);
>  filter->SetReplaceValue(1);
>  
>  try
>  {
>   filter->Update();
>  }
>  catch (itk::ExceptionObject& e){
>   AfxMessageBox(e.GetDescription());
>   return ;
>  }
>  
>  
> Thanking you,
>  
> Regards,
> Valli.