[Insight-users] Segmenting Visible Human Data : RGB ConfidenceConnected : VectorThresholdSegmentationLevelSetImageFilter added.

Celina Imielinska ci42 at columbia.edu
Mon, 22 Dec 2003 12:55:38 -0500 (EST)


 Stefan,

 have you tried our RGB filters for Simple fuzzy connectedness and
 Voronoi diagram segmentation (classification) to segment the color Visible
 Human data? We have had good results with them...

  -Celina


On Mon, 22 Dec 2003, Stefan Lindenau wrote:

> Hi Luis,
>
> the Doxygen documentation does not contain the class summary. Maybe
> this is due to a wrong class tag.
>
> Thank you for your comprehensive answer about the Mahalanobis distance.
>
> Stefan
>
> Luis Ibanez wrote:
>
> >
> > Hi Stefan,
> >
> > Thanks a lot for contributing these classes to ITK.
> >
> > They have been committed to the repository:
> >
> > http://www.itk.org/Insight/Doxygen/html/classitk_1_1VectorThresholdSegmentationLevelSetFunction.html
> >
> > http://www.itk.org/Insight/Doxygen/html/classitk_1_1VectorThresholdSegmentationLevelSetImageFilter.html
> >
> >
> > You will find the new files under
> >
> >   Insight/Code/Algorithms
> >
> > A test has been added under
> >
> >   Insight/Testing/Code/Algorithms
> >     itkVectorThresholdSegmentationLevelSetImageFilterTest.cxx
> >
> > Please let us know if you find any problems
> > with these classes.
> >
> >
> > ---
> >
> >
> > About your question regarding the Mahalanobis distance:
> >
> > In the VectorConfidenceConnected class we are also using
> > the square root of the distance returned by the class
> > itk::Statistics::MahalanobisDistanceMembershipFunction
> > http://www.itk.org/Insight/Doxygen/html/classitk_1_1Statistics_1_1MahalanobisDistanceMembershipFunction.html
> >
> >
> > since in this class the returned distance value is
> > computed as a quadratic expression. Taking the
> > square root makes it easier to assign threshold values
> > by reasoning on the linear scale of the pixel values
> > (or their vector components).
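> >
> > For illustration, here is a minimal, untested sketch of how the
> > square root of that distance can be used. The mean, covariance and
> > pixel values below are placeholders, and the exact type names and
> > signatures may differ slightly between ITK versions:
> >
> >   #include "itkVector.h"
> >   #include "itkMahalanobisDistanceMembershipFunction.h"
> >   #include <cmath>
> >
> >   // One RGB pixel expressed as a 3-component measurement vector.
> >   typedef itk::Vector< double, 3 > MeasurementVectorType;
> >   typedef itk::Statistics::MahalanobisDistanceMembershipFunction<
> >                               MeasurementVectorType > DistanceFunctionType;
> >
> >   DistanceFunctionType::Pointer distance = DistanceFunctionType::New();
> >
> >   // The mean and covariance would normally be estimated from a
> >   // sample region of the tissue of interest (placeholder values here).
> >   DistanceFunctionType::MeanVectorType       mean;
> >   DistanceFunctionType::CovarianceMatrixType covariance( 3, 3 );
> >   mean.Fill( 128.0 );
> >   covariance.SetIdentity();
> >
> >   distance->SetMean( mean );
> >   distance->SetCovariance( covariance );
> >
> >   MeasurementVectorType pixel;
> >   pixel[0] = 120.0; pixel[1] = 130.0; pixel[2] = 125.0;
> >
> >   // Evaluate() returns a quadratic (squared) distance; the square
> >   // root brings it back to the linear scale of the pixel components.
> >   const double linearDistance = std::sqrt( distance->Evaluate( pixel ) );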
> >
> >
> >
> > Thanks a lot for your contribution to ITK !
> >
> >
> >    Luis
> >
> >
> >
> > -------------------------
> > Stefan Lindenau wrote:
> >
> >> Hi Luis,
> >>
> >> I have generated the VectorThresholdSegmentationLevelSetImageFilter.
> >> The files are attached. You can add them to the toolkit.
> >>
> >> I wanted to write a test too, but I could not figure out how to do
> >> this. Maybe I could do it by modifying the tests for the
> >> ThresholdSegmentationLevelSetImageFilter, but I don't know how. If
> >> you have a good starting point I will try to write this, too.
> >>
> >> And there is another question regarding the Mahalanobis distance. In
> >> the filter I used the square root of the value returned by the Evaluate
> >> method of the MahalanobisDistanceMembershipFunction. Is this reasonable?
> >> Are there any resources available on the Internet where example
> >> values are given, so that it is possible to make estimations? I think
> >> I got some good values, but I am interested in how it works. I did
> >> not find much that was useful using Google.
> >>
> >> Thank you
> >> stefan
> >>
> >> Luis Ibanez wrote:
> >>
> >>>
> >>> Hi Stefan,
> >>>
> >>> You are right,
> >>> Region growing filters that are based only on intensity values
> >>> are prone to producing leaks.
> >>>
> >>> You may reduce this tendency by first applying a smoothing
> >>> filter like the VectorGradientAnisotropicDiffusion filter. This
> >>> may help, but it still cannot be guaranteed that it
> >>> will prevent the leaks.
> >>>
> >>> In practice a common approach is to use the region growing
> >>> methods to produce a very sketchy representation of the
> >>> object, then solidify this representation using mathematical
> >>> morphology filters (like dilation-erosion sequences), and then
> >>> use the result as an initialization for level set filters, which
> >>> have better capabilities for dealing with leaks.
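> >>>
> >>> As an illustration, a rough (untested) sketch of that kind of
> >>> pipeline could look like the following. The input image, the seed
> >>> index and all the parameter values are placeholders that would have
> >>> to be tuned for your Visible Human data:
> >>>
> >>>   #include "itkImage.h"
> >>>   #include "itkRGBPixel.h"
> >>>   #include "itkVectorConfidenceConnectedImageFilter.h"
> >>>   #include "itkBinaryBallStructuringElement.h"
> >>>   #include "itkBinaryDilateImageFilter.h"
> >>>   #include "itkBinaryErodeImageFilter.h"
> >>>
> >>>   typedef itk::RGBPixel< unsigned char >  RGBPixelType;
> >>>   typedef itk::Image< RGBPixelType, 3 >   ColorImageType;
> >>>   typedef itk::Image< unsigned char, 3 >  MaskImageType;
> >>>
> >>>   // 1) Rough region growing in RGB space (colorImage stands for the
> >>>   //    smoothed RGB volume).
> >>>   typedef itk::VectorConfidenceConnectedImageFilter<
> >>>                    ColorImageType, MaskImageType > ConnectedFilterType;
> >>>   ConnectedFilterType::Pointer connected = ConnectedFilterType::New();
> >>>   connected->SetInput( colorImage );
> >>>   connected->SetMultiplier( 2.5 );
> >>>   connected->SetNumberOfIterations( 5 );
> >>>   connected->SetReplaceValue( 255 );
> >>>   ColorImageType::IndexType seed;
> >>>   seed[0] = 123; seed[1] = 235; seed[2] = 80;  // a seed inside the tissue
> >>>   connected->AddSeed( seed );
> >>>
> >>>   // 2) Solidify the sketchy mask with a dilation-erosion (closing)
> >>>   //    sequence before handing it to a level set filter.
> >>>   typedef itk::BinaryBallStructuringElement< unsigned char, 3 > KernelType;
> >>>   KernelType ball;
> >>>   ball.SetRadius( 2 );
> >>>   ball.CreateStructuringElement();
> >>>
> >>>   typedef itk::BinaryDilateImageFilter<
> >>>                    MaskImageType, MaskImageType, KernelType > DilateType;
> >>>   typedef itk::BinaryErodeImageFilter<
> >>>                    MaskImageType, MaskImageType, KernelType > ErodeType;
> >>>   DilateType::Pointer dilate = DilateType::New();
> >>>   ErodeType::Pointer  erode  = ErodeType::New();
> >>>   dilate->SetInput( connected->GetOutput() );
> >>>   dilate->SetKernel( ball );
> >>>   dilate->SetDilateValue( 255 );
> >>>   erode->SetInput( dilate->GetOutput() );
> >>>   erode->SetKernel( ball );
> >>>   erode->SetErodeValue( 255 );
> >>>   erode->Update();
> >>>   // erode->GetOutput() is the solidified mask that can be used to
> >>>   // initialize the level set.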
> >>>
> >>> The ThresholdLevelSet filter is certainly one of the best
> >>> first options to try out.  Unfortunately this filter has
> >>> not yet been extended to RGB data.  It makes sense, as you
> >>> suggested, to use the Mahalanobis distance in this case
> >>> for controlling the thresholding values.
> >>>
> >>> The good news for you is that it should be relatively easy
> >>> to get this RGB level set filter done. You simply need to
> >>> create a class
> >>>
> >>>
> >>>    itkVectorThresholdSegmentationLevelSetFunction
> >>>
> >>> based on the code of the current
> >>>
> >>>    itkThresholdSegmentationLevelSetFunction
> >>>
> >>>
> >>> whose only goal is to compute the Speed image used by the
> >>> level set.
> >>>
> >>> Once you have this new class, you can easily create the
> >>>
> >>>    VectorThresholdSegmentationLevelSetImageFilter
> >>>
> >>> by copy/pasting code from the current class:
> >>>
> >>>         ThresholdSegmentationLevelSetImageFilter
> >>>
> >>>
> >>>
> >>> Notice that the level set code only sees this speed image,
> >>> not the original RGB data.
> >>>
> >>> The only trick here is that you will have to compute the
> >>> mean and covariance of a "sample region" in order to
> >>> feed the Mahalanobis distance function.  You may also want
> >>> to look at the speed function before you start using the
> >>> level set method.  A quick look at the speed functions
> >>> will give you a feeling for the chances of segmenting the
> >>> region using the level set method. A high quality speed
> >>> image is a fundamental requirement for getting good results
> >>> from level set methods.
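> >>>
> >>> For the sample statistics, a simple (untested) sketch could
> >>> accumulate them with an image iterator. It reuses the ColorImageType
> >>> and RGBPixelType typedefs from the sketch above; colorImage and
> >>> sampleRegion are placeholders, and sampleRegion is assumed to lie
> >>> entirely inside the tissue of interest:
> >>>
> >>>   #include "itkImageRegionConstIterator.h"
> >>>   #include "vnl/vnl_vector.h"
> >>>   #include "vnl/vnl_matrix.h"
> >>>
> >>>   typedef itk::ImageRegionConstIterator< ColorImageType > IteratorType;
> >>>   IteratorType it( colorImage, sampleRegion );
> >>>
> >>>   vnl_vector< double > mean( 3, 0.0 );
> >>>   vnl_matrix< double > covariance( 3, 3, 0.0 );
> >>>   unsigned long count = 0;
> >>>
> >>>   // First pass: accumulate the mean RGB vector.
> >>>   for ( it.GoToBegin(); !it.IsAtEnd(); ++it, ++count )
> >>>     {
> >>>     const RGBPixelType p = it.Get();
> >>>     for ( unsigned int i = 0; i < 3; ++i )
> >>>       {
> >>>       mean[i] += p[i];
> >>>       }
> >>>     }
> >>>   mean /= static_cast< double >( count );
> >>>
> >>>   // Second pass: accumulate the covariance matrix.
> >>>   for ( it.GoToBegin(); !it.IsAtEnd(); ++it )
> >>>     {
> >>>     const RGBPixelType p = it.Get();
> >>>     for ( unsigned int i = 0; i < 3; ++i )
> >>>       {
> >>>       for ( unsigned int j = 0; j < 3; ++j )
> >>>         {
> >>>         covariance( i, j ) += ( p[i] - mean[i] ) * ( p[j] - mean[j] );
> >>>         }
> >>>       }
> >>>     }
> >>>   covariance /= static_cast< double >( count - 1 );
> >>>
> >>>   // mean and covariance can now be passed to the Mahalanobis
> >>>   // distance function that drives the speed image.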
> >>>
> >>>
> >>> Please let us know if you have any problems in writing
> >>> this class.  We will also be very interested in adding it
> >>> to the toolkit  :-)
> >>>
> >>>
> >>> Thanks
> >>>
> >>>
> >>>
> >>>    Luis
> >>>
> >>>
> >>>
> >>> -------------------------------
> >>> Stefan Lindenau wrote:
> >>>
> >>>> Hi Luis,
> >>>>
> >>>> OK, I have read again the parts of the Software Guide that you
> >>>> mentioned.
> >>>>
> >>>> Now I want to realize the segmentation of the Visible Human Data by
> >>>> using the VectorConfidenceConnectedImageFilter to get the mean
> >>>> vector and the covariance matrix of my tissue. I cannot use the
> >>>> segmentation of this filter directly because it is leaking.
> >>>> With this data I want to initialize a level set filter that is
> >>>> similar to the ThresholdLevelset filter, but it should use
> >>>> the Mahalanobis distance for generating the speed image.
> >>>>
> >>>> I think that I have to write this level set filter myself, or is
> >>>> there an implementation available for such a problem?
> >>>>
> >>>>
> >>>> Thanks
> >>>> Stefan
> >>>>
> >>>> Luis Ibanez wrote:
> >>>>
> >>>>> Hi Stefan,
> >>>>>
> >>>>> When you use ConfidenceConnected you only need to provide the
> >>>>> multiplier
> >>>>> for the variance.  The range of intensities is computed by the filter
> >>>>> based on the mean and the variance of intensities around the seed
> >>>>> points.
> >>>>>
> >>>>> The range is simply:
> >>>>>
> >>>>>      lower limit =   mean   -   standardDeviation * multiplier
> >>>>>      upper limit =   mean   +   standardDeviation * multiplier
> >>>>>
> >>>>> The mean and standardDeviation are computed by the filter.
> >>>>> You only need to tune the value of the multiplier, and
> >>>>> experiment with the number of iterations.
> >>>>>
> >>>>> This also holds for the RGB confidence connected filter, where
> >>>>> instead of a scalar mean you have a mean vector of three
> >>>>> components (the RGB components), instead of a standardDeviation
> >>>>> you have a covariance matrix, and instead of lower and upper
> >>>>> limits the filter computes the Mahalanobis distance in RGB space.
> >>>>> Therefore you only need to provide the value for the multiplier.
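> >>>>>
> >>>>> As a small (untested) illustration, the scalar version can be set
> >>>>> up as shown below; grayImage, the seed and the parameter values are
> >>>>> placeholders, and the RGB variant
> >>>>> (VectorConfidenceConnectedImageFilter) is configured with the same
> >>>>> multiplier and iteration calls:
> >>>>>
> >>>>>   #include "itkImage.h"
> >>>>>   #include "itkConfidenceConnectedImageFilter.h"
> >>>>>
> >>>>>   typedef itk::Image< unsigned char, 3 > ScalarImageType;
> >>>>>   typedef itk::ConfidenceConnectedImageFilter<
> >>>>>                    ScalarImageType, ScalarImageType > ConfidenceFilterType;
> >>>>>
> >>>>>   ConfidenceFilterType::Pointer confidence = ConfidenceFilterType::New();
> >>>>>   confidence->SetInput( grayImage );              // the scalar volume
> >>>>>   confidence->SetInitialNeighborhoodRadius( 2 );  // neighborhood around the
> >>>>>                                                   // seed used to estimate
> >>>>>                                                   // mean and stddev
> >>>>>   confidence->SetMultiplier( 2.5 );        // accept mean +/- 2.5 * stddev
> >>>>>   confidence->SetNumberOfIterations( 5 );  // statistics are re-estimated
> >>>>>                                            // over the grown region each time
> >>>>>   confidence->SetReplaceValue( 255 );
> >>>>>
> >>>>>   ScalarImageType::IndexType seed;
> >>>>>   seed[0] = 100; seed[1] = 120; seed[2] = 50;
> >>>>>   confidence->SetSeed( seed );
> >>>>>   confidence->Update();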
> >>>>>
> >>>>> You may want to read again the description of this method in the
> >>>>> SoftwareGuide.
> >>>>>
> >>>>>         http://www.itk.org/ItkSoftwareGuide.pdf
> >>>>>
> >>>>> It is in Section 9.1.3.
> >>>>> In particular, look at Equation 9.2 on PDF page 348.
> >>>>>
> >>>>> We used the RGB confidence connected filter for producing most of the
> >>>>> segmentations shown on the cover of the printed version of the
> >>>>> SoftwareGuide.
> >>>>>
> >>>>> The code we used for creating the cover is available in
> >>>>>
> >>>>>      InsightDocuments/SoftwareGuide/Cover/Source
> >>>>>
> >>>>>
> >>>>>
> >>>>> Regards,
> >>>>>
> >>>>>
> >>>>>   Luis
> >>>>>
> >>>>>
> >>>>> ------------------------
> >>>>> Stefan Lindenau wrote:
> >>>>>
> >>>>>> Hi Luis,
> >>>>>>
> >>>>>> I tried to get Josh's example working, but I failed to compile it
> >>>>>> on VC6 and Cygwin. At the moment I want to give your
> >>>>>> suggestion with the ConfidenceConnected and the
> >>>>>> ThresholdConnected filter a try.
> >>>>>> I read the Software Guide and I think that I now understand how
> >>>>>> these filters work. The only thing that I do not
> >>>>>> understand is how I can get the intensity range values from the
> >>>>>> ConfidenceConnected filter. I can get/set the multiplier, but I
> >>>>>> see no access method for these values.
> >>>>>>
> >>>>>> Maybe I could get them by comparing the input image of the
> >>>>>> ConfidenceConnectedFilter with the output image, but this seems a
> >>>>>> bit too complicated to me. Is there a more elegant solution? Did I
> >>>>>> miss a method?
> >>>>>>
> >>>>>> Thank you
> >>>>>> Stefan
> >>>>>>
> >>>>>> P.S.: As I have progressed with my work, I have seen that the data
> >>>>>> I need can be reduced to 500 MB (unsigned char RGB).
> >>>>>>
> >>>>>> Luis Ibanez wrote:
> >>>>>>
> >>>>>>>
> >>>>>>> Hi Stefan,
> >>>>>>>
> >>>>>>>
> >>>>>>> The reason for postprocessing the joint regions is that
> >>>>>>> if you take two contiguous pieces of the image and run
> >>>>>>> level sets on each one, the level sets will evolve in
> >>>>>>> a different way on each side of the boundary. It
> >>>>>>> is likely that if you try to put the two level sets
> >>>>>>> together just by joining the two blocks of data, the
> >>>>>>> zero set surface will not be contiguous from one block
> >>>>>>> to the next.
> >>>>>>>
> >>>>>>> I would anticipate that some smoothing will be needed
> >>>>>>> for ironing out any discontinuity in the connection.
> >>>>>>> Taking the joint region (a region around the boundary
> >>>>>>> of the two blocks) and running some more iterations of
> >>>>>>> the level set there may help to smooth out the transition
> >>>>>>> between the blocks.
> >>>>>>>
> >>>>>>> You could certainly attempt this post-processing smoothing
> >>>>>>> with other methods. For example, a simple median filter
> >>>>>>> has proved to be powerful enough for smoothing out such
> >>>>>>> transitions, and it will be a much faster approach too.
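> >>>>>>>
> >>>>>>> For illustration, a rough (untested) sketch of such a median
> >>>>>>> smoothing over the joint region could look like this;
> >>>>>>> mergedSegmentation and jointRegion are placeholders for the
> >>>>>>> pasted-together result and the band around the block boundary:
> >>>>>>>
> >>>>>>>   #include "itkImage.h"
> >>>>>>>   #include "itkRegionOfInterestImageFilter.h"
> >>>>>>>   #include "itkMedianImageFilter.h"
> >>>>>>>
> >>>>>>>   typedef itk::Image< unsigned char, 3 > MaskImageType;
> >>>>>>>
> >>>>>>>   // Extract a band of slices around the boundary between the
> >>>>>>>   // two processed blocks.
> >>>>>>>   typedef itk::RegionOfInterestImageFilter<
> >>>>>>>                    MaskImageType, MaskImageType > ROIFilterType;
> >>>>>>>   ROIFilterType::Pointer roi = ROIFilterType::New();
> >>>>>>>   roi->SetInput( mergedSegmentation );
> >>>>>>>   roi->SetRegionOfInterest( jointRegion );
> >>>>>>>
> >>>>>>>   // Median filtering smooths out the transition between the blocks.
> >>>>>>>   typedef itk::MedianImageFilter<
> >>>>>>>                    MaskImageType, MaskImageType > MedianFilterType;
> >>>>>>>   MedianFilterType::Pointer median = MedianFilterType::New();
> >>>>>>>   MaskImageType::SizeType radius;
> >>>>>>>   radius.Fill( 2 );
> >>>>>>>   median->SetInput( roi->GetOutput() );
> >>>>>>>   median->SetRadius( radius );
> >>>>>>>   median->Update();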
> >>>>>>>
> >>>>>>> You may want to start by trying Josh's suggestion, since
> >>>>>>> he and his group are the ones who have experimented most
> >>>>>>> deeply with this issue.
> >>>>>>>
> >>>>>>>
> >>>>>>> Please let us know of your findings,
> >>>>>>>
> >>>>>>>
> >>>>>>>  Thanks
> >>>>>>>
> >>>>>>>
> >>>>>>>   Luis
> >>>>>>>
> >>>>>>>
> >>>>>>> -----------------------
> >>>>>>> Stefan Lindenau wrote:
> >>>>>>>
> >>>>>>>> Hi Luis,
> >>>>>>>>
> >>>>>>>> thank you for your quick and comprehensive answer. I will just
> >>>>>>>> have to cut the image into pieces.
> >>>>>>>>
> >>>>>>>> Only one thing I still do not understand:
> >>>>>>>>
> >>>>>>>>> If you use level sets, you could post process
> >>>>>>>>> the joint regions between your contiguous pieces
> >>>>>>>>> in order to smooth out the potential differences
> >>>>>>>>> between the level set obtained in one region and
> >>>>>>>>> the level set obtained in the facing region.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> Why does postprocessing the joint region depend on the
> >>>>>>>> level sets? In my understanding I will just cut the data into
> >>>>>>>> big pieces, process them, and put them back together after the
> >>>>>>>> processing. Then such a postprocessing should be possible with
> >>>>>>>> any of the methods. Or did I overlook some facts?
> >>>>>>>>
> >>>>>>>> Maybe I can get it working with the streaming example for
> >>>>>>>> watershed algorithms as Joshua proposed. I will just have to
> >>>>>>>> test it out.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> thanks
> >>>>>>>> Stefan
> >>>>>>>>
> >>>>>>>> _______________________________________________
> >>>>>>>
> >
> >
> >
> >
> > _______________________________________________
> > Insight-users mailing list
> > Insight-users at itk.org
> > http://www.itk.org/mailman/listinfo/insight-users
> >
> >
>
>
> _______________________________________________
> Insight-users mailing list
> Insight-users at itk.org
> http://www.itk.org/mailman/listinfo/insight-users
>