[Insight-users] Segmenting Visible Human Data : RGB ConfidenceConnected.
Stefan Lindenau
stefan.lindenau at gmx.de
Thu, 04 Dec 2003 17:30:42 -0500
Hi Luis,
ok, I have read again the parts of the Software Guide that you mentioned.
Now I want to segment the Visible Human data using the
VectorConfidenceConnectedImageFilter to get the mean vector
and the covariance matrix of my tissue. I cannot use the segmentation of
this filter directly because it leaks.
With these statistics I want to initialize a level-set filter that is
almost identical to the threshold level-set filter, except that it
should use the Mahalanobis distance for generating the speed image.
Do I have to write this level-set filter myself, or is an
implementation for such a problem already available?
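The speed-image idea can be sketched in plain Python (this is not ITK code; the function names, the sample mean/covariance values, and the sign convention for the speed are all illustrative assumptions — in practice the mean and covariance would come from the VectorConfidenceConnectedImageFilter's seed-region statistics):

```python
# Illustrative sketch: a Mahalanobis-distance-based speed value for an RGB
# pixel, analogous to how a threshold-based level-set filter builds its
# speed image. All numbers below are made-up tissue statistics.

def invert_3x3(m):
    """Invert a 3x3 matrix (list of 3 rows) via the adjugate."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)
    adj = [
        [e*i - f*h, c*h - b*i, b*f - c*e],
        [f*g - d*i, a*i - c*g, c*d - a*f],
        [d*h - e*g, b*g - a*h, a*e - b*d],
    ]
    return [[x / det for x in row] for row in adj]

def mahalanobis2(pixel, mean, cov_inv):
    """Squared Mahalanobis distance of an RGB pixel from the mean vector."""
    d = [p - m for p, m in zip(pixel, mean)]
    return sum(d[i] * cov_inv[i][j] * d[j]
               for i in range(3) for j in range(3))

def speed(pixel, mean, cov_inv, threshold):
    """Positive inside the tissue model, negative outside."""
    return threshold - mahalanobis2(pixel, mean, cov_inv)

mean = [180.0, 90.0, 80.0]            # illustrative tissue mean (R, G, B)
cov = [[25.0, 0.0, 0.0],
       [0.0, 16.0, 0.0],
       [0.0, 0.0, 16.0]]              # illustrative covariance matrix
cov_inv = invert_3x3(cov)
print(speed([182.0, 92.0, 81.0], mean, cov_inv, 9.0))   # positive: inside
print(speed([120.0, 60.0, 40.0], mean, cov_inv, 9.0))   # negative: outside
```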
Thanks
Stefan
Luis Ibanez wrote:
> Hi Stefan,
>
> When you use ConfidenceConnected you only need to provide the multiplier
> for the variance. The range of intensities is computed by the filter
> based on the mean and the variance of intensities around the seed
> points.
>
> The range is simply:
>
> lower limit = mean - standardDeviation * multiplier
> upper limit = mean + standardDeviation * multiplier
>
> The mean and standardDeviation are computed by the filter.
> You only need to tune the value of the multiplier, and
> experiment with the number of iterations.
>
> This holds for RGB confidence connected, where instead of a scalar mean
> you have a mean vector of three components (the RGB components), and
> instead of a standardDeviation you have a covariance matrix; instead of
> lower and upper limits the filter computes the Mahalanobis distance in
> RGB space.
> Therefore you only need to provide the value for the multiplier.
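The two acceptance tests described above can be sketched in plain Python (not ITK code; the sample statistics and the uncorrelated-covariance simplification are illustrative assumptions):

```python
# Scalar ConfidenceConnected: a pixel is included when it falls inside
# [mean - k*stddev, mean + k*stddev], where mean and stddev come from the
# seed region and k is the user-supplied multiplier.
def scalar_accept(value, mean, stddev, multiplier):
    lower = mean - stddev * multiplier
    upper = mean + stddev * multiplier
    return lower <= value <= upper

# RGB VectorConfidenceConnected: the interval test becomes a Mahalanobis
# distance test against the mean vector and covariance matrix. With an
# (assumed) uncorrelated covariance, the inverse is just the reciprocal
# of each per-channel variance.
def vector_accept(pixel, mean, variances, multiplier):
    d2 = sum((p - m) ** 2 / v for p, m, v in zip(pixel, mean, variances))
    return d2 <= multiplier ** 2

print(scalar_accept(105.0, 100.0, 4.0, 2.5))   # True: within 2.5 stddevs
print(vector_accept([182.0, 92.0, 81.0], [180.0, 90.0, 80.0],
                    [25.0, 16.0, 16.0], 2.5))  # True
```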
>
> You may want to read again the description of this method in the
> SoftwareGuide.
>
> http://www.itk.org/ItkSoftwareGuide.pdf
>
> It is in Section 9.1.3.
> In particular look at equation 9.2 in pdf-page 348.
>
> We used the RGB Confidence connected filter for producing most of the
> segmentations shown in the cover of the SoftwareGuide printed version.
>
> The code we used for creating the cover is available in
>
> InsightDocuments/SoftwareGuide/Cover/Source
>
>
>
> Regards,
>
>
> Luis
>
>
> ------------------------
> Stefan Lindenau wrote:
>
>> Hi Luis,
>>
>> I tried to get Josh's example working, but I failed to compile it on
>> VC6 and Cygwin. At the moment I want to give your suggestion with the
>> ConfidenceConnected and the ThresholdConnected filters a try.
>> I have read the Software Guide and I think I now understand how these
>> filters work. The only thing that I do not understand is how I can
>> get the intensity range values from the ConfidenceConnected filter.
>> I can get/set the multiplier, but I see no access method for these
>> values.
>>
>> Maybe I could get them by comparing the input image of the
>> ConfidenceConnectedFilter with the output image, but this seems a bit
>> too complicated to me. Is there a more elegant solution? Did I miss a
>> method?
>>
>> Thank you
>> Stefan
>>
>> P.S.: as I have progressed with my work I have seen that the data I
>> need can be reduced to 500MB (unsigned char RGB).
>>
>> Luis Ibanez wrote:
>>
>>>
>>> Hi Stefan,
>>>
>>>
>>> The reason for postprocessing the joint regions is that
>>> if you take two contiguous pieces of the image and run
>>> level sets on each one, the level sets will evolve in
>>> a different way at each side of the boundary, and it
>>> is likely that if you try to put the two level sets
>>> together just by joining the two blocks of data, the
>>> zero set surface will not be contiguous from one block
>>> to the next.
>>>
>>> I would anticipate that some smoothing will be needed
>>> for ironing out any discontinuity in the connection.
>>> Taking the joint region (a region around the boundary
>>> of the two blocks) and running some more iterations of
>>> the level set there may help to smooth out the transition
>>> between the blocks.
>>>
>>> You could certainly attempt this post-processing-smoothing
>>> with other methods. For example, a simple median filter
>>> has proved to be powerful enough for smoothing out
>>> transitions and it will be a much faster approach too.
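As a toy illustration of the median-filter idea (plain Python, not ITK; the 1-D profile is a made-up stand-in for a label profile across a block boundary):

```python
# A small 1-D median filter: an isolated glitch at the seam between two
# independently processed blocks is replaced by the local majority value.
def median_filter_1d(values, radius=1):
    out = []
    for i in range(len(values)):
        lo = max(0, i - radius)
        hi = min(len(values), i + radius + 1)
        window = sorted(values[lo:hi])
        out.append(window[len(window) // 2])
    return out

# A label profile with a one-pixel discontinuity at the block boundary:
profile = [1, 1, 1, 0, 1, 1, 1]
print(median_filter_1d(profile))  # -> [1, 1, 1, 1, 1, 1, 1]
```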
>>>
>>> You may want to start by trying Josh's suggestion, since
>>> he and his group are the ones who have experimented most
>>> deeply with this issue.
>>>
>>>
>>> Please let us know of your findings,
>>>
>>>
>>> Thanks
>>>
>>>
>>> Luis
>>>
>>>
>>> -----------------------
>>> Stefan Lindenau wrote:
>>>
>>>> Hi Luis,
>>>>
>>>> thank you for your quick and comprehensive answer. I will just have
>>>> to cut the image into pieces.
>>>>
>>>> Only one thing I still do not understand:
>>>>
>>>>> If you use level sets, you could post process
>>>>> the joint regions between your contiguous pieces
>>>>> in order to smooth out the potential differences
>>>>> between the level set obtained in one region and
>>>>> the level set obtained in the facing region.
>>>>
>>>> Why does post-processing the joint region depend on the level
>>>> sets? In my understanding I will just cut the data into big
>>>> pieces, process them, and put them back together after the
>>>> processing. Then such post-processing should be possible with any
>>>> of the methods. Or did I overlook some facts?
>>>>
>>>> Maybe I can get it working with the streaming example for watershed
>>>> algorithms as Joshua proposed. I will just have to test it out.
>>>>
>>>>
>>>> thanks
>>>> Stefan
>>>>
>>>> _______________________________________________
>>>> Insight-users mailing list
>>>> Insight-users at itk.org
>>>> http://www.itk.org/mailman/listinfo/insight-users
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>
>
>