[Insight-users] How do I validate a 3D segmentation process?
Simon Warfield
warfield@bwh.harvard.edu
Tue, 11 Mar 2003 15:03:37 -0500
Hi Luis,
On Tue, Mar 11, 2003 at 02:49:23PM -0500, Luis Ibanez wrote:
> Validation is an ill-defined problem.
>
> 2) Recruit human operators with some
> background in anatomy. Train them
> in manual segmentation techniques
> and convince them (via persuasion,
> pizzas or even salary) to pass hours
> in front of the screen segmenting
> a set of defined volumes. Ask them
> to repeat the segmentation several
> times in order to evaluate their
> variability.
>
> Then you compare the segmentations
> done with your method X against their
> manual segmentations.
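As a concrete illustration of that comparison step, one common score for a
method-X segmentation against a manual one is the Dice overlap coefficient.
This is a generic sketch of my own, not something prescribed in this thread:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks.

    Returns 2|A & B| / (|A| + |B|): 1.0 for identical masks,
    0.0 for disjoint ones.
    """
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

Repeating this over each operator's repeated segmentations also gives a
simple handle on intra- and inter-operator variability.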
We have done some work on this problem recently. It has been hard to know
exactly how to compare segmentations of experts and programs.
We recently developed an algorithm we call STAPLE (Simultaneous Truth
and Performance Level Estimation), which computes a probabilistic estimate of
the hidden "ground truth" segmentation from a group of expert segmentations,
together with a simultaneous measure of the quality of each expert. This approach readily
enables the assessment of an automated image segmentation algorithm, and
direct comparison of expert and algorithm performance.
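For readers curious how such an estimate can be computed, here is a minimal
sketch of a binary EM iteration in the STAPLE spirit: alternate between a
soft estimate of the hidden true label at each voxel and per-rater
sensitivity/specificity estimates. The variable names, initialization, and
fixed prevalence prior are my own simplifying assumptions, not details taken
from the paper (see the linked PDF for the actual method):

```python
import numpy as np

def staple_binary(D, prior=None, n_iter=50, eps=1e-12):
    """EM sketch in the style of binary STAPLE.

    D     : (n_raters, n_voxels) array of 0/1 segmentation decisions.
    prior : scalar foreground prevalence; estimated from D if None.
    Returns (W, p, q): per-voxel foreground posteriors, and per-rater
    sensitivity and specificity estimates.
    """
    n_raters, n_voxels = D.shape
    gamma = D.mean() if prior is None else prior  # foreground prior, kept fixed
    p = np.full(n_raters, 0.9)  # initial sensitivities (assumed, not from paper)
    q = np.full(n_raters, 0.9)  # initial specificities

    for _ in range(n_iter):
        # E-step: posterior that each voxel is truly foreground,
        # combining every rater's decision under the current p, q.
        # (For many raters, work in log space to avoid underflow.)
        a = gamma * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - gamma) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b + eps)

        # M-step: re-estimate each rater's performance against the soft truth.
        p = (W * D).sum(axis=1) / (W.sum() + eps)
        q = ((1 - W) * (1 - D)).sum(axis=1) / ((1 - W).sum() + eps)

    return W, p, q
```

An automated algorithm's output can then be fed in as just another "rater",
which is what makes the direct expert-versus-algorithm comparison possible.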
A conference paper and presentation on the method are available at:
http://splweb.bwh.harvard.edu:8000/~warfield/papers/staple-paper.pdf
http://splweb.bwh.harvard.edu:8000/~warfield/papers/staple-presentation.pdf
--
Simon K. Warfield, Ph.D. warfield@bwh.harvard.edu Phone:617-732-7090
http://www.spl.harvard.edu/~warfield FAX: 617-582-6033
Assistant Professor of Radiology, Harvard Medical School
Director, Computational Radiology Laboratory
Thorn 329, Dept Radiology, Brigham and Women's Hospital
75 Francis St, Boston, MA, 02115