ITK Release 4/The Team/A2D2 Development Team
- 1 A Comprehensive Workflow for Robust Characterization of Microstructure for Cancer Studies
- 2 Comprehensive Workflow for Large Histology Segmentation and Visualization
- 3 Adding Deconvolution Algorithms to ITK
- 4 ITK Extensions for Video Processing
- 5 Real-Time Image Capture for ITK through a Video Grabber
- 6 Methods in Medical Image Analysis: an ITK-based course
- 7 ITK Algorithms for Analyzing Time-varying Shape with Application to Longitudinal Heart Modeling
- 8 3D Real-time Physics-based Non-rigid Registration for Image-guided Neurosurgery
- 9 Denoising Microscopy, MRI, and Ultrasound Images
- 10 Framework for Automated Parameter Tuning of ITK Registration Pipelines
- 11 SCORE: Systematic Comparison through Objective Rating and Evaluation
- 12 SCORE++: Crowd sourced data, automatic segmentation, and ground truth for ITK4
- 13 Fostering Open Science for Lung Cancer Lesion Sizing
A2D2 stands for Algorithms, Adaptors, and Data. A2D2 developers run projects that exercise the ITKv4 toolkit and define requirements from their application domains, thereby guiding the design and development of ITKv4. The projects are listed below.
A Comprehensive Workflow for Robust Characterization of Microstructure for Cancer Studies
Raghu Machiraju (Ohio State University), Kun Huang
There is a growing body of cancer biology research that relies heavily on 3D cellular model systems. In recent years, there has been particular interest in the tumor microenvironment (TME) that surrounds epithelial tumors. Fluorescence markers together with confocal and multi-photon microscopes are employed to create the required virtual 3D annotated models of the TME. We propose to create a comprehensive ITKv4 application that includes tangible workflows for preprocessing (denoising) images, and for segmenting, classifying and visualizing cell nuclei and their arrangements. Emphasis will be placed on creating tangible shape spaces and tools that associate cellular (nuclei) phenotypes with regions in the microenvironment. We will leverage our extensive experience in developing algorithms for processing confocal stacks from thick tissue sections. The proposed tool will facilitate the use of ITKv4 among an important and growing community of cancer researchers.
Comprehensive Workflow for Large Histology Segmentation and Visualization
Raghu Machiraju (Ohio State University), Kun Huang and Lisa Lee
3D histology stacks are increasingly used to understand gross anatomical changes and to provide valuable educational contexts. Most existing toolkits take a 2D approach and do not meet the challenges posed by 3D histology. ITKv4 can facilitate the realization of application-level toolkits that allow for sensible registration, segmentation and reconstruction of digital slides depicting various organs and tissue systems. We will leverage our extensive experience in constructing ITK-based tools to process 3D histology. Modules will be included that allow for pre-processing (color correction, artifact removal, etc.), rigid and non-rigid registration, material-based segmentation, and visualization. Additionally, we will provide access to multi-resolution data that describes ensembles of nephrons in a human kidney. Our team includes an anatomist who will provide validation data and will deploy the tool in her anatomy classes. Thus, our proposed tool will bring a valuable user community into the ITK fold.
Adding Deconvolution Algorithms to ITK
Marc Niethammer (Univ. of NC, Chapel Hill), Russell Taylor and Cory Quammen
ITK offers many algorithms suitable for analyzing images from fluorescence microscopy, an important imaging modality in the life sciences. However, blur from out-of-focus light reduces the quantitative accuracy of these algorithms. Adding deconvolution algorithms that reduce this blur to ITK would increase ITK’s impact on the life scientist community. We propose to 1) implement four of the most-used deconvolution algorithms in ITK that scale to multi-core systems by using ITK’s existing multithreading capabilities, 2) implement theoretical parametric models of widefield point-spread functions that can be fit to a measured point-spread function using an adapter interface to ITK’s registration framework, 3) curate example images from our collaborators for inclusion in ITK examples and tests, and 4) disseminate the algorithms through both the ITK library and through the visualization and image analysis programs ImageSurfer, Slicer3, and BioImageXD, all of which have large user bases among life scientists.
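To make the core idea concrete, the sketch below implements one of the classic iterative deconvolution methods (Richardson-Lucy) on a 1-D signal in plain Python. This is only an illustration of the kind of algorithm the proposal covers; the actual ITK filters would operate on N-D itk::Image objects and use ITK's threading and point-spread-function models, and the proposal does not specify which four algorithms are meant.

```python
# Minimal 1-D Richardson-Lucy deconvolution sketch (plain Python).
# Illustrative only: real deconvolution filters work on N-D images
# with FFT-based convolution and multithreading.

def convolve(signal, kernel):
    """Linear convolution, 'same' size, zero-padded borders."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=20):
    """Iterate: estimate <- estimate * (psf_mirror (*) (observed / (psf (*) estimate)))."""
    psf_mirror = psf[::-1]
    estimate = [1.0] * len(observed)           # flat initial guess
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / b if b > 1e-12 else 0.0
                 for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# Blur a spike with a small Gaussian-like PSF, then deconvolve it:
psf = [0.25, 0.5, 0.25]
truth = [0.0, 0.0, 0.0, 4.0, 0.0, 0.0, 0.0]
observed = convolve(truth, psf)        # spike smeared across 3 samples
restored = richardson_lucy(observed, psf, iterations=50)
```

After enough iterations the restored signal concentrates back toward the original spike, which is exactly the sharpening effect that improves the quantitative accuracy of downstream analysis.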
ITK Extensions for Video Processing
Amitha Perera (Kitware), Patrick Reynolds, Hua Yang
The widespread proliferation of digital video recorders and the advent of video-based imaging modalities in biomedical image analysis create a need for image processing techniques that take advantage of temporal data. The computer vision community has significant experience in video analysis, and leveraging that experience would be a great benefit to medical image analysis. We propose to extend ITK to allow algorithms from existing computer vision libraries to be quickly integrated for use within ITK, allowing current ITK users to evaluate or use computer vision techniques. We also hope that by bridging ITK and existing vision libraries, we will bring some of the vision community into the ITK community.
Real-Time Image Capture for ITK through a Video Grabber
Kevin Cleary (Georgetown), Patrick Cheng and Ziv Yaniv
ITK is a widely used toolkit for image analysis. One of its design features is the generic itkImageReader API, which encapsulates support for a wide variety of image formats and greatly simplifies image data reading operations for users. However, ITK is designed to handle only “off-line” images, meaning images stored as files. It lacks the capability to deal with real-time images and video streams, such as the outputs of ultrasound, endoscopy, and x-ray fluoroscopy, which are widely used in the clinical environment for both diagnostic and image guidance purposes. In an ITK user survey conducted in 2006, many researchers indicated “access real-time image data” as a desired feature. The goal of this proposal is to add the capability to capture real-time video streams by implementing a hardware-independent (generic) itkVideoGrabber class. This new extension will broaden ITK’s application areas, expand its user base, and add to its impact on clinical practice.
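The hardware-independent design described above can be sketched as an abstract grabber interface that device-specific back ends implement, so that pipelines are written against one generic API. The Python sketch below is hypothetical: the class and method names are illustrative stand-ins, not the actual itkVideoGrabber API (which the proposal leaves to be designed, in C++).

```python
# Hypothetical sketch of a hardware-independent grabber interface.
# Names are illustrative; the proposed itkVideoGrabber would be a
# C++ ITK class with device-specific back ends.
from abc import ABC, abstractmethod

class VideoGrabberBase(ABC):
    """Generic API: open a source, pull frames, close it."""

    @abstractmethod
    def open(self): ...

    @abstractmethod
    def grab_frame(self):
        """Return the next frame, or None when the stream ends."""

    @abstractmethod
    def close(self): ...

class SimulatedGrabber(VideoGrabberBase):
    """In-memory back end standing in for real hardware."""

    def __init__(self, frames):
        self._frames = list(frames)
        self._index = 0
        self._opened = False

    def open(self):
        self._opened = True

    def grab_frame(self):
        if not self._opened or self._index >= len(self._frames):
            return None
        frame = self._frames[self._index]
        self._index += 1
        return frame

    def close(self):
        self._opened = False

# A processing pipeline sees only the generic interface:
def count_bright_pixels(grabber, threshold=128):
    grabber.open()
    counts = []
    while (frame := grabber.grab_frame()) is not None:
        counts.append(sum(1 for px in frame if px > threshold))
    grabber.close()
    return counts

grabber = SimulatedGrabber([[0, 200, 255], [10, 20, 30]])
per_frame = count_bright_pixels(grabber)   # [2, 0]
```

Because the pipeline depends only on the base-class contract, swapping a simulated source for an ultrasound or endoscopy driver would not change the analysis code, which is the point of a generic grabber class.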
Methods in Medical Image Analysis: an ITK-based course
John Galeotti (CMU) and George Stetten
We propose to add additional algorithm families to ITK by adapting an existing multi-university ITK-project-based course, titled “Methods in Medical Image Analysis.” We will produce new contributions to ITK by generating new contributors. Students finishing the one-semester course will be able to write new code that is ITK-compliant, deposit it in the ITK repository, and generate appropriate validation tests. During the performance period, the chief deliverables will center on enhancing ITK to work with real-time video. OpenCV will be integrated into ITK, and these new facilities will be used to implement several new algorithms, including (1) surface registration between a CT/MRI volume and a range map generated by stereo video cameras, (2) real-time ultrasound slice tracking in patient coordinates by means of classical optical flow on the patient’s skin as measured from a small camera mounted on the ultrasound probe, and (3) segmentation and tracking of cells in large video microscopy datasets.
ITK Algorithms for Analyzing Time-varying Shape with Application to Longitudinal Heart Modeling
Thomas Fletcher (U. Utah), Joshua Cates, Rob MacLeod, Chris McGann and Steven Callahan
The goal of this project is to implement algorithms within the Insight Toolkit for the longitudinal analysis of three-dimensional anatomical shape. The modelling and statistical analysis of longitudinal shape changes has a wide range of biomedical applications, including understanding normal versus abnormal anatomical development, characterizing degenerative changes due to disease, and modelling motion such as breathing or the heart cycle. Our proposed contribution to ITK will be a general software framework for constructing statistical models of sets of shapes from image data. This framework will include a mixed-effects model for analyzing longitudinal shape change. Such models offer increased statistical power over standard regression or simple averaging of individual trends. A driving application for testing our implementation of these algorithms will come from a longitudinal MRI study of patients with atrial fibrillation following radio frequency ablation.
3D Real-time Physics-based Non-rigid Registration for Image-guided Neurosurgery
Nikos Chrisochoides (William and Mary), Andrey Chernikov, Yixun Liu, Michel Audette, Luis Ibanez, Casey Goodlett and Xenios Papademetris
We will develop an ITK implementation of physics-based Non-Rigid Registration (NRR) for Image-Guided Surgery (IGS) that will satisfy the following requirements: account for tissue properties in the registration, improve accuracy compared to rigid registration, and reduce execution time to less than one minute. The deliverable from this project differs from the existing ITK class FEMRegistrationFilter. Our methodology is based on the separation of the NRR method into two parts: a regular part, block matching that utilizes the GPU, and an irregular part, a Finite Element solver that is mapped to multi-core processors. The benefits to the ITK community are at least two-fold. First, as a stand-alone software we will provide a computationally efficient registration method that accounts for tissue properties and that approximates sparse deformation. Second, through ITK integration into other IGS toolkits, NRR will be part of open-source systems like 3D-Slicer and IGSTK, and commercial systems like BrainLab.
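The "regular part" of the methodology (block matching) can be illustrated in miniature: for each block in the fixed image, search a neighborhood of the moving image for the displacement with the best match score. The sketch below does this in 1-D with a sum-of-squared-differences score; it is a toy stand-in for the project's 3-D, GPU-based implementation, whose sparse displacements feed the finite-element solver.

```python
# Toy 1-D illustration of the block-matching step described above.
# The actual project runs this in 3-D on the GPU and passes the
# sparse displacement field to a Finite Element solver.

def block_match_1d(fixed, moving, block_size=3, search_radius=4):
    """Return (block_start, best_displacement) pairs for each block."""
    matches = []
    for start in range(0, len(fixed) - block_size + 1, block_size):
        block = fixed[start:start + block_size]
        best_disp, best_ssd = 0, float("inf")
        for d in range(-search_radius, search_radius + 1):
            pos = start + d
            if pos < 0 or pos + block_size > len(moving):
                continue  # candidate window falls outside the image
            candidate = moving[pos:pos + block_size]
            ssd = sum((a - b) ** 2 for a, b in zip(block, candidate))
            if ssd < best_ssd:
                best_disp, best_ssd = d, ssd
        matches.append((start, best_disp))
    return matches

# The moving signal is the fixed signal shifted right by 2 samples,
# so the block containing the feature should report displacement +2.
fixed  = [0, 0, 0, 5, 9, 5, 0, 0, 0, 0, 0, 0]
moving = [0, 0, 0, 0, 0, 5, 9, 5, 0, 0, 0, 0]
matches = block_match_1d(fixed, moving)
```

Note that featureless (all-zero) blocks match equally well everywhere, which is why block matching yields a sparse, feature-driven set of displacements that a physics-based solver then interpolates over the whole domain.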
Denoising Microscopy, MRI, and Ultrasound Images
Ross Whitaker (U. Utah) and Suyash Awate
- Full Title: "Fast Non-local Algorithm for Denoising Microscopy, MRI, and Ultrasound using Non-local Parametric Neighborhood Statistics"
Recent approaches to denoising multi-dimensional image data that rely on information contained in image patches, or neighborhoods, have produced outstanding results in a wide spectrum of medical-imaging modalities (e.g. MRI, microscopy, ultrasound). Patch-based denoising algorithms process every pixel by aggregating rich information that is local as well as distant/non-local. The biomedical research community has, however, been unable to harness the potential of these algorithms for two major reasons: (i) the computational burden of these algorithms is tremendous, and (ii) implementations on multidimensional, multimodal data remain inaccessible to a large part of the scientific community. We propose to leverage the widely disseminated Insight Toolkit (ITK) framework to provide a set of modules/classes for fast and effective non-local denoising by exploiting (a) the parallelism inherent in the algorithms, (b) the multi-threading capability in ITK coupled with multi-processor, multi-core architectures, and (c) approximate statistical estimation schemes, to achieve order-of-magnitude speedups in computation time while maintaining an excellent level of performance.
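The core of the patch-based idea can be shown in a few lines: each sample is replaced by a weighted average of all samples in the signal, where the weight depends on how similar the surrounding patches look, not on how close the samples are. The 1-D Python sketch below (a plain non-local means, not the proposal's specific parametric-statistics method) also makes the computational burden obvious: it is quadratic in the number of pixels, which is exactly why the proposal emphasizes parallelism and approximate estimation.

```python
# 1-D non-local means sketch (plain Python).  Illustrative only:
# the proposed ITK classes target N-D images with multithreading and
# approximate (parametric) neighborhood statistics.
import math

def patch(signal, center, radius):
    """Patch around `center`, clamping indices at the borders."""
    return [signal[max(0, min(len(signal) - 1, center + o))]
            for o in range(-radius, radius + 1)]

def nl_means_1d(signal, patch_radius=1, h=0.5):
    out = []
    for i in range(len(signal)):
        p_i = patch(signal, i, patch_radius)
        num = den = 0.0
        for j in range(len(signal)):        # non-local: every position
            p_j = patch(signal, j, patch_radius)
            dist2 = sum((a - b) ** 2 for a, b in zip(p_i, p_j))
            w = math.exp(-dist2 / (h * h))  # similar patches weigh more
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A noisy step signal: flat regions wiggle around 0 and 1.
noisy = [0.1, -0.1, 0.0, 1.1, 0.9, 1.0, 0.05, -0.05]
denoised = nl_means_1d(noisy)
```

Because samples from both flat regions of the step (even the non-adjacent ones) average together while dissimilar patches are down-weighted, the noise in each region shrinks without blurring the step edge, which is the property that makes these methods attractive across MRI, microscopy, and ultrasound.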
Framework for Automated Parameter Tuning of ITK Registration Pipelines
Ziv Yaniv (Georgetown), Filip Banovac, Kevin Cleary, Andinet Enquobahrie and Luis Ibanez
The purpose of this proposal is to develop a framework for automated parameter tuning of ITK non-rigid registration pipelines. ITK components used to construct registration pipelines are highly parameterized. As a consequence, the performance of a registration pipeline, as measured by accuracy and running time, is dependent on the selected parameter values. These values are task specific and most often are determined empirically, based on the developer’s domain specific knowledge and a tedious trial and error process. We propose to automate this process, providing a framework that enables the setting of optimal parameter values for a specific registration task.
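The tuning loop described above amounts to treating the registration pipeline as a black box that maps parameter values to a quality score, and searching the parameter space for the best setting. The sketch below shows the simplest such search (an exhaustive grid); `run_pipeline` and the parameter names are hypothetical stand-ins for an actual ITK registration run, and the proposal does not say which search strategy the framework will use.

```python
# Sketch of an automated parameter-tuning loop.  `run_pipeline` is a
# hypothetical stand-in for launching an ITK registration with the
# given parameters and measuring its error (lower is better); here it
# is a synthetic score with a known optimum so the loop is testable.
import itertools

def run_pipeline(params):
    """Synthetic stand-in for a registration run; minimum at
    step_size=0.25, iterations=200."""
    return ((params["step_size"] - 0.25) ** 2
            + (params["iterations"] - 200) ** 2 / 1e4)

def grid_search(param_grid):
    """Evaluate every combination in the grid, keep the best."""
    names = sorted(param_grid)
    best_params, best_score = None, float("inf")
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = run_pipeline(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

grid = {
    "step_size": [0.05, 0.25, 1.0],
    "iterations": [50, 200, 500],
}
best, score = grid_search(grid)
# best == {"iterations": 200, "step_size": 0.25}
```

In practice each pipeline run is expensive, so a real framework would likely use a smarter optimizer than a full grid, but the interface (parameters in, quality score out) stays the same.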
SCORE: Systematic Comparison through Objective Rating and Evaluation
Marcel Prastawa (U. Utah), Julien Jomier (Kitware), Guido Gerig (U. Utah) and J.C. Fillion-Robin (Kitware)
Validation and comparison of algorithms is a challenging task for three reasons: first, accessing and preprocessing relevant testing datasets is often difficult; second, implementing other algorithms for comparison takes time and duplicates effort; and third, comparison with already published methods is often dependent on the testing datasets. The Insight Toolkit allows developers to share state of the art algorithms. Here, we aim to provide a software infrastructure for sharing, accessing and disseminating datasets and algorithms for validation purposes. We propose to 1) implement classes to access testing datasets from publicly accessible repositories directly into ITK, 2) develop a validation framework for segmentation algorithms, and 3) provide a public platform for the community to share algorithm evaluations. This framework will extend and enhance ITK by allowing developers to easily compare their new methods to existing ones, and allowing users to evaluate potential methods for their needs.
SCORE++: Crowd sourced data, automatic segmentation, and ground truth for ITK4
Sean Megason (Harvard Medical School) and Julien Jomier (Kitware)
In a separate A2D2 proposal (SCORE: Systematic Comparison through Objective Rating and Evaluation), Marcel Prastawa (Univ. Utah) proposes the creation of an infrastructure for hosting test datasets in MIDAS, new I/O filters in ITK to access this data, and new filters for validating and comparing segmentation results. Here we propose to leverage and extend the SCORE infrastructure by crowdsourcing 1) the collection of a diverse range of image sets to place into the SCORE repository, 2) the organization of a Grand Challenge to develop new ITK filters that process datasets in the repository and that will be judged using the new metrics developed in SCORE, and 3) the creation of a system that allows users anywhere in the world to manually segment data in the repository, increasing the availability of ground truth.
Fostering Open Science for Lung Cancer Lesion Sizing
Rick Avila (Kitware), Luis Ibanez (Kitware) and David Yankelevitz (Cornell University)
The goal of this open science project is to accelerate the advancement of CT lung cancer lesion sizing algorithms. We have developed a free and open source lesion sizing toolkit that is based on ITK and contains a modular architecture for lesion sizing algorithms. The toolkit also provides a volumetric measurement reference algorithm for CT lung lesions. We propose to fully integrate this toolkit as a module into ITK, import additional algorithms into the toolkit to better model variation in CT acquisition systems and lung structures, provide a large and open image archive for development and testing, and actively encourage research groups to contribute to the new ITK capabilities. Kitware will provide the open source software engineering skills and computing resources, and Mt. Sinai will provide a high-quality lung cancer image and metadata database as well as clinical guidance by leading lung cancer clinicians.