<h1><center><b>Maverick: Merlin</b></center></h1>


<center><em>A product of Kitware, Inc.</em></center>
<center><em>End-User Documentation</em></center>
<center><em>Copyright (c) 2008, Kitware Inc.</em></center>
= Authors (alphabetical listing) =
* Stephen Aylward
* Julien Finet
* Julien Jomier
* Karthik Krishnan
* Patrick Reynolds
* William Schroeder
* Hua Yang
= Screenshots =
= Introduction =
This documentation is intended as an end-user guide to Merlin, a labelmap creation and editing tool.  It provides an illustrative walk-through of the most common use cases for Merlin.  This documentation makes heavy use of screenshots with supporting text to detail the user interactions required and to summarize the assumptions and requirements of the program at each step.
== Labelmaps ==
Labelmap creation and editing involve assigning each voxel in a three-dimensional (3D) image to a category.  The driving application area for Merlin is anatomical imaging, so the labelmap categories discussed in this document refer to organs (e.g., liver, spleen) and matter typically found in medical images (e.g., internal air, saline).  A configuration file can be used to change the labelmap categories as new data is loaded.  For example, labelmap categories for microscopy images (e.g., containing categories such as nucleus and inter-cellular space), geophysical data (e.g., fault line and granite), and any other domain can be defined.  Figure 1 illustrates one 2D slice of a 3D labelmap from a volumetric image of a pig.  In Figure 2, select organs in that labelmap are rendered as surfaces in 3D.
Figure 1. One 2D slice of a 3D labelmap. Depicts muscle, lungs, liver, and gall bladder.
Figure 2. A 3D rendering of select organs in a labelmap from an MRI scan of a pig.
Labelmap creation is a computer-based process that involves (a) importing medical images such as magnetic resonance images (MRI), which are used to guide the 3D labelmap creation process; (b) manually “painting” categories on top of the medical images; (c) manually touching up those paintings to increase the accuracy and/or utility of the labels; (d) using computerized algorithms to assist in the painting and touch-up processes; and (e) exporting those labelmaps for use in subsequent analyses and simulations.
== Maverick ==
Merlin is one application in a suite of tools built upon Kitware’s Maverick library.  Maverick is an extensive collection of C++ classes, parameter files, specifications, and documentation for (a) defining workflows whereby complex tasks are reduced to a sequence of simple tasks, (b) presenting workflows using Trolltech’s Qt graphical user interface environment, (c) managing representations of physical objects in a scene, (d) rendering scenes, and (e) calling internal and external modules to process a scene at each stage of the workflow.
Maverick is useful for end-users and developers.  It can be used to rapidly define new applications or to integrate advanced image analysis methods into existing applications.  Maverick contains a set of standard applications; those applications can be tailored by Kitware to meet a customer’s specific needs, and Maverick can be made available as a set of binary libraries that can be used by other developers to build new applications or to extend existing applications.
One of the major strengths of Maverick is that it embodies decades of image analysis expertise gathered from the top researchers at Kitware.  The same scientists who lead the development of ITK (www.itk.org) and VTK (www.vtk.org) have developed Maverick to reduce the expertise needed to apply advanced image analysis and visualization algorithms to real-world data.  Their expertise allows Maverick applications to have a simple workflow interface and a few intuitive, high-level options with which novice users can accomplish complex image segmentation and registration tasks with ease.
To understand the importance of image analysis expertise in designing an application, consider the following.  Many segmentation tasks actually involve a complex sequence of ITK filters for pre-processing the data, performing the segmentation based on initializations provided by the user, post-processing the results, and then offering those results for the user to edit.  Each of those steps may be accomplished to varying degrees of efficacy by any of a number of filters and interaction styles, and each of those steps may contain a multitude of parameters.  Nevertheless, with experience and knowledge, most segmentation tasks can be reduced to having the user set one or two high-level parameters, if the imaging modality and target organ are known a priori.  This is the philosophy used throughout Maverick and its modules.
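To make this concrete, the following is a minimal sketch of the kind of filter sequence that a single high-level request might hide, written directly against the open-source ITK API rather than Maverick’s own wrappers: edge-preserving smoothing for pre-processing, seeded region growing for the segmentation itself, and morphological closing for post-processing.  The seed location and threshold values are placeholders that, in a Maverick workflow, would be derived from a user click and from the hyper-parameters described later in this introduction.
<pre>
// Illustrative ITK pipeline sketch; not Maverick source code.
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkCurvatureFlowImageFilter.h"
#include "itkConnectedThresholdImageFilter.h"
#include "itkBinaryBallStructuringElement.h"
#include "itkBinaryMorphologicalClosingImageFilter.h"

int main(int argc, char* argv[])
{
  typedef itk::Image<float, 3>         ImageType;
  typedef itk::Image<unsigned char, 3> LabelType;

  // Pre-processing: edge-preserving smoothing of the input MRI/CT.
  typedef itk::ImageFileReader<ImageType> ReaderType;
  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName(argv[1]);

  typedef itk::CurvatureFlowImageFilter<ImageType, ImageType> SmoothType;
  SmoothType::Pointer smooth = SmoothType::New();
  smooth->SetInput(reader->GetOutput());
  smooth->SetNumberOfIterations(5);
  smooth->SetTimeStep(0.0625);

  // Segmentation: region growing from a user-supplied seed voxel.
  typedef itk::ConnectedThresholdImageFilter<ImageType, LabelType> GrowType;
  GrowType::Pointer grow = GrowType::New();
  grow->SetInput(smooth->GetOutput());
  ImageType::IndexType seed = {{120, 140, 60}};   // placeholder seed
  grow->SetSeed(seed);
  grow->SetLower(80);                             // placeholder intensity window
  grow->SetUpper(160);
  grow->SetReplaceValue(1);

  // Post-processing: morphological closing to fill small holes in the label.
  typedef itk::BinaryBallStructuringElement<unsigned char, 3> KernelType;
  KernelType kernel;
  kernel.SetRadius(2);
  kernel.CreateStructuringElement();
  typedef itk::BinaryMorphologicalClosingImageFilter<LabelType, LabelType, KernelType> CloseType;
  CloseType::Pointer closing = CloseType::New();
  closing->SetInput(grow->GetOutput());
  closing->SetKernel(kernel);
  closing->SetForegroundValue(1);

  typedef itk::ImageFileWriter<LabelType> WriterType;
  WriterType::Pointer writer = WriterType::New();
  writer->SetInput(closing->GetOutput());
  writer->SetFileName(argv[2]);
  writer->Update();
  return 0;
}
</pre>
In a Maverick application, each of the hard-coded values above would instead be chosen by the workflow and its parameter files, so that the user sees only one or two high-level choices.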
The remainder of this introduction provides additional details on the major components of Maverick: workflow, Qt, scenes, rendering, and external modules.  The subsequent sections of this document are use-case-based, illustrative examples of the operation of Merlin.
== Scenes ==
Maverick has a scene graph as its central data structure so that it can easily handle multiple images and models in memory and provide the user with intuitive access to those objects.  The benefits of these two characteristics are discussed next.
Handling multiple images and objects is now central to most image analysis algorithms and clinical tasks.  Many diseases are now identified by comparing a subject’s images with standard images via complex registration and transcription tasks that require managing a multitude of images.  Surgeries are planned in consideration of information from multiple anatomic images (such as CT scans and MR scans) as well as from functional (fMRI and PET) scans that capture different objects as well as different object properties.  A scene graph provides an organization of these data in memory. 
Scene graphs also provide an intuitive access to those objects by allowing users to interact with Maverick applications using familiar terms (e.g., liver, cell, arm) instead of having to use computer-science terms (e.g., gradient, Bezier contour, pixels with intensity value 137).  Furthermore, in a scene graph, those objects can have hierarchical relationships, e.g., the liver contains portal and hepatic vascular trees.  Those objects can be manipulated as units, e.g., the liver and all objects contained within it can be shown or hidden in the 3D display.  Those objects can also be used to drive the processing pipeline, e.g., the liver model can be registered with the PET image in that scene. Additionally, objects can be easily quantified, e.g., the change in a tumor object’s volume can be reported or the perfusion of a liver lobe can be computed.
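A minimal, purely illustrative C++ sketch of this idea follows; it is not the actual Maverick scene API, but it shows how hierarchical, named objects make operations such as “hide the liver and everything inside it” natural to express.
<pre>
// Illustrative scene-graph node; the real Maverick scene classes are far richer.
#include <iostream>
#include <string>
#include <vector>

struct SceneNode
{
  std::string             name;      // e.g., "liver", "portal vascular tree"
  bool                    visible;
  std::vector<SceneNode*> children;

  explicit SceneNode(const std::string& n) : name(n), visible(true) {}

  // Show or hide this object and every object contained within it.
  void SetVisible(bool v)
  {
    visible = v;
    for (size_t i = 0; i < children.size(); ++i)
      children[i]->SetVisible(v);
  }
};

int main()
{
  SceneNode liver("liver");
  SceneNode portal("portal vascular tree");
  SceneNode hepatic("hepatic vascular tree");
  liver.children.push_back(&portal);
  liver.children.push_back(&hepatic);

  liver.SetVisible(false);   // hides the liver and both vascular trees
  std::cout << portal.name << " visible: " << portal.visible << std::endl;
  return 0;
}
</pre>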
In addition to providing simplified data management via scene graphs, Maverick also simplifies the processing of those data, via a workflow-style interface, which is discussed next.
== Workflow ==
The integration of algorithmic sequences with efficient graphical user interfaces for visualizing and editing intermediate and end results is a branch of the science of workflow.  The Wikipedia definition of workflow is as follows:
Workflow, at its simplest, is the movement of documents and/or tasks through a work process. More specifically, workflow is the operational aspect of a work procedure: how tasks are structured, who performs them, what their relative order is, how they are synchronized, how information flows to support the tasks and how the tasks are being tracked.
In scientific computing, the term workflow is often used to refer both to the movement of data and to a user-interface style.  For example, ITK has a processing pipeline that a programmer creates to specify how data moves from one image analysis filter/module to the next.  A graphical user interface, however, can also have a workflow that walks the user through a pre-defined sequence of interactions in order to accomplish a specific high-level task.  Maverick applies workflow concepts to its data handling and user interactions.  In this manner, the power of ITK and other pipeline toolkits can be merged with an intuitive user interface.  This is illustrated in Figure 3.
Figure 3. An example of how a user-interface workflow interacts with a data workflow in Maverick.  The sequence of ovals indicates a data processing pipeline/workflow.  The sequence of boxes indicates a user-interface workflow that is driven by the desired data processing workflow.  Rounded boxes indicate how expertise is used to interpret user interactions in order to define parameters for complex image processing algorithms.
Via workflows, a novice user will be able to gradually walk through a complex segmentation or registration task.  Furthermore, as illustrated in Figure 3, as developers build workflow-based applications, they become more aware of the context in which an algorithm will operate and thereby develop a better sense for which algorithm parameters to expose and how to use information that has already been extracted at previous steps in the pipeline to minimize the information that the user must specify in the current stage.  We have found these secondary effects to have a greater positive impact on a user’s experience than the simple linearization of a task that most people associate with workflow.
Our experience with workflows (data and user-interface workflows), however, has shown that they often need to be adapted to the specific data being processed by each user.  Instead of having a user provide those parameter tweaks themselves, we have chosen to focus on the use of higher-level “hyper-parameters.”  These hyper-parameters are interpreted using heuristics and parameter files to determine how a workflow should be adapted for a user.  Details on this process are given next.
== Hyper-Parameters and Parameter Files ==
“Parameter files” are rich XML descriptions of the sequence of methods and method parameters that have been custom crafted to solve specific tasks.  In Maverick, these parameter files are indexed by high-level descriptions of the tasks being accomplished, such as the name of the organ of interest and the modality of the available image data.  We refer to these high-level task descriptions as “hyper-parameters.”  For example, given contrast CT data, the algorithm and parameters appropriate for liver modeling may be different from the algorithm and parameters appropriate for kidney modeling, and both will change if the input modality is instead a T1 MR image.
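To make the idea concrete, a parameter file indexed by such hyper-parameters might look roughly like the hypothetical excerpt below.  The element and attribute names here are invented for illustration only and do not reflect the actual Maverick parameter-file schema.
<pre>
<!-- Hypothetical example only; not the real Maverick parameter-file format. -->
<ParameterFile organ="liver" modality="contrast-CT">
  <Step name="smoothing"    filter="CurvatureFlow"        iterations="5"/>
  <Step name="segmentation" filter="ConnectedThreshold"   lowerFromSeed="-40" upperFromSeed="+40"/>
  <Step name="postprocess"  filter="MorphologicalClosing" radius="2"/>
</ParameterFile>
</pre>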
Parameter files and hyper-parameters are one of the many mechanisms we have developed for embedding our image analysis expertise into Maverick applications.  Parameter files are typically defined using data supplied by Kitware’s customers.  A systematic series of experiments is applied to those data to determine parameters that work on those data and that are statistically likely to work well on similar data.  Not only will more accurate answers be generated by a workflow that has been tweaked using hyper-parameters and parameter files, but the output of such a workflow will be generated more quickly and with less sensitivity to the skill of the operator.  The speed, accuracy, and consistency of Maverick’s algorithms are also enhanced by the speed, elegance, and responsiveness of its graphical user interface.  These GUI features are enabled by our choice of Qt from Trolltech to implement Maverick’s graphical user interfaces.
== Qt for User-Interfaces ==
Qt from Trolltech is a dual-licensed toolkit for cross-platform user-interface development.  It provides a powerful set of widgets such as sliders, buttons, lists, and file choosers.  We have extended it to include additional widgets for image display and processing, e.g., light-box widget, volume rendering widget, scene browser widget, workflow panel widget, and more.  Qt applications automatically adapt to the native look-and-feel of each platform to which they are ported.  Therefore, for example, Windows users will be comfortable with the look-and-feel of Maverick applications running in Windows, while Linux users will be equally comfortable with the look-and-feel of Maverick applications running in Linux.
Another of the strengths of Qt is its “Designer” application.  Designer is a WYSIWYG tool for developing user interfaces.  Using it, Qt’s widgets, and our image-processing-specific widgets, a user can develop solid, cross-platform, workflow-style user interfaces in a few hours.  Experienced Maverick developers can produce applications for volume and scene rendering in under an hour.  Most development is done within Designer: CMake automates the conversion from a Designer file to C++ code, and only a few lines of C++ code ultimately need to be written.
While Maverick end-users do not need to purchase a Qt license to run a Maverick application, Maverick developers are required to purchase Qt licenses if they will be using any component of the Maverick GUI.  In particular, even though Trolltech offers a GPL license for Qt, that license is not compatible with the licensing terms of Maverick.  More specifically, Kitware has purchased licenses to develop commercial applications using Qt, and anyone who develops applications using Maverick’s Qt widgets is also required to purchase a Qt license.  If they do not, their entire application would be virally converted to the GPL license; the GPL requires all source to be redistributed, which is not compatible with Maverick’s developer licensing terms, which prohibit the publication of Maverick’s API.  We strongly believe that the cross-platform, extensible, easy-to-use, user-interface capabilities of Qt far outweigh the licensing costs.  The visualization capabilities of Maverick’s custom Qt widgets are summarized next.
== Rendering Objects and Volumes ==
Not only is maintaining a scene of objects key to powerful analysis methods and the effective/intuitive control of those methods, it also provides a solid foundation for visualizing and controlling the visualization of those objects.
Visualization techniques provided by Maverick include (a) 2D slice views (coronal, sagittal, and axial) with the labelmap as a color overlay; (b) a “light box” view with color labelmap overlay that shows a sequence of slices simultaneously; (c) a quad-view that shows coronal, sagittal, axial, and volume or surface renderings simultaneously; (d) 3D surface rendering in which objects can be shown using solid surfaces, wireframe meshes, or point clouds; (e) texture-memory-based volume rendering; and (f) GPU-based volume rendering.
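As an illustration of the standard VTK machinery underlying the surface-rendering view, the following sketch isolates one label value from a labelmap and displays it as a solid surface.  It uses only stock VTK classes and omits Maverick’s proprietary speed extensions and tuned transfer functions; the file name and label value are placeholders.
<pre>
// Illustrative, stock-VTK surface rendering of one label; not Maverick source code.
#include "vtkMetaImageReader.h"
#include "vtkImageThreshold.h"
#include "vtkMarchingCubes.h"
#include "vtkPolyDataMapper.h"
#include "vtkActor.h"
#include "vtkRenderer.h"
#include "vtkRenderWindow.h"
#include "vtkRenderWindowInteractor.h"

int main()
{
  // Read a labelmap stored as a MetaImage (.mha) volume.
  vtkMetaImageReader* reader = vtkMetaImageReader::New();
  reader->SetFileName("PigTiff.mha");             // placeholder labelmap file

  // Isolate one label value (e.g., the liver label) as a binary mask.
  vtkImageThreshold* threshold = vtkImageThreshold::New();
  threshold->SetInputConnection(reader->GetOutputPort());
  threshold->ThresholdBetween(6, 6);              // placeholder label value
  threshold->SetInValue(1);
  threshold->SetOutValue(0);

  // Extract and render the surface of that mask.
  vtkMarchingCubes* surface = vtkMarchingCubes::New();
  surface->SetInputConnection(threshold->GetOutputPort());
  surface->SetValue(0, 0.5);

  vtkPolyDataMapper* mapper = vtkPolyDataMapper::New();
  mapper->SetInputConnection(surface->GetOutputPort());
  mapper->ScalarVisibilityOff();

  vtkActor* actor = vtkActor::New();
  actor->SetMapper(mapper);

  vtkRenderer* renderer = vtkRenderer::New();
  renderer->AddActor(actor);

  vtkRenderWindow* window = vtkRenderWindow::New();
  window->AddRenderer(renderer);

  vtkRenderWindowInteractor* interactor = vtkRenderWindowInteractor::New();
  interactor->SetRenderWindow(window);
  interactor->Initialize();
  interactor->Start();

  // VTK 5-era manual reference counting.
  interactor->Delete(); window->Delete(); renderer->Delete(); actor->Delete();
  mapper->Delete(); surface->Delete(); threshold->Delete(); reader->Delete();
  return 0;
}
</pre>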
These rendering options are a combination of standard VTK techniques with proprietary extensions that increase their speed and provide highly tuned transfer functions for medical images.  By being built upon standard VTK methods, they benefit from the support of the broad VTK community and have well documented and generally useful APIs.  The proprietary components incorporate state-of-the-art research being conducted within Kitware and by our collaborators.  These rendering methods are one of the unique strengths of Maverick.  Another of the unique strengths of Maverick is its algorithms, which are available as external modules.
== External Modules ==
“External modules” are the image and object analysis algorithms of Maverick.  These algorithms include methods for image and object filtering, segmentation, registration, and quantification.  These algorithms are provided as steps in the Maverick data workflow, and they can be embedded into interactive “smart paintbrushes” and “smart contours” which are discussed later in this document.  The benefits of external modules come from their software architecture as well as from the image processing capabilities they provide.
The software architecture of external modules allows (a) new modules to be easily developed for Maverick, (b) new and existing modules to be easily integrated into Maverick, and (c) Maverick’s modules to be used in other (non-Maverick) applications.  Regrettably, it is beyond the scope of this documentation to detail those features or to give more specific information on the software architecture.  We can, however, give a summary of their processing power.
The major computational capabilities of Maverick’s external modules are as follows:
* Image input/output and image reconstruction
** DICOM import with automated correction for patient movement in MR, and automated composition of overlapping acquisitions into large volumes
** Handles over 20 file formats for images and labelmaps
* Image filtering methods
** Morphological operators
** Variable conductance diffusion
** Contrast enhancement
* Segmentation methods based on intensity, texture, multi-modality imagery, and multi-variate measures.
** Voxel labeling
** Region growing
** Atlas-based segmentation strategies
** Shape-based segmentations, e.g., vessel tracking
* Intra- and inter-subject registration strategies (a minimal ITK-based registration sketch follows this list)
** Rigid transforms
** Deformable transforms
** Landmark and intensity-based metrics
* Built using ITK, VTK, and other public toolkits
** Industry-standard and community supported libraries
** Easy for developers to learn and extend
* Optimized for speed and memory usage
** Multi-threaded
** Many can be operated in “Draft” mode
*** Provides rapid, approximate results when parameters are adjusted
*** For interactive tweaking of results
** May operate on a subset of the image at a time.
* Can be specialized to meet the needs of individual customers
* Can be run via Maverick applications
** Components of the data and/or the user interface workflow
* Can be run as command-line modules
** Supports batch processing
** Supports calling from existing applications
* Available as binary libraries and C++ header files
** Enables developers to build new applications (requires a Maverick Developers’ License)
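As a concrete example of what the registration strategies listed above build upon, the following sketch performs a simple intensity-based registration (translation-only, for brevity) between two volumes using standard open-source ITK components.  Maverick’s modules wrap and tune pipelines of this kind; the optimizer settings shown here are placeholders.
<pre>
// Illustrative ITK registration sketch; not Maverick source code.
#include <iostream>
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageRegistrationMethod.h"
#include "itkMeanSquaresImageToImageMetric.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkRegularStepGradientDescentOptimizer.h"
#include "itkTranslationTransform.h"

int main(int argc, char* argv[])
{
  typedef itk::Image<float, 3> ImageType;

  typedef itk::ImageFileReader<ImageType> ReaderType;
  ReaderType::Pointer fixedReader  = ReaderType::New();
  ReaderType::Pointer movingReader = ReaderType::New();
  fixedReader->SetFileName(argv[1]);
  movingReader->SetFileName(argv[2]);
  fixedReader->Update();
  movingReader->Update();

  // Standard ITK registration components: transform, optimizer, metric, interpolator.
  typedef itk::TranslationTransform<double, 3>                     TransformType;
  typedef itk::RegularStepGradientDescentOptimizer                 OptimizerType;
  typedef itk::MeanSquaresImageToImageMetric<ImageType, ImageType> MetricType;
  typedef itk::LinearInterpolateImageFunction<ImageType, double>   InterpolatorType;
  typedef itk::ImageRegistrationMethod<ImageType, ImageType>       RegistrationType;

  TransformType::Pointer    transform    = TransformType::New();
  OptimizerType::Pointer    optimizer    = OptimizerType::New();
  MetricType::Pointer       metric       = MetricType::New();
  InterpolatorType::Pointer interpolator = InterpolatorType::New();
  RegistrationType::Pointer registration = RegistrationType::New();

  registration->SetMetric(metric);
  registration->SetOptimizer(optimizer);
  registration->SetTransform(transform);
  registration->SetInterpolator(interpolator);
  registration->SetFixedImage(fixedReader->GetOutput());
  registration->SetMovingImage(movingReader->GetOutput());
  registration->SetFixedImageRegion(fixedReader->GetOutput()->GetBufferedRegion());

  TransformType::ParametersType initial(transform->GetNumberOfParameters());
  initial.Fill(0.0);
  registration->SetInitialTransformParameters(initial);

  optimizer->SetMaximumStepLength(4.0);    // placeholder optimizer settings
  optimizer->SetMinimumStepLength(0.01);
  optimizer->SetNumberOfIterations(200);

  registration->Update();                  // StartRegistration() in older ITK releases

  std::cout << "Final translation: "
            << registration->GetLastTransformParameters() << std::endl;
  return 0;
}
</pre>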
Despite the power of Maverick’s external modules, it must be admitted that no segmentation method will work with full accuracy on every dataset, and many segmentation methods will produce more accurate results when given more accurate initializations.  To address both of these issues, Maverick includes “smart paintbrushes” and “smart contours” so that a user may interact with its algorithms.
== Smart Paintbrushes and Smart Contours ==
“Smart paintbrushes” allow 2D and 3D algorithms to be run for each stroke painted in a labelmap by a user.  Smart paintbrush algorithms have access to the medical image data, the labelmap, other objects in the scene, and the properties of those objects.  Therefore, a single stroke can initiate a region fill based on the local image gradient, or can be limited to painting in areas not already covered by existing objects, to painting over only points that are bright in the medical image, or to erasing or changing a particular object while not affecting other objects.
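Purely as an illustration of how a stroke-initiated fill can be realized with standard open-source ITK components, the sketch below grows a label from the voxel under the brush, limited by local intensity statistics (rather than the gradient-based criterion mentioned above).  It is not the actual Merlin paintbrush implementation, and the seed and parameter values are placeholders.
<pre>
// Illustrative stroke-initiated fill using stock ITK; not the Merlin paintbrush code.
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkConfidenceConnectedImageFilter.h"

int main(int argc, char* argv[])
{
  typedef itk::Image<float, 3>         ImageType;
  typedef itk::Image<unsigned char, 3> LabelType;

  typedef itk::ImageFileReader<ImageType> ReaderType;
  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName(argv[1]);

  // Grow a label from the stroke location, limited by local intensity statistics.
  typedef itk::ConfidenceConnectedImageFilter<ImageType, LabelType> FillType;
  FillType::Pointer fill = FillType::New();
  fill->SetInput(reader->GetOutput());
  ImageType::IndexType stroke = {{128, 96, 40}};  // placeholder: voxel under the brush
  fill->SetSeed(stroke);
  fill->SetInitialNeighborhoodRadius(2);          // sample statistics around the stroke
  fill->SetMultiplier(2.5);                       // how far from the local mean to accept
  fill->SetNumberOfIterations(3);
  fill->SetReplaceValue(1);                       // label value being painted

  typedef itk::ImageFileWriter<LabelType> WriterType;
  WriterType::Pointer writer = WriterType::New();
  writer->SetInput(fill->GetOutput());
  writer->SetFileName(argv[2]);
  writer->Update();
  return 0;
}
</pre>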
“Smart contours” use algorithms to adjust manually placed control points and to interpolate curves between those points.  Smart contour algorithms, like their smart paintbrush counterparts, have access to the medical image data, the labelmap, other objects in the scene, and the properties of those objects.  Therefore, a seed-point placement by a user can be automatically snapped to appropriate image features such as edges or centers of vessels and can be connected to adjacent contour points using linear or Bezier-curve interpolation.
Via the integration of algorithms, the contouring and painting methods reduce inter- and intra-user variability.  More accurate and consistent segmentations, registrations, and edits result.
Paintbrushes and contours are two of the components of Maverick that are being most actively extended.  New types of paintbrushes and contours are being developed as new insights into their potential are gained.
Via paintbrushes and contours, Maverick addresses every aspect of image analysis.  That is, it is a complete solution for image processing, yet its modularity also allows its components to be integrated into existing applications and infrastructure of research labs, commercial data processing environments, and commercial application developers.  Next, we discuss the simple steps needed to install and begin using Maverick.
= Installation and System Requirements =
== System Requirements ==
Merlin is distributed for 32-bit Windows and 64-bit Linux systems.  It is also available for Apple Macintosh and most other popular operating systems, upon request.
The primary hardware requirement for Maverick is memory.  For most operations, the memory requirements are modest.  A 2 gigabyte system will be able to apply all operations to an MRI of size 400x400x800.  However, as organs are defined within that data, it may not be possible to simultaneously visualize all of them as surfaces in 3D.  A 4 gigabyte system should be able to generate and visualize all data and organs.
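For a rough sense of scale (assuming 16-bit voxels), a 400x400x800 MRI occupies 400 x 400 x 800 x 2 bytes, or approximately 244 megabytes, in memory; an 8-bit labelmap of the same dimensions adds roughly another 122 megabytes, and filter outputs and rendering copies of these volumes account for much of the remaining budget.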
Regarding computation, Maverick is built upon a multi-threading library, and so it can take advantage of multi-core and multi-processor machines (e.g., “dual core” and “quad core” machines).  Nevertheless, for large datasets, computation times can still be extensive.  The most computationally demanding module in Merlin v0.91 is the module for composing multiple MRI DICOM images into a single volume.  This process can require up to one hour of processing on a dual-core, 2 GHz machine.  During those computations, the system is estimating the overlap of the different scans, correcting for interleaved scans, and correcting for inter-scan changes in anatomy.  So, while some computation costs may seem high, the manual alternative would require many orders of magnitude more time.  Every effort is being made to further reduce these computation times by re-engineering algorithms and taking further advantage of multi-threading.
== Installation ==
Maverick consists of executables and demonstration datasets.  Please contact stephen.aylward@kitware.com for access to the most recent version.
Merlin executables are distributed as a self-installing executable for Windows and as a compressed tar archive on Linux.
Windows: Simply double-click the self-installing executable and accept all defaults to install.
Linux: Uncompress and un-tar the directory to a convenient location.  For shared use, we suggest installing the directory in /usr/local.  You will then need to update your PATH variable to point to the Merlin executables and your library load path (LD_LIBRARY_PATH) to point to the shared libraries used by Merlin.  Assuming your shell is bash and your installation directory is /usr/local/Maverick-0.9, the commands to issue are:
<pre>
export PATH=/usr/local/Maverick-0.9/bin:${PATH}
export LD_LIBRARY_PATH=/usr/local/Maverick-0.9/bin:${LD_LIBRARY_PATH}
</pre>
Additionally, you should download and install the demonstration data so that you can follow the examples in this document.  In particular, you will need the PigTiff.tar.gz files, which contain a sample labelmap, as well as the mavTissue.csv file, which assigns names and colors to the numeric labels in the PigTiff images.
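As a purely hypothetical illustration of the kind of mapping such a file provides (a numeric label, a tissue name, and a display color), an excerpt might look like the following; consult the mavTissue.csv distributed with the demonstration data for the actual columns and entries.
<pre>
Label,Name,Red,Green,Blue      (hypothetical excerpt for illustration only)
1,Muscle,170,85,60
2,Liver,120,40,40
3,GallBladder,60,140,60
</pre>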
== End-User License ==
The following is a summary of the general terms of the Maverick end-user license.  For details, questions, or a copy of the full end-user license, please contact stephen.aylward@kitware.com.
<pre>
Maverick End-User License
General Summary of Terms
Revision: February 13, 2008
A Maverick End-User License (the “License”) is being purchased by the Licensee so that the Licensee, its employees, and its agents (collectively the “Licensed Users”) may use Maverick on a single computer.
The following is a general summary of the terms of the License.  The full text and terms of the License are distributed with Maverick.
• Maverick is a collection of binary applications, parameter values, data, and documentation. 
• The License allows Licensed Users to use Maverick to process data for any legal purpose.  In particular, processing data for commercial purposes is specifically allowed.
• The License does not transfer source code, intellectual property, proprietary knowledge, confidential information, or copyrights between the parties.
• The License includes five (5) hours of Kitware Consultation within thirty (30) days of the purchase of a License. Kitware Consultation is initiated via email.
• The License is machine specific.  The Maverick applications distributed with each License are encoded so as to operate on a single, Licensee-specified machine.
• Maverick is intended for research purposes.  Maverick has not been reviewed or approved by the Food and Drug Administration or any other agency.  Kitware neither recommends nor advises the use of Maverick or any Kitware product for any purpose including, but not limited to, medical care or treatment.
• Maverick is sold “as-is.”  Any expressed or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed.  In no event shall Kitware, its employees, or its agents be liable for any damage arising in any way out of the use of this software.
</pre>
== Notes on Development, Testing, and Bug Reporting ==
Maverick, including Merlin, can be offered on multiple platforms because of its use of Kitware’s open-source, cross-platform build system CMake (www.cmake.org).  This build system has been adopted by KDE, arguably the largest open-source development effort in the world.
In addition to being built on multiple platforms, Maverick is tested nightly on multiple platforms.  This nightly testing is achieved using Kitware’s open-source DART/Dashboard technology.  An early Maverick dashboard is illustrated in Figure 4. 
Figure 4. An early instance of the Maverick Dashboard.  Each row represents the result from downloading, compiling, and testing Maverick on a different computer system.  Such processing is automated and occurs nightly.  In this Dashboard, reports have been submitted from Macs, Linux, and Windows machines.
In addition to nightly unit tests, extensive hand-testing is conducted on each Maverick release.  That testing currently focuses on 32-bit Windows XP and 64-bit Ubuntu 7.10 Linux distributions.
Despite extensive testing, we anticipate that bugs and feature requests will arise during your use of Maverick.  Please use our public bug reporting system to report bugs and request features.  That reporting system is on the web at:
http://public.kitware.com/Bug
Please select “Maverick” as the project when making a report, and please be as detailed as possible.  Thank you.
= The Components of the Merlin Interface =
The major components of the Merlin interface are illustrated in Figure 5.  In that figure, a scene has already been created and loaded into Merlin, and Merlin is displaying a surface rendering of the liver and a mesh rendering of the gall bladder.
Figure 5. Merlin after a scene containing a labelmap has been imported.  The liver, gall bladder, and stomach are shown in 3D.
It is important to understand the components of the Merlin interface that are labeled in Figure 5 before continuing to read this document.  In the remainder of this document, the style of presentation switches to annotated illustrations of the Merlin interface.  The captions of each figure discuss the interactions and requirements for the corresponding step in the workflow.  Important notes are demarcated by the symbol (!).
This document will walk you through the process needed to create visualizations such as the one shown in Figure 5.  We begin with importing images and labelmaps.
== Getting Started: Importing Images and Labelmaps ==
GOAL: When you have completed the steps illustrated in this section, you will have successfully loaded a 3D labelmap and DICOM data into Merlin.  This step is needed to visualize organs in 3D, to manually edit those organs, and to apply semi-automatic filtering to those organs.
When Merlin is run, a splash screen will pop up, and then the interface shown in Figure 6 will appear.  The look-and-feel of the interface may be slightly different on your system, as Qt automatically adapts to the native GUI standards for each platform on which a Qt application is run.
Figure 6. Appearance of Merlin after startup.  The look-and-feel might be slightly different on different platforms since Qt (Trolltech, Inc.) uses each platform’s native interface standards.
In the next subsection we illustrate the steps needed for (1) creating a 3D volume file from a sequence of 2D TIFF image files; (2) adding that 3D volume file to a Maverick scene file; and (3) loading a scene file into Merlin.  In a subsequent subsection we discuss how to convert DICOM images to 3D volume files and how to automatically correct those images for interleaving and inter-scan pose/shape variations during the import process.
* Import TIFF
GOAL: When you have completed this section, you will have converted a sequence of tiff images, stored in a single directory, into a 3D image.  This section also covers how to add that image to a scene file and then load that scene file into Merlin.
This section will show you how to convert the demonstration data’s directory of tiff images into a 3D volume that can be added to a scene.
Figure 7. The initial screen of Merlin.  The options for loading data into the system are displayed in the workflow panel.  Choose “Create new scene” to begin converting a directory of tiff images into a 3D volume.
Figure 8. Import Tiff allows you to convert a series of 2D tiff images into a 3D volume which can then be added to a scene.
Figure 9. An example of how a directory of tiffs should be organized for import into Maverick.  This is how the demonstration data’s directory of tiff images should look when using Microsoft Explorer. Subdirectories and other files are allowed in this directory, but they will not be used by the importer. 
(!) Tiff images must be ordered numerically, be at the top level of the directory, and end in .tif or .tiff. (!)
Figure 10. Use this button to begin the specification of where the directory containing the tiff images is located. Throughout Merlin, a button labeled “…” is used to bring up a directory/file chooser for loading or saving files.  Such buttons are located next to the field where the directory/file name will be listed.
Figure 11. Use the browser to select the directory of tiff images.  Click “OK” when that directory is selected.
Figure 12. Use this button to bring up a browser to select the .CSV file that defines the name to be associated with each numeric label in the tiff images. For the demonstration data, the file is called “mavTissue.csv”.
(!) For Merlin to operate correctly, it was necessary to modify the tissue.csv that had been originally provided by the AFRL.  The original file contained non-alphanumeric characters in the names of several of the labels, and those characters confounded subsequent processing.  Those characters have been eliminated in the CSV file distributed with Maverick. (!)
Figure 13. Once the tiff directory and the CSV file have been specified (as indicated by the file names listed in the fields next to the “…” buttons), choose "Accept" to begin the conversion process.
Figure 14. Conversion from tiff to a volume file requires you to specify the name of the volume file that will be created.  In this example the name "PigTiff.mha" has been chosen.  (!) The filename MUST end in ".mha" to designate the supported MetaImage format. (!)
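For reference, MetaImage (.mha) is the open ITK/VTK volume format: a short plain-text header followed by the raw voxel data in the same file.  A typical header looks roughly like the following; the dimensions, spacing, and pixel type shown are illustrative.
<pre>
ObjectType = Image
NDims = 3
DimSize = 400 400 800
ElementSpacing = 1.0 1.0 1.0
ElementType = MET_UCHAR
ElementDataFile = LOCAL
</pre>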
Figure 15. Depending on the version of Merlin and the operating system, Merlin may produce output that indicates errors during the conversion of the tiff files.  These errors are normal as tiff files from research machines often contain tags that are not part of the general tiff standard.
Figure 16. Once conversion is complete, the application returns to the import workflow panel.  The name of the volume and the CSV file are automatically filled-in to refer to the data just converted.
Figure 17. Pressing "Accept" will bring up a prompt for you to specify the name of the scene file to be created.  Here the name PigTiff.mrml has been entered by the user. (!) The scene filename MUST end in .mrml. (!)
Figure 18. Once the scene file has been created, its name is automatically entered into the "scene to load" field.  You may select an organ from the drop-down list to load a subset of the volume, or you may load the entire volume.  Then press the "load" button.
(!) The ability to load a subset of a volume, edit that subset, and then to re-insert the modified subset back into the original volume is one of the great strengths of Maverick.  Via this facility, arbitrarily large volumes and labelmaps can be processed on modest computing hardware. (!)
Figure 19. Merlin after the liver subset of the demonstration tiff data has been imported and loaded.
* Import DICOM
GOAL: After you have worked through the example in this section you will have imported a collection of DICOM images into Merlin.  The import process optionally merges volumes from multiple scans, handles volume overlap, and corrects for slice interleaving. 
This section will show you how to convert the demonstration data’s directory and subdirectories of DICOM images into a 3D volume that can be added to a scene.
(!) If your DICOM data is stored as a single 3D DICOM file, then it is not necessary to use this import process.  Instead, add the 3D DICOM image directly to the scene. (!)
Figure 20. The initial screen of Merlin.  The options for loading data into the system are displayed in the workflow panel.  Choose “Create new scene” to begin converting one or more DICOM series into a 3D volume.
Figure 21. Select "Import from DICOM" to initiate the appropriate Maverick module.
(!) The DICOM import module has strict data organization requirements.  The top level of the directory must contain one or more subdirectories, where each subdirectory contains one and only one DICOM series (a contiguous set of 2D slices that form a volume).  Each of those DICOM series will be concatenated to form the 3D volume.  See Figure 22. (!)
(!) It is also critical to know the order in which the series should be concatenated to correctly compose the volume.  The reason for this restriction is that we assume that the subject may have been repositioned on the scanner bed between each DICOM series.  While such movement isn’t common in clinical applications, it is common when imaging animals.  As a result, the ordering of the DICOM series provided by the scanner is not assumed to reflect the order in which the data should be concatenated to form the volume.  The user is required to provide that ordering.  Future versions of Maverick will allow this assumption of inter-scan patient movement to be enabled/disabled. (!)
Figure 22. Mid-volume slice through the demonstration DICOM data, ordered by Series ID.  The pig was moved between scans, so the user must specify the correct ordering.  A convenient interface is provided for specifying the Series ordering in Maverick.
(!) The order of the 2D images within each series is automatically resolved by the module.  So, while the order of the directories must be known, the order of the individual files within the directories will be automatically resolved. (!)
The expected arrangement of the data on disk is illustrated in Figure 23 and Figure 24. The demonstration DICOM data included with Maverick adheres to the required arrangement.
Figure 23. (!) The top level of the import directory should contain one or more subdirectories, which contain the DICOM files. (!)
Figure 24. (!) Each subdirectory should only contain the DICOM files for a single DICOM series.  The presence of other files or additional subdirectories may cause errors.(!)
Given data in the appropriate directory structure on disk, Maverick can compose a large 3D volume from multiple DICOM series.  Furthermore, Maverick will attempt to automatically correct for multiple forms of inter-scan and intra-scan acquisition artifacts, such as: 
1) Series overlap: Two adjacent DICOM series may contain the same anatomy; that is, they may partially overlap.  The import method will automatically scan the end slices of each adjacent series, and detect such overlap.  If found, the overlap will be eliminated when the series are adjoined.
2) Inter-series patient movement: Even when the 3D ordering of the series is given, the precise alignment of the subject within each scan must be determined and made contiguous with the adjacent scans.  Maverick uses automated registration methods to provide such alignment.
Figure 25. Scan overlap and subject movement.
3) Inter-series patient deformation: Even when the subject’s data has been aligned across scans, the anatomy may still not perfectly align due to deformations that occurred between scans.  Such deformations may be due to a change in pose or the release or build-up of internal pressures.  The full correction for such deformation is one of the grand challenges in medical image analysis.  We have provided a partial solution in Maverick, so that certain types of deformation can be compensated for, but further work is needed.
4) Intra-series patient movement during interleaved acquisitions: Some MRI acquisition sequences are acquired using an interleaved scanning technique, i.e., first the odd-numbered slices are acquired, and then the even-numbered slices are acquired.  The challenge is that patient movement may have occurred between each scan, and the image data will appear to “jitter” when the slices are viewed in numeric sequence, as a volume.  We have developed a method for reducing the amount of jitter evident in interleaved series.  The user must specify which series should be corrected for interleaving.  For an example of interleave correction, see Figure 26.
 
Figure 26. Interleaved scan correction. Left: Uncorrected scan – the effects of patient movement during interleaved scan acquisition are apparent as jagged edges when the data is resliced in an X-Z plane.  Right: corrected scan – during DICOM load, proprietary methods are used to volumetrically adjust for patient movement.
Figure 27. Open the file browser and select the top-level directory.  This will add all subdirectories to the list of DICOM directories to be processed.
Figure 28. The order of the subdirectories (aka Series) can be changed by selecting a subdir and then using the arrow keys to move its position up or down in the list. (!) As previously stated, the user must order this list of series to reflect their spatial ordering for volume reconstruction. (!)
Figure 29. You may also use the browser buttons to designate directories containing series that need correction for (1) interleaving or (2) deformation.
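For readers curious about the underlying machinery, reading a single DICOM series with standard open-source ITK classes looks roughly like the sketch below.  Merlin’s importer builds on steps of this kind and adds the proprietary overlap detection, alignment, and interleave correction described above; the file names here are placeholders.
<pre>
// Illustrative single-series DICOM read using stock ITK; not the Merlin importer.
#include <string>
#include <vector>
#include "itkImage.h"
#include "itkGDCMImageIO.h"
#include "itkGDCMSeriesFileNames.h"
#include "itkImageSeriesReader.h"
#include "itkImageFileWriter.h"

int main(int argc, char* argv[])
{
  typedef itk::Image<short, 3> ImageType;

  // Discover the DICOM files belonging to the (single) series in one subdirectory.
  itk::GDCMSeriesFileNames::Pointer names = itk::GDCMSeriesFileNames::New();
  names->SetDirectory(argv[1]);
  const std::vector<std::string>& seriesUIDs = names->GetSeriesUIDs();
  std::vector<std::string> fileNames = names->GetFileNames(seriesUIDs.front());

  // Read the slices; ITK sorts them into spatial order within the series.
  typedef itk::ImageSeriesReader<ImageType> ReaderType;
  ReaderType::Pointer reader = ReaderType::New();
  itk::GDCMImageIO::Pointer dicomIO = itk::GDCMImageIO::New();
  reader->SetImageIO(dicomIO);
  reader->SetFileNames(fileNames);
  reader->Update();

  // Write the assembled series out as a single MetaImage volume.
  typedef itk::ImageFileWriter<ImageType> WriterType;
  WriterType::Pointer writer = WriterType::New();
  writer->SetInput(reader->GetOutput());
  writer->SetFileName(argv[2]);
  writer->Update();
  return 0;
}
</pre>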
 
== Load a MRML Scene File ==
== Rendering a Scene ==
== Smooth Labels ==
 
== Painting ==
 
== Resample ==
 
 
== Fix Air (Internal vs. External) ==
== Regularize Skin ==
== Save and Export ==
[[Image:Merlin_Page_Export.png|right|Export Panel]]
There are two ways of saving your work in Merlin: saving the current scene or exporting the labelmap volume.
For both, you have to go to the Export page by clicking on "Export" in the "I/O" category of the Task list located in the right-side panel of Merlin.
* Save the full scene:
Check the "Export Scene" check box.
By default, the current paths of the scene/volumes/objects are set in the edit boxes; if you do not change the file names, these files will be overwritten.
The scene file (.MRML) references all the volumes/objects in the scene; it contains only the paths of the saved files to which the scene points.
The volumes (labelmap, MRI or SAR) must be saved in .MHA or .MHD file format.
The tissue file can be saved in .txt or .csv format.  It contains all the organ properties.  If an organ has been previously created using "Add Organ", the saved tissue file will contain the new organ entry.
When the file paths are correct, you can click on the "Accept" button to generate these files.
The next time you launch Merlin, you will be able to load the edited scene you saved.
* Export the LabelMap:
Check the "Export LabelMap" check box.
It exports the edited label map in a different file format: raw, tiff, png, etc.  Only the label map is saved, not the scene.  For the tiff and png file formats, a series of XY images will be saved, and the end of each filename will encode the Z index.  Make sure that you export your label map into a file different from the file in your scene.
Click "Accept" to generate the file(s).
The next time you open Merlin, your changes to the labelmap won't appear. You can however create a new scene file using the exported labelmap.
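For context, exporting a 3D labelmap as a numbered stack of 2D images is a standard operation in the underlying open-source toolkits.  A rough ITK-based sketch (not the Merlin exporter itself) is shown below; the filename pattern is a placeholder, and its numeric field supplies the Z index.
<pre>
// Illustrative export of a 3D labelmap as numbered 2D PNG slices; not Merlin source code.
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageSeriesWriter.h"
#include "itkNumericSeriesFileNames.h"

int main(int argc, char* argv[])
{
  typedef itk::Image<unsigned char, 3> LabelVolumeType;
  typedef itk::Image<unsigned char, 2> LabelSliceType;

  typedef itk::ImageFileReader<LabelVolumeType> ReaderType;
  ReaderType::Pointer reader = ReaderType::New();
  reader->SetFileName(argv[1]);        // e.g., the exported labelmap volume
  reader->Update();

  // Generate one filename per slice; the %04d field encodes the Z index.
  LabelVolumeType::SizeType size =
    reader->GetOutput()->GetLargestPossibleRegion().GetSize();
  itk::NumericSeriesFileNames::Pointer names = itk::NumericSeriesFileNames::New();
  names->SetSeriesFormat("labelmap_%04d.png");
  names->SetStartIndex(0);
  names->SetEndIndex(size[2] - 1);

  // Write the volume as a stack of 2D PNG images.
  typedef itk::ImageSeriesWriter<LabelVolumeType, LabelSliceType> WriterType;
  WriterType::Pointer writer = WriterType::New();
  writer->SetInput(reader->GetOutput());
  writer->SetFileNames(names->GetFileNames());
  writer->Update();
  return 0;
}
</pre>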
* Note:
You can edit the .MRML file with a text editor (e.g., Notepad) to verify or change the file paths of the volumes.
Some useful information can also be found in the .MHA and .MHD files when they are opened with a text editor.
= Acknowledgements =
This work was funded in part by Air Force Research Laboratories SBIR FA8650-07-C-6756.  We are grateful for their generous support, motivation, and feedback.  In particular we thank Jason Payne, Pam Henry, Matthew A. Haeuser, John M.  Zirax, and Samuel D. Adams.
