VTK/GSoC 2015 (KitwarePublic wiki; revision by Berk, 2015-03-18, section "Fine-Grained Parallelism in VTK-m")
<hr />
<div>Project ideas for the Google Summer of Code 2015<br />
<br />
== Guidelines ==<br />
<br />
=== Students ===<br />
<br />
These ideas were contributed by developers and users of [http://www.vtk.org/ VTK] and [http://www.paraview.org/ ParaView]. If you wish to submit a proposal based on these ideas, you should contact the community members identified below to find out more about the idea, get to know the community member who will review your proposal, and receive feedback on your ideas.<br />
<br />
The Google Summer of Code program is competitive; accepted students will usually have thoroughly researched the technologies of their proposed project, been in frequent contact with potential mentors, and ideally have submitted a patch or two through Gerrit to fix bugs in their project ([[VTK/Git/Develop|instructions are here]]). Kitware makes extensive use of mailing lists, and they are your best point of initial contact when applying for any of the proposed projects. The mailing lists can be found on the project pages linked in the preceding paragraph. Please see [[GSoC proposal guidelines]] for further guidance on writing your proposal.<br />
<br />
=== Adding Ideas ===<br />
<br />
When adding a new idea to this page, please try to include the following information:<br />
<br />
* A brief explanation of the idea<br />
* Expected results/feature additions<br />
* Any prerequisites for working on the project<br />
* Links to any further information, discussions, bug reports, etc.<br />
* Any special mailing lists, if not the standard mailing list for VTK<br />
* Your name and email address for contact (if you are willing to mentor, or the contact details of a nominated mentor)<br />
<br />
If you are not a developer for the project concerned, please contact a developer about the idea before adding it here.<br />
<br />
== Project Ideas ==<br />
<br />
[http://www.vtk.org/ Project page], [http://www.vtk.org/VTK/help/mailing.html mailing lists], [http://open.cdash.org/index.php?project=VTK dashboard].<br />
<br />
=== Shared Memory Parallelism in VTK ===<br />
<br />
'''Brief explanation''': Development of multi-threaded algorithms in VTK. Multiple R&D efforts are leading to the creation of an infrastructure to support next-generation multi-threaded parallel algorithm development in VTK. These efforts are based on modern parallel libraries such as Intel TBB and Inria KAAPI. The main goal of this project will be the development of algorithms that leverage this infrastructure. The focus will be on upgrading existing core algorithms such as iso-surfacing, clipping, cutting, and warping to be parallel. Ideally, this will include modernization of the old multi-threading code in the imaging pipeline. References:<br />
<br />
* [[VTK/VTK_SMP | VTK SMP Wiki page]]<br />
* [https://hal.inria.fr/hal-00789814/document VTK SMP paper]<br />
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.396.4673&rep=rep1&type=pdf Initial VTK SMP paper]<br />
* [[VTK/Threaded Image Algorithms | VTK multi-threaded image algorithm wiki page]]<br />
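As a rough illustration of the programming model, the sketch below mimics the chunked parallel-for pattern this infrastructure exposes; in real VTK code this would be a call such as vtkSMPTools::For with a TBB or other backend doing the scheduling. The ParallelFor and ScaleInPlace names here are hypothetical, standard-library-only stand-ins, not VTK API:<br />

```cpp
#include <thread>
#include <vector>
#include <algorithm>
#include <functional>
#include <cstddef>

// Hypothetical standalone sketch of the SMP pattern: split [begin, end)
// into contiguous chunks and run the functor on each chunk from its own
// thread. A real backend (TBB, KAAPI) would add work stealing, thread
// pools, and grain-size control.
inline void ParallelFor(std::size_t begin, std::size_t end,
                        const std::function<void(std::size_t, std::size_t)>& op)
{
  const std::size_t n = end - begin;
  const std::size_t nThreads =
      std::max<std::size_t>(1, std::thread::hardware_concurrency());
  const std::size_t chunk = (n + nThreads - 1) / nThreads;

  std::vector<std::thread> workers;
  for (std::size_t lo = begin; lo < end; lo += chunk)
  {
    const std::size_t hi = std::min(end, lo + chunk);
    workers.emplace_back(op, lo, hi);  // each thread owns [lo, hi)
  }
  for (auto& t : workers) t.join();
}

// Example per-element body: scale every coordinate in place, the kind of
// embarrassingly parallel work a warping or clipping filter performs.
inline void ScaleInPlace(std::vector<double>& data, double factor)
{
  ParallelFor(0, data.size(), [&](std::size_t lo, std::size_t hi) {
    for (std::size_t i = lo; i < hi; ++i) data[i] *= factor;
  });
}
```

The key design point, which carries over to the real infrastructure, is that the functor touches only its own index range, so no locking is needed.<br />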
<br />
<br />
'''Expected results''': A number of algorithms that execute in parallel using shared memory. Scalability of new algorithms will have to be measured and documented. Development of regression tests and examples will also be expected.<br />
<br />
'''Prerequisites''': Experience in C++ and multi-threaded code development. Understanding of core visualization algorithms and data structures. Some experience with VTK is ideal but not necessary.<br />
<br />
'''Mentor''': Berk Geveci (berk dot geveci at kitware dot com) and David Gobbi (david dot gobbi at gmail dot com)<br />
<br />
=== Fine-Grained Parallelism in VTK-m ===<br />
<br />
'''Brief explanation''': VTK-m is a toolkit of scientific visualization algorithms for emerging processor architectures (GPUs and coprocessors). VTK-m is designed for fine-grained concurrency and provides abstract data and execution models that can be applied to a variety of algorithms. The goal of the project will be to develop algorithms such as Slice by implicit surface, Gradient, Streamlines, External Faces, Resample, etc. using VTK-m's data and execution model. See [http://m.vtk.org/] for more information on VTK-m.<br />
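To make the fine-grained model concrete, here is a minimal, standard-library-only sketch of the worklet idea: side-effect-free per-element work indexed by output id, which a device runtime could map to one logical thread per value. GradientWorklet and Dispatch are illustrative names chosen for this sketch, not the actual VTK-m API:<br />

```cpp
#include <vector>
#include <cstddef>

// A "worklet" computes one output value from read-only inputs, keyed by
// its output index. Because it has no side effects, every index can run
// concurrently on a GPU or coprocessor.
struct GradientWorklet
{
  // Central difference on an evenly spaced 1D scalar field, falling back
  // to a one-sided difference at the boundaries.
  double operator()(const std::vector<double>& field,
                    std::size_t i, double spacing) const
  {
    const std::size_t n = field.size();
    const std::size_t lo = (i == 0) ? 0 : i - 1;
    const std::size_t hi = (i + 1 < n) ? i + 1 : n - 1;
    return (field[hi] - field[lo]) / (spacing * double(hi - lo));
  }
};

// Serial stand-in for the data-parallel dispatcher: on a real device each
// loop iteration would be an independent logical thread.
inline std::vector<double> Dispatch(const GradientWorklet& worklet,
                                    const std::vector<double>& field,
                                    double spacing)
{
  std::vector<double> out(field.size());
  for (std::size_t i = 0; i < field.size(); ++i)
    out[i] = worklet(field, i, spacing);
  return out;
}
```
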
<br />
'''Expected results''': A collection of algorithms that are processor-architecture agnostic and execute on GPUs and coprocessors using VTK-m's data and execution model. Performance of new algorithms will have to be measured and documented. Development of regression tests and examples will also be expected.<br />
<br />
'''Prerequisites''': Experience in templated C++ development, visualization algorithms, data structures, and highly parallel architectures such as GPUs. Some experience with VTK, CUDA, or OpenCL would be ideal but is not necessary.<br />
<br />
'''Mentor''': Robert Maynard (robert dot maynard at kitware dot com) and Kenneth Moreland (kmorel at sandia dot gov)<br />
<br />
=== Computational Biology (Molecular Dynamics) In Situ Visualization ===<br />
<br />
'''Brief explanation:''' Computational biology involves using computer simulations to study biological problems using molecular dynamics and other techniques. Of particular interest is [http://www.gromacs.org/ GROMACS], a versatile package for performing molecular dynamics, i.e. simulating the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids, and nucleic acids that have many complicated bonded interactions. GROMACS is optimized to run on distributed-memory clusters, with recent support for GPU and SSE optimization. These GROMACS supercomputing simulations produce enormous (terabytes of) file output that is analyzed in a post-processing step by tools that read only the trajectory (position, velocity, and forces) or coordinate (molecular structure) information, and simply guess at the topology rather than using the simulation's topology defined in GROMACS.<br />
<br />
This project would provide a baseline implementation of ParaView Catalyst for molecular in situ visualization and data analysis embedded in GROMACS based on GROMACS' computed topology and trajectory information.<br />
<br />
'''Expected results:''' The result would be ParaView Catalyst adaptors, example Python scripts, and new advanced visualization techniques for GROMACS in order to enhance the computational biology workflow.<br />
<br />
'''Prerequisites:''' C++ and Python experience required; some experience with VTK and ParaView is ideal but not required.<br />
<br />
'''Mentor:''' Marcus D. Hanwell (mhanwell at kitware dot com).<br />
<br />
=== Templated Input Generator for VTK ===<br />
<br />
'''Brief explanation''':<br />
Build up an infrastructure that makes it trivial to bring new scientific data formats into VTK and ParaView. The infrastructure will handle the complexities of temporal support, parallel processing, composite data structures, ghost levels and the like, and provide easy to use entry points that bring data from the file or other source and populate VTK arrays.<br />
<br />
'''Expected Results:'''<br />
A set of classes that can take an input specification and produce VTK data objects correctly and relatively efficiently.<br />
The input specification should be sufficiently abstracted from VTK's data types that users who understand the input format well won't have to understand VTK's complexities in order to use it.<br />
<br />
'''Prerequisites:'''<br />
C++ and probably a scripting language such as Python or Lua.<br />
<br />
'''References:'''<br />
http://www.paraview.org/Wiki/Writing_ParaView_Readers<br />
<br />
'''Mentor(s):''' Robert Maynard (robert dot maynard at kitware dot com) and/or David DeMarle (dave dot demarle at kitware dot com)<br />
<br />
=== Improvements to Earth and Space Science Visualization ===<br />
<br />
'''Brief explanation''': Add new capabilities to the existing VTK GeoVIS framework. In GSoC 2014 we built vtkMap, which supports<br />
2D visualization using OpenStreetMap and other data sources. This time we plan on adding 3D support. We also need to add support<br />
for overlay and clipping using shapefiles or other geographically referenced data, improved support for data types such as LAS, improved rendering of points, lines, and polygons for geovisualizations, and filters to perform clustering and classification, among other things.<br />
<br />
'''Expected results:''' New VTK filters and mappers, and a few demos to showcase the new features.<br />
<br />
'''Prerequisites:''' C/C++ experience, knowledge of earth science / GIS, and preferably some experience with VTK<br />
<br />
'''Mentor(s):''' Aashish Chaudhary (aashish dot chaudhary at kitware dot com)<br />
<br />
=== CAD Model and Simulation Spline Visualization ===<br />
<br />
'''Brief explanation''': While spline curves and surfaces have been used for many years to describe CAD models, these flexible representations have often been discarded during the meshing and simulation process that is used to estimate the behavior of CAD-modeled parts in a particular environment or during a particular event. Recently, a group of techniques called IsoGeometric Analysis (IGA) has evolved in order to reduce the amount of work required to prepare simulations and to improve their accuracy. This project would develop support for arbitrary-dimensional, rational spline patches in VTK so that these simulations can be visualized properly. This may involve conversion to handle the variety of spline formats (T-Splines, NURBS, Catmull-Clark surfaces, etc.).<br />
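For readers unfamiliar with rational spline evaluation, the core building block is the B-spline basis function. The sketch below is the standard textbook Cox-de Boor recursion, included only to illustrate the kind of evaluation a spline mesh representation would need; it is not VTK code, and a NURBS evaluator would additionally weight and normalize these basis values:<br />

```cpp
#include <vector>
#include <cstddef>

// Cox-de Boor recursion for a single B-spline basis function N_{i,p}(u)
// over the given knot vector. The 0/0 convention of the recursion is
// handled by skipping terms whose knot span has zero length. Production
// code would use the more efficient knot-span (de Boor) algorithm.
inline double BSplineBasis(std::size_t i, unsigned p, double u,
                           const std::vector<double>& knots)
{
  if (p == 0)
    return (knots[i] <= u && u < knots[i + 1]) ? 1.0 : 0.0;

  double left = 0.0, right = 0.0;
  const double d1 = knots[i + p] - knots[i];
  if (d1 > 0.0)
    left = (u - knots[i]) / d1 * BSplineBasis(i, p - 1, u, knots);
  const double d2 = knots[i + p + 1] - knots[i + 1];
  if (d2 > 0.0)
    right = (knots[i + p + 1] - u) / d2 * BSplineBasis(i + 1, p - 1, u, knots);
  return left + right;
}
```

Degree elevation and knot insertion, mentioned in the prerequisites, are manipulations of exactly these knot vectors and basis functions.<br />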
<br />
'''Expected results:''' The result would be a new mesh representation class, a reader, and a suite of 2-3 filters for contouring, cutting, and rendering these meshes.<br />
<br />
'''Prerequisites:''' C/C++ experience, knowledge of rational splines and techniques for processing them (degree elevation, knot insertion/removal), and preferably some experience with VTK<br />
<br />
'''Mentor(s):''' David Thompson (david dot thompson at kitware dot com) and/or Bob O'Bara (bob dot obara at kitware dot com)<br />
<br />
=== Supporting Solid Model Geometry in VTK ===<br />
<br />
'''Brief explanation:''' Traditionally VTK has addressed the visualization needs of post-processed simulation information. Typically in these cases a tessellated mesh represents the geometric domain. This project will extend VTK's role in the simulation lifecycle by investigating approaches that will enable VTK to visualize the parametric boundary representation information used in solid modeling kernels such as CGM and OpenCASCADE (http://www.opencascade.org), which is a typical pre-processing description of the geometric domain.<br />
<br />
'''Expected results:''' A VTK module that interfaces with one or more solid modeling kernels.<br />
<br />
'''Prerequisites:''' Experience in C++ and data structures. Some experience in VTK, parametric surfaces, and solid modeling kernels is ideal but not necessary.<br />
<br />
'''Mentor:''' Bob O'Bara (bob dot obara at kitware dot com).<br />
<br />
=== KiwiViewer on VTK ===<br />
<br />
'''Brief explanation:''' KiwiViewer (http://www.kiwiviewer.org) is a model viewer for VTK datasets that runs on iOS and Android devices. It is built from a cross-compiled version of an older release of VTK coupled with VES (http://www.vtk.org/Wiki/VES), a lightweight rendering library that runs on OpenGL ES. The most recent release of VTK supports iOS and Android directly, so bringing KiwiViewer up to date with full-featured rendering would open up many visualization capabilities.<br />
<br />
'''Expected results:''' A new version of KiwiViewer.<br />
<br />
'''Prerequisites:''' Experience developing for mobile platforms and C++.<br />
<br />
'''Mentor:''' Brad Davis (brad dot davis at kitware dot com).<br />
<br />
=== OpenFOAM Catalyst adaptor ===<br />
<br />
'''Brief explanation:''' OpenFOAM (http://www.openfoam.org) is a premier open source Computational Fluid Dynamics (CFD) simulation package. ParaView/Catalyst (http://www.paraview.org/Wiki/ParaView/Catalyst/Overview) is a VTK based in-situ visualization framework that tightly couples visualization capabilities to arbitrary simulation code. Updates to the data import path between OpenFOAM and VTK would give extreme scalability to OpenFOAM because data products would never need to be written to disk. It would also facilitate live data and computational steering connections that let the scientist see new results while they are being generated.<br />
<br />
'''Expected results:''' A Catalyst adaptor contributed to either the OpenFOAM or ParaView communities. Two feasible starting points for the work are the existing vtkOpenFOAM readers and the vtkFOAM FOAM-to-VTK exporter.<br />
<br />
'''Prerequisites:''' Experience developing in C++, experience with CFD.<br />
<br />
'''Mentor:''' Andy Bauer (andy dot bauer at kitware dot com) and Takuya Oshima (oshima at eng dot niigata-u dot ac dot jp)<br />
<br />
=== Eulerian Magnification for Revealing Subtle Changes ===<br />
<br />
'''Brief explanation:''' This project is based on the SIGGRAPH 2012 paper described at http://people.csail.mit.edu/mrub/evm/.<br />
The idea would be to develop a filter that would extract the subtle changes in a time-dependent data set and amplify them. As a follow-on, specialized views to show the changes, and a feedback path into the filter from some of the existing views that show time-varying data, may be created.<br />
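A minimal sketch of the temporal core of the technique, under the simplifying assumptions that we amplify a single per-point time series and build a crude band-pass from the difference of two exponential moving averages (the paper uses a spatial pyramid decomposition and proper temporal filters; AmplifyTemporalBand is a hypothetical name for this sketch):<br />

```cpp
#include <vector>

// Isolate a temporal band of a signal with two exponential moving
// averages (fast minus slow approximates a band-pass), scale the band by
// `gain`, and add it back to the original sample. A VTK filter would run
// this over every point/cell value of a time-dependent data set.
inline std::vector<double> AmplifyTemporalBand(const std::vector<double>& series,
                                               double fastAlpha, double slowAlpha,
                                               double gain)
{
  std::vector<double> out;
  out.reserve(series.size());
  double fast = series.empty() ? 0.0 : series.front();
  double slow = fast;
  for (double x : series)
  {
    fast += fastAlpha * (x - fast);   // tracks quick variation
    slow += slowAlpha * (x - slow);   // tracks the slow trend
    const double band = fast - slow;  // band-passed subtle change
    out.push_back(x + gain * band);   // amplified reconstruction
  }
  return out;
}
```

Note that a constant signal passes through unchanged (the band is zero), which is exactly the property that lets the method amplify only the subtle changes.<br />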
<br />
'''Expected results:''' A filter in VTK to analyze the time-dependent data set.<br />
<br />
'''Prerequisites:''' Experience developing in C++.<br />
<br />
'''Mentor:''' David DeMarle (dave dot demarle at kitware dot com) and/or Andy Bauer (andy dot bauer at kitware dot com).<br />
<br />
=== Parallel dataset partitioning ===<br />
<br />
'''Brief explanation:''' Writing parallel applications using VTK requires the distribution of datasets between processes in such a way that map/reduce-style operations can be performed with a minimal amount of communication between those processes. Source objects generally load pieces of data which can be processed independently by downstream filters, but sometimes the data loading produces overlapping pieces which must be repartitioned into spatially distinct regions. VTK contains a DistributedDataFilter (http://www.vtk.org/doc/release/5.2/html/a00336.html) to perform this operation, but it is not the most efficient implementation. Another, more efficient implementation based on the Zoltan library from the Trilinos package is available (https://github.com/biddisco/pv-zoltan) but uses an outdated version of the Zoltan library and has high memory consumption during the partitioning phase. The newer Zoltan2 library allows generic programming techniques to be used, and geometric data can be partitioned using zero-copy methods. The objective of this project is to produce a data partitioning filter that can operate on all VTK dataset types using Zoltan2 and produce spatially distinct data as well as ghost regions when requested.<br />
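One bisection step of the recursive coordinate bisection (RCB) family of geometric methods that Zoltan/Zoltan2 implement can be sketched as follows; BisectByCoordinate is a hypothetical, standard-library-only illustration, not the Zoltan2 API:<br />

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

// Split a point set at the median of one coordinate so the two halves are
// spatially distinct and load-balanced. A real partitioner recurses (on
// the longest axis) until there is one region per MPI rank, then
// exchanges points and builds ghost regions along the cut planes.
inline void BisectByCoordinate(const std::vector<double>& coords, // one component per point
                               std::vector<std::size_t>& lower,
                               std::vector<std::size_t>& upper)
{
  std::vector<std::size_t> ids(coords.size());
  for (std::size_t i = 0; i < ids.size(); ++i) ids[i] = i;

  // nth_element partially sorts: every id before `mid` has a coordinate
  // no greater than the median, without paying for a full sort.
  const std::size_t mid = ids.size() / 2;
  std::nth_element(ids.begin(), ids.begin() + mid, ids.end(),
                   [&](std::size_t a, std::size_t b) { return coords[a] < coords[b]; });

  lower.assign(ids.begin(), ids.begin() + mid);
  upper.assign(ids.begin() + mid, ids.end());
}
```
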
<br />
'''Expected results:''' A filter in VTK to partition unstructured and polygonal datasets in parallel.<br />
<br />
'''Prerequisites:''' Experience developing in C++, MPI.<br />
<br />
'''Mentor:''' John Biddiscombe (biddisco at cscs dot ch).<br />
<br />
=== Direct mapped Polyhedral input cells from OpenFOAM ===<br />
<br />
'''Brief explanation:''' OpenFOAM is an open source Computational Fluid Dynamics (CFD) package. OpenFOAM runs on unstructured meshes that are composed of polyhedral cells. Polyhedral support is now provided in VTK, although it is not supported by all filters. The default option within the OpenFOAM reader is to decompose polyhedral cells into the other VTK primitive types. The OpenFOAM reader also lacks support for ghost cells when reading in parallel.<br />
<br />
'''Expected results:''' An updated OpenFOAM reader with support for ghost cells when reading in parallel, where the default output uses polyhedral cells. Test cases should be created for many of the common filters, and polyhedral-related bugs should be fixed.<br />
<br />
'''Prerequisites:''' Experience developing in C++.<br />
<br />
'''Mentor:''' Paul Edwards (paul dot m dot edwards at intel dot com)<br />
<br />
== Half Baked Ideas ==<br />
<br />
(contact Dave DeMarle if you would like to work on one of these, or on an idea of your own, and I will find you a good mentor to work out a solid GSoC proposal with)<br />
<br />
* make concave polydata "just work" (i.e. render correctly) with minimal impact on common case speed<br />
<br />
* an add-on framework to help VTK-using applications keep track of units<br />
<br />
* anything from the VTK UserVoice forum http://vtk.uservoice.com/forums/31508-general, except documentation (unfortunately), since documentation effort is explicitly ruled out of GSoC<br />
<br />
* anything from the ParaView UserVoice forum http://paraview.uservoice.com/forums/11350-general<br />
<br />
* Lua wrapping, Lua programmable filters<br />
<br />
* advanced rendering algorithms with the OpenGL2 backend: ambient occlusion, shadows, reflections, etc.<br />
<br />
* interface to high quality rendering engines</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/GSoC_2015&diff=57664VTK/GSoC 20152015-03-16T17:46:47Z<p>Berk: /* Project Ideas */</p>
<hr />
<div>Project ideas for the Google Summer of Code 2015<br />
<br />
== Guidelines ==<br />
<br />
=== Students ===<br />
<br />
These ideas were contributed by developers and users of [http://www.vtk.org/ VTK] and [http://www.paraview.org/ ParaView]. If you wish to submit a proposal based on these ideas you should contact the community members identified below to find out more about the idea, get to know the community member that will review your proposal, and receive feedback on your ideas.<br />
<br />
The Google Summer of Code program is competitive, and accepted students will usually have thoroughly researched the technologies of their proposed project, been in frequent contact with potential mentors, and ideally have submitted a patch or two to fix bugs in their project (through Gerrit), [[VTK/Git/Develop|instructions are here]]. Kitware makes extensive use of mailing lists, and this would be your best point of initial contact to apply for any of the proposed projects. The mailing lists can be found on the project pages linked in the preceding paragraph. Please see [[GSoC proposal guidelines]] for further guidelines on writing your proposal.<br />
<br />
=== Adding Ideas ===<br />
<br />
When adding a new idea to this page, please try to include the following information:<br />
<br />
* A brief explanation of the idea<br />
* Expected results/feature additions<br />
* Any prerequisites for working on the project<br />
* Links to any further information, discussions, bug reports etc<br />
* Any special mailing lists if not the standard mailing list for VTK<br />
* Your name and email address for contact (if willing to mentor, or nominated mentor)<br />
<br />
If you are not a developer for the project concerned, please contact a developer about the idea before adding it here.<br />
<br />
== Project Ideas ==<br />
<br />
[http://www.vtk.org/ Project page], [http://www.vtk.org/VTK/help/mailing.html mailing lists], [http://open.cdash.org/index.php?project=VTK dashboard].<br />
<br />
=== Shared Memory Parallelism in VTK ===<br />
<br />
'''Brief explanation''': Development of multi-threaded algorithms in VTK. Multiple R&D efforts are leading the creation of an infrastructure to support next generation multi-threaded parallel algorithm development in VTK. These efforts are based on modern parallel libraries such as Intel TBB and Inria KAAPI. The main goal of this project will be the development of algorithms that leverage this infrastructure. The focus will be on upgrading existing core algorithms such as iso-surfacing, clipping, cutting, warping etc. to be parallel. Ideally, this will include modernization of the old multi-threading code in the imaging pipeline. References:<br />
<br />
* [http://www.vtk.org/Wiki/VTK/VTK_SMP VTK SMP Wiki page]<br />
* [https://hal.inria.fr/hal-00789814/document VTK SMP paper]<br />
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.396.4673&rep=rep1&type=pdf Initial VTK SMP paper]<br />
<br />
<br />
'''Expected results''': A number of algorithms that execute in parallel using shared memory. Scalability of new algorithms will have to be measured and documented. Development of regression tests and examples will be also expected.<br />
<br />
'''Prerequisites''': Experience in C++ and multi-threaded code development. Understanding of core visualization algorithms and data structures. Some experience in VTK ideally but not necessary.<br />
<br />
'''Mentor''': Berk Geveci (berk dot geveci at kitware dot com) and David Gobbi (david dot gobbi at gmail dot com)<br />
<br />
=== Fine-Grained Parallelism in VTK-m ===<br />
<br />
'''Brief explanation''': VTK-m is a toolkit of scientific visualization algorithms for emerging processor architectures ( GPU's and Coprocessors's ). VTK-m is designed for fine-grained concurrency and provides abstract data and execution models that can be applied to a variety of algorithms. The goal of the project will be on developing algorithms such as Slice by implicit surface, Gradient, Streamlines, External Faces, Resample, etc. using VTKM-m data and execution model.<br />
<br />
'''Expected results''': A collection of algorithms that are processor architecture agnostic and execute on GPU's and Coprocessor's using VTK-m's data and execution model. Performance of new algorithms will have to be measured and documented. Development of regression tests and examples will be also expected.<br />
<br />
'''Prerequisites''': Experience in templated C++ development, visualization algorithms, data structures and highly parallel architectures such as GPU's. Some experience in VTK or CUDA or OpenCL would be ideal but not necessary.<br />
<br />
'''Mentor''': Robert Maynard (robert dot maynard at kitware dot com) and Kenneth Moreland (kmorel at sandia dot gov)<br />
<br />
=== Computational Biology (Molecular Dynamics) In Situ Visualization ===<br />
<br />
'''Brief explanation:''' Computational Biology involves using computer simulations to study biological problems using molecular dynamics and other techniques. Of particular interest is [[http://www.gromacs.org/ GROMACS]], a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions. GROMACS is optimized to run on distributed memory clusters with recent support for GPU and SSE optimization. These GROMACS supercomputing simulations produce enormous (terabytes) file output to be analyzed post process by tools that only read the trajectory (position, velocity, and forces) or coordinate (molecular structure) information, and simply guess at the topology rather than using the simulations topology defined in GROMACS.<br />
<br />
This project would provide a baseline implementation of ParaView Catalyst for molecular in situ visualization and data analysis embedded in GROMACS based on GROMACS' computed topology and trajectory information.<br />
<br />
'''Expected results:''' The result would be ParaView Catalyst adaptors, example python scripts, and new advanced visualization techniques for GROMACS in order to enhance the computational biology workflow.<br />
<br />
'''Prerequisites:''' C++ and python experience required, some experience with VTK and ParaView ideally, but not required.<br />
<br />
'''Mentor:''' Marcus D. Hanwell (mhanwell at kitware dot com).<br />
<br />
=== Templated Input Generator for VTK ===<br />
<br />
'''Brief explanation''':<br />
Build up an infrastructure that makes it trivial to bring new scientific data formats into VTK and ParaView. The infrastructure will handle the complexities of temporal support, parallel processing, composite data structures, ghost levels and the like, and provide easy to use entry points that bring data from the file or other source and populate VTK arrays.<br />
<br />
'''Expected Results:'''<br />
A set of classes that can take an input specification and produce vtk data objects correctly and relatively efficiently.<br />
The input specification should be sufficiently abstracted from VTKs data types that users who understand the input format well won't have to understand VTK's complexities in order to use it.<br />
<br />
'''Prerequisites:'''<br />
C++ and probably a scripting language such as Python or Lua.<br />
<br />
'''References:'''<br />
http://www.paraview.org/Wiki/Writing_ParaView_Readers<br />
<br />
'''Mentor(s):''' Robert Maynard (robert dot maynard at kitware dot com) and/or David DeMarle (dave dot demarle at kitware dot com)<br />
<br />
=== Improvements to Earth and Space Science Visualization ===<br />
<br />
'''Brief explanation''': Add new capabilities to the existing VTK GeoVIS framework. In GSoC 2014 we built vtkMap that can support <br />
2D visualization using OpenStreetMap and other data sources. This time we plan on adding 3D support for it. Also, we need to add support<br />
for overlay and clipping using shapefiles or other geographically references data, improved support for data types such as LAS, improve support rendering points, lines and polygons for Geovisualizations, and filters to performs clustering and classification etc are amongst few of them. <br />
<br />
'''Expected results:''': New VTK filters and mappers, few demos to showcase the new features. <br />
<br />
'''Prerequisites:''' C/C++ experience, knowledge of earth science / GIS and preferably some experience with VTK<br />
<br />
'''Mentor(s):''' Aashish Chaudhary (aashish dot chaudhary at kitware dot com)<br />
<br />
=== CAD Model and Simulation Spline Visualization ===<br />
<br />
'''Brief explanation''': While spline curves and surfaces have been used for many years to describe CAD models, these flexible representations have often been discarded during the meshing and simulation process that is used to estimate the behavior of CAD-modeled parts in a particular environment or during a particular event. Recently, a group of techniques called IsoGeometric Analysis (IGA) has evolved in order to reduce the amount of work required to prepare simulations and to improve their accuracy. This project would develop support for arbitrary-dimensional, rational spline patches in VTK so that these simulations can be visualized properly. This may involve conversion to handle the variety of spline formats (T-Splines, NURBS, Catmull-Clark surfaces, etc.).<br />
<br />
'''Expected results:''' The result would be a new mesh representation class, a reader, and a suite of 2-3 filters for contouring, cutting, and rendering these meshes.<br />
<br />
'''Prerequisites:''' C/C++ experience, knowledge of rational splines and techniques for processing them (degree elevation, knot insertion/removal), and preferably some experience with VTK<br />
<br />
'''Mentor(s):''' David Thompson (david dot thompson ta kitware dot com) and/or Bob O'Bara (bob dot obara at kitware dot com)<br />
<br />
=== Supporting Solid Model Geometry in VTK ===<br />
<br />
'''Brief explanation:''' Traditionally VTK has addressed the visualization needs of post-processed simulation information. Typically in these cases a tessellated mesh represents the geometric domain. This project will extend VTK's role in the simulation lifecycle by investigating approaches that will enable VTK to visualize the parametric boundary representation information used in solid modeling kernels such as CGM and OpenCASCADE (http://www.opencascade.org), which is typical pre-processing description of the geometric domain.<br />
<br />
'''Expected results:''' A VTK module that interfaces with one or more solid modeling kernels.<br />
<br />
'''Prerequisites:''' Experience in C++, and data structures. Some experience in VTK, parametric surfaces and solid modeling kernels ideal but not necessary.<br />
<br />
'''Mentor:''' Bob O'Bara (bob dot obara at kitware dot com).<br />
<br />
=== KiwiViewer on VTK ===<br />
<br />
'''Brief explanation:''' KiwiViewer (http://www.kiwiviewer.org) is a model viewer for VTK datasets that runs on iOS and Android devices. It is built from a cross compiled version of an older release of VTK coupled with VES (http://www.vtk.org/Wiki/VES), a lightweight rendering library that runs on OpenGL ES. The most recent release of VTK supports iOS and Android directly, so bringing KiwiViewer up to date with full featured rendering would open up many visualization capabilities.<br />
<br />
'''Expected results:''' A new version of KiwiViewer.<br />
<br />
'''Prerequisites:''' Experience developing for mobile platforms and C++.<br />
<br />
'''Mentor:''' Brad Davis (brad dot davis at kitware dot com).<br />
<br />
=== OpenFOAM Catalyst adaptor ===<br />
<br />
'''Brief explanation:''' OpenFOAM (http://www.openfoam.org) is a premier open source Computational Fluid Dynamics (CFD) simulation package. ParaView/Catalyst (http://www.paraview.org/Wiki/ParaView/Catalyst/Overview) is a VTK based in-situ visualization framework that tightly couples visualization capabilities to arbitrary simulation code. Updates to the data import path between OpenFOAM and VTK would give extreme scalability to OpenFOAM because data products would never need to be written to disk. It would also facilitate live data and computational steering connections that let the scientist see new results while they are being generated.<br />
<br />
'''Expected results:''' A Catalyst adaptor contributed to either the OpenFOAM or ParaView communities. Two feasible starting points to begin the work are the existing vtkOpenFOAM readers and and the vtkFOAM FOAM-to-VTK exporter.<br />
<br />
'''Prerequisites:''' Experience developing in C++, experience with CFD.<br />
<br />
'''Mentor:''' Andy Bauer (andy dot bauer at kitware dot com) and Takuya Oshima (oshima at eng dot niigata-u dot ac dot jp)<br />
<br />
=== Eulerian Magnification for Revealing Subtle Changes ===<br />
<br />
'''Brief explanation:''' This project is based on the SIGGRAPH 2012 paper described at http://people.csail.mit.edu/mrub/evm/.<br />
The idea would be to develop a filter that would extract out the subtle changes in a time-dependent data set and amplify them. As a follow on, specialized views to show the changes and a feedback path from some of the existing views that show time varying data into the filter may be created.<br />
<br />
'''Expected results:''' A filter in VTK to analyze the time-dependent data set.<br />
<br />
'''Prerequisites:''' Experience developing in C++.<br />
<br />
'''Mentor:''' David DeMarle (dave dot demarle at kitware dot com) and/or Andy Bauer (andy dot bauer at kitware dot com).<br />
<br />
=== Parallel dataset partitioning ===<br />
<br />
'''Brief explanation:''' Writing parallel applications using vtk requires the distribution of datasets between processes in such a way that map/reduce style operations can be performed with the minimal amount of communication between those processes. Source objects generally load pieces of data which can be processed independently by downstream filters, but sometimes the data loading produces overlapping pieces which must be repartitioned into spatially distinct regions. VTK contains a DistributedDataFilter (http://www.vtk.org/doc/release/5.2/html/a00336.html) to perform this operation, but it is not the most efficient implementation. Another more efficient implementation based on the Zoltan library from the Trilinos package is available (https://github.com/biddisco/pv-zoltan) but uses an outdated version of the Zoltan library and has high memory consumption during the partitioning phase. The newer Zoltan2 library allows generic programming techniques to be used and geometric data can be partitioned using zero copy methods. The objective of this project is to produce a data partitioning filter that can operate on all vtk dataset types using Zoltan2 and produce spatially distinct data as well as ghost regions when requested.<br />
<br />
'''Expected results:''' A filter in VTK to partition unstructured and polygonal datasets in parallel.<br />
<br />
'''Prerequisites:''' Experience developing in C++, MPI.<br />
<br />
'''Mentor:''' John Biddiscombe (biddisco at cscs dot ch).<br />
<br />
=== Direct mapped Polyhedral input cells from OpenFOAM ===<br />
<br />
'''Brief explanation:''' OpenFOAM is an open source Computational Fluid Dynamics (CFD) package. OpenFOAM runs on unstructured meshes composed of polyhedral cells. VTK now provides polyhedral support, although not all filters support it. The default option within the OpenFOAM reader is to decompose polyhedral cells into the other VTK primitive types. The OpenFOAM reader also lacks support for ghost cells when reading in parallel.<br />
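<br />
For orientation, VTK encodes a VTK_POLYHEDRON cell as a face stream. The sketch below only assembles such a stream for a cube with the standard layout [nFaces, nPts0, ids..., nPts1, ids..., ...]; with the vtk Python module one would then pass it (as a vtkIdList) to vtkUnstructuredGrid's InsertNextCell. The vertex numbering is an assumption for illustration.<br />
<br />
```python
# Build the polyhedron face stream for a hexahedral cell (cube),
# assuming vertices 0-3 on the bottom face and 4-7 on the top.
cube_faces = [
    [0, 3, 2, 1],  # bottom
    [4, 5, 6, 7],  # top
    [0, 1, 5, 4],  # front
    [1, 2, 6, 5],  # right
    [2, 3, 7, 6],  # back
    [3, 0, 4, 7],  # left
]

def polyhedron_face_stream(faces):
    """Flatten faces into [nFaces, nPts, ids..., nPts, ids..., ...]."""
    stream = [len(faces)]
    for face in faces:
        stream.append(len(face))
        stream.extend(face)
    return stream

stream = polyhedron_face_stream(cube_faces)
```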
<br />
'''Expected results:''' An updated OpenFOAM reader that supports ghost cells when reading in parallel and outputs polyhedral cells by default. Test cases should be created for many of the common filters, and polyhedral-related bugs should be fixed.<br />
<br />
'''Prerequisites:''' Experience developing in C++.<br />
<br />
'''Mentor:''' Paul Edwards (paul dot m dot edwards at intel dot com)<br />
<br />
== Half Baked Ideas ==<br />
<br />
(contact Dave DeMarle if you would like to work on one of these or an idea of your own and I will find you a good mentor to work out a solid GSoC proposal with)<br />
<br />
* make concave polydata "just work" (i.e. render correctly) with minimal impact on common-case speed<br />
<br />
* an add-on framework to help applications that use VTK keep track of units<br />
<br />
* anything from vtk user voice http://vtk.uservoice.com/forums/31508-general, except documentation (unfortunately) since docs effort is explicitly ruled out of GSoC<br />
<br />
* anything from paraview user voice http://paraview.uservoice.com/forums/11350-general<br />
<br />
* lua wrapping, lua programmable filters<br />
<br />
* advanced rendering algorithms with the OpenGL2 back end: ambient occlusion, shadows, reflections, etc.<br />
<br />
* interface to high quality rendering engines</div>
<hr />
<div>Project ideas for the Google Summer of Code 2015<br />
<br />
== Guidelines ==<br />
<br />
=== Students ===<br />
<br />
These ideas were contributed by developers and users of [http://www.vtk.org/ VTK] and [http://www.paraview.org/ ParaView]. If you wish to submit a proposal based on these ideas you should contact the community members identified below to find out more about the idea, get to know the community member that will review your proposal, and receive feedback on your ideas.<br />
<br />
The Google Summer of Code program is competitive, and accepted students will usually have thoroughly researched the technologies of their proposed project, been in frequent contact with potential mentors, and ideally have submitted a patch or two to fix bugs in their project (through Gerrit), [[VTK/Git/Develop|instructions are here]]. Kitware makes extensive use of mailing lists, and this would be your best point of initial contact to apply for any of the proposed projects. The mailing lists can be found on the project pages linked in the preceding paragraph. Please see [[GSoC proposal guidelines]] for further guidelines on writing your proposal.<br />
<br />
=== Adding Ideas ===<br />
<br />
When adding a new idea to this page, please try to include the following information:<br />
<br />
* A brief explanation of the idea<br />
* Expected results/feature additions<br />
* Any prerequisites for working on the project<br />
* Links to any further information, discussions, bug reports etc<br />
* Any special mailing lists if not the standard mailing list for VTK<br />
* Your name and email address for contact (if willing to mentor, or nominated mentor)<br />
<br />
If you are not a developer for the project concerned, please contact a developer about the idea before adding it here.<br />
<br />
== Project Ideas ==<br />
<br />
[http://www.vtk.org/ Project page], [http://www.vtk.org/VTK/help/mailing.html mailing lists], [http://open.cdash.org/index.php?project=VTK dashboard].<br />
<br />
=== Shared Memory Parallelism in VTK ===<br />
<br />
'''Brief explanation''': Development of multi-threaded algorithms in VTK. Multiple R&D efforts are leading the creation of an infrastructure to support next generation multi-threaded parallel algorithm development in VTK. These efforts are based on modern parallel libraries such as Intel TBB and Inria KAAPI. The main goal of this project will be the development of algorithms that leverage this infrastructure. The focus will be on upgrading existing core algorithms such as iso-surfacing, clipping, cutting, warping etc. to be parallel. Ideally, this will include modernization of the old multi-threading code in the imaging pipeline. References:<br />
<br />
* [http://www.vtk.org/Wiki/VTK/VTK_SMP VTK SMP Wiki page]<br />
* [https://hal.inria.fr/hal-00789814/document VTK SMP paper]<br />
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.396.4673&rep=rep1&type=pdf Initial VTK SMP paper]<br />
<br />
<br />
'''Expected results''': A number of algorithms that execute in parallel using shared memory. Scalability of new algorithms will have to be measured and documented. Development of regression tests and examples will be also expected.<br />
<br />
'''Prerequisites''': Experience in C++ and multi-threaded code development. Understanding of core visualization algorithms and data structures. Some experience in VTK ideally but not necessary.<br />
<br />
'''Mentor''': Berk Geveci (berk dot geveci at kitware dot com) and David Gobbi (david dot gobbi at gmail dot com)<br />
<br />
=== Fine-Grained Parallelism in VTK-m ===<br />
<br />
'''Brief explanation''': VTK-m is a toolkit of scientific visualization algorithms for emerging processor architectures ( GPU's and Coprocessors's ). VTK-m is designed for fine-grained concurrency and provides abstract data and execution models that can be applied to a variety of algorithms. The goal of the project will be on developing algorithms such as Slice by implicit surface, Gradient, Streamlines, External Faces, Resample, etc. using VTKM-m data and execution model.<br />
<br />
'''Expected results''': A collection of algorithms that are processor architecture agnostic and execute on GPU's and Coprocessor's using VTK-m's data and execution model. Performance of new algorithms will have to be measured and documented. Development of regression tests and examples will be also expected.<br />
<br />
'''Prerequisites''': Experience in templated C++ development, visualization algorithms, data structures and highly parallel architectures such as GPU's. Some experience in VTK or CUDA or OpenCL would be ideal but not necessary.<br />
<br />
'''Mentor''': Robert Maynard (robert dot maynard at kitware dot com) and Kenneth Moreland (kmorel at sandia dot gov)<br />
<br />
=== Computational Biology (Molecular Dynamics) In Situ Visualization ===<br />
<br />
'''Brief explanation:''' Computational Biology involves using computer simulations to study biological problems using molecular dynamics and other techniques. Of particular interest is [[http://www.gromacs.org/ GROMACS]], a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions. GROMACS is optimized to run on distributed memory clusters with recent support for GPU and SSE optimization. These GROMACS supercomputing simulations produce enormous (terabytes) file output to be analyzed post process by tools that only read the trajectory (position, velocity, and forces) or coordinate (molecular structure) information, and simply guess at the topology rather than using the simulations topology defined in GROMACS.<br />
<br />
This project would provide a baseline implementation of ParaView Catalyst for molecular in situ visualization and data analysis embedded in GROMACS based on GROMACS' computed topology and trajectory information.<br />
<br />
'''Expected results:''' The result would be ParaView Catalyst adaptors, example python scripts, and new advanced visualization techniques for GROMACS in order to enhance the computational biology workflow.<br />
<br />
'''Prerequisites:''' C++ and python experience required, some experience with VTK and ParaView ideally, but not required.<br />
<br />
'''Mentor:''' Marcus D. Hanwell (mhanwell at kitware dot com).<br />
<br />
<br />
=== Templated Input Generator for VTK ===<br />
<br />
'''Brief explanation''':<br />
Build up an infrastructure that makes it trivial to bring new scientific data formats into VTK and ParaView. The infrastructure will handle the complexities of temporal support, parallel processing, composite data structures, ghost levels and the like, and provide easy to use entry points that bring data from the file or other source and populate VTK arrays.<br />
<br />
'''Expected Results:'''<br />
A set of classes that can take an input specification and produce vtk data objects correctly and relatively efficiently.<br />
The input specification should be sufficiently abstracted from VTKs data types that users who understand the input format well won't have to understand VTK's complexities in order to use it.<br />
<br />
'''Prerequisites:'''<br />
C++ and probably a scripting language such as Python or Lua.<br />
<br />
'''References:'''<br />
http://www.paraview.org/Wiki/Writing_ParaView_Readers<br />
<br />
'''Mentor(s):''' Robert Maynard (robert dot maynard at kitware dot com) and/or David DeMarle (dave dot demarle at kitware dot com)<br />
<br />
=== Improvements to Earth and Space Science Visualization ===<br />
<br />
'''Brief explanation''': Add new capabilities to the existing VTK GeoVIS framework. In GSoC 2014 we built vtkMap that can support <br />
2D visualization using OpenStreetMap and other data sources. This time we plan on adding 3D support for it. Also, we need to add support<br />
for overlay and clipping using shapefiles or other geographically references data, improved support for data types such as LAS, improve support rendering points, lines and polygons for Geovisualizations, and filters to performs clustering and classification etc are amongst few of them. <br />
<br />
'''Expected results:''': New VTK filters and mappers, few demos to showcase the new features. <br />
<br />
'''Prerequisites:''' C/C++ experience, knowledge of earth science / GIS and preferably some experience with VTK<br />
<br />
'''Mentor(s):''' Aashish Chaudhary (aashish dot chaudhary at kitware dot com)<br />
<br />
=== CAD Model and Simulation Spline Visualization ===<br />
<br />
'''Brief explanation''': While spline curves and surfaces have been used for many years to describe CAD models, these flexible representations have often been discarded during the meshing and simulation process that is used to estimate the behavior of CAD-modeled parts in a particular environment or during a particular event. Recently, a group of techniques called IsoGeometric Analysis (IGA) has evolved in order to reduce the amount of work required to prepare simulations and to improve their accuracy. This project would develop support for arbitrary-dimensional, rational spline patches in VTK so that these simulations can be visualized properly. This may involve conversion to handle the variety of spline formats (T-Splines, NURBS, Catmull-Clark surfaces, etc.).<br />
<br />
'''Expected results:''' The result would be a new mesh representation class, a reader, and a suite of 2-3 filters for contouring, cutting, and rendering these meshes.<br />
<br />
'''Prerequisites:''' C/C++ experience, knowledge of rational splines and techniques for processing them (degree elevation, knot insertion/removal), and preferably some experience with VTK<br />
<br />
'''Mentor(s):''' David Thompson (david dot thompson ta kitware dot com) and/or Bob O'Bara (bob dot obara at kitware dot com)<br />
<br />
=== Supporting Solid Model Geometry in VTK ===<br />
<br />
'''Brief explanation:''' Traditionally VTK has addressed the visualization needs of post-processed simulation information. Typically in these cases a tessellated mesh represents the geometric domain. This project will extend VTK's role in the simulation lifecycle by investigating approaches that will enable VTK to visualize the parametric boundary representation information used in solid modeling kernels such as CGM and OpenCASCADE (http://www.opencascade.org), which is typical pre-processing description of the geometric domain.<br />
<br />
'''Expected results:''' A VTK module that interfaces with one or more solid modeling kernels.<br />
<br />
'''Prerequisites:''' Experience in C++, and data structures. Some experience in VTK, parametric surfaces and solid modeling kernels ideal but not necessary.<br />
<br />
'''Mentor:''' Bob O'Bara (bob dot obara at kitware dot com).<br />
<br />
=== KiwiViewer on VTK ===<br />
<br />
'''Brief explanation:''' KiwiViewer (http://www.kiwiviewer.org) is a model viewer for VTK datasets that runs on iOS and Android devices. It is built from a cross compiled version of an older release of VTK coupled with VES (http://www.vtk.org/Wiki/VES), a lightweight rendering library that runs on OpenGL ES. The most recent release of VTK supports iOS and Android directly, so bringing KiwiViewer up to date with full featured rendering would open up many visualization capabilities.<br />
<br />
'''Expected results:''' A new version of KiwiViewer.<br />
<br />
'''Prerequisites:''' Experience developing for mobile platforms and C++.<br />
<br />
'''Mentor:''' Brad Davis (brad dot davis at kitware dot com).<br />
<br />
=== OpenFOAM Catalyst adaptor ===<br />
<br />
'''Brief explanation:''' OpenFOAM (http://www.openfoam.org) is a premier open source Computational Fluid Dynamics (CFD) simulation package. ParaView/Catalyst (http://www.paraview.org/Wiki/ParaView/Catalyst/Overview) is a VTK based in-situ visualization framework that tightly couples visualization capabilities to arbitrary simulation code. Updates to the data import path between OpenFOAM and VTK would give extreme scalability to OpenFOAM because data products would never need to be written to disk. It would also facilitate live data and computational steering connections that let the scientist see new results while they are being generated.<br />
<br />
'''Expected results:''' A Catalyst adaptor contributed to either the OpenFOAM or ParaView communities. Two feasible starting points to begin the work are the existing vtkOpenFOAM readers and and the vtkFOAM FOAM-to-VTK exporter.<br />
<br />
'''Prerequisites:''' Experience developing in C++, experience with CFD.<br />
<br />
'''Mentor:''' Andy Bauer (andy dot bauer at kitware dot com) and Takuya Oshima (oshima at eng dot niigata-u dot ac dot jp)<br />
<br />
=== Eulerian Magnification for Revealing Subtle Changes ===<br />
<br />
'''Brief explanation:''' This project is based on the SIGGRAPH 2012 paper described at http://people.csail.mit.edu/mrub/evm/.<br />
The idea would be to develop a filter that would extract out the subtle changes in a time-dependent data set and amplify them. As a follow on, specialized views to show the changes and a feedback path from some of the existing views that show time varying data into the filter may be created.<br />
<br />
'''Expected results:''' A filter in VTK to analyze the time-dependent data set.<br />
<br />
'''Prerequisites:''' Experience developing in C++.<br />
<br />
'''Mentor:''' David DeMarle (dave dot demarle at kitware dot com) and/or Andy Bauer (andy dot bauer at kitware dot com).<br />
<br />
=== Parallel dataset partitioning ===<br />
<br />
'''Brief explanation:''' Writing parallel applications using vtk requires the distribution of datasets between processes in such a way that map/reduce style operations can be performed with the minimal amount of communication between those processes. Source objects generally load pieces of data which can be processed independently by downstream filters, but sometimes the data loading produces overlapping pieces which must be repartitioned into spatially distinct regions. VTK contains a DistributedDataFilter (http://www.vtk.org/doc/release/5.2/html/a00336.html) to perform this operation, but it is not the most efficient implementation. Another more efficient implementation based on the Zoltan library from the Trilinos package is available (https://github.com/biddisco/pv-zoltan) but uses an outdated version of the Zoltan library and has high memory consumption during the partitioning phase. The newer Zoltan2 library allows generic programming techniques to be used and geometric data can be partitioned using zero copy methods. The objective of this project is to produce a data partitioning filter that can operate on all vtk dataset types using Zoltan2 and produce spatially distinct data as well as ghost regions when requested.<br />
<br />
'''Expected results:''' A filter in VTK to partition unstructured and polygonal datasets in parallel.<br />
<br />
'''Prerequisites:''' Experience developing in C++, MPI.<br />
<br />
'''Mentor:''' John Biddiscombe (biddisco at cscs dot ch).<br />
<br />
=== Direct mapped Polyhedral input cells from OpenFOAM ===<br />
<br />
'''Brief explanation:''' OpenFOAM is an Open Source Computational Fluid Dynamics (CFD) package. OpenFOAM runs on unstructured meshes that are composed of polyhedral cells. Polyhedral support is now provided with VTK although this is not supported by all filters. The default option within the OpenFOAM reader is to decompose polyhedral cells into the other VTK primitive types. The OpenFOAM reader also lacks support for ghost cells when reading in parallel.<br />
<br />
'''Expected results:''' An updated OpenFOAM reader with support for ghost cells when reading in parallel where the default output is a polyhedral cells. Test cases should be created for many of the common filters and polyhedral related bugs should be fixed.<br />
<br />
'''Prerequisites:''' Experience developing in C++.<br />
<br />
'''Mentor:''' Paul Edwards (paul dot m dot edwards at intel dot com)<br />
<br />
== Half Baked Ideas ==<br />
<br />
(contact Dave DeMarle if you would like to work on one of these or an idea of your own and I will find you a good mentor to work out a solid GSoC proposal with)<br />
<br />
* make concave polydata "just work" (i.e. render correctly) with minimal impact on common case speed<br />
<br />
* an add on framework to help VTK using applications keep track of units<br />
<br />
* anything from vtk user voice http://vtk.uservoice.com/forums/31508-general, except documentation (unfortunately) since docs effort is explicitly ruled out of GSoC<br />
<br />
* anything from paraview user voice http://paraview.uservoice.com/forums/11350-general<br />
<br />
* lua wrapping, lua programmable filters<br />
<br />
* advanced rendering algorithms with OpenGL2 back end - Ambient occlusion, Shadows, Reflection, etc etc.<br />
<br />
* interface to high quality rendering engines</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/GSoC_2015&diff=57662VTK/GSoC 20152015-03-16T17:45:29Z<p>Berk: /* Shared Memory Parallelism in VTK */</p>
<hr />
<div>Project ideas for the Google Summer of Code 2015<br />
<br />
== Guidelines ==<br />
<br />
=== Students ===<br />
<br />
These ideas were contributed by developers and users of [http://www.vtk.org/ VTK] and [http://www.paraview.org/ ParaView]. If you wish to submit a proposal based on these ideas you should contact the community members identified below to find out more about the idea, get to know the community member that will review your proposal, and receive feedback on your ideas.<br />
<br />
The Google Summer of Code program is competitive, and accepted students will usually have thoroughly researched the technologies of their proposed project, been in frequent contact with potential mentors, and ideally have submitted a patch or two to fix bugs in their project (through Gerrit), [[VTK/Git/Develop|instructions are here]]. Kitware makes extensive use of mailing lists, and this would be your best point of initial contact to apply for any of the proposed projects. The mailing lists can be found on the project pages linked in the preceding paragraph. Please see [[GSoC proposal guidelines]] for further guidelines on writing your proposal.<br />
<br />
=== Adding Ideas ===<br />
<br />
When adding a new idea to this page, please try to include the following information:<br />
<br />
* A brief explanation of the idea<br />
* Expected results/feature additions<br />
* Any prerequisites for working on the project<br />
* Links to any further information, discussions, bug reports etc<br />
* Any special mailing lists if not the standard mailing list for VTK<br />
* Your name and email address for contact (if willing to mentor, or nominated mentor)<br />
<br />
If you are not a developer for the project concerned, please contact a developer about the idea before adding it here.<br />
<br />
== Project Ideas ==<br />
<br />
[http://www.vtk.org/ Project page], [http://www.vtk.org/VTK/help/mailing.html mailing lists], [http://open.cdash.org/index.php?project=VTK dashboard].<br />
<br />
=== Shared Memory Parallelism in VTK ===<br />
<br />
'''Brief explanation''': Development of multi-threaded algorithms in VTK. Multiple R&D efforts are leading the creation of an infrastructure to support next generation multi-threaded parallel algorithm development in VTK. These efforts are based on modern parallel libraries such as Intel TBB and Inria KAAPI. The main goal of this project will be the development of algorithms that leverage this infrastructure. The focus will be on upgrading existing core algorithms such as iso-surfacing, clipping, cutting, warping etc. to be parallel. Ideally, this will include modernization of the old multi-threading code in the imaging pipeline. References:<br />
<br />
* [http://www.vtk.org/Wiki/VTK/VTK_SMP VTK SMP Wiki page]<br />
* [https://hal.inria.fr/hal-00789814/document VTK SMP paper]<br />
* [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.396.4673&rep=rep1&type=pdf Initial VTK SMP paper]<br />
<br />
<br />
'''Expected results''': A number of algorithms that execute in parallel using shared memory. Scalability of new algorithms will have to be measured and documented. Development of regression tests and examples will be also expected.<br />
<br />
'''Prerequisites''': Experience in C++ and multi-threaded code development. Understanding of core visualization algorithms and data structures. Some experience in VTK ideally but not necessary.<br />
<br />
'''Mentor''': Berk Geveci (berk dot geveci at kitware dot com) and David Gobbi (david dot gobbi at gmail dot com)<br />
<br />
=== Computational Biology (Molecular Dynamics) In Situ Visualization ===<br />
<br />
'''Brief explanation:''' Computational Biology involves using computer simulations to study biological problems using molecular dynamics and other techniques. Of particular interest is [[http://www.gromacs.org/ GROMACS]], a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions. GROMACS is optimized to run on distributed memory clusters with recent support for GPU and SSE optimization. These GROMACS supercomputing simulations produce enormous (terabytes) file output to be analyzed post process by tools that only read the trajectory (position, velocity, and forces) or coordinate (molecular structure) information, and simply guess at the topology rather than using the simulations topology defined in GROMACS.<br />
<br />
This project would provide a baseline implementation of ParaView Catalyst for molecular in situ visualization and data analysis embedded in GROMACS based on GROMACS' computed topology and trajectory information.<br />
<br />
'''Expected results:''' The result would be ParaView Catalyst adaptors, example python scripts, and new advanced visualization techniques for GROMACS in order to enhance the computational biology workflow.<br />
<br />
'''Prerequisites:''' C++ and python experience required, some experience with VTK and ParaView ideally, but not required.<br />
<br />
'''Mentor:''' Marcus D. Hanwell (mhanwell at kitware dot com).<br />
<br />
=== Fine-Grained Parallelism in VTK-m ===<br />
<br />
'''Brief explanation''': VTK-m is a toolkit of scientific visualization algorithms for emerging processor architectures ( GPU's and Coprocessors's ). VTK-m is designed for fine-grained concurrency and provides abstract data and execution models that can be applied to a variety of algorithms. The goal of the project will be on developing algorithms such as Slice by implicit surface, Gradient, Streamlines, External Faces, Resample, etc. using VTKM-m data and execution model.<br />
<br />
'''Expected results''': A collection of algorithms that are processor architecture agnostic and execute on GPU's and Coprocessor's using VTK-m's data and execution model. Performance of new algorithms will have to be measured and documented. Development of regression tests and examples will be also expected.<br />
<br />
'''Prerequisites''': Experience in templated C++ development, visualization algorithms, data structures and highly parallel architectures such as GPU's. Some experience in VTK or CUDA or OpenCL would be ideal but not necessary.<br />
<br />
'''Mentor''': Robert Maynard (robert dot maynard at kitware dot com) and Kenneth Moreland (kmorel at sandia dot gov)<br />
<br />
=== Templated Input Generator for VTK ===<br />
<br />
'''Brief explanation''':<br />
Build up an infrastructure that makes it trivial to bring new scientific data formats into VTK and ParaView. The infrastructure will handle the complexities of temporal support, parallel processing, composite data structures, ghost levels and the like, and provide easy to use entry points that bring data from the file or other source and populate VTK arrays.<br />
<br />
'''Expected Results:'''<br />
A set of classes that can take an input specification and produce vtk data objects correctly and relatively efficiently.<br />
The input specification should be sufficiently abstracted from VTKs data types that users who understand the input format well won't have to understand VTK's complexities in order to use it.<br />
<br />
'''Prerequisites:'''<br />
C++ and probably a scripting language such as Python or Lua.<br />
<br />
'''References:'''<br />
http://www.paraview.org/Wiki/Writing_ParaView_Readers<br />
<br />
'''Mentor(s):''' Robert Maynard (robert dot maynard at kitware dot com) and/or David DeMarle (dave dot demarle at kitware dot com)<br />
<br />
=== Improvements to Earth and Space Science Visualization ===<br />
<br />
'''Brief explanation''': Add new capabilities to the existing VTK GeoVIS framework. In GSoC 2014 we built vtkMap that can support <br />
2D visualization using OpenStreetMap and other data sources. This time we plan on adding 3D support for it. Also, we need to add support<br />
for overlay and clipping using shapefiles or other geographically references data, improved support for data types such as LAS, improve support rendering points, lines and polygons for Geovisualizations, and filters to performs clustering and classification etc are amongst few of them. <br />
<br />
'''Expected results:''': New VTK filters and mappers, few demos to showcase the new features. <br />
<br />
'''Prerequisites:''' C/C++ experience, knowledge of earth science / GIS and preferably some experience with VTK<br />
<br />
'''Mentor(s):''' Aashish Chaudhary (aashish dot chaudhary at kitware dot com)<br />
<br />
=== CAD Model and Simulation Spline Visualization ===<br />
<br />
'''Brief explanation''': While spline curves and surfaces have been used for many years to describe CAD models, these flexible representations have often been discarded during the meshing and simulation process that is used to estimate the behavior of CAD-modeled parts in a particular environment or during a particular event. Recently, a group of techniques called IsoGeometric Analysis (IGA) has evolved in order to reduce the amount of work required to prepare simulations and to improve their accuracy. This project would develop support for arbitrary-dimensional, rational spline patches in VTK so that these simulations can be visualized properly. This may involve conversion to handle the variety of spline formats (T-Splines, NURBS, Catmull-Clark surfaces, etc.).<br />
<br />
'''Expected results:''' The result would be a new mesh representation class, a reader, and a suite of 2-3 filters for contouring, cutting, and rendering these meshes.<br />
<br />
'''Prerequisites:''' C/C++ experience, knowledge of rational splines and techniques for processing them (degree elevation, knot insertion/removal), and preferably some experience with VTK<br />
<br />
'''Mentor(s):''' David Thompson (david dot thompson ta kitware dot com) and/or Bob O'Bara (bob dot obara at kitware dot com)<br />
<br />
=== Supporting Solid Model Geometry in VTK ===<br />
<br />
'''Brief explanation:''' Traditionally VTK has addressed the visualization needs of post-processed simulation information. Typically in these cases a tessellated mesh represents the geometric domain. This project will extend VTK's role in the simulation lifecycle by investigating approaches that will enable VTK to visualize the parametric boundary representation information used in solid modeling kernels such as CGM and OpenCASCADE (http://www.opencascade.org), which is typical pre-processing description of the geometric domain.<br />
<br />
'''Expected results:''' A VTK module that interfaces with one or more solid modeling kernels.<br />
<br />
'''Prerequisites:''' Experience in C++, and data structures. Some experience in VTK, parametric surfaces and solid modeling kernels ideal but not necessary.<br />
<br />
'''Mentor:''' Bob O'Bara (bob dot obara at kitware dot com).<br />
<br />
=== KiwiViewer on VTK ===<br />
<br />
'''Brief explanation:''' KiwiViewer (http://www.kiwiviewer.org) is a model viewer for VTK datasets that runs on iOS and Android devices. It is built from a cross compiled version of an older release of VTK coupled with VES (http://www.vtk.org/Wiki/VES), a lightweight rendering library that runs on OpenGL ES. The most recent release of VTK supports iOS and Android directly, so bringing KiwiViewer up to date with full featured rendering would open up many visualization capabilities.<br />
<br />
'''Expected results:''' A new version of KiwiViewer.<br />
<br />
'''Prerequisites:''' Experience developing for mobile platforms and C++.<br />
<br />
'''Mentor:''' Brad Davis (brad dot davis at kitware dot com).<br />
<br />
=== OpenFOAM Catalyst adaptor ===<br />
<br />
'''Brief explanation:''' OpenFOAM (http://www.openfoam.org) is a premier open source Computational Fluid Dynamics (CFD) simulation package. ParaView/Catalyst (http://www.paraview.org/Wiki/ParaView/Catalyst/Overview) is a VTK based in-situ visualization framework that tightly couples visualization capabilities to arbitrary simulation code. Updates to the data import path between OpenFOAM and VTK would give extreme scalability to OpenFOAM because data products would never need to be written to disk. It would also facilitate live data and computational steering connections that let the scientist see new results while they are being generated.<br />
<br />
'''Expected results:''' A Catalyst adaptor contributed to either the OpenFOAM or ParaView communities. Two feasible starting points to begin the work are the existing vtkOpenFOAM readers and the vtkFOAM FOAM-to-VTK exporter.<br />
<br />
'''Prerequisites:''' Experience developing in C++, experience with CFD.<br />
<br />
'''Mentor:''' Andy Bauer (andy dot bauer at kitware dot com) and Takuya Oshima (oshima at eng dot niigata-u dot ac dot jp)<br />
<br />
=== Eulerian Magnification for Revealing Subtle Changes ===<br />
<br />
'''Brief explanation:''' This project is based on the SIGGRAPH 2012 paper described at http://people.csail.mit.edu/mrub/evm/.<br />
The idea would be to develop a filter that would extract the subtle changes in a time-dependent data set and amplify them. As a follow-on, specialized views to show the changes, and a feedback path into the filter from some of the existing views that show time-varying data, may be created.<br />
<br />
'''Expected results:''' A filter in VTK to analyze the time-dependent data set.<br />
<br />
'''Prerequisites:''' Experience developing in C++.<br />
<br />
'''Mentor:''' David DeMarle (dave dot demarle at kitware dot com) and/or Andy Bauer (andy dot bauer at kitware dot com).<br />
<br />
=== Parallel dataset partitioning ===<br />
<br />
'''Brief explanation:''' Writing parallel applications using VTK requires the distribution of datasets between processes in such a way that map/reduce style operations can be performed with a minimal amount of communication between those processes. Source objects generally load pieces of data which can be processed independently by downstream filters, but sometimes the data loading produces overlapping pieces which must be repartitioned into spatially distinct regions. VTK contains a DistributedDataFilter (http://www.vtk.org/doc/release/5.2/html/a00336.html) to perform this operation, but it is not the most efficient implementation. Another, more efficient implementation based on the Zoltan library from the Trilinos package is available (https://github.com/biddisco/pv-zoltan) but uses an outdated version of the Zoltan library and has high memory consumption during the partitioning phase. The newer Zoltan2 library allows generic programming techniques to be used, and geometric data can be partitioned using zero-copy methods. The objective of this project is to produce a data partitioning filter that can operate on all VTK dataset types using Zoltan2 and produce spatially distinct data as well as ghost regions when requested.<br />
<br />
'''Expected results:''' A filter in VTK to partition unstructured and polygonal datasets in parallel.<br />
<br />
'''Prerequisites:''' Experience developing in C++, MPI.<br />
<br />
'''Mentor:''' John Biddiscombe (biddisco at cscs dot ch).<br />
<br />
=== Direct mapped Polyhedral input cells from OpenFOAM ===<br />
<br />
'''Brief explanation:''' OpenFOAM is an open source Computational Fluid Dynamics (CFD) package. OpenFOAM runs on unstructured meshes that are composed of polyhedral cells. Polyhedral support is now provided in VTK, although not all filters support it. The default option within the OpenFOAM reader is to decompose polyhedral cells into the other VTK primitive types. The OpenFOAM reader also lacks support for ghost cells when reading in parallel.<br />
<br />
'''Expected results:''' An updated OpenFOAM reader with support for ghost cells when reading in parallel, where the default output is polyhedral cells. Test cases should be created for many of the common filters, and polyhedral-related bugs should be fixed.<br />
<br />
'''Prerequisites:''' Experience developing in C++.<br />
<br />
'''Mentor:''' Paul Edwards (paul dot m dot edwards at intel dot com)<br />
<br />
== Half Baked Ideas ==<br />
<br />
(contact Dave DeMarle if you would like to work on one of these or an idea of your own and I will find you a good mentor to work out a solid GSoC proposal with)<br />
<br />
* make concave polydata "just work" (i.e. render correctly) with minimal impact on common case speed<br />
<br />
* an add-on framework to help VTK-using applications keep track of units<br />
<br />
* anything from vtk user voice http://vtk.uservoice.com/forums/31508-general, except documentation (unfortunately) since docs effort is explicitly ruled out of GSoC<br />
<br />
* anything from paraview user voice http://paraview.uservoice.com/forums/11350-general<br />
<br />
* lua wrapping, lua programmable filters<br />
<br />
* advanced rendering algorithms with the OpenGL2 back end: ambient occlusion, shadows, reflections, etc.<br />
<br />
* interface to high quality rendering engines</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/GSoC_2015&diff=57628VTK/GSoC 20152015-03-06T16:32:59Z<p>Berk: /* Project Ideas */</p>
<hr />
<div>Project ideas for the Google Summer of Code 2015<br />
<br />
If you are not a developer for the project concerned, please contact a developer about the idea before adding it here.<br />
<br />
== Project Ideas ==<br />
<br />
[http://www.vtk.org/ Project page], [http://www.vtk.org/VTK/help/mailing.html mailing lists], [http://open.cdash.org/index.php?project=VTK dashboard].<br />
<br />
=== Shared Memory Parallelism in VTK ===<br />
<br />
'''Brief explanation''': Development of multi-threaded algorithms in VTK. Multiple R&D efforts are leading to the creation of an infrastructure to support next-generation multi-threaded parallel algorithm development in VTK. These efforts are based on modern parallel libraries such as Intel TBB and Inria KAAPI. The main goal of this project will be the development of algorithms that leverage this infrastructure. The focus will be on upgrading existing core algorithms such as iso-surfacing, clipping, cutting, warping, etc. to be parallel. Ideally, this will include modernization of the old multi-threading code in the imaging pipeline. <br />
<br />
'''Expected results''': A number of algorithms that execute in parallel using shared memory. Scalability of new algorithms will have to be measured and documented. Development of regression tests and examples will also be expected.<br />
<br />
'''Prerequisites''': Experience in C++ and multi-threaded code development. Understanding of core visualization algorithms and data structures. Some experience with VTK is ideal but not necessary.<br />
<br />
'''Mentor''': Berk Geveci (berk dot geveci at kitware dot com) and David Gobbi (david dot gobbi at gmail dot com)<br />
<br />
=== Templated Input Generator for VTK ===<br />
<br />
'''Brief explanation''':<br />
Build up an infrastructure that makes it trivial to bring new scientific data formats into VTK and ParaView. The infrastructure will handle the complexities of temporal support, parallel processing, composite data structures, ghost levels and the like, and provide easy to use entry points that bring data from the file or other source and populate VTK arrays.<br />
<br />
'''Expected Results:'''<br />
A set of classes that can take an input specification and produce VTK data objects correctly and relatively efficiently.<br />
The input specification should be sufficiently abstracted from VTK's data types that users who understand the input format well won't have to understand VTK's complexities in order to use it.<br />
<br />
'''Prerequisites:'''<br />
C++ and probably a scripting language such as Python or Lua.<br />
<br />
'''References:'''<br />
http://www.paraview.org/Wiki/Writing_ParaView_Readers<br />
<br />
'''Mentor(s):''' Robert Maynard (robert dot maynard at kitware dot com) and/or David DeMarle (dave dot demarle at kitware dot com)<br />
<br />
=== Improvements to Earth and Space Science Visualization ===<br />
<br />
'''Brief explanation''': Add new capabilities to the existing VTK GeoVIS framework. In GSoC 2014 we built vtkMap, which supports <br />
2D visualization using OpenStreetMap and other data sources. This time we plan on adding 3D support to it. We also need to add support<br />
for overlay and clipping using shapefiles or other geographically referenced data, improve support for data types such as LAS, improve rendering of points, lines and polygons for geovisualizations, and add filters to perform clustering and classification. <br />
<br />
'''Expected results:''' New VTK filters and mappers, and a few demos to showcase the new features. <br />
<br />
'''Prerequisites:''' C/C++ experience, knowledge of earth science / GIS and preferably some experience with VTK<br />
<br />
'''Mentor(s):''' Aashish Chaudhary (aashish dot chaudhary at kitware dot com)<br />
<br />
=== CAD Model and Simulation Spline Visualization ===<br />
<br />
'''Brief explanation''': While spline curves and surfaces have been used for many years to describe CAD models, these flexible representations have often been discarded during the meshing and simulation process that is used to estimate the behavior of CAD-modeled parts in a particular environment or during a particular event. Recently, a group of techniques called IsoGeometric Analysis (IGA) has evolved in order to reduce the amount of work required to prepare simulations and to improve their accuracy. This project would develop support for arbitrary-dimensional, rational spline patches in VTK so that these simulations can be visualized properly. This may involve conversion to handle the variety of spline formats (T-Splines, NURBS, Catmull-Clark surfaces, etc.).<br />
<br />
'''Expected results:''' The result would be a new mesh representation class, a reader, and a suite of 2-3 filters for contouring, cutting, and rendering these meshes.<br />
<br />
'''Prerequisites:''' C/C++ experience, knowledge of rational splines and techniques for processing them (degree elevation, knot insertion/removal), and preferably some experience with VTK<br />
<br />
'''Mentor(s):''' David Thompson (david dot thompson at kitware dot com) and/or Bob O'Bara (bob dot obara at kitware dot com)<br />
<br />
</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/Roadmap&diff=56365VTK/Roadmap2014-05-27T14:21:54Z<p>Berk: /* VTK 6.2 (git master) */</p>
<hr />
<div>== Summary of Changes ==<br />
<br />
==== VTK 6.2 (git master) ====<br />
<br />
* [[VTK/Parallel Pipeline | Major refactoring of the parallel pipeline ]]<br />
* Under Cocoa, removed "-fobjc-gc" as a default compiler flag. VTK still supports Cocoa garbage collection, but you must specify it yourself now.<br />
* Added a reader/writer for NIFTI image files.<br />
<br />
==== VTK 6.1 ====<br />
<br />
* Move to use CMake's external data support over VTKData<br />
* [[VTK/OpenGL_Errors | OpenGL error detection and reporting macros and error cleanup ]]<br />
* [[VTK/OpenGL_Driver_Information | API for dealing with OpenGL driver bugs ]]<br />
* [[VTK/OSMesa_Support | Enable rendering with OSMesa where possible ]]<br />
* [[ParaView/Line_Integral_Convolution | Surface LIC parallelization and features for interactive tuning ]]<br />
* [[VTK/VTK_SMP | SMP framework introduced to make shared memory parallel development]]<br />
* Fixed compiler/linker errors when building against OS X 10.9 SDK. Fixed other errors building against llvm's [http://libcxx.llvm.org libc++].<br />
* Support for unicode text when a suitable font file is used in vtkTextProperty.<br />
* [[VTK/Wrapping C++11 Code | Wrapper support for header files with C++11 syntax]].<br />
* [[VTK/Better_Java_Support | Better Java support and install rules]]<br />
* Depth peeling support for ATI devices<br />
* CTest now generates a stack trace on POSIX systems in response to catastrophic failures such as aborts or segfaults.<br />
* Qt5 support<br />
* [[VTK/API_Changes_6_0_0_to_6_1_0 | API Diff Report]]<br />
<br />
==== VTK 6.0 ====<br />
<br />
* [[VTK/VTK_6_Migration_Guide | VTK 6 API Migration Guide]]<br />
* [[VTK/Build_System_Migration | VTK 6 (build system) Migration Guide]]<br />
* [[VTK/Module_Development | VTK 6 Module Development]]<br />
* [[VTK/Remove_VTK_4_Compatibility | Remove VTK 4 compatibility layer from pipeline]]<br />
* [[VTK/Modularization_Proposal | Modularization]]<br />
* [[VTK/Remove_vtkTemporalDataSet | Temporal support changes]]<br />
* [[VTK/Composite_data_changes | Composite data structure changes ]]<br />
* [[VTK/API_Changes_5_10_1_to_6_0_0 | API Diff Report]]<br />
<br />
==== VTK 5.10 ====<br />
<br />
* [[VTK/improved unicode support | Change unicode readers/writers to register as codecs (finished Oct 29 2010)]]<br />
* [[VTK/Image Rendering Classes | New image rendering classes (start Dec 15 2010, finish Mar 15 2011)]]<br />
* [[VTK/Image Interpolators | Image interpolators (start Jun 20 2011, finish Aug 31 2011)]]<br />
* [[VTK/GSoC | Projects from Google Summer of Code 2011]]<br />
* [[VTK/Release5100 New Classes | List of new classes in 5.10]]<br />
* [[VTK/API_Changes_5_8_0_to_6_1_0 | API Diff Report]]<br />
<br />
==== VTK 5.8 ====<br />
<br />
* [[VTK/Polyhedron_Support | Polyhedron cells and MVC Interpolation]]<br />
* [http://visimp.cs.unc.edu/2010/10/26/reeb-graphs/ Reeb Graphs]<br />
* [[VTK/Closed Surface Clipping | Clipping of closed surfaces (start Mar 26, 2010, finish Apr 22, 2010)]]<br />
* [[VTK/Wrapper Update 2010 | New wrappers (start Apr 28, 2010)]]<br />
* [[VTK/Image Stencil Improvements | Improved image stencil support (start Nov 3, 2010)]]<br />
* [[VTK/MNI File Formats | MNI file formats]]<br />
* [[VTK/Release580 New Classes | List of New Classes]]<br />
<br />
==== VTK 5.6 ====<br />
<br />
* [[VTK/MultiPass_Rendering | VTK Multi-Pass Rendering]]<br />
* [[VTK/Multicore and Streaming | Multicore and Streaming]]<br />
* [[VTK/statistics | Statistics]]<br />
* [[VTK/Array Refactoring | Array Refactoring]]<br />
* [[VTK/3DConnexion Devices Support | 3DConnexion Devices Support]]<br />
* [[VTK/Charts | New Charts API]]<br />
* [[VTK/New CellPicker | New Cell Picker and Volume Picking (start Nov 2010, finish Feb 2010)]]<br />
<br />
==== VTK 5.4 ====<br />
<br />
* [[VTK 5.4 Release Planning]]<br />
* [[VTK/Cray XT3 Compilation| Cray XT3 Compilation]]<br />
* [[VTK/Geovis vision toolkit | Geospatial and vision visualization support ]]<br />
<br />
==== VTK 5.2 ====<br />
<br />
* [[VTK/Java Wrapping | VTK Java Wrapping]]<br />
* [[VTK/Composite Data Redesign | Composite Data Redesign]]<br />
* [[VTK Shaders | VTK Shaders]]<br />
* [[VTKShaders | Shaders in VTK]]<br />
* [[VTK/VTKMatlab | VTK with Matlab]]<br />
* [[VTK/Time_Support | VTK Time support]]<br />
* [[VTK/Graph Layout | VTK Graph Layout]]<br />
* [[VTK/Depth_Peeling | VTK Depth Peeling]]<br />
* [[VTK/Using_JRuby | Using VTK with JRuby]]<br />
* [[VTK/Painters | Painters]]<br />
<br />
==== VTK 5.0 ====<br />
<br />
* [[VTK/Tutorials/New_Pipeline | New Pipeline]]<br />
* [[VTKWidgets | VTK Widget Redesign]]</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/Parallel_Pipeline&diff=56238VTK/Parallel Pipeline2014-04-30T21:33:18Z<p>Berk: /* Structured Data Readers and Filters */</p>
<hr />
<div>= Introduction =<br />
<br />
VTK uses a form of parallelism called data parallelism. In this form, the data is divided amongst the processes, and each process performs the same operation on its piece of data. Advantages of data parallelism include scalability, simplified load balancing, and reduced communications overhead.<br />
<br />
Figure 1 shows how VTK filters can be set up when running in parallel. In this example, each reader reads in a piece of the data. Usually, this can be done with very little communication between processes. Then, each reader feeds into a pipeline identical to those on the other processes. Because many of the filters in VTK use algorithms that independently process each point or cell, these filters can run in parallel with little or no communication between them. Let us see a simple example of how that works.<br />
<br />
[[File:pvtk-figure1.png | frame | center| Figure 1: Example parallel pipeline]]<br />
<br />
Consider the simple, 2D grid that is partitioned into three pieces shown in Figure 2. Assume that the three pieces reside on separate processes of a distributed memory computer. Because no process has global information, communications costs will be minimized if each process performs its operation only on its local information. Of course, we can only do this if the end result of the parallel operation is equivalent to the same operation on the global data.<br />
<br />
[[File:pvtk-figure2.png | frame | center| Figure 2: A simple partitioned data set]]<br />
<br />
Figure 3 demonstrates a clip filter (vtkClipDataSet for example) processing data in parallel. Each process is given the same parameters for the cut plane. The cut plane is then applied independently on each piece of the data. When the pieces are brought back together, we see that the result is the entire data set properly cut by the plane.<br />
<br />
[[File:pvtk-figure3.png | frame | center| Figure 3: Performing clip in parallel]]<br />
<br />
Not all visualization algorithms can operate on pieces without information on neighboring cells. Consider the operation of extracting external faces as shown in Figure 4. The external face operation identifies all faces that have no local neighbors. When we bring the pieces together, we see that some internal faces have been incorrectly identified as being external. These false positives on the faces occur whenever two neighboring cells are placed in separate processes.<br />
<br />
[[File:pvtk-figure4.png | frame | center| Figure 4: Extracting external faces in parallel]]<br />
<br />
The external face operation in our example fails because some important global information is missing from the local processing. The processes need some data that is not local to them, but they do not need all the data. They only need to know about cells in other partitions that are neighbors to the local cells.<br />
<br />
We can solve this local/global problem with the introduction of ghost cells. Ghost cells are cells that belong to one partition of the data and are duplicated on other partitions. The introduction of ghost cells is performed through neighborhood information and organized in levels. For a given partition, any cell neighboring a cell in the partition but not belonging to the partition itself is a ghost cell at level 1. Any cell neighboring a ghost cell at level 1 that does not belong to level 1 or the original partition is at level 2. Further levels are defined recursively. We define ghost cells in this way because it provides a simple distance metric to the cells of a partition and allows filters to easily specify the minimal or near-minimal set of ghost cells required for proper operation.<br />
<br />
Let us apply the use of ghost cells to our example of extracting external faces. Figure 5 shows the same partitions with a layer of ghost cells added. When the external face algorithm is run again, some faces are still inappropriately classified as external. However, all of these faces are attached to ghost cells. These ghost faces are easily culled, and the end result is the appropriate external faces.<br />
<br />
[[File:pvtk-figure5.png | frame | center| Figure 5: Extracting external faces in parallel with ghost cells]]<br />
<br />
= VTK Pipeline Support for Data Parallelism =<br />
<br />
Demand-driven data parallelism is natively supported by VTK's execution mechanism. This is achieved by utilizing various pipeline passes and a specific set of meta-data and request objects. An introduction to VTK's pipeline can be found in the VTK User's Guide and [http://www.vtk.org/Wiki/VTK/Tutorials/New_Pipeline this page]. Unless you are familiar with the VTK pipeline, we recommend taking a look at these documents before continuing with this one. Also note that certain keys pertaining to parallelism have changed since VTK 6.1 and are described in more detail here.<br />
<br />
The three main pipeline passes pertaining to data parallelism are RequestInformation, RequestUpdateExtent and RequestData:<br />
<br />
=== RequestInformation ===<br />
<br />
This is where data sources (e.g. readers) provide meta-data about their capabilities and what data they can produce. Filters downstream can also modify this meta-data when they add/reduce capability or change what data can be made available downstream. This pass can usually be ignored with respect to data parallelism. The only exception is that readers that can produce data in a partitioned way need to notify the pipeline by providing the CAN_HANDLE_PIECE_REQUEST() key as follows:<br />
<br />
<source lang="cpp"><br />
int vtkSphereSource::RequestInformation(<br />
vtkInformation *vtkNotUsed(request),<br />
vtkInformationVector **vtkNotUsed(inputVector),<br />
vtkInformationVector *outputVector)<br />
{<br />
// get the info object<br />
vtkInformation *outInfo = outputVector->GetInformationObject(0);<br />
<br />
outInfo->Set(CAN_HANDLE_PIECE_REQUEST(), 1);<br />
<br />
return 1;<br />
}<br />
</source><br />
<br />
A serial reader should not set this key. This case is described in more detail below.<br />
<br />
=== RequestUpdateExtent ===<br />
<br />
This pass is where consumers can request a specific subset of the available data from upstream. Upstream filters can further modify this request to fit their needs. There are 3 specific keys that are used to implement data parallelism:<br />
<br />
* '''UPDATE_NUMBER_OF_PIECES''': This key together with UPDATE_PIECE_NUMBER controls how data should be partitioned by the data source. It is usually set equal to the number of MPI ranks in the current MPI group.<br />
<br />
* '''UPDATE_PIECE_NUMBER''': This key determines which partition should be loaded on the current process. It is usually set to the MPI rank of the current process.<br />
<br />
* '''UPDATE_NUMBER_OF_GHOST_LEVELS''': This key determines the number of ghost levels requested by a particular filter. Filters should usually increment the number of ghost levels requested by downstream by the number of ghost levels they need.<br />
<br />
These keys are usually set by a data consumer and possibly modified by filters upstream. Usually UPDATE_NUMBER_OF_GHOST_LEVELS is the only one modified by filters. It is also possible to set them manually as follows:<br />
<br />
<source lang="cpp"><br />
<br />
afilter->UpdateInformation();<br />
vtkInformation* outInfo = afilter->GetOutputInformation(0);<br />
outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_PIECE_NUMBER(), controller->GetLocalProcessId());<br />
outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_PIECES(), controller->GetNumberOfProcesses());<br />
outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_GHOST_LEVELS(), 0);<br />
<br />
// or in short<br />
<br />
afilter->UpdateInformation();<br />
vtkInformation* outInfo = afilter->GetOutputInformation(0);<br />
vtkStreamingDemandDrivenPipeline::SetUpdateExtent(outInfo, controller->GetLocalProcessId(), controller->GetNumberOfProcesses(), 0);<br />
<br />
</source><br />
<br />
More commonly, partitioning information is controlled by data members of a data sink such as a mapper or a writer. For example, vtkPolyDataMapper can be used to control partitioning as follows:<br />
<br />
<source lang="cpp"><br />
<br />
mapper->SetPiece(controller->GetLocalProcessId());<br />
mapper->SetNumberOfPieces(controller->GetNumberOfProcesses());<br />
<br />
renWin->Render();<br />
<br />
</source><br />
<br />
Note that some readers do not support reading data in parallel. Such readers do not set the CAN_HANDLE_PIECE_REQUEST() meta-data during RequestInformation(). When this key is not set, the pipeline asks the reader to provide the whole data for UPDATE_PIECE_NUMBER() == 0 and empty data for all other pieces. If parallelism is to be achieved with a serial reader, the developer needs to use a filter such as D3 (vtkDistributedDataFilter) or vtkTransmitXXXPiece() to partition the data after a serial read.<br />
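This fallback can be illustrated with a small stand-alone model (plain C++, not the actual VTK API — the function and its name are purely illustrative) of how much data the pipeline asks a reader to produce per piece:<br />
<br />
<source lang="cpp">
#include <cassert>

// Toy model: cells the pipeline requests from a reader for a given
// piece, depending on whether the reader declared
// CAN_HANDLE_PIECE_REQUEST() during RequestInformation().
int CellsRequestedFromReader(bool canHandlePieceRequest,
                             int piece, int numPieces, int totalCells)
{
  if (canHandlePieceRequest)
  {
    // Parallel reader: roughly an even share per piece (the reader
    // itself chooses the actual partitioning).
    return totalCells / numPieces;
  }
  // Serial reader: whole data on piece 0, empty data elsewhere.
  return piece == 0 ? totalCells : 0;
}

int main()
{
  // A serial reader on 4 ranks: rank 0 reads everything.
  assert(CellsRequestedFromReader(false, 0, 4, 1000) == 1000);
  assert(CellsRequestedFromReader(false, 3, 4, 1000) == 0);
  // A parallel reader partitions the load across ranks.
  assert(CellsRequestedFromReader(true, 3, 4, 1000) == 250);
  return 0;
}
</source>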
<br />
Also note that these keys are automatically copied upstream by the pipeline (except to a reader that does not support parallel reading), so filters do not need to implement RequestUpdateExtent() unless they want to modify the values requested by downstream.<br />
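The way a filter modifies the ghost-level request while passing the piece information through can be sketched with a stand-alone model (plain C++, not the actual vtkInformation API; the struct and function names are illustrative only):<br />
<br />
<source lang="cpp">
#include <cassert>

// Toy model of the three update keys carried by a request.
struct UpdateRequest
{
  int pieceNumber;    // UPDATE_PIECE_NUMBER
  int numberOfPieces; // UPDATE_NUMBER_OF_PIECES
  int ghostLevels;    // UPDATE_NUMBER_OF_GHOST_LEVELS
};

// A filter's RequestUpdateExtent typically passes the piece keys
// through unchanged and adds the ghost levels it needs to the
// downstream request.
UpdateRequest PassRequestUpstream(UpdateRequest downstream,
                                  int ghostLevelsNeeded)
{
  UpdateRequest upstream = downstream;
  upstream.ghostLevels += ghostLevelsNeeded;
  return upstream;
}

int main()
{
  // A consumer on MPI rank 2 of 4 issues the initial request.
  UpdateRequest consumer{2, 4, 0};
  // An external-faces-style filter needs 1 ghost level; a filter
  // upstream of it needs 1 more. Requests accumulate going upstream.
  UpdateRequest afterFaces = PassRequestUpstream(consumer, 1);
  UpdateRequest afterSmooth = PassRequestUpstream(afterFaces, 1);
  assert(afterSmooth.pieceNumber == 2 && afterSmooth.numberOfPieces == 4);
  assert(afterSmooth.ghostLevels == 2);
  return 0;
}
</source>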
<br />
=== RequestData ===<br />
<br />
This is the pass where algorithms produce data based on the three update keys described above. A parallel reader is free to partition the data in any way it chooses as long as each partition is unique, with the exception of ghost levels. Embarrassingly parallel filters do not need to worry about these keys. Also note that filters that request ghost levels are not expected to remove ghost levels from their output.<br />
<br />
== Structured Data Readers and Filters ==<br />
<br />
The VTK pipeline has the ability to request a subset of a structured dataset from a data source to further reduce I/O or data generation overhead. All structured data (vtkImageData, vtkUniformGrid, vtkRectilinearGrid and vtkStructuredGrid) readers are required to produce a WHOLE_EXTENT value during RequestInformation(); this provides information about the maximum logical extents a reader can read or produce. Data sinks or filters can ask for a subset of this whole extent by setting the UPDATE_EXTENT key during RequestUpdateExtent(). Data sources can then use this key to minimize the data they have to produce. Readers that are capable of honoring the UPDATE_EXTENT() request should set the CAN_PRODUCE_SUB_EXTENT() key during RequestInformation().<br />
<br />
It is very important to note that the interaction between update pieces and extents changed in VTK 6.2. In 6.2, update piece and update extent requests are treated independently except by data sources. This means that a filter should make changes to UPDATE_EXTENT independent of which piece they are processing and expect that they will receive a subset of the update extent if the number of pieces is larger than 1. This is because readers are expected to partition the UPDATE_EXTENT further based on the piece request they received. The pipeline handles this automatically for readers that set CAN_PRODUCE_SUB_EXTENT() and passes them the appropriate UPDATE_EXTENT() using the following code:<br />
<br />
<source lang="cpp"><br />
if (outInfo->Has(vtkAlgorithm::CAN_PRODUCE_SUB_EXTENT()))<br />
{<br />
int piece = outInfo->Get(UPDATE_PIECE_NUMBER());<br />
int ghost = outInfo->Get(UPDATE_NUMBER_OF_GHOST_LEVELS());<br />
<br />
vtkExtentTranslator* et = vtkExtentTranslator::New();<br />
int execExt[6];<br />
et->PieceToExtentThreadSafe(piece, numPieces, ghost,<br />
uExt, execExt,<br />
vtkExtentTranslator::BLOCK_MODE, 0);<br />
et->Delete();<br />
outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_EXTENT(),<br />
execExt, 6);<br />
}<br />
</source><br />
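The effect of the translator above can be illustrated with a simplified stand-alone version that splits the whole extent into slabs along the K axis (a sketch only — the real vtkExtentTranslator also handles ghost levels and chooses split axes more carefully; the function name here is illustrative):<br />
<br />
<source lang="cpp">
#include <cassert>

// Simplified extent split: divide the K (slowest) axis of a structured
// whole extent [imin,imax, jmin,jmax, kmin,kmax] into numPieces slabs
// and return the sub-extent for one piece. Adjacent slabs share their
// boundary points, as structured extents do in VTK.
void PieceToExtent(int piece, int numPieces, const int whole[6], int sub[6])
{
  for (int i = 0; i < 6; ++i)
  {
    sub[i] = whole[i];
  }
  int kCells = whole[5] - whole[4]; // cell count along K
  int base = kCells / numPieces;
  int rem = kCells % numPieces;
  // The first 'rem' pieces get one extra cell layer each.
  int start = whole[4] + piece * base + (piece < rem ? piece : rem);
  int size = base + (piece < rem ? 1 : 0);
  sub[4] = start;
  sub[5] = start + size;
}

int main()
{
  int whole[6] = {0, 9, 0, 9, 0, 9}; // 9 cells along each axis
  int sub[6];
  PieceToExtent(0, 3, whole, sub);
  assert(sub[4] == 0 && sub[5] == 3); // first slab
  PieceToExtent(2, 3, whole, sub);
  assert(sub[4] == 6 && sub[5] == 9); // last slab
  return 0;
}
</source>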
<br />
Such readers can safely ignore all piece and ghost level requests. Note that a structured reader may want to handle all data partitioning internally to optimize I/O (for example when using MPI I/O). Such readers should not set CAN_PRODUCE_SUB_EXTENT() but set CAN_HANDLE_PIECE_REQUEST() and handle both UPDATE_EXTENT() and pieces/ghosts internally.<br />
<br />
Note: Handling of UPDATE_EXTENT() is always optional for all readers. Readers that for some reason cannot read a subset of structured data can choose to produce data larger than the UPDATE_EXTENT(). On the other hand, readers that set CAN_HANDLE_PIECE_REQUEST() cannot ignore piece requests and have to produce unique datasets based on the piece request.<br />
<br />
Structured data filters need to handle the case where the extent of the data they receive in RequestData() is different from the update extent they requested '''if''' they are expected to work properly in data parallel mode. Note that as of VTK 6.2, most imaging filters cannot handle this case and cannot be used in data parallel pipelines.</div>
<hr />
<div>= Introduction =<br />
<br />
VTK uses a form of parallelism called data parallelism. In this form, the data is divided amongst the processes, and each process performs the same operation on its piece of data. Advantages of data parallelism include scalability, simplified load balancing, and reduced communications overhead.<br />
<br />
Figure 1 shows how VTK filters can be setup when running in parallel. In this example, each reader reads in a piece of the data. Usually, this can be done with very little communication between processes. Then, each reader feeds into a pipeline identical with that on the other processes. Because many of the filters in VTK use algorithms that independently process each point or cell, these filters can run in parallel with little or no communication between them. Let us see a simple example of how that works.<br />
<br />
[[File:pvtk-figure1.png | frame | center| Figure 1: Example parallel pipeline]]<br />
<br />
Consider the simple, 2D grid that is partitioned into three pieces shown in Figure 2. Assume that the three pieces reside on separate processes of a distributed memory computer. Because no process has global information, communications costs will be minimized if each process performs its operation only on its local information. Of course, we can only do this if the end result of the parallel operation is equivalent to the same operation on the global data.<br />
<br />
[[File:pvtk-figure2.png | frame | center| Figure 2: A simple partitioned data set]]<br />
<br />
Figure 3 demonstrates a clip filter (vtkClipDataSet for example) processing data in parallel. Each process is given the same parameters for the cut plane. The cut plane is then applied independently on each piece of the data. When the pieces are brought back together, we see that the result is the entire data set properly cut by the plane.<br />
<br />
[[File:pvtk-figure3.png | frame | center| Figure 3: Performing clip in parallel]]<br />
<br />
Not all visualization algorithms can operate on pieces without information on neighboring cells. Consider the operation of extracting external faces as shown in Figure 4. The external face operation identifies all faces that have no local neighbors. When we bring the pieces together, we see that some internal faces have been incorrectly identified as being external. These false positives on the faces occur whenever two neighboring cells are placed in separate processes.<br />
<br />
[[File:pvtk-figure4.png | frame | center| Figure 4: Extracting external faces in parallel]]<br />
<br />
The external face operation in our example fails because some important global information is missing from the local processing. The processes need some data that is not local to them, but they do not need all the data. They only need to know about cells in other partitions that are neighbors to the local cells.<br />
<br />
We can solve this local/global problem with the introduction of ghost cells. Ghost cells are cells that belong to one partition of the data and are duplicated on other partitions. The introduction of ghost cells is performed through neighborhood information and organized in levels. For a given partition, any cell neighboring a cell in the partition but not belonging to the partition itself is a ghost cell at level 1. Any cell neighboring a level-1 ghost cell that does not belong to level 1 or the original partition is at level 2. Further levels are defined recursively. We define ghost cells in this way because it provides a simple distance metric to the cells of a partition and allows filters to easily specify the minimal or near-minimal set of ghost cells required for proper operation.<br />
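The leveling scheme above amounts to a breadth-first expansion over cell adjacency. The following self-contained sketch in plain C++ (the function name and data layout are hypothetical illustrations, not VTK code) shows one way to compute it:<br />

```cpp
#include <cassert>
#include <queue>
#include <set>
#include <vector>

// Sketch of the ghost-level scheme described above. Cells owned by the
// partition get level 0; any unowned neighbor of a level-N cell gets
// level N+1, up to maxLevel. Cells farther away keep level -1: this
// partition never needs them.
std::vector<int> ComputeGhostLevels(const std::vector<std::vector<int> >& adjacency,
                                    const std::set<int>& owned, int maxLevel)
{
  std::vector<int> level(adjacency.size(), -1);
  std::queue<int> frontier;
  for (int c : owned)
  {
    level[c] = 0;
    frontier.push(c);
  }
  while (!frontier.empty())
  {
    int c = frontier.front();
    frontier.pop();
    if (level[c] == maxLevel)
    {
      continue; // do not expand past the requested number of ghost levels
    }
    for (int n : adjacency[c])
    {
      if (level[n] < 0)
      {
        level[n] = level[c] + 1;
        frontier.push(n);
      }
    }
  }
  return level;
}
```

For a chain of six cells with cells 0-2 owned and maxLevel set to 2, cell 3 ends up at level 1, cell 4 at level 2, and cell 5 is never touched, matching the "simple distance metric" described above.<br />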
<br />
Let us apply the use of ghost cells to our example of extracting external faces. Figure 5 shows the same partitions with a layer of ghost cells added. When the external face algorithm is run again, some faces are still inappropriately classified as external. However, all of these faces are attached to ghost cells. These ghost faces are easily culled, and the end result is the appropriate external faces.<br />
<br />
[[File:pvtk-figure5.png | frame | center| Figure 5: Extracting external faces in parallel with ghost cells]]<br />
<br />
= VTK Pipeline Support for Data Parallelism =<br />
<br />
Demand-driven data parallelism is natively supported by VTK's execution mechanism. This is achieved by utilizing various pipeline passes and a specific set of meta-data and request objects. An introduction to VTK's pipeline can be found in the VTK User's Guide and [http://www.vtk.org/Wiki/VTK/Tutorials/New_Pipeline this page]. Unless you are familiar with the VTK pipeline, we recommend taking a look at these documents before continuing with this one. Also note that certain keys pertaining to parallelism have changed since VTK 6.1 and are described in more detail here.<br />
<br />
The three main pipeline passes pertaining to data parallelism are RequestInformation, RequestUpdateExtent and RequestData:<br />
<br />
=== RequestInformation ===<br />
<br />
This is where data sources (e.g. readers) provide meta-data about their capabilities and what data they can produce. Filters downstream can also modify this meta-data when they add or reduce capability or change what data can be made available downstream. This pass can usually be ignored with respect to data parallelism. The only exception is that readers that can produce data in a partitioned way need to notify the pipeline by providing the CAN_HANDLE_PIECE_REQUEST() key as follows:<br />
<br />
<source lang="cpp"><br />
int vtkSphereSource::RequestInformation(<br />
  vtkInformation *vtkNotUsed(request),<br />
  vtkInformationVector **vtkNotUsed(inputVector),<br />
  vtkInformationVector *outputVector)<br />
{<br />
  // get the info object<br />
  vtkInformation *outInfo = outputVector->GetInformationObject(0);<br />
<br />
  outInfo->Set(CAN_HANDLE_PIECE_REQUEST(), 1);<br />
<br />
  return 1;<br />
}<br />
</source><br />
<br />
A serial reader should not set this key. This case is described in more detail below.<br />
<br />
=== RequestUpdateExtent ===<br />
<br />
This pass is where consumers can request a specific subset of the available data from upstream. Upstream filters can further modify this request to fit their needs. There are three specific keys used to implement data parallelism:<br />
<br />
* '''UPDATE_NUMBER_OF_PIECES''': This key together with UPDATE_PIECE_NUMBER controls how data should be partitioned by the data source. It is usually set equal to the number of MPI ranks in the current MPI group.<br />
<br />
* '''UPDATE_PIECE_NUMBER''': This key determines which partition should be loaded on the current process. It is usually set to the MPI rank of the current process.<br />
<br />
* '''UPDATE_NUMBER_OF_GHOST_LEVELS''': This key determines the number of ghost levels requested by a particular filter. Filters should usually increment the number of ghost levels requested by downstream by the number of ghost levels they need.<br />
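For unstructured data, a reader is free to interpret the piece keys however it likes, as long as the pieces are disjoint and together cover the whole data set. As a minimal model (plain C++; PieceToCellRange is a hypothetical helper, not a VTK API), a piece can be mapped to a contiguous range of cell IDs:<br />

```cpp
#include <cassert>
#include <utility>

// Map piece 'piece' of 'numPieces' to the half-open cell-ID range
// [begin, end). The first numCells % numPieces pieces get one extra cell,
// so piece sizes differ by at most one and the pieces are disjoint and
// together cover [0, numCells) exactly.
std::pair<long, long> PieceToCellRange(long numCells, int piece, int numPieces)
{
  long base = numCells / numPieces;
  long extra = numCells % numPieces;
  long begin = piece * base + (piece < extra ? piece : extra);
  long end = begin + base + (piece < extra ? 1 : 0);
  return std::make_pair(begin, end);
}
```

With 10 cells and 3 pieces this yields the ranges [0,4), [4,7) and [7,10): a valid partitioning in the sense required by UPDATE_PIECE_NUMBER and UPDATE_NUMBER_OF_PIECES.<br />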
<br />
These keys are usually set by a data consumer and possibly modified by filters upstream. Usually UPDATE_NUMBER_OF_GHOST_LEVELS is the only one modified by filters. It is also possible to set them manually as follows:<br />
<br />
<source lang="cpp"><br />
<br />
afilter->UpdateInformation();<br />
vtkInformation* outInfo = afilter->GetOutputInformation(0);<br />
outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_PIECE_NUMBER(), controller->GetLocalProcessId());<br />
outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_PIECES(), controller->GetNumberOfProcesses());<br />
outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_GHOST_LEVELS(), 0);<br />
<br />
// or in short<br />
<br />
afilter->UpdateInformation();<br />
vtkInformation* outInfo = afilter->GetOutputInformation(0);<br />
vtkStreamingDemandDrivenPipeline::SetUpdateExtent(outInfo, controller->GetLocalProcessId(), controller->GetNumberOfProcesses(), 0);<br />
<br />
</source><br />
<br />
More commonly, partitioning information is controlled by data members of a data sink such as a mapper or a writer. For example, vtkPolyDataMapper can be used to control partitioning as follows:<br />
<br />
<source lang="cpp"><br />
<br />
mapper->SetPiece(controller->GetLocalProcessId());<br />
mapper->SetNumberOfPieces(controller->GetNumberOfProcesses());<br />
<br />
renWin->Render();<br />
<br />
</source><br />
<br />
Note that some readers do not support reading data in parallel. Such readers do not set the CAN_HANDLE_PIECE_REQUEST() meta-data during RequestInformation(). When this key is not set, the pipeline will ask the reader to provide the whole data for UPDATE_PIECE_NUMBER() == 0 and empty data for all other pieces. If parallelism is to be achieved with a serial reader, the developer needs to use a filter such as D3 or vtkTransmitXXXPiece() to partition data after a serial read.<br />
<br />
Also note that these keys are automatically copied upstream by the pipeline (except to a reader that does not support parallel reading), so filters do not need to implement RequestUpdateExtent() unless they want to modify the values requested by downstream.<br />
<br />
=== RequestData ===<br />
<br />
This is the pass where algorithms produce data based on the three update keys described above. A parallel reader is free to partition the data in any way it chooses as long as each partition is unique, with the exception of ghost levels. Embarrassingly parallel filters do not need to worry about these keys. Furthermore, filters that request ghost levels are not expected to remove ghost levels from their output.<br />
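The notion of an embarrassingly parallel filter can be modeled in a few lines of plain C++ (hypothetical names, not VTK code): the same pointwise operation is applied to each piece independently, and concatenating the per-piece outputs gives the same result as running the operation on the whole data set, which is why no communication is needed:<br />

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Model of an embarrassingly parallel filter: a per-value operation that
// never looks at neighboring values. Because each output value depends
// only on its own input value, applying the filter piece by piece and
// concatenating the results matches applying it to the whole data set.
std::vector<double> FilterPiece(const std::vector<double>& piece)
{
  std::vector<double> out;
  out.reserve(piece.size());
  for (double v : piece)
  {
    out.push_back(std::sqrt(v)); // any pointwise operation behaves the same way
  }
  return out;
}
```

A clip or threshold evaluated cell by cell has this character; the external-face extraction discussed earlier does not, which is exactly why it needs ghost cells.<br />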
<br />
== Structured Data Readers and Filters ==<br />
<br />
The VTK pipeline has the ability to request a subset of a structured dataset from a data source to further reduce I/O or data generation overhead. All structured data (vtkImageData, vtkUniformGrid, vtkRectilinearGrid and vtkStructuredGrid) readers are required to produce a WHOLE_EXTENT value during RequestInformation(); this provides information about the maximum logical extent a reader can produce. Data sinks or filters can ask for a subset of this whole extent by setting the UPDATE_EXTENT key during RequestUpdateExtent(). Data sources can use this key to minimize the data they have to produce. Readers that are capable of honoring the UPDATE_EXTENT request should set the CAN_PRODUCE_SUB_EXTENT() key during RequestInformation().<br />
<br />
It is very important to note that the interaction between update pieces and extents changed in VTK 6.2. In 6.2, update piece and update extent requests are treated independently except by data sources. This means that a filter should make changes to UPDATE_EXTENT independently of which piece it is processing, and should expect to receive a subset of the update extent it requested if the number of pieces is larger than 1. This is because readers are expected to partition the UPDATE_EXTENT further based on the piece request they received. The pipeline handles this automatically for readers that set CAN_PRODUCE_SUB_EXTENT() and passes them the appropriate UPDATE_EXTENT() using the following code:<br />
<br />
<source lang="cpp"><br />
if (outInfo->Has(vtkAlgorithm::CAN_PRODUCE_SUB_EXTENT()))<br />
{<br />
  int piece = outInfo->Get(UPDATE_PIECE_NUMBER());<br />
  int numPieces = outInfo->Get(UPDATE_NUMBER_OF_PIECES());<br />
  int ghost = outInfo->Get(UPDATE_NUMBER_OF_GHOST_LEVELS());<br />
  // the extent requested by downstream<br />
  int* uExt = outInfo->Get(UPDATE_EXTENT());<br />
<br />
  // translate the piece request into a sub-extent of the requested extent<br />
  vtkExtentTranslator* et = vtkExtentTranslator::New();<br />
  int execExt[6];<br />
  et->PieceToExtentThreadSafe(piece, numPieces, ghost,<br />
                              uExt, execExt,<br />
                              vtkExtentTranslator::BLOCK_MODE, 0);<br />
  et->Delete();<br />
  outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_EXTENT(),<br />
               execExt, 6);<br />
}<br />
</source><br />
<br />
Such readers can safely ignore all piece and ghost level requests. Note that a structured reader may want to handle all data partitioning internally to optimize I/O (for example when using MPI I/O). Such readers should not set CAN_PRODUCE_SUB_EXTENT() but set CAN_HANDLE_PIECE_REQUEST() and handle both UPDATE_EXTENT() and pieces/ghosts internally.<br />
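To illustrate how a piece request can map onto a structured sub-extent, here is a simplified, self-contained sketch in plain C++ (this is NOT vtkExtentTranslator's actual algorithm; the function name is hypothetical) that slabs the whole extent along the axis with the most cells:<br />

```cpp
#include <array>
#include <cassert>

// Simplified stand-in for an extent translator: split an inclusive point
// extent [x0,x1, y0,y1, z0,z1] into slabs along the axis with the most
// cells, one slab per piece. Adjacent slabs share one layer of points,
// which is how neighboring structured pieces abut.
std::array<int, 6> SplitExtent(const std::array<int, 6>& whole, int piece, int numPieces)
{
  // pick the axis with the most cells
  int axis = 0;
  int best = -1;
  for (int a = 0; a < 3; ++a)
  {
    int cells = whole[2 * a + 1] - whole[2 * a];
    if (cells > best)
    {
      best = cells;
      axis = a;
    }
  }
  // give this piece a contiguous run of cells along the chosen axis
  long cells = whole[2 * axis + 1] - whole[2 * axis];
  long begin = cells * piece / numPieces;
  long end = cells * (piece + 1) / numPieces;
  std::array<int, 6> sub = whole;
  sub[2 * axis] = whole[2 * axis] + static_cast<int>(begin);
  sub[2 * axis + 1] = whole[2 * axis] + static_cast<int>(end);
  return sub;
}
```

For a whole extent of {0,10, 0,4, 0,2} and two pieces, this produces the sub-extents {0,5, 0,4, 0,2} and {5,10, 0,4, 0,2}; note that both pieces contain the points at x=5, the shared boundary layer.<br />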
<br />
Structured data filters need to handle the case where the data extent they receive in RequestData() differs from the update extent they requested if they are to work properly in data-parallel mode. Note that as of VTK 6.2, most imaging filters cannot handle this case and cannot be used in data-parallel pipelines.</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/Parallel_Pipeline&diff=56234VTK/Parallel Pipeline2014-04-30T21:24:19Z<p>Berk: /* VTK Pipeline Support for Data Parallelism */</p>
https://public.kitware.com/Wiki/index.php?title=VTK/Parallel_Pipeline&diff=56233VTK/Parallel Pipeline2014-04-30T21:23:51Z<p>Berk: /* VTK Pipeline Support for Data Parallelism */</p>
<hr />
<div>= Introduction =<br />
<br />
VTK uses a form of parallelism called data parallelism. In this form, the data is divided amongst the processes, and each process performs the same operation on its piece of data. Advantages of data parallelism include scalability, simplified load balancing, and reduced communications overhead.<br />
<br />
Figure 1 shows how VTK filters can be setup when running in parallel. In this example, each reader reads in a piece of the data. Usually, this can be done with very little communication between processes. Then, each reader feeds into a pipeline identical with that on the other processes. Because many of the filters in VTK use algorithms that independently process each point or cell, these filters can run in parallel with little or no communication between them. Let us see a simple example of how that works.<br />
<br />
[[File:pvtk-figure1.png | frame | center| Figure 1: Example parallel pipeline]]<br />
<br />
Consider the simple, 2D grid that is partitioned into three pieces shown in Figure 2. Assume that the three pieces reside on separate processes of a distributed memory computer. Because no process has global information, communications costs will be minimized if each process performs its operation only on its local information. Of course, we can only do this if the end result of the parallel operation is equivalent to the same operation on the global data.<br />
<br />
[[File:pvtk-figure2.png | frame | center| Figure 2: A simple partitioned data set]]<br />
<br />
Figure 3 demonstrates a clip filter (vtkClipDataSet for example) processing data in parallel. Each process is given the same parameters for the cut plane. The cut plane is then applied independently on each piece of the data. When the pieces are brought back together, we see that the result is the entire data set properly cut by the plane.<br />
<br />
[[File:pvtk-figure3.png | frame | center| Figure 3: Performing clip in parallel]]<br />
<br />
Not all visualization algorithms can operate on pieces without information on neighboring cells. Consider the operation of extracting external faces as shown in Figure 4. The external face operation identifies all faces that have no local neighbors. When we bring the pieces together, we see that some internal faces have been incorrectly identified as being external. These false positives on the faces occur whenever two neighboring cells are placed in separate processes.<br />
<br />
[[File:pvtk-figure4.png | frame | center| Figure 4: Extracting external faces in parallel]]<br />
<br />
The external face operation in our example fails because some important global information is missing from the local processing. The processes need some data that is not local to them, but they do not need all the data. They only need to know about cells in other partitions that are neighbors to the local cells.<br />
<br />
We can solve this local/global problem with the introduction of ghost cells. Ghost cells are cells that belong to one partition of the data and are duplicated on other partitions. The introduction of ghost cells is performed through neighborhood information and organized in levels. For a given partition, any cell neighboring a cell in the partition but not belonging to the partition itself is a ghost cell at level 1. Any cell neighboring a level-1 ghost cell that does not belong to level 1 or the original partition is at level 2. Further levels are defined recursively. We define ghost cells in this way because it provides a simple distance metric to the cells of a partition and allows filters to easily specify the minimal or near-minimal set of ghost cells required for proper operation.<br />
<br />
Let us apply the use of ghost cells to our example of extracting external faces. Figure 5 shows the same partitions with a layer of ghost cells added. When the external face algorithm is run again, some faces are still inappropriately classified as external. However, all of these faces are attached to ghost cells. These ghost faces are easily culled, and the end result is the appropriate external faces.<br />
<br />
[[File:pvtk-figure5.png | frame | center| Figure 5: Extracting external faces in parallel with ghost cells]]<br />
<br />
= VTK Pipeline Support for Data Parallelism =<br />
<br />
Demand-driven data parallelism is natively supported by VTK's execution mechanism. This is achieved by utilizing various pipeline passes and a specific set of meta-data and request objects. An introduction to VTK's pipeline can be found in the VTK User's Guide and [http://www.vtk.org/Wiki/VTK/Tutorials/New_Pipeline this page]. Unless you are familiar with the VTK pipeline, we recommend taking a look at these documents before continuing with this one. Also note that certain keys pertaining to parallelism have changed since VTK 6.1 and are described in more detail here. The three main pipeline passes pertaining to data parallelism are RequestInformation, RequestUpdateExtent and RequestData:<br />
<br />
=== RequestInformation ===<br />
<br />
This is where data sources (e.g. readers) provide meta-data about their capabilities and what data they can produce. Filters downstream can also modify this meta-data when they add or reduce capability or change what data can be made available downstream. This pass can usually be ignored with respect to data parallelism. The only exception is that readers that can produce data in a partitioned way need to notify the pipeline by providing the CAN_HANDLE_PIECE_REQUEST() key as follows:<br />
<br />
<source lang="cpp"><br />
int vtkSphereSource::RequestInformation(<br />
vtkInformation *vtkNotUsed(request),<br />
vtkInformationVector **vtkNotUsed(inputVector),<br />
vtkInformationVector *outputVector)<br />
{<br />
// get the info object<br />
vtkInformation *outInfo = outputVector->GetInformationObject(0);<br />
<br />
outInfo->Set(CAN_HANDLE_PIECE_REQUEST(), 1);<br />
<br />
return 1;<br />
}<br />
</source><br />
<br />
A serial reader should not set this key. This case is described in more detail below.<br />
<br />
=== RequestUpdateExtent ===<br />
<br />
This pass is where consumers can request a specific subset of the available data from upstream. Upstream filters can further modify this request to fit their needs. There are 3 specific keys that are used to implement data parallelism:<br />
<br />
* '''UPDATE_NUMBER_OF_PIECES''': This key together with UPDATE_PIECE_NUMBER controls how data should be partitioned by the data source. It is usually set equal to the number of MPI ranks in the current MPI group.<br />
<br />
* '''UPDATE_PIECE_NUMBER''': This key determines which partition should be loaded on the current process. It is usually set to the MPI rank of the current process.<br />
<br />
* '''UPDATE_NUMBER_OF_GHOST_LEVELS''': This key determines the number of ghost levels requested by a particular filter. Filters should usually increment the number of ghost levels requested by downstream by the number of ghost levels they need.<br />
<br />
These keys are usually set by a data consumer and possibly modified by filters upstream. Usually UPDATE_NUMBER_OF_GHOST_LEVELS is the only one modified by filters. It is also possible to set them manually as follows:<br />
<br />
<source lang="cpp"><br />
<br />
afilter->UpdateInformation();<br />
vtkInformation* outInfo = afilter->GetOutputInformation(0);<br />
outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_PIECE_NUMBER(), controller->GetLocalProcessId());<br />
outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_PIECES(), controller->GetNumberOfProcesses());<br />
outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_GHOST_LEVELS(), 0);<br />
<br />
// or in short<br />
<br />
afilter->UpdateInformation();<br />
vtkInformation* outInfo = afilter->GetOutputInformation(0);<br />
vtkStreamingDemandDrivenPipeline::SetUpdateExtent(outInfo, controller->GetLocalProcessId(), controller->GetNumberOfProcesses(), 0);<br />
<br />
</source><br />
<br />
More commonly, partitioning information is controlled by data members of a data sink such as a mapper or a writer. For example, vtkPolyDataMapper can be used to control partitioning as follows:<br />
<br />
<source lang="cpp"><br />
<br />
mapper->SetPiece(controller->GetLocalProcessId());<br />
mapper->SetNumberOfPieces(controller->GetNumberOfProcesses());<br />
<br />
renWin->Render();<br />
<br />
</source><br />
<br />
Note that some readers do not support reading data in parallel. Such readers do not set the CAN_HANDLE_PIECE_REQUEST() meta-data during RequestInformation(). When this key is not set, the pipeline will ask the reader to provide the whole data for UPDATE_PIECE_NUMBER() == 0 and empty data for all other pieces. If parallelism is to be achieved with a serial reader, the developer needs to use a filter such as D3 or vtkTransmitXXXPiece() to partition data after a serial read.<br />
<br />
Also note that the pipeline automatically copies these keys upstream (except to a reader that does not support parallel reading), so filters do not need to implement RequestUpdateExtent() unless they want to modify the values requested by downstream.<br />
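<br />
For a filter that does need extra ghost levels, the override is typically small. A hedged sketch (vtkMyGhostFilter is a hypothetical class name):<br />
<br />
<source lang="cpp"><br />
int vtkMyGhostFilter::RequestUpdateExtent(<br />
  vtkInformation* vtkNotUsed(request),<br />
  vtkInformationVector** inputVector,<br />
  vtkInformationVector* outputVector)<br />
{<br />
  vtkInformation* inInfo = inputVector[0]->GetInformationObject(0);<br />
  vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
<br />
  int ghosts = outInfo->Get(<br />
    vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_GHOST_LEVELS());<br />
<br />
  // Ask upstream for one more ghost level than downstream asked of us.<br />
  inInfo->Set(<br />
    vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_GHOST_LEVELS(),<br />
    ghosts + 1);<br />
  return 1;<br />
}<br />
</source><br />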
<br />
=== RequestData ===<br />
<br />
This is the pass where algorithms produce data based on the 3 update keys described above. A parallel reader is free to partition the data in any way it chooses as long as each partition is unique, with the exception of ghost levels. Embarrassingly parallel filters do not need to worry about these keys. Furthermore, filters that request ghost levels are not expected to remove ghost levels from their output.<br />
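<br />
Inside RequestData(), a parallel reader typically retrieves the three keys and loads only its assigned partition. A sketch under that assumption (vtkMyParallelReader is a hypothetical class; the actual reading code is format-specific):<br />
<br />
<source lang="cpp"><br />
int vtkMyParallelReader::RequestData(<br />
  vtkInformation* vtkNotUsed(request),<br />
  vtkInformationVector** vtkNotUsed(inputVector),<br />
  vtkInformationVector* outputVector)<br />
{<br />
  vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
  int piece = outInfo->Get(<br />
    vtkStreamingDemandDrivenPipeline::UPDATE_PIECE_NUMBER());<br />
  int numPieces = outInfo->Get(<br />
    vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_PIECES());<br />
  int ghosts = outInfo->Get(<br />
    vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_GHOST_LEVELS());<br />
<br />
  // Read only the cells belonging to partition `piece` out of<br />
  // `numPieces`, plus `ghosts` layers of ghost cells around it.<br />
  // ... format-specific reading code ...<br />
  return 1;<br />
}<br />
</source><br />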
<br />
== Structured Data Readers and Filters ==<br />
<br />
The VTK pipeline has the ability to request a subset of a structured dataset from a data source to further reduce I/O or data generation overhead. All structured data (vtkImageData, vtkUniformGrid, vtkRectilinearGrid and vtkStructuredGrid) readers are required to produce a WHOLE_EXTENT value during RequestInformation(); this provides information about the maximum logical extent the reader can produce. Data sinks or filters can ask for a subset of this whole extent by setting the UPDATE_EXTENT key during RequestUpdateExtent(). Data sources can use this key to minimize the data they have to produce. Readers that are capable of honoring the UPDATE_EXTENT request should set the CAN_PRODUCE_SUB_EXTENT() key during RequestInformation().<br />
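<br />
The two keys a sub-extent-capable structured reader sets during RequestInformation() can be sketched as follows (vtkMyStructuredReader and the extent values are hypothetical):<br />
<br />
<source lang="cpp"><br />
int vtkMyStructuredReader::RequestInformation(<br />
  vtkInformation* vtkNotUsed(request),<br />
  vtkInformationVector** vtkNotUsed(inputVector),<br />
  vtkInformationVector* outputVector)<br />
{<br />
  vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
<br />
  // Hypothetical 100x100x100 dataset: extents are inclusive index ranges.<br />
  int wholeExtent[6] = { 0, 99, 0, 99, 0, 99 };<br />
  outInfo->Set(vtkStreamingDemandDrivenPipeline::WHOLE_EXTENT(),<br />
               wholeExtent, 6);<br />
<br />
  // Advertise that this reader can honor UPDATE_EXTENT() sub-requests.<br />
  outInfo->Set(vtkAlgorithm::CAN_PRODUCE_SUB_EXTENT(), 1);<br />
  return 1;<br />
}<br />
</source><br />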
<br />
It is very important to note that the interaction between update pieces and extents changed in VTK 6.2. In 6.2, update piece and update extent requests are treated independently except by data sources. This means that a filter should make changes to UPDATE_EXTENT independent of which piece it is processing, and should expect to receive a subset of the update extent it requested if the number of pieces is larger than 1. This is because readers are expected to partition the UPDATE_EXTENT further based on the piece request they received. The pipeline handles this automatically for readers that set CAN_PRODUCE_SUB_EXTENT() and passes them the appropriate UPDATE_EXTENT() using the following code:<br />
<br />
<source lang="cpp"><br />
if (outInfo->Has(vtkAlgorithm::CAN_PRODUCE_SUB_EXTENT()))<br />
{<br />
int piece = outInfo->Get(UPDATE_PIECE_NUMBER());<br />
int numPieces = outInfo->Get(UPDATE_NUMBER_OF_PIECES());<br />
int ghost = outInfo->Get(UPDATE_NUMBER_OF_GHOST_LEVELS());<br />
<br />
// uExt is the update extent requested by downstream.<br />
int uExt[6];<br />
outInfo->Get(vtkStreamingDemandDrivenPipeline::UPDATE_EXTENT(), uExt);<br />
<br />
vtkExtentTranslator* et = vtkExtentTranslator::New();<br />
int execExt[6];<br />
et->PieceToExtentThreadSafe(piece, numPieces, ghost,<br />
uExt, execExt,<br />
vtkExtentTranslator::BLOCK_MODE, 0);<br />
et->Delete();<br />
outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_EXTENT(),<br />
execExt, 6);<br />
}<br />
</source><br />
<br />
Such readers can safely ignore all piece and ghost level requests. Note that a structured reader may want to handle all data partitioning internally to optimize I/O (for example, when using MPI I/O). Such readers should not set CAN_PRODUCE_SUB_EXTENT() but should instead set CAN_HANDLE_PIECE_REQUEST() and handle both UPDATE_EXTENT() and pieces/ghosts internally.<br />
<br />
Structured data filters need to handle the case where the data extent they receive in RequestData() is different from the update extent they requested if they are to work properly in data parallel mode. Note that as of VTK 6.2, most imaging filters cannot handle this case and cannot be used in data parallel pipelines.</div>
<hr />
<div>= Introduction =<br />
<br />
VTK uses a form of parallelism called data parallelism. In this form, the data is divided amongst the processes, and each process performs the same operation on its piece of data. Advantages of data parallelism include scalability, simplified load balancing, and reduced communications overhead.<br />
<br />
Figure 1 shows how VTK filters can be set up when running in parallel. In this example, each reader reads in a piece of the data. Usually, this can be done with very little communication between processes. Then, each reader feeds into a pipeline identical to that on the other processes. Because many of the filters in VTK use algorithms that independently process each point or cell, these filters can run in parallel with little or no communication between them. Let us see a simple example of how that works.<br />
<br />
[[File:pvtk-figure1.png | frame | center| Figure 1: Example parallel pipeline]]<br />
<br />
Consider the simple, 2D grid that is partitioned into three pieces shown in Figure 2. Assume that the three pieces reside on separate processes of a distributed memory computer. Because no process has global information, communications costs will be minimized if each process performs its operation only on its local information. Of course, we can only do this if the end result of the parallel operation is equivalent to the same operation on the global data.<br />
<br />
[[File:pvtk-figure2.png | frame | center| Figure 2: A simple partitioned data set]]<br />
<br />
Figure 3 demonstrates a clip filter (vtkClipDataSet for example) processing data in parallel. Each process is given the same parameters for the cut plane. The cut plane is then applied independently on each piece of the data. When the pieces are brought back together, we see that the result is the entire data set properly cut by the plane.<br />
<br />
[[File:pvtk-figure3.png | frame | center| Figure 3: Performing clip in parallel]]<br />
<br />
Not all visualization algorithms can operate on pieces without information on neighboring cells. Consider the operation of extracting external faces as shown in Figure 4. The external face operation identifies all faces that have no local neighbors. When we bring the pieces together, we see that some internal faces have been incorrectly identified as being external. These false positives on the faces occur whenever two neighboring cells are placed in separate processes.<br />
<br />
[[File:pvtk-figure4.png | frame | center| Figure 4: Extracting external faces in parallel]]<br />
<br />
The external face operation in our example fails because some important global information is missing from the local processing. The processes need some data that is not local to them, but they do not need all the data. They only need to know about cells in other partitions that are neighbors to the local cells.<br />
<br />
We can solve this local/global problem with the introduction of ghost cells. Ghost cells are cells that belong to one partition of the data and are duplicated on other partitions. The introduction of ghost cells is performed through neighborhood information and organized in levels. For a given partition, any cell neighboring a cell in the partition but not belonging to the partition itself is a ghost cell at level 1. Any cell neighboring a ghost cell at level 1 that does not belong to level 1 or the original partition is at level 2. Further levels are defined recursively. We define ghost cells in this way because it provides a simple distance metric to the cells of a partition and allows filters to easily specify the minimal or near minimal set of ghost cells required for proper operation.<br />
<br />
Let us apply the use of ghost cells to our example of extracting external faces. Figure 5 shows the same partitions with a layer of ghost cells added. When the external face algorithm is run again, some faces are still inappropriately classified as external. However, all of these faces are attached to ghost cells. These ghost faces are easily culled, and the end result is the appropriate external faces.<br />
<br />
[[File:pvtk-figure5.png | frame | center| Figure 5: Extracting external faces in parallel with ghost cells]]<br />
<br />
= VTK Pipeline Support for Data Parallelism =<br />
<br />
Demand-driven data parallelism is natively supported by VTK's execution mechanism. This is achieved by utilizing various pipeline passes and a specific set of meta-data and request objects. An introduction to VTK's pipeline can be found in the VTK User's Guide and this page (http://www.vtk.org/Wiki/VTK/Tutorials/New_Pipeline). Unless you are familiar with the VTK pipeline, we recommend taking a look at these documents before continuing with this one. Also note that certain keys pertaining to parallelism have changed since VTK 6.1 and are described in more detail here. The three main pipeline passes pertaining to data parallelism are RequestInformation, RequestUpdateExtent and RequestData:<br />
<br />
=== RequestInformation ===<br />
<br />
This is where data sources (e.g. readers) provide meta-data about their capabilities and what data they can produce. Filters downstream can also modify this meta-data when they can add/reduce capability or change what data can be made available downstream. This can be usually ignored with respect to data parallelism. The only exception is that readers that can produce data in a partitioned way need to notify the pipeline by providing the CAN_HANDLE_PIECE_REQUEST() key as follows:<br />
<br />
<source lang="cpp"><br />
int vtkSphereSource::RequestInformation(<br />
vtkInformation *vtkNotUsed(request),<br />
vtkInformationVector **vtkNotUsed(inputVector),<br />
vtkInformationVector *outputVector)<br />
{<br />
// get the info object<br />
vtkInformation *outInfo = outputVector->GetInformationObject(0);<br />
<br />
outInfo->Set(CAN_HANDLE_PIECE_REQUEST(), 1);<br />
<br />
return 1;<br />
}<br />
</source><br />
<br />
A serial reader should not set this key. This case is described in more detail below.<br />
<br />
=== RequestUpdateExtent ===<br />
<br />
This pass is where consumers can request a specific subset of the available data from upstream. Upstream filters can further modify this request to fit their needs. Three specific keys are used to implement data parallelism:<br />
<br />
* '''UPDATE_NUMBER_OF_PIECES''': This key, together with UPDATE_PIECE_NUMBER, controls how data should be partitioned by the data source. It is usually set equal to the number of MPI ranks in the current MPI group.<br />
<br />
* '''UPDATE_PIECE_NUMBER''': This key determines which partition should be loaded on the current process. It is usually set to the MPI rank of the current process.<br />
<br />
* '''UPDATE_NUMBER_OF_GHOST_LEVELS''': This key determines the number of ghost levels requested by a particular filter. Filters should usually increment the value requested by the downstream filter by the number of ghost levels they themselves need.<br />
<br />
These keys are usually set by a data consumer and possibly modified by filters upstream. In practice, UPDATE_NUMBER_OF_GHOST_LEVELS is the only one commonly modified by filters. It is also possible to set them manually as follows:<br />
<br />
<source lang="cpp"><br />
<br />
afilter->UpdateInformation();<br />
vtkInformation* outInfo = afilter->GetOutputInformation(0);<br />
outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_PIECE_NUMBER(), controller->GetLocalProcessId());<br />
outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_PIECES(), controller->GetNumberOfProcesses());<br />
outInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_NUMBER_OF_GHOST_LEVELS(), 0);<br />
<br />
// or in short<br />
<br />
afilter->UpdateInformation();<br />
vtkInformation* outInfo = afilter->GetOutputInformation(0);<br />
vtkStreamingDemandDrivenPipeline::SetUpdateExtent(outInfo, controller->GetLocalProcessId(), controller->GetNumberOfProcesses(), 0);<br />
</source><br />
<br />
<br />
=== RequestData ===</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/Parallel_Pipeline&diff=56208VTK/Parallel Pipeline2014-04-30T19:55:43Z<p>Berk: /* Introduction */</p>
<hr />
<div>= Introduction =<br />
<br />
VTK uses a form of parallelism called data parallelism. In this form, the data is divided amongst the processes, and each process performs the same operation on its piece of data. Advantages of data parallelism include scalability, simplified load balancing, and reduced communications overhead.<br />
<br />
Figure 1 shows how VTK filters can be set up when running in parallel. In this example, each reader reads in a piece of the data. Usually, this can be done with very little communication between processes. Each reader then feeds into a pipeline identical to those on the other processes. Because many of the filters in VTK use algorithms that independently process each point or cell, these filters can run in parallel with little or no communication between them. Let us look at a simple example of how this works.<br />
<br />
[[File:pvtk-figure1.png | frame | center| Figure 1: Example parallel pipeline]]<br />
<br />
Consider the simple 2D grid shown in Figure 2, which is partitioned into three pieces. Assume that the three pieces reside on separate processes of a distributed-memory computer. Because no process has global information, communication costs are minimized if each process operates only on its local information. Of course, we can only do this if the end result of the parallel operation is equivalent to the same operation applied to the global data.<br />
<br />
[[File:pvtk-figure2.png | frame | center| Figure 2: A simple partitioned data set]]<br />
<br />
Figure 3 demonstrates a clip filter (vtkClipDataSet, for example) processing data in parallel. Each process is given the same parameters for the clip plane, which is then applied independently to each piece of the data. When the pieces are brought back together, we see that the result is the entire data set properly clipped by the plane.<br />
<br />
[[File:pvtk-figure3.png | frame | center| Figure 3: Performing clip in parallel]]<br />
<br />
Not all visualization algorithms can operate on pieces without information about neighboring cells. Consider the operation of extracting external faces, shown in Figure 4. The external face operation identifies all faces that have no neighbors in the local data. When we bring the pieces together, we see that some internal faces have been incorrectly identified as external. These false positives occur wherever two neighboring cells are placed on separate processes.<br />
<br />
[[File:pvtk-figure4.png | frame | center| Figure 4: Extracting external faces in parallel]]<br />
<br />
The external face operation in our example fails because some important global information is missing from the local processing. The processes need some data that is not local to them, but they do not need all the data. They only need to know about cells in other partitions that are neighbors to the local cells.<br />
<br />
We can solve this local/global problem by introducing ghost cells. Ghost cells are cells that belong to one partition of the data and are duplicated on other partitions. Ghost cells are determined from neighborhood information and organized in levels. For a given partition, any cell that neighbors a cell in the partition but does not belong to the partition itself is a ghost cell at level 1. Any cell that neighbors a level-1 ghost cell but belongs to neither level 1 nor the original partition is at level 2. Further levels are defined recursively. We define ghost cells in this way because it provides a simple distance metric from the cells of a partition and allows filters to easily specify the minimal, or near minimal, set of ghost cells required for proper operation.<br />
<br />
Let us apply the use of ghost cells to our example of extracting external faces. Figure 5 shows the same partitions with a layer of ghost cells added. When the external face algorithm is run again, some faces are still inappropriately classified as external. However, all of these faces are attached to ghost cells. These ghost faces are easily culled, and the end result is the appropriate external faces.<br />
<br />
[[File:pvtk-figure5.png | frame | center| Figure 5: Extracting external faces in parallel with ghost cells]]<br />
<br />
= VTK Pipeline Support for Data Parallelism =<br />
<br />
Demand-driven data parallelism is natively supported by VTK's execution mechanism. This is achieved through various pipeline passes and a specific set of meta-data and request objects. An introduction to VTK's pipeline can be found in the VTK User's Guide and on this page (http://www.vtk.org/Wiki/VTK/Tutorials/New_Pipeline). If you are not already familiar with the VTK pipeline, we recommend reading these documents before continuing with this one. Also note that certain keys pertaining to parallelism have changed since VTK 6.1 and are described in more detail here. The three main pipeline passes pertaining to data parallelism are RequestInformation, RequestUpdateExtent and RequestData:<br />
<br />
=== RequestInformation ===<br />
<br />
This is where data sources (e.g. readers) provide meta-data about their capabilities and what data they can produce. Downstream filters can also modify this meta-data when they add or remove capabilities or change what data is made available further downstream. This pass can usually be ignored with respect to data parallelism. The only exception is that readers that can produce data in a partitioned way need to notify the pipeline by setting the CAN_HANDLE_PIECE_REQUEST() key as follows:<br />
<br />
<source lang="cpp"><br />
int vtkSphereSource::RequestInformation(<br />
vtkInformation *vtkNotUsed(request),<br />
vtkInformationVector **vtkNotUsed(inputVector),<br />
vtkInformationVector *outputVector)<br />
{<br />
// get the info object<br />
vtkInformation *outInfo = outputVector->GetInformationObject(0);<br />
<br />
outInfo->Set(CAN_HANDLE_PIECE_REQUEST(), 1);<br />
<br />
return 1;<br />
}<br />
</source><br />
<br />
=== RequestUpdateExtent ===<br />
<br />
=== RequestData ===</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/Parallel_Pipeline&diff=56194VTK/Parallel Pipeline2014-04-30T18:52:34Z<p>Berk: </p>
<hr />
<div>= Introduction =<br />
<br />
VTK uses a form of parallelism called data parallelism. In this form, the data is divided amongst the processes, and each process performs the same operation on its piece of data. Advantages of data parallelism include scalability, simplified load balancing, and reduced communications overhead.<br />
<br />
Figure 1 shows how VTK filters can be set up when running in parallel. In this example, each reader reads in a piece of the data. Usually, this can be done with very little communication between processes. Each reader then feeds into a pipeline identical to those on the other processes. Because many of the filters in VTK use algorithms that independently process each point or cell, these filters can run in parallel with little or no communication between them. Let us look at a simple example of how this works.<br />
<br />
[[File:pvtk-figure1.png | frame | center| Figure 1: Example parallel pipeline]]<br />
<br />
Consider the simple 2D grid shown in Figure 2, which is partitioned into three pieces. Assume that the three pieces reside on separate processes of a distributed-memory computer. Because no process has global information, communication costs are minimized if each process operates only on its local information. Of course, we can only do this if the end result of the parallel operation is equivalent to the same operation applied to the global data.<br />
<br />
[[File:pvtk-figure2.png | frame | center| Figure 2: A simple partitioned data set]]<br />
<br />
Figure 3 demonstrates a clip filter (vtkClipDataSet, for example) processing data in parallel. Each process is given the same parameters for the clip plane, which is then applied independently to each piece of the data. When the pieces are brought back together, we see that the result is the entire data set properly clipped by the plane.<br />
<br />
[[File:pvtk-figure3.png | frame | center| Figure 3: Performing clip in parallel]]<br />
<br />
Not all visualization algorithms can operate on pieces without information about neighboring cells. Consider the operation of extracting external faces, shown in Figure 4. The external face operation identifies all faces that have no neighbors in the local data. When we bring the pieces together, we see that some internal faces have been incorrectly identified as external. These false positives occur wherever two neighboring cells are placed on separate processes.<br />
<br />
[[File:pvtk-figure4.png | frame | center| Figure 4: Extracting external faces in parallel]]<br />
<br />
The external face operation in our example fails because some important global information is missing from the local processing. The processes need some data that is not local to them, but they do not need all the data. They only need to know about cells in other partitions that are neighbors to the local cells.<br />
<br />
We can solve this local/global problem by introducing ghost cells. Ghost cells are cells that belong to one partition of the data and are duplicated on other partitions. Ghost cells are determined from neighborhood information and organized in levels. For a given partition, any cell that neighbors a cell in the partition but does not belong to the partition itself is a ghost cell at level 1. Any cell that neighbors a level-1 ghost cell but belongs to neither level 1 nor the original partition is at level 2. Further levels are defined recursively. We define ghost cells in this way because it provides a simple distance metric from the cells of a partition and allows filters to easily specify the minimal, or near minimal, set of ghost cells required for proper operation.<br />
<br />
Let us apply the use of ghost cells to our example of extracting external faces. Figure 5 shows the same partitions with a layer of ghost cells added. When the external face algorithm is run again, some faces are still inappropriately classified as external. However, all of these faces are attached to ghost cells. These ghost faces are easily culled, and the end result is the appropriate external faces.<br />
<br />
[[File:pvtk-figure5.png | frame | center| Figure 5: Extracting external faces in parallel with ghost cells]]</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/Parallel_Pipeline&diff=56188VTK/Parallel Pipeline2014-04-30T18:02:58Z<p>Berk: </p>
<hr />
<div>VTK uses a form of parallelism called data parallelism. In this form, the data is divided amongst the processes, and each process performs the same operation on its piece of data. Advantages of data parallelism include scalability, simplified load balancing, and reduced communications overhead.<br />
<br />
Figure 1 shows how VTK filters can be set up when running in parallel. In this example, each reader reads in a piece of the data. Usually, this can be done with very little communication between processes. Each reader then feeds into a pipeline identical to those on the other processes. Because many of the filters in VTK use algorithms that independently process each point or cell, these filters can run in parallel with little or no communication between them. Let us look at a simple example of how this works.<br />
<br />
[[File:pvtk-figure1.png | frame | center| Figure 1: Example parallel pipeline]]<br />
<br />
Consider the simple 2D grid shown in Figure 2, which is partitioned into three pieces. Assume that the three pieces reside on separate processes of a distributed-memory computer. Because no process has global information, communication costs are minimized if each process operates only on its local information. Of course, we can only do this if the end result of the parallel operation is equivalent to the same operation applied to the global data.<br />
<br />
[[File:pvtk-figure2.png | frame | center| Figure 2: A simple partitioned data set]]<br />
<br />
Figure 3 demonstrates a clip filter (vtkClipDataSet, for example) processing data in parallel. Each process is given the same parameters for the clip plane, which is then applied independently to each piece of the data. When the pieces are brought back together, we see that the result is the entire data set properly clipped by the plane.<br />
<br />
[[File:pvtk-figure3.png | frame | center| Figure 3: Performing clip in parallel]]<br />
<br />
Not all visualization algorithms can operate on pieces without information about neighboring cells. Consider the operation of extracting external faces, shown in Figure 4. The external face operation identifies all faces that have no neighbors in the local data. When we bring the pieces together, we see that some internal faces have been incorrectly identified as external. These false positives occur wherever two neighboring cells are placed on separate processes.<br />
<br />
[[File:pvtk-figure4.png | frame | center| Figure 4: Extracting external faces in parallel]]<br />
<br />
The external face operation in our example fails because some important global information is missing from the local processing. The processes need some data that is not local to them, but they do not need all the data. They only need to know about cells in other partitions that are neighbors to the local cells.<br />
<br />
We can solve this local/global problem by introducing ghost cells. Ghost cells are cells that belong to one partition of the data and are duplicated on other partitions. Ghost cells are determined from neighborhood information and organized in levels. For a given partition, any cell that neighbors a cell in the partition but does not belong to the partition itself is a ghost cell at level 1. Any cell that neighbors a level-1 ghost cell but belongs to neither level 1 nor the original partition is at level 2. Further levels are defined recursively. We define ghost cells in this way because it provides a simple distance metric from the cells of a partition and allows filters to easily specify the minimal, or near minimal, set of ghost cells required for proper operation.<br />
<br />
Let us apply the use of ghost cells to our example of extracting external faces. Figure 5 shows the same partitions with a layer of ghost cells added. When the external face algorithm is run again, some faces are still inappropriately classified as external. However, all of these faces are attached to ghost cells. These ghost faces are easily culled, and the end result is the appropriate external faces.<br />
<br />
[[File:pvtk-figure5.png | frame | center| Figure 5: Extracting external faces in parallel with ghost cells]]</div>Berkhttps://public.kitware.com/Wiki/index.php?title=File:Pvtk-figure5.png&diff=56187File:Pvtk-figure5.png2014-04-30T17:46:58Z<p>Berk: </p>
<hr />
<div></div>Berkhttps://public.kitware.com/Wiki/index.php?title=File:Pvtk-figure4.png&diff=56186File:Pvtk-figure4.png2014-04-30T17:46:47Z<p>Berk: </p>
<hr />
<div></div>Berkhttps://public.kitware.com/Wiki/index.php?title=File:Pvtk-figure3.png&diff=56185File:Pvtk-figure3.png2014-04-30T17:46:36Z<p>Berk: </p>
<hr />
<div></div>Berkhttps://public.kitware.com/Wiki/index.php?title=File:Pvtk-figure2.png&diff=56184File:Pvtk-figure2.png2014-04-30T17:46:27Z<p>Berk: </p>
<hr />
<div></div>Berkhttps://public.kitware.com/Wiki/index.php?title=File:Pvtk-figure1.png&diff=56183File:Pvtk-figure1.png2014-04-30T17:46:17Z<p>Berk: </p>
<hr />
<div></div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/Parallel_Pipeline&diff=56182VTK/Parallel Pipeline2014-04-30T17:37:38Z<p>Berk: Created page with "VTK uses a form of parallelism called data parallelism. In this form, the data is divided amongst the processes, and each process performs the same operation on its piece of data..."</p>
<hr />
<div>VTK uses a form of parallelism called data parallelism. In this form, the data is divided amongst the processes, and each process performs the same operation on its piece of data. Advantages of data parallelism include scalability, simplified load balancing, and reduced communications overhead.<br />
<br />
Figure 1 shows how VTK filters can be set up when running in parallel. In this example, each reader reads in a piece of the data. Usually, this can be done with very little communication between processes. Each reader then feeds into a pipeline identical to those on the other processes. Because many of the filters in VTK use algorithms that independently process each point or cell, these filters can run in parallel with little or no communication between them. Let us look at a simple example of how this works.<br />
<br />
Consider the simple 2D grid shown in Figure 2, which is partitioned into three pieces. Assume that the three pieces reside on separate processes of a distributed-memory computer. Because no process has global information, communication costs are minimized if each process operates only on its local information. Of course, we can only do this if the end result of the parallel operation is equivalent to the same operation applied to the global data.<br />
<br />
Figure 3 demonstrates a clip filter (vtkClipDataSet, for example) processing data in parallel. Each process is given the same parameters for the clip plane, which is then applied independently to each piece of the data. When the pieces are brought back together, we see that the result is the entire data set properly clipped by the plane.<br />
<br />
Not all visualization algorithms can operate on pieces without information about neighboring cells. Consider the operation of extracting external faces, shown in Figure 4. The external face operation identifies all faces that have no neighbors in the local data. When we bring the pieces together, we see that some internal faces have been incorrectly identified as external. These false positives occur wherever two neighboring cells are placed on separate processes.<br />
<br />
The external face operation in our example fails because some important global information is missing from the local processing. The processes need some data that is not local to them, but they do not need all the data. They only need to know about cells in other partitions that are neighbors to the local cells.<br />
<br />
We can solve this local/global problem by introducing ghost cells. Ghost cells are cells that belong to one partition of the data and are duplicated on other partitions. Ghost cells are determined from neighborhood information and organized in levels. For a given partition, any cell that neighbors a cell in the partition but does not belong to the partition itself is a ghost cell at level 1. Any cell that neighbors a level-1 ghost cell but belongs to neither level 1 nor the original partition is at level 2. Further levels are defined recursively. We define ghost cells in this way because it provides a simple distance metric from the cells of a partition and allows filters to easily specify the minimal, or near minimal, set of ghost cells required for proper operation.<br />
<br />
Let us apply the use of ghost cells to our example of extracting external faces. Figure 5 shows the same partitions with a layer of ghost cells added. When the external face algorithm is run again, some faces are still inappropriately classified as external. However, all of these faces are attached to ghost cells. These ghost faces are easily culled, and the end result is the appropriate external faces.</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_SMP&diff=54418VTK/VTK SMP2013-11-03T18:22:04Z<p>Berk: Created page with "VTK 6.1 introduces a new framework to facilitate the development of shared memory parallel algorithms. The initial guide for this framework can be found [[media:VTK_SMP_Guide.pdf..."</p>
<hr />
<div>VTK 6.1 introduces a new framework to facilitate the development of shared memory parallel algorithms. The initial guide for this framework can be found [[media:VTK_SMP_Guide.pdf | here]]. Note that this is only the beginning of this development and it simply introduces a few foundational classes and some example filters. Our next objective is to start parallelizing core functionality in VTK. If you are interested in further developing the SMP framework, developing new parallel algorithms or parallelizing existing ones, please get in touch on the VTK users or developers mailing list.</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK&diff=54417VTK2013-11-03T18:11:08Z<p>Berk: /* VTK 6.1 */</p>
<hr />
<div><center>http://public.kitware.com/images/logos/vtk-logo2.jpg</center><br />
<br /><br />
The Visualization ToolKit (VTK) is an open source, freely available software system for 3D computer graphics, image processing, and visualization used by thousands of researchers and developers around the world. VTK consists of a C++ class library, and several interpreted interface layers including Python, Tcl/Tk and Java. Professional support and products for VTK are provided by Kitware, Inc. ([http://www.kitware.com www.kitware.com]) VTK supports a wide variety of visualization algorithms including scalar, vector, tensor, texture, and volumetric methods; and advanced modeling techniques such as implicit modelling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation. In addition, dozens of imaging algorithms have been directly integrated to allow the user to mix 2D imaging / 3D graphics algorithms and data.<br />
<br />
<br />
== Learning VTK ==<br />
If you want to learn how to use or develop VTK, please see [[VTK/Learning_VTK | Learning VTK]]<br />
<br />
== Building VTK ==<br />
* Where can I [http://vtk.org/get-software.php download VTK]?<br />
<br />
* Where can I download a tarball of the [http://vtk.org/files/nightly/vtkNightlyDocHtml.tar.gz nightly HTML documentation]?<br />
<br />
* How do I build the [[VTK/BuildingDoxygen|Doxygen documentation]]?<br />
<br />
* [[VTK/Git|Using Git for VTK development]]<br />
<br />
* [[VTK/GitMSBuild|Using Git and MSBuild to build VTK]]<br />
<br />
* [[VTK/PythonDevelopment|Setting up a Python Development Environment using Eclipse/Pydev]]<br />
<br />
* [[VTK/Build parameters | Build parameters]]<br />
<br />
* [[Making Development Environment without compiling source distribution]]<br />
<br />
* [[VTK/Building/VisualStudio | Building VTK with Visual Studio]]<br />
<br />
== Extending VTK ==<br />
<br />
* Where can I get [[VTK Datasets]]?<br />
<br />
* [[VTK Classes|User-Contributed Classes]]<br />
<br />
* [[VTK Coding Standards]] <br />
<br />
* [[VTK/Commit_Guidelines|VTK Commit Guidelines]]<br />
<br />
* [[VTK/Git/Develop|Contribute to VTK / Patch Procedure]]<br />
<br />
* [[VTK Scripts|Extending VTK with Scripts]]<br />
<br />
== Projects/ Tools that use VTK == <br />
<br />
* [[VTK Tools|VTK-Based Tools and Applications]]<br />
<br />
* What are some [[VTK Projects|projects using VTK]]?<br />
<br />
== Troubleshooting ==<br />
<br />
* [[VTK FAQ|Frequently asked questions (FAQ)]]<br />
<br />
* [[VTK OpenGL|Common OpenGL troubles]]<br />
<br />
== Miscellaneous ==<br />
* [[VTK Related Job Opportunities|VTK Related Job Opportunities]]<br />
<br />
* [[VTK/Third Party Library Patrol | VTK 3rd Party Library Patrol]]<br />
<br />
* [[VTK/Meeting Minutes | Meeting Minutes]]<br />
<br />
* [[VTK/License | VTK License]]<br />
* [[VTK/ThirdPartyLicenses | VTK Third-Party Licenses]]<br />
<br />
== Summary of Changes ==<br />
<br />
==== VTK 6.1 ====<br />
<br />
* Move to use CMake's external data support over VTKData<br />
* [[VTK/OpenGL_Errors | OpenGL error detection and reporting ]]<br />
* [[VTK/VTK_SMP | SMP framework introduced to make shared memory parallel development]]<br />
<br />
==== VTK 6.0 ====<br />
<br />
* [[VTK/VTK_6_Migration_Guide | VTK 6 API Migration Guide]]<br />
* [[VTK/Build_System_Migration | VTK 6 (build system) Migration Guide]]<br />
* [[VTK/Module_Development | VTK 6 Module Development]]<br />
* [[VTK/Remove_VTK_4_Compatibility | Remove VTK 4 compatibility layer from pipeline]]<br />
* [[VTK/Modularization_Proposal | Modularization]]<br />
* [[VTK/Remove_vtkTemporalDataSet | Temporal support changes]]<br />
* [[VTK/Composite_data_changes | Composite data structure changes ]]<br />
<br />
==== VTK 5.10 ====<br />
<br />
* [[VTK/improved unicode support | Change unicode readers/writers to register as codecs (finished Oct 29 2010)]]<br />
* [[VTK/Image Rendering Classes | New image rendering classes (start Dec 15 2010, finish Mar 15 2011)]]<br />
* [[VTK/Image Interpolators | Image interpolators (start Jun 20 2011, finish Aug 31 2011)]]<br />
* [[VTK/GSoC | Projects from Google Summer of Code 2011]]<br />
* [[VTK/Release5100 New Classes | List of new classes in 5.10]]<br />
<br />
==== VTK 5.8 ====<br />
<br />
* [[VTK/Polyhedron_Support | Polyhedron cells and MVC Interpolation]]<br />
* [http://visimp.cs.unc.edu/2010/10/26/reeb-graphs/ Reeb Graphs]<br />
* [[VTK/Closed Surface Clipping | Clipping of closed surfaces (start Mar 26, 2010, finish Apr 22, 2010)]]<br />
* [[VTK/Wrapper Update 2010 | New wrappers (start Apr 28, 2010)]]<br />
* [[VTK/Image Stencil Improvements | Improved image stencil support (start Nov 3, 2010)]]<br />
* [[VTK/MNI File Formats | MNI file formats]]<br />
* [[VTK/Release580 New Classes | List of New Classes]]<br />
<br />
==== VTK 5.6 ====<br />
<br />
* [[VTK/MultiPass_Rendering | VTK Multi-Pass Rendering]]<br />
* [[VTK/Multicore and Streaming | Multicore and Streaming]]<br />
* [[VTK/statistics | Statistics]]<br />
* [[VTK/Array Refactoring | Array Refactoring]]<br />
* [[VTK/3DConnexion Devices Support | 3DConnexion Devices Support]]<br />
* [[VTK/Charts | New Charts API]]<br />
* [[VTK/New CellPicker | New Cell Picker and Volume Picking (start Nov 2010, finish Feb 2010)]]<br />
<br />
==== VTK 5.4 ====<br />
<br />
* [[VTK 5.4 Release Planning]]<br />
* [[VTK/Cray XT3 Compilation| Cray XT3 Compilation]]<br />
* [[VTK/Geovis vision toolkit | Geospatial and vision visualization support ]]<br />
<br />
==== VTK 5.2 ====<br />
<br />
* [[VTK/Java Wrapping | VTK Java Wrapping]]<br />
* [[VTK/Composite Data Redesign | Composite Data Redesign]]<br />
* [[VTK Shaders | VTK Shaders]]<br />
* [[VTKShaders | Shaders in VTK]]<br />
* [[VTK/VTKMatlab | VTK with Matlab]]<br />
* [[VTK/Time_Support | VTK Time support]]<br />
* [[VTK/Graph Layout | VTK Graph Layout]]<br />
* [[VTK/Depth_Peeling | VTK Depth Peeling]]<br />
* [[VTK/Using_JRuby | Using VTK with JRuby]]<br />
* [[VTK/Painters | Painters]]<br />
<br />
==== VTK 5.0 ====<br />
<br />
* [[VTK/Tutorials/New_Pipeline | New Pipeline]]<br />
* [[VTKWidgets | VTK Widget Redesign]]<br />
<br />
== News ==<br />
<br />
=== Development Process ===<br />
The VTK Community is [[VTK/Managing_the_Development_Process | upgrading its development process]]. We are doing this in response to the continuing and rapid growth of the toolkit. A VTK Architecture Review Board [[VTK/Architecture_Review_Board |VTK ARB]] is being put in place to provide strategic guidance to the community, and individuals are being identified as leaders in various VTK subsystems.<br />
<br />
Have a question or topic for the ARB to discuss about the future of VTK? First, please bring the topic to the [http://public.kitware.com/mailman/listinfo/vtk-developers VTK developers mailing list]. If the issue is not resolved there or needs further planning or direction, you may [[VTK/ARB/Meetings#Potential Topics|enter a suggested topic for discussion]].<br />
<br />
* [[Proposed Changes to VTK | Proposed Changes to VTK]]<br />
<br />
===[[VTK/NextGen|VTK NextGen]]=== <br />
We have started collecting works in progress as well as future ideas at [[VTK/NextGen|NextGen]]. Please add anything you are working on, would like to collaborate on, or would like to see in the future of VTK!<br />
<br />
== Wrapping ==<br />
<br />
* [[VTK/Wrappers | Wrapping Tools]]<br />
<br />
* [[VTK/Java Wrapping|Java]]<br />
** [[VTK/Java Code Samples|Java code samples]]<br />
* [[VTK/Python Wrapping FAQ|Python]]<br />
** [[VTK/Python Wrapper Enhancement|Python wrapper enhancements]]<br />
* [[VTK/CSharp/ActiViz.NET|CSharp/ActiViz.NET]]<br />
** [[VTK/Examples/CSharp|CSharp/ActiViz.NET code samples]]<br />
* [[VTK/CSharp/ComingSoon|CSharp/ComingSoon]]<br />
<br />
== Developers Corner ==<br />
[[VTK/Developers Corner|Developers Corner]]<br />
<br />
<!-- <br />
== External Links ==<br />
dead link *[http://zorayasantos.tripod.com/vtk_csharp_examples VTK examples in C#] (Visual Studio 5.0 and .NET 2.0)<br />
--><br />
{{VTK/Template/Footer}}</div>Berkhttps://public.kitware.com/Wiki/index.php?title=File:VTK_SMP_Guide.pdf&diff=54416File:VTK SMP Guide.pdf2013-11-03T14:27:17Z<p>Berk: This is a brief introduction to developing shared memory parallel algorithms in VTK using the SMP framework.</p>
<hr />
<div>This is a brief introduction to developing shared memory parallel algorithms in VTK using the SMP framework.</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/GSoC_2013&diff=52029VTK/GSoC 20132013-03-27T16:05:13Z<p>Berk: </p>
<hr />
<div>Project ideas for the Google Summer of Code 2013<br />
<br />
== Guidelines ==<br />
<br />
=== Students ===<br />
<br />
These ideas were contributed by developers and users of [http://www.vtk.org/ VTK] and [http://www.paraview.org/ ParaView]. If you wish to submit a proposal based on these ideas you should contact the community members identified below to find out more about the idea, get to know the community member that will review your proposal, and receive feedback on your ideas.<br />
<br />
The Google Summer of Code program is competitive, and accepted students will usually have thoroughly researched the technologies of their proposed project, been in frequent contact with potential mentors, and ideally have submitted a patch or two to fix bugs in their project (through Gerrit). Kitware makes extensive use of mailing lists, and this would be your best point of initial contact to apply for any of the proposed projects. The mailing lists can be found on the project pages linked in the preceding paragraph. Please see [[GSoC proposal guidelines]] for further guidelines on writing your proposal.<br />
<br />
=== Adding Ideas ===<br />
<br />
When adding a new idea to this page, please try to include the following information:<br />
<br />
* A brief explanation of the idea<br />
* Expected results/feature additions<br />
* Any prerequisites for working on the project<br />
* Links to any further information, discussions, bug reports etc<br />
* Any special mailing lists if not the standard mailing list for VTK<br />
* Your name and email address for contact (if willing to mentor, or nominated mentor)<br />
<br />
If you are not a developer for the project concerned, please contact a developer about the idea before adding it here.<br />
<br />
== Project Ideas ==<br />
<br />
[http://www.vtk.org/ Project page], [http://www.vtk.org/vtk/help/mailing.html mailing lists], [http://www.cdash.org/CDash/index.php?project=VTK dashboard].<br />
<br />
=== Project: Biochemistry Visualization ===<br />
<br />
'''Brief explanation:''' Addition of new data types, mappers and visualizations for biochemistry visualization. VTK has already been used in several open source biochemistry applications, but only has limited support for protein ribbons. This would build on previous work done in chemistry. Features such as marching cubes, GPU-accelerated volume rendering and glyph mappers could be leveraged here. This could also make use of existing work in infovis and 2D charting to display numerical data associated with biochemistry and bioinformatics applications. Ideally, this would also look at the boundary between quantum calculations and molecular dynamics, working with QM/MM data and other simulation output. <br />
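The per-atom data such mappers would consume can be sketched in a few lines. This is an illustrative pure-Python stand-in (the function names and the radius table are our own, not part of VTK); column offsets follow the fixed-width PDB v3.3 ATOM record layout, which is what a reader would hand to a sphere glyph mapper:

```python
def parse_pdb_atoms(text):
    """Return (element, x, y, z) tuples from PDB ATOM/HETATM records."""
    atoms = []
    for line in text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            # Fixed-width columns: x in 31-38, y in 39-46, z in 47-54,
            # element symbol in 77-78 (fall back to the atom name).
            x = float(line[30:38])
            y = float(line[38:46])
            z = float(line[46:54])
            element = line[76:78].strip() or line[12:16].strip()[0]
            atoms.append((element, x, y, z))
    return atoms

# Van der Waals radii (Angstrom) used to scale one sphere glyph per atom.
VDW_RADII = {"H": 1.2, "C": 1.7, "N": 1.55, "O": 1.52, "S": 1.8}

def glyph_scales(atoms, default=1.5):
    """Per-atom scale factors a glyph mapper would apply to its spheres."""
    return [VDW_RADII.get(el, default) for el, _, _, _ in atoms]
```

In a VTK pipeline these positions and scales would populate point and scalar arrays feeding a glyph filter; the parsing itself is the same either way.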
<br />
'''Expected results:''' Support for standard biochemical representations, advanced visualization techniques of electronic structure using volume rendering, surfaces, contours etc. <br />
<br />
'''Prerequisites:''' Experience in C++, some experience with VTK and/or OpenGL ideally, but not necessary.<br />
<br />
'''Mentor:''' Marcus Hanwell (marcus dot hanwell at kitware dot com).<br />
<br />
=== Project: Supporting a Visualization Grammar ===<br />
<br />
'''Brief explanation:''' Visualization grammars like [http://protovis.org Protovis] and the upcoming [http://trifacta.github.com/vega Vega] are new declarative techniques for rendering arbitrary visualizations by mapping data attributes to visual properties. This project would provide a baseline implementation of a visualization grammar in VTK, combining rapid prototyping of visualizations with the graphical power and OpenGL performance of VTK.<br />
<br />
'''Expected results:''' The result would be VTK classes supporting a grammar such as Vega, implementing its JSON specification completely or at least a significant subset. Features would include grammar mark implementations in VTK/OpenGL, data specification and mapping to VTK data objects like vtkTable, and infrastructure for custom interaction binding.<br />
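The core idea of a visualization grammar can be sketched in a few lines: a declarative spec maps data fields, possibly through scales, onto visual properties of marks, which a VTK backend would then draw. The spec shape below is purely illustrative (it is not the actual Vega JSON schema, and the helper names are our own):

```python
def linear_scale(domain, range_):
    """Map values from a data domain onto a visual range linearly."""
    d0, d1 = domain
    r0, r1 = range_
    span = (d1 - d0) or 1.0
    return lambda v: r0 + (v - d0) / span * (r1 - r0)

def resolve_marks(spec, data):
    """Expand a declarative mark spec into one concrete mark per data row."""
    scales = {name: linear_scale(s["domain"], s["range"])
              for name, s in spec["scales"].items()}
    marks = []
    for row in data:
        props = {}
        for prop, enc in spec["encoding"].items():
            value = row[enc["field"]]
            if "scale" in enc:          # scaled channel (e.g. position)
                value = scales[enc["scale"]](value)
            props[prop] = value         # direct channel (e.g. radius)
        marks.append({"type": spec["mark"], **props})
    return marks
```

The resolved marks are plain property bags; in VTK they would be rendered by mark classes drawing into a 2D scene, which is exactly the backend work this project would provide.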
<br />
'''Prerequisites:''' C++ experience required, Javascript experience a plus.<br />
<br />
'''Mentor:''' Jeff Baumes (jeff dot baumes at kitware dot com).<br />
<br />
=== Biocomputing In Situ Visualization ===<br />
<br />
'''Brief explanation:''' Biocomputing involves using computer simulations to study biological problems. Of particular interest is [http://www.gromacs.org/ GROMACS], a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions. GROMACS is optimized to run on distributed memory clusters with recent support for GPU and SSE optimization. These GROMACS supercomputing simulations produce enormous (terabyte-scale) file output that is analyzed in post-processing by tools that read only the trajectory (position, velocity, and force) or coordinate (molecular structure) information and simply guess at the topology rather than using the simulation's topology defined in GROMACS.<br />
<br />
This project would provide a baseline implementation of ParaView Catalyst for molecular in situ visualization and data analysis embedded in GROMACS based on GROMACS' computed topology and trajectory information.<br />
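The control flow such an adaptor embeds in the simulation loop can be sketched conceptually. In real code the hooks would wrap ParaView Catalyst's C++ API (a cheap per-step query followed by coprocessing only on selected steps); everything below, including the class and method names, is an illustrative pure-Python stand-in:

```python
class CoProcessor:
    """Stand-in for a Catalyst-style in situ analysis driver."""

    def __init__(self, output_frequency):
        self.output_frequency = output_frequency
        self.outputs = []

    def request_data_description(self, step):
        # The simulation asks cheaply whether this step needs analysis,
        # so trajectory data is only packaged when actually required.
        return step % self.output_frequency == 0

    def coprocess(self, step, positions):
        # Here the adaptor would convert GROMACS topology/trajectory
        # arrays into VTK data objects and run the analysis pipeline;
        # we just record what would have been processed.
        self.outputs.append((step, len(positions)))

def run_simulation(n_steps, coprocessor):
    """Toy MD loop with the in situ hook wired in."""
    positions = [(0.0, 0.0, 0.0)] * 4   # stand-in for per-atom coordinates
    for step in range(n_steps):
        # ... advance the molecular dynamics integrator here ...
        if coprocessor.request_data_description(step):
            coprocessor.coprocess(step, positions)
```

The key point for GROMACS is that the adaptor sees the live topology and trajectory arrays in memory, avoiding both the terabyte-scale file output and the topology guessing described above.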
<br />
'''Expected results:''' The result would be ParaView Catalyst adaptors, example python scripts, and new advanced visualization techniques for GROMACS in order to enhance the biocomputing workflow.<br />
<br />
'''Prerequisites:''' C++ and python experience required, some experience with VTK and ParaView ideally, but not required.<br />
<br />
'''Mentor:''' Patrick O'Leary (patrick dot oleary at kitware dot com).<br />
<br />
=== CAD Model and Simulation Spline Visualization ===<br />
<br />
'''Brief explanation''': While spline curves and surfaces have been used for many years to describe CAD models, these flexible representations have often been discarded during the meshing and simulation process that is used to estimate the behavior of CAD-modeled parts in a particular environment or during a particular event. Recently, a group of techniques called IsoGeometric Analysis (IGA) has evolved in order to reduce the amount of work required to prepare simulations and to improve their accuracy. This project would develop support for arbitrary-dimensional, rational spline patches in VTK so that these simulations can be visualized properly. This may involve conversion to handle the variety of spline formats (T-Splines, NURBS, Catmull-Clark surfaces, etc.).<br />
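The numerical core of such a project is evaluating the rational spline patches. A minimal illustrative sketch (function names are our own) of B-spline basis functions via the Cox-de Boor recursion, plus a 1-D rational (NURBS) curve evaluation built from them:

```python
def bspline_basis(i, p, u, knots):
    """N_{i,p}(u) by the Cox-de Boor recursion over knot vector `knots`.

    Uses the half-open convention knots[i] <= u < knots[i+1] at degree 0.
    """
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0.0:
        left = (u - knots[i]) / denom * bspline_basis(i, p - 1, u, knots)
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0.0:
        right = ((knots[i + p + 1] - u) / denom
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def nurbs_point(u, p, knots, control_points, weights):
    """Evaluate one coordinate of a rational curve at parameter u."""
    num = den = 0.0
    for i, (cp, w) in enumerate(zip(control_points, weights)):
        b = bspline_basis(i, p, u, knots) * w
        num += b * cp
        den += b
    return num / den
```

Surface patches tensor-product two such bases; operations like knot insertion and degree elevation manipulate `knots` and `control_points` while preserving the curve, which is what format conversion between T-Splines, NURBS and subdivision surfaces ultimately relies on.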
<br />
'''Expected results:''' The result would be a new mesh representation class, a reader, and a suite of 2-3 filters for contouring, cutting, and rendering these meshes.<br />
<br />
'''Prerequisites:''' C/C++ experience, knowledge of rational splines and techniques for processing them (degree elevation, knot insertion/removal), and preferably some experience with VTK<br />
<br />
'''Mentor(s):''' David Thompson (david dot thompson ta kitware dot com) and/or Bob O'Bara (bob dot obara ta kitware dot com)<br />
<br />
=== Shared Memory Parallelism in VTK ===<br />
<br />
'''Brief explanation''': Development of multi-threaded algorithms in VTK. Multiple R&D efforts are leading to the creation of an infrastructure to support next-generation multi-threaded parallel algorithm development in VTK. These efforts are based on modern parallel libraries such as Intel TBB and Inria KAAPI. The main goal of this project will be the development of algorithms that leverage this infrastructure. The focus will be on upgrading existing core algorithms such as iso-surfacing, clipping, cutting, warping etc. to be parallel.<br />
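The data-parallel pattern this infrastructure exposes (in C++, via a parallel-for that hands chunks of an index range to a functor) can be sketched in pure Python; the helper below is an illustrative stand-in, not the real VTK SMP API:

```python
from concurrent.futures import ThreadPoolExecutor

def smp_for(begin, end, functor, grain=1024, max_workers=4):
    """Apply functor(lo, hi) over [begin, end) in parallel chunks.

    `grain` controls chunk size, trading scheduling overhead for balance.
    """
    chunks = [(lo, min(lo + grain, end)) for lo in range(begin, end, grain)]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Each chunk writes to a disjoint index range, so no locking is
        # needed -- mirroring how threaded VTK filters partition points
        # or cells per thread.
        list(pool.map(lambda c: functor(*c), chunks))

# Example functor: warp scalar values in place, one chunk at a time.
def make_warp(data, factor):
    def warp(lo, hi):
        for i in range(lo, hi):
            data[i] *= factor
    return warp
```

Upgrading a filter like warping or clipping then amounts to expressing its per-point or per-cell loop as such a functor and measuring how the speedup scales with thread count.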
<br />
'''Expected results''': A number of algorithms that execute in parallel using shared memory. Scalability of new algorithms will have to be measured and documented. Development of regression tests and examples will also be expected.<br />
<br />
'''Prerequisites''': Experience in C++ and multi-threaded code development. Understanding of core visualization algorithms and data structures. Some experience in VTK ideally but not necessary.<br />
<br />
'''Mentor''': Berk Geveci (berk dot geveci at kitware dot com)</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/GSoC_2013&diff=52028VTK/GSoC 20132013-03-27T16:04:21Z<p>Berk: </p>
<hr />
<div>Project ideas for the Google Summer of Code 2013<br />
<br />
== Guidelines ==<br />
<br />
=== Students ===<br />
<br />
These ideas were contributed by developers and users of [http://www.vtk.org/ VTK] and [http://www.paraview.org/ ParaView]. If you wish to submit a proposal based on these ideas you should contact the community members identified below to find out more about the idea, get to know the community member that will review your proposal, and receive feedback on your ideas.<br />
<br />
The Google Summer of Code program is competitive, and accepted students will usually have thoroughly researched the technologies of their proposed project, been in frequent contact with potential mentors, and ideally have submitted a patch or two to fix bugs in their project (through Gerrit). Kitware makes extensive use of mailing lists, and this would be your best point of initial contact to apply for any of the proposed projects. The mailing lists can be found on the project pages linked in the preceding paragraph. Please see [[GSoC proposal guidelines]] for further guidelines on writing your proposal.<br />
<br />
=== Adding Ideas ===<br />
<br />
When adding a new idea to this page, please try to include the following information:<br />
<br />
* A brief explanation of the idea<br />
* Expected results/feature additions<br />
* Any prerequisites for working on the project<br />
* Links to any further information, discussions, bug reports etc<br />
* Any special mailing lists if not the standard mailing list for VTK<br />
* Your name and email address for contact (if willing to mentor, or nominated mentor)<br />
<br />
If you are not a developer for the project concerned, please contact a developer about the idea before adding it here.<br />
<br />
== Project Ideas ==<br />
<br />
[http://www.vtk.org/ Project page], [http://www.vtk.org/vtk/help/mailing.html mailing lists], [http://www.cdash.org/CDash/index.php?project=VTK dashboard].<br />
<br />
=== Project: Biochemistry Visualization ===<br />
<br />
'''Brief explanation:''' Addition of new data types, mappers and visualizations for biochemistry visualization. VTK has already been used in several open source biochemistry applications, but only has limited support for protein ribbons. This would build on previous work done in chemistry. Features such as marching cubes, GPU-accelerated volume rendering and glyph mappers could be leveraged here. This could also make use of existing work in infovis and 2D charting to display numerical data associated with biochemistry and bioinformatics applications. Ideally, this would also look at the boundary between quantum calculations and molecular dynamics, working with QM/MM data and other simulation output. <br />
<br />
'''Expected results:''' Support for standard biochemical representations, advanced visualization techniques of electronic structure using volume rendering, surfaces, contours etc. <br />
<br />
'''Prerequisites:''' Experience in C++, some experience with VTK and/or OpenGL ideally, but not necessary.<br />
<br />
'''Mentor:''' Marcus Hanwell (marcus dot hanwell at kitware dot com).<br />
<br />
=== Project: Supporting a Visualization Grammar ===<br />
<br />
'''Brief explanation:''' Visualization grammars like [http://protovis.org Protovis] and the upcoming [http://trifacta.github.com/vega Vega] are new declarative techniques for rendering arbitrary visualizations by mapping data attributes to visual properties. This project would provide a baseline implementation of a visualization grammar in VTK, combining rapid prototyping of visualizations with the graphical power and OpenGL performance of VTK.<br />
<br />
'''Expected results:''' The result would be VTK classes supporting a grammar such as Vega, implementing its JSON specification completely or at least a significant subset. Features would include grammar mark implementations in VTK/OpenGL, data specification and mapping to VTK data objects like vtkTable, and infrastructure for custom interaction binding.<br />
<br />
'''Prerequisites:''' C++ experience required, Javascript experience a plus.<br />
<br />
'''Mentor:''' Jeff Baumes (jeff dot baumes at kitware dot com).<br />
<br />
=== Biocomputing In Situ Visualization ===<br />
<br />
'''Brief explanation:''' Biocomputing involves using computer simulations to study biological problems. Of particular interest is [http://www.gromacs.org/ GROMACS], a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions. GROMACS is optimized to run on distributed memory clusters with recent support for GPU and SSE optimization. These GROMACS supercomputing simulations produce enormous (terabyte-scale) file output that is analyzed in post-processing by tools that read only the trajectory (position, velocity, and force) or coordinate (molecular structure) information and simply guess at the topology rather than using the simulation's topology defined in GROMACS.<br />
<br />
This project would provide a baseline implementation of ParaView Catalyst for molecular in situ visualization and data analysis embedded in GROMACS based on GROMACS' computed topology and trajectory information.<br />
<br />
'''Expected results:''' The result would be ParaView Catalyst adaptors, example python scripts, and new advanced visualization techniques for GROMACS in order to enhance the biocomputing workflow.<br />
<br />
'''Prerequisites:''' C++ and python experience required, some experience with VTK and ParaView ideally, but not required.<br />
<br />
'''Mentor:''' Patrick O'Leary (patrick dot oleary at kitware dot com).<br />
<br />
=== CAD Model and Simulation Spline Visualization ===<br />
<br />
'''Brief explanation''': While spline curves and surfaces have been used for many years to describe CAD models, these flexible representations have often been discarded during the meshing and simulation process that is used to estimate the behavior of CAD-modeled parts in a particular environment or during a particular event. Recently, a group of techniques called IsoGeometric Analysis (IGA) has evolved in order to reduce the amount of work required to prepare simulations and to improve their accuracy. This project would develop support for arbitrary-dimensional, rational spline patches in VTK so that these simulations can be visualized properly. This may involve conversion to handle the variety of spline formats (T-Splines, NURBS, Catmull-Clark surfaces, etc.).<br />
<br />
'''Expected results:''' The result would be a new mesh representation class, a reader, and a suite of 2-3 filters for contouring, cutting, and rendering these meshes.<br />
<br />
'''Prerequisites:''' C/C++ experience, knowledge of rational splines and techniques for processing them (degree elevation, knot insertion/removal), and preferably some experience with VTK<br />
<br />
'''Mentor(s):''' David Thompson (david dot thompson ta kitware dot com) and/or Bob O'Bara (bob dot obara ta kitware dot com)<br />
<br />
=== Shared Memory Parallelism in VTK ===<br />
<br />
'''Brief explanation''': Development of multi-threaded algorithms in VTK. Multiple R&D efforts are leading to the creation of an infrastructure to support next-generation multi-threaded parallel algorithm development in VTK. These efforts are based on modern parallel libraries such as Intel TBB and Inria KAAPI. The main goal of this project will be the development of algorithms that leverage this infrastructure. The focus will be on upgrading existing core algorithms such as iso-surfacing, clipping, cutting, warping etc. to be parallel.<br />
<br />
'''Expected results''': A number of algorithms that execute in parallel using shared memory. Scalability of new algorithms will have to be measured and documented. Development of regression tests and examples will also be expected.<br />
<br />
'''Prerequisites''': Experience in C++ and multi-threaded code development. Understanding of core visualization algorithms and data structures. Some experience in VTK ideally but not necessary.<br />
<br />
'''Mentor''': Berk Geveci (berk dot geveci at kitware dot com)</div>Berkhttps://public.kitware.com/Wiki/index.php?title=ParaView/Users_Guide/Table_Of_Contents&diff=51350ParaView/Users Guide/Table Of Contents2013-01-30T21:22:04Z<p>Berk: /* ParaView User's Guide (master) */</p>
<hr />
<div>{{saved_book<br />
|title=ParaView User's Guide (v3.10)<br />
|subtitle=How to unleash the beast!<br />
|cover-image=<br />
|cover-color=<br />
}}<br />
<br />
== ParaView User's Guide (master) ==<br />
;Introduction<br />
:[[ParaView/Users Guide/Introduction|About Paraview]] <br />
::<!--[[ParaView/Users Guide/Introduction#What_is_ParaView?|What is ParaView]]-->What is ParaView?<br />
::<!--[[ParaView/Users Guide/Introduction#User_Interface|Getting your bearings in the UI]]-->Getting your bearings in the UI<br />
::<!--[[ParaView/Users Guide/Introduction#Basics_of_Visualization|Basics of Visualization]]-->Basics of Visualization<br />
::<!--[[ParaView/Users Guide/Introduction#Persistent_Sessions|Batch Processing]]-->Batch Processing<br />
::<!--[[ParaView/Users Guide/Introduction#Client/Server_Visualization|Client/Server Visualization]]-->Client/Server Vis<br />
<br />
;Loading Data<br />
:[[ParaView/Users Guide/Loading Data|Data Ingestion]]<br />
::<!--[[ParaView/Users Guide/Loading Data#File_Open_Dialog|Opening Data]]-->Opening Data<br />
::<!--[[ParaView/Users Guide/Loading Data#File_Formats|File Formats]]-->File Formats<br />
<br />
;Understanding Data<br />
:[[ParaView/Users Guide/VTK Data Model| VTK Data Model]] <br />
:[[ParaView/Users Guide/Information Panel|Information Panel]]<br />
:[[ParaView/Users_Guide/Statistics_Inspector|Statistics Inspector]]<br />
:[[ParaView/Users_Guide/Memory_Inspector|Memory Inspector]]<br />
<br />
;Displaying Data<br />
:[[ParaView/Displaying Data|Views, Representations and Color Mapping]]<br />
::<!--[[ParaView/Displaying Data#Understanding_Views|Introduction to Views]]-->Introduction to Views<br />
::<!--[[ParaView/Displaying_Data#Multiple_Views | Multiple Views]]-->Multiple Views<br />
::<!--[[ParaView/Displaying_Data#Types_of_Views | View Types]]-->View Types<br />
::<!-- -->About Color Mapping<br />
<br />
;Filtering Data<br />
:[[ParaView/UsersGuide/Filtering Data|Rationale]]<br />
:[[ParaView/UsersGuide/Filter Parameters|Filter Parameters]]<br />
:[[ParaView/UsersGuide/Manipulating the Pipeline|The Pipeline]]<br />
:[[ParaView/UsersGuide/Filter Categories|Filter Categories]]<br />
:[[ParaView/UsersGuide/Recommendations|Best Practices]]<br />
:[[ParaView/UsersGuide/Macros|Custom Filters aka Macro Filters]]<br />
<br />
;Quantitative Analysis<br />
:[[ParaView/Users_Guide/Quantitative Analysis|Drilling Down]]<br />
:[[ParaView/Users_Guide/Python_Programmable_Filter|Python Programmable Filter]]<br />
:[[ParaView/Users_Guide/Calculator|Calculator]]<br />
:[[ParaView/Users Guide/Python Calculator|Python Calculator]]<br />
:[[ParaView/Users Guide/Spreadsheet View|Spreadsheet View]]<br />
:[[ParaView/Users Guide/Selection|Selection]]<br />
:[[ParaView/Users Guide/Query Data|Querying for Data]]<br />
:[[ParaView/Users Guide/Histogram|Histogram]]<br />
:[[ParaView/Users Guide/Plotting and Probing Data|Plotting and Probing Data]]<br />
<br />
;Saving Data<br />
:[[ParaView/Users Guide/Saving Data|Saving Data]]<br />
::<!--[[ParaView/Users Guide/Saving Data#Save_raw_data|Save Raw Data]]-->Save Raw Data<br />
::<!--[[ParaView/Users Guide/Saving Data#Save_screenshots|Save Screenshots]]-->Save Screenshots<br />
::<!--[[ParaView/Users Guide/Saving Data#Save_Animation|Save Movies]]-->Save Movies<br />
::<!--[[ParaView/Users Guide/Saving Data#Save_geometries|Save Geometries]]-->Save Geometries<br />
:[[Exporting_Scenes | Exporting Scenes]]<br />
<br />
;3D Widgets<br />
:[[Users Guide Widgets#|Manipulating data in the 3D view]]<br />
<br />
;Annotation<br />
:[[Users Guide Annotation|Annotation]]<br />
::<!--[[Users Guide Annotation#Scalar Bar|Scalar Bar]]-->Scalar Bar<br />
::<!--[[Users Guide Annotation#Orientation Axes|Orientation Axes]]-->Orientation Axes<br />
::<!--[[Users Guide Annotation#Text Display|Text]]-->Text<br />
::<!--[[Users Guide Annotation#Annotate Time Filter|Temporal Annotation]]-->Temporal Annotation<br />
::Misc Sources<br />
::Ruler and Cube Axis<br />
<br />
;Animation<br />
:[[ParaView/Users Guide/Animation | Animation View]]<br />
:<!--[[ParaView/Users Guide/Animation#Animating_the_Camera]] --> Animating the Camera<br />
<br />
;Comparative Visualization<br />
:[[ParaView/Users_Guide/Comparative_Visualization|Comparative Views]]<br />
<br />
;Remote and Parallel Large Data Visualization<br />
:[[Users Guide Client-Server Visualization|Parallel ParaView]]<br />
:[[ParaView/Users Guide/Starting Parallel Servers | Starting the Server(s)]]<br />
:[[ParaView/Users Guide/Establishing Connections | Connecting to the Server]]<br />
:[[ParaView/Distributing_Server_Configuration_Files | Distributing/Obtaining Server Connection Configurations]]<br />
<br />
;Parallel Rendering and Large Displays<br />
:[[ParaView/Users Guide/Parallel Rendering Intro|About Parallel Rendering]]<br />
:[[ParaView/Users Guide/Parallel Rendering | Parallel Rendering]]<br />
:[[ParaView/Users Guide/Tiled Display|Tile Display Walls]]<br />
:[[ParaView/Users Guide/CAVE_Display|CAVE Displays]]<br />
<br />
;Scripted Control<br />
:[[ParaView/Users Guide/Scripting ParaView|Interpreted ParaView]]<br />
:[[ParaView/Python_Scripting | Python Scripting]]<br />
:[[Python_GUI_Tools | Tools for Python Scripting]]<br />
:[[ParaView/Users_Guide/Batch Processing | Batch Processing]]<br />
<br />
;In-Situ/CoProcessing<br />
: [[CoProcessing]]<br />
: [[Coprocessing_example | C++ CoProcessing example]]<br />
: [[Python_coprocessing_example | Python CoProcessing Example ]]<br />
<br />
;Plugins<br />
:[[ParaView/Users_Guide/Plugins | What are Plugins?]]<br />
:[[ParaView/Users_Guide/Included Plugins| Included Plugins]]<br />
:[[ParaView/Users_Guide/Loading Plugins | Loading Plugins]]<br />
<br />
;Appendix<br />
:[[ParaView/Users Guide/Command line arguments | Command Line Arguments]] <br />
:[[ParaView/Users_Guide/Settings | Application Settings]]<br />
:[[ParaView/Users Guide/List of readers | List of Readers]] <br />
:[[ParaView/Users Guide/Sources | List of Sources]] <br />
:[[ParaView/Users Guide/List of filters | List of Filters]] <br />
:[[ParaView/Users Guide/List of writers | List of Writers]]<br />
:[[ParaView:Build_And_Install | How to build/compile/install]]<br />
:[[ParaView_And_Mesa_3D | Building ParaView with Mesa3D]]<br />
:[[Writing ParaView Readers | How to write parallel VTK readers]]</div>Berkhttps://public.kitware.com/Wiki/index.php?title=ParaView&diff=47328ParaView2012-06-01T13:40:53Z<p>Berk: /* Compile/Install */</p>
<hr />
<div><center>[[image:pvsplash1.png]]</center><br />
<br />
<br />
<br />
ParaView is an open-source, multi-platform application designed to visualize data sets of varying sizes from small to very large. The goals of the ParaView project include developing an open-source, multi-platform visualization application that supports distributed computational models to process large data sets. It has an open, flexible, and intuitive user interface. Furthermore, ParaView is built on an extensible architecture based on open standards. ParaView runs on distributed and shared memory parallel as well as single processor systems and has been successfully tested on Windows, Linux, Mac OS X, IBM Blue Gene, Cray XT3 and various Unix workstations and clusters. Under the hood, ParaView uses the Visualization Toolkit as the data processing and rendering engine and has a user interface written using the Qt cross-platform application framework.<br />
<br />
'''The ParaView Users Guide is online here: [http://paraview.org/Wiki/ParaView/Users_Guide/Table_Of_Contents The ParaView Users Guide]'''<br />
<br />
The goal of this Wiki is to provide up-to-date documentation maintained by the developer and user communities. As such, we welcome volunteers that would like to contribute. If you are interested in contributing, please contact us on the ParaView mailing list http://public.kitware.com/mailman/listinfo/paraview.<br />
<br />
'''For new users, download and install the [http://www.paraview.org/New/download.html ParaView binaries] for your local computer, and then read [[The ParaView Tutorial]]. ''' Additional tutorials are located under [[#Books and Tutorials]] below.<br />
<br />
You can find more information about ParaView on the ParaView web site: http://paraview.org. For more help, including a list of all sources and filters, check out http://paraview.org/New/help.html and http://paraview.org/OnlineHelpCurrent.<br />
<br />
==ParaView In Use==<br />
* [[ParaView In Action]]<br />
: Some examples of how ParaView is used<br />
<br />
* [http://flickr.com/groups/paraview/pool/ ParaView Screenshots]<br />
: Screenshots generated by ParaView<br />
<br />
* [[ParaView/HPC Installations|HPC Installations]]<br />
: Links to documentation of ParaView installations on various HPC sites<br />
<br />
== Documentation ==<br />
{| border="0" align="center" width="98%" valign="top" cellspacing="7" cellpadding="2"<br />
|-<br />
! width="33%"|<br />
! |<br />
! width="33%"|<br />
! |<br />
! width="33%"|<br />
|- <br />
|valign="top"|<br />
<br />
===Compile/Install===<br />
----<br />
* [http://paraview.org/paraview/resources/software.php Download ParaView]<br />
: Instructions for downloading source as well as pre-compiled binaries for common platforms.<br />
* [[ParaView Release Notes]]<br />
: Collection of release notes for official ParaView releases.<br />
* [[ParaView/Git| Git Instructions]]<br />
: Git is the revision control system that ParaView uses. If you would like to have the bleeding edge version of ParaView, or you would like to contribute code, this link describes the method you must use to get the code.<br />
* [[ParaView:Build And Install|Building and Installation instructions]]<br />
: Compiling and installing ParaView from source.<br />
* [[ParaView And Mesa 3D]]<br />
: Building ParaView with [http://www.mesa3d.org Mesa 3D].<br />
* [[ParaView Binaries | ParaView Binaries Build Information]]<br />
: Information about the official ParaView builds and versions of various dependencies used.<br />
<br />
===Server Setup===<br />
----<br />
* [[Setting up a ParaView Server| ParaView Server Setup]]<br />
:Configuring your cluster to act as a ParaView server.<br />
* [[Starting the server| ParaView Server Startup Using GUI]]<br />
:Using the ParaView client to start the servers.<br />
* [[ParaView:Server Configuration| Server Configuration]] <br />
:Customizing server startup and connection processes using XML-based configuration scripts.<br />
* [[ParaView/Distributing Server Configuration Files|Distributing Server Configuration Files]]<font color="green">*new in 3.14</font><br />
:Strategies for distributing server configuration xmls.<br />
* [[Reverse_connection_and_port_forwarding| Port forwarding]]<br />
:To run ParaView on clusters with head nodes - compute nodes<br />
* [[Configuring Server Environment Using *.pvx XML Files]]<br />
:Configure your cluster environment such as DISPLAY, or Cave settings using *.pvx xml files.<br />
* [http://www.iac.es/sieinvens/siepedia/pmwiki.php?n=HOWTOs.ParaviewInACluster Cluster Configuration to run ParaView]<br />
:A guide for configuring a cluster to run ParaView<br />
<br />
===Importing Data===<br />
----<br />
* [[Generating data]]<br />
:How to write out data in a format that Paraview understands<br />
* [[Data formats]]<br />
: More information on data formats ParaView supports and how to load them.<br />
* [[Writing ParaView Readers]]<br />
:How to write a VTK reader that will read your data directly into ParaView.<br />
<br />
===Finding Data===<br />
----<br />
* [[Find Data using Queries]] <font color="green">* new in 3.8 </font><br />
: Selecting and focusing on subset of a dataset using queries.<br />
* [[Data Selection]]<br />
: Selecting and focusing on subset of a dataset.<br />
<br />
===Analyzing Data===<br />
----<br />
* [[Parameter Study|Parameter Study (Comparative Visualization)]] <font color="green">* improved in 3.8</font><br />
: Creating visualizations to compare effects for change in parameter(s).<br />
* [[Statistical analysis]]<br />
: Computing statistics and using them to assess datasets.<br />
<br />
===Animation===<br />
----<br />
* [[Animating legacy VTK file series]]<br />
: Animating file series.<br />
* [[Disconnecting from server while still saving an animation|Unattended saving of animation]]<br />
: Saving animations on the server without client connection.<br />
* [[Animation View]]<br />
: Using ''Animation View'' to setup animations.<br />
* [[Animating the Camera]]<br />
: Creating animations involving camera movements.<br />
<br />
===Plugins===<br />
----<br />
* [[Plugin HowTo | Extending ParaView Using Plugins]]<br />
:Using and writing new plugins to extend ParaView's functionality.<br />
* [[Extending ParaView at Compile Time]]<br />
:Including extensions into ParaView at compile time.<br />
* [http://pluginwizard.mirarco.org/ Plugin Wizard]<br />
:A simple wizard application developed by MIRARCO that provides boilerplate code for some of the most common plugin types.<br />
* [[User Created Plugins]] <br />
:Please post plugins that you have created that may be useful for other users.<br />
* [[Writing Custom Applications]] <font color="green">* new in 3.8</font><br />
: Writing custom applications based on ParaView.<br />
* [[ParaView:Plugin Deployment with Development Installs|Plugin Deployment with Development Installs]]<br />
: Building plugins for deployment with Released ParaView binaries.<br />
|bgcolor="#CCCCCC"|<br />
|valign="top"|<br />
<br />
===Python Scripting===<br />
----<br />
* [[ParaView/EnvironmentSetup|Environment Setup]]<br />
* [[ParaView/Python Scripting|Python Scripting]] <font color="green">* updated to 3.6</font><br />
: Scripting ParaView using python<br />
* [[Python Programmable Filter]]<br />
: Generating/Processing data using python.<br />
* [[Python GUI Tools]] <font color="green">* updated for 3.10</font><br />
: Using the python shell interface in paraview including generating python trace.<br />
* Python [[Python recipes|recipes]] for ParaView<br />
: Collection of python scripts for some common tasks.<br />
* [[SNL ParaView 3 Python Tutorials]]<br />
: Beginning and advanced tutorial sets, each presented as 2 hour classes by Sandia National Laboratories<br />
<br />
===GUI Features===<br />
----<br />
* [[Color Palettes]]<br />
: Creating visualizations for Print and Screen.<br />
* [[Colormaps]]<br />
: Details of ParaView's xml colormap file format and collections of colormaps for use with ParaView. <br />
* [[Camera and Property Linking]]<br />
: Synchronizing filters, clip planes, camera etc.<br />
* [[ParaView Settings Files]]<br />
: The locations where ParaView saves settings.<br />
* [[Custom Filters]]<br />
: Packaging pipelines into a single composite.<br />
* [[Image Compressor Configuration]]<br />
: How to configure ParaView's image compressor for use during remote rendering.<br />
* [[Sortable spreadsheet view]] <font color="green">* new feature for 3.10</font><br />
: What can be done with the spreadsheet column sorting and how it works<br />
* [[Space Navigator]] <font color="green">* new feature for 3.10</font><br />
: Using ParaView with Space Navigator<br />
* [[ParaView/UI/TextFinder|Text Finder]] <font color="green">* new feature for 3.14</font><br />
: Searching in long lists and tables in the ParaView GUI.<br />
* [[ParaView/UI/CopyPaste|Copy/Paste Friendly]] <font color="green">* new feature for 3.14</font><br />
: Using ParaView Copy/Paste inside information tab and spreadsheet view<br />
<br />
===Other Features===<br />
----<br />
* [[Restarted Simulation Readers]]<br />
: Loading restarted data for different file formats.<br />
* [[Exporting Scenes]]<br />
: Exporting scenes as VRML, X3D etc.<br />
* [[Backwards compatibility in state files]]<br />
: Backwards compatibility for ParaView state files (*.pvsm).<br />
* [[CoProcessing]] <font color="green">* new in 3.8</font><br />
: Information on using ParaView for in situ visualization/coprocessing (still in beta).<br />
* [[ParaView/Collaboration| Collaboration]] <font color="green">* new in 3.14</font><br />
: Information on the changes that have been made under the hood to support and improve collaboration.<br />
<br />
=== Books and Tutorials ===<br />
----<br />
* [http://www.kitware.com/products/books.html The ParaView Guide]<br />
: The official ParaView guide available from Kitware. [[Book Errata]]<br />
* [[ParaView/Users_Guide/Table_Of_Contents | ParaView Users' Guide]]<br />
: The newly revised official ParaView guide.<br />
* [[The ParaView Tutorial]]<br />
: An introductory and comprehensive tutorial.<br />
*[[Sixth OpenFOAM workshop]]<br />
: Slides and data from the Advanced ParaView tutorial.<br />
* ParaView Videos on Channel 9<br />
: [http://channel9.msdn.com/Shows/The+HPC+Show/Open-source-HPC-code-Episode-22-Running-Paraview-on-Windows-HPC-Server Using ParaView in Windows HPC Server]<br />
: [http://channel9.msdn.com/Shows/The+HPC+Show/Open-source-HPC-code-Episode-21-An-Introduction-to-Paraview Introduction to ParaView]<br />
*[[IEEE Vis10 DIY Vis Application - ParaView]]<br />
: Tutorial slides and code for the IEEE Vis DIY Vis Applications, ParaView section.<br />
* [[SC10 Coprocessing Tutorial]]<br />
: Use of ParaView's coprocessing API for ''in-situ'' visualization.<br />
* [[IEEE Vis09 Revise Workshop]]<br />
: Description of ParaView's reconfigurable client application infrastructure - aka 'branding'<br />
* [[IEEE Vis09 ParaView Tutorial]]<br />
: Slides for the advanced topics tutorial by Sandia, Kitware, and LANL.<br />
* [[IEEE Cluster 2009 ParaView Tutorial]]<br />
: Slides on topics for installing and using ParaView on visualization clusters.<br />
* [[SNL ParaView 3 Tutorials]]<br />
: Beginning and advanced tutorial sets, each presented as a two-hour class by Sandia National Laboratories<br />
* [[IEEE Vis08 ParaView Tutorial]]<br />
: Slides for the advanced topics tutorial by Sandia, Kitware, and CSCS.<br />
* [https://visualization.hpc.mil/paraview HPCMP DAAC - Information & Tutorials on ParaView ]. <br />
: This Wiki is full of useful information and tutorials about ParaView.<br />
* [[howtos|Howtos]]<br />
: These howtos are instructions for some common operations.<br />
* [[Related Publications]]<br />
: ParaView related books, articles and papers<br />
* [[ParaView 2 Tutorials]]<br />
<br />
<br />
|bgcolor="#CCCCCC"|<br />
|valign="top"|<br />
<br />
===Design & Implementation===<br />
----<br />
* [[Testing design]]<br />
: ParaView GUI Testing framework.<br />
* [[Block Hierarchy Meta Data]]<br />
: Providing details about blocks, hierarchies, assemblies etc. to the client.<br />
* [[Multiple views]]<br />
: Details on handling multiple views in client-server framework.<br />
* [[Composite Datasets in VTK|Composite Datasets]]<br />
: Dealing with composite datasets in VTK.<br />
* [[Representations and Views]]<br />
: Understanding ParaView's views and representations.<br />
* [[Time in ParaView]]<br />
: Understanding Time implementation.<br />
* [[Cross compiling ParaView3 and VTK|Cross-compiling ParaView]]<br />
: Compiling ParaView and VTK on BlueGene and Cray Xt3/Catamount.<br />
* [[Selection Implementation in VTK and ParaView III]]<br />
* [[Suggested online help documentation changes]]<br />
: Suggestions for online help documentation changes.<br />
* [[ServerManager XML Hints]]<br />
: A place to document ServerManager configuration XML hints.<br />
<br />
===ParaView based Applications===<br />
----<br />
* [[StreamingParaView]]<br />
: Documentation about the StreamingParaView application.<br />
<br />
===Web Visualization with ParaView===<br />
----<br />
* [[ParaViewWeb | ParaView Web Visualization Framework]]<br />
: Documentation for the ParaView Web Visualization Framework<br />
<br />
===Plugins Distributed with ParaView===<br />
----<br />
:[[ParaView/Users_Guide/Included Plugins| Included Plugins]]<br />
<br />
===Community Contributed Plugins===<br />
----<br />
:[[ParaView/Guidelines for Contributing Plugins | Guidelines for Contributing Plugins]]<br />
<br />
===Miscellaneous===<br />
----<br />
* [[terminology map|Terminology Disambiguation]]<br />
* [http://kitware.com/products/thesource.html The Kitware Source]<br />
: Quarterly newsletter for developers designed to deliver detailed technical articles related to Kitware's open source products including ParaView.<br />
* [http://paraview.org/New/help.html More information about ParaView]<br />
* [[Terminology map | Real-world concept -> ParaView terminology map]]<br />
: Often new users may say "Surely ParaView can do X... but I can't find it!". This terminology map should help!<br />
<br />
===Developers Corner===<br />
----<br />
====Mailing List====<br />
The developers mailing list is here: http://public.kitware.com/mailman/subscribe/paraview-developers<br />
This should be used for questions about modifying the ParaView code, not using ParaView.<br />
<br />
====Plugin Development====<br />
* [[Paraview_Make building Paraview plugin optional|Make building ParaView plugin optional]]<br />
<br />
====Handy Developer Info====<br />
* [[ParaView/Developer_Info | Developer Information]]<br />
<br />
====Release Testing====<br />
* [[Release Testing]]<br />
<br />
{{ParaView/Template/Footer}}<br />
|}</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration/Removal_of_Methods_for_Manipulating_Update_Extent&diff=46504VTK/VTK 6 Migration/Removal of Methods for Manipulating Update Extent2012-04-10T18:15:25Z<p>Berk: /* Removal of vtkDataObject Methods for Manipulating Update Extent */</p>
<hr />
<div>= Removal of vtkDataObject Methods for Manipulating Update Extent =<br />
<br />
VTK 6 introduces a number of backwards-incompatible changes. The reasons behind these changes are described in more detail [[VTK/VTK 6 Migration/Overview|here]]. One of these changes is the removal of all pipeline related methods from vtkDataObject. Among these methods are those that manipulated the update extent. These are<br />
<br />
* SetUpdateExtent(int piece, int numPieces, int ghostLevel)<br />
* SetUpdateExtent(int piece, int numPieces)<br />
* SetUpdateExtent(int extent[6])<br />
* SetUpdateExtent(int x0, int x1, int y0, int y1, int z0, int z1)<br />
* int* GetUpdateExtent()<br />
* GetUpdateExtent(int& x0, int& x1, int& y0, int& y1, int& z0, int& z1)<br />
* GetUpdateExtent(int extent[6])<br />
* SetUpdateExtentToWholeExtent()<br />
<br />
These were convenience functions that forwarded to the executive. To ease the transition to VTK 6, we added similar convenience methods to vtkAlgorithm. These functions are as follows.<br />
<br />
* SetUpdateExtent(int port, int connection, int piece, int numPieces, int ghostLevel);<br />
* SetUpdateExtent(int piece, int numPieces, int ghostLevel);<br />
* SetUpdateExtent(int port, int connection, int extent[6]);<br />
* SetUpdateExtent(int extent[6]);<br />
* SetUpdateExtentToWholeExtent(int port, int connection);<br />
* SetUpdateExtentToWholeExtent();<br />
* int* GetUpdateExtent()<br />
* GetUpdateExtent(int& x0, int& x1, int& y0, int& y1, int& z0, int& z1)<br />
* GetUpdateExtent(int extent[6])<br />
* GetUpdatePiece()<br />
* GetUpdateNumberOfPieces()<br />
* GetUpdateGhostLevel()<br />
<br />
== Example 1 ==<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
vtkDataObject* dobj = aFilter->GetOutput();<br />
dobj->UpdateInformation();<br />
dobj->SetUpdateExtent(0 /*piece*/, 2 /*number of pieces*/);<br />
dobj->Update();<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
aFilter->UpdateInformation();<br />
aFilter->SetUpdateExtent(0 /*piece*/, 2 /*number of pieces*/, 0 /*ghost levels*/);<br />
aFilter->Update();<br />
</source><br />
<br />
== Example 2 ==<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
vtkDataObject* dobj = aFilter->GetOutput();<br />
dobj->UpdateInformation();<br />
int updateExtent[6] = {0, 10, 0, 10, 0, 10};<br />
dobj->SetUpdateExtent(updateExtent);<br />
dobj->Update();<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
aFilter->UpdateInformation();<br />
int updateExtent[6] = {0, 10, 0, 10, 0, 10};<br />
aFilter->SetUpdateExtent(updateExtent);<br />
aFilter->Update();<br />
</source></div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK&diff=46493VTK2012-04-06T19:40:32Z<p>Berk: /* VTK 6.0 (pending) */</p>
<hr />
<div><center>http://public.kitware.com/images/logos/vtk-logo2.jpg</center><br />
<br /><br />
The Visualization ToolKit (VTK) is an open source, freely available software system for 3D computer graphics, image processing, and visualization used by thousands of researchers and developers around the world. VTK consists of a C++ class library and several interpreted interface layers including Tcl/Tk, Java, and Python. Professional support and products for VTK are provided by Kitware, Inc. ([http://www.kitware.com www.kitware.com]). VTK supports a wide variety of visualization algorithms including scalar, vector, tensor, texture, and volumetric methods; and advanced modeling techniques such as implicit modelling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation. In addition, dozens of imaging algorithms have been directly integrated to allow the user to mix 2D imaging / 3D graphics algorithms and data.<br />
<br />
<br />
== Learning VTK ==<br />
If you want to learn how to use or develop VTK, please see [[VTK/Learning_VTK | Learning VTK]]<br />
<br />
== Building VTK ==<br />
* Where can I [http://vtk.org/get-software.php download VTK]?<br />
<br />
* Where can I download a tarball of the [http://vtk.org/files/nightly/vtkNightlyDocHtml.tar.gz nightly HTML documentation]?<br />
<br />
* How do I build the [[VTK/BuildingDoxygen|Doxygen documentation]]?<br />
<br />
* [[VTK/Git|Using Git for VTK development]]<br />
<br />
* [[VTK/GitMSBuild|Using Git and MSBuild to build VTK]]<br />
<br />
* [[VTK/Build parameters | Build parameters]]<br />
<br />
* [[Making Development Environment without compiling source distribution]]<br />
<br />
== Extending VTK ==<br />
<br />
* Where can I get [[VTK Datasets]]?<br />
<br />
* [[VTK Classes|User-Contributed Classes]]<br />
<br />
* [[VTK Coding Standards]] <br />
<br />
* [[VTK cvs commit Guidelines]]<br />
<br />
* [[VTK Patch Procedure]] -- merge requests for the current release branch<br />
<br />
* [[VTK Scripts|Extending VTK with Scripts]]<br />
<br />
== Projects/ Tools that use VTK == <br />
<br />
* [[VTK Tools|VTK-Based Tools and Applications]]<br />
<br />
* What are some [[VTK Projects|projects using VTK]]?<br />
<br />
== Future VTK development ==<br />
<br />
* [[VTK 5.4 Release Planning]]<br />
<br />
* [[Proposed Changes to VTK | Proposed Changes to VTK]]<br />
<br />
== Troubleshooting ==<br />
<br />
* [[VTK FAQ|Frequently asked questions (FAQ)]]<br />
<br />
* [[VTK OpenGL|Common OpenGL troubles]]<br />
<br />
== Miscellaneous ==<br />
* [[VTK Related Job Opportunities|VTK Related Job Opportunities]]<br />
<br />
* [[VTK/Third Party Library Patrol | VTK 3rd Party Library Patrol]]<br />
<br />
* [[VTK/Meeting Minutes | Meeting Minutes]]<br />
<br />
== Summary of Changes ==<br />
<br />
==== VTK 5.0 ====<br />
<br />
* [[VTK/Tutorials/New_Pipeline | New Pipeline]]<br />
* [[VTKWidgets | VTK Widget Redesign]]<br />
<br />
==== VTK 5.2 ====<br />
<br />
* [[VTK/Java Wrapping | VTK Java Wrapping]]<br />
* [[VTK/Composite Data Redesign | Composite Data Redesign]]<br />
* [[VTKShaders | Shaders in VTK]]<br />
* [[VTK/VTKMatlab | VTK with Matlab]]<br />
* [[VTK/Time_Support | VTK Time support]]<br />
* [[VTK/Graph Layout | VTK Graph Layout]]<br />
* [[VTK/Depth_Peeling | VTK Depth Peeling]]<br />
* [[VTK/Using_JRuby | Using VTK with JRuby]]<br />
* [[VTK/Painters | Painters]]<br />
<br />
==== VTK 5.4 ====<br />
<br />
* [[VTK/Cray XT3 Compilation| Cray XT3 Compilation]]<br />
* [[VTK/Geovis vision toolkit | Geospatial and vision visualization support ]]<br />
<br />
==== VTK 5.6 ====<br />
<br />
* [[VTK/MultiPass_Rendering | VTK Multi-Pass Rendering]]<br />
* [[VTK/Multicore and Streaming | Multicore and Streaming]]<br />
* [[VTK/statistics | Statistics]]<br />
* [[VTK/Array Refactoring | Array Refactoring]]<br />
* [[VTK/3DConnexion Devices Support | 3DConnexion Devices Support]]<br />
* [[VTK/Charts | New Charts API]]<br />
* [[VTK/New CellPicker | New Cell Picker and Volume Picking (start Nov 2010, finish Feb 2010)]]<br />
<br />
==== VTK 5.8 ====<br />
<br />
* [[VTK/Polyhedron_Support | Polyhedron cells and MVC Interpolation]]<br />
* [http://visimp.cs.unc.edu/2010/10/26/reeb-graphs/ Reeb Graphs]<br />
* [[VTK/Closed Surface Clipping | Clipping of closed surfaces (start Mar 26, 2010, finish Apr 22, 2010)]]<br />
* [[VTK/Wrapper Update 2010 | New wrappers (start Apr 28, 2010)]]<br />
* [[VTK/Image Stencil Improvements | Improved image stencil support (start Nov 3, 2010)]]<br />
* [[VTK/MNI File Formats | MNI file formats]]<br />
* [[VTK/Release580 New Classes | List of New Classes]]<br />
<br />
==== VTK 5.10 (imminent) ====<br />
<br />
* [[VTK/improved unicode support | Change unicode readers/writers to register as codecs (finished Oct 29 2010)]]<br />
* [[VTK/Image Rendering Classes | New image rendering classes (start Dec 15 2010, finish Mar 15 2011)]]<br />
* [[VTK/Image Interpolators | Image interpolators (start Jun 20 2011, finish Aug 31 2011)]]<br />
* [[VTK/GSoC | Projects from Google Summer of Code 2011]]<br />
* [[VTK/Release5100 New Classes | List of new classes in 5.10]]<br />
<br />
==== VTK 6.0 (pending) ====<br />
<br />
* [[VTK/VTK_6_Migration_Guide | VTK 6 Migration Guide]]<br />
* [[VTK/Remove_VTK_4_Compatibility | Remove VTK 4 compatibility layer from pipeline]]<br />
* [[VTK/Modularization_Proposal | Modularization]]<br />
* [[VTK/Remove_BTX_ETX | Remove BTX/ETX markers from VTK code]]<br />
<br />
== News ==<br />
<br />
=== Development Process ===<br />
The VTK Community is [[VTK/Managing_the_Development_Process | upgrading its development process]]. We are doing this in response to the continuing and rapid growth of the toolkit. A VTK Architecture Review Board [[VTK/Architecture_Review_Board |VTK ARB]] is being put in place to provide strategic guidance to the community, and individuals are being identified as leaders in various VTK subsystems.<br />
<br />
Have a question or topic for the ARB to discuss about the future of VTK? First, please bring the topic to the [http://public.kitware.com/mailman/listinfo/vtk-developers VTK developers mailing list]. If the issue is not resolved there or needs further planning or direction, you may [[VTK/ARB/Meetings#Potential Topics|enter a suggested topic for discussion]].<br />
<br />
===[[VTK/NextGen|VTK NextGen]]=== <br />
We have started collecting works in progress as well as future ideas at [[VTK/NextGen|NextGen]]. Please add anything you are working on, would like to collaborate on, or would like to see in the future of VTK!<br />
<br />
== Wrapping ==<br />
<br />
* [[VTK/Java Wrapping|Java]]<br />
** [[VTK/Java Code Samples|Java code samples]]<br />
* [[VTK/Python Wrapping FAQ|Python]]<br />
** [[VTK/Python Wrapper Enhancement|Python wrapper enhancements]]<br />
<br />
== Developers Corner ==<br />
[[VTK/Developers Corner|Developers Corner]]<br />
<br />
<!-- <br />
== External Links ==<br />
dead link *[http://zorayasantos.tripod.com/vtk_csharp_examples VTK examples in C#] (Visual Studio 5.0 and .NET 2.0)<br />
--><br />
{{VTK/Template/Footer}}</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration/Removal_of_vtkPlot3DReader&diff=46492VTK/VTK 6 Migration/Removal of vtkPlot3DReader2012-04-06T19:38:41Z<p>Berk: Created page with "= Removal of vtkPlot3DReader = In VTK 6, we changed or removed a few algorithms that previously produced variable number of outputs, which is not supported in VTK 6. Such algori..."</p>
<hr />
<div>= Removal of vtkPlot3DReader =<br />
<br />
In VTK 6, we changed or removed a few algorithms that previously produced a variable number of outputs, which is not supported in VTK 6. Such algorithms need to produce vtkMultiBlockDataSet instead. As part of this change, we removed vtkPLOT3DReader. In VTK 6, you should use vtkMultiBlockPLOT3DReader instead.<br />
<br />
== Example 1 ==<br />
<br />
Replace<br />
<br />
<source lang="tcl"><br />
vtkPLOT3DReader pl3d<br />
pl3d SetXYZFileName "$VTK_DATA_ROOT/Data/combxyz.bin"<br />
pl3d SetQFileName "$VTK_DATA_ROOT/Data/combq.bin"<br />
pl3d Update<br />
vtkPlane plane<br />
eval plane SetOrigin [[pl3d GetOutput] GetCenter]<br />
plane SetNormal -0.287 0 0.9579<br />
</source><br />
<br />
with<br />
<br />
<source lang="tcl"><br />
vtkMultiBlockPLOT3DReader pl3d<br />
pl3d SetXYZFileName "$VTK_DATA_ROOT/Data/combxyz.bin"<br />
pl3d SetQFileName "$VTK_DATA_ROOT/Data/combq.bin"<br />
pl3d Update<br />
set output [[pl3d GetOutput] GetBlock 0]<br />
vtkPlane plane<br />
eval plane SetOrigin [$output GetCenter]<br />
plane SetNormal -0.287 0 0.9579<br />
</source><br />
<br />
Note the use of [[pl3d GetOutput] GetBlock 0]. It is also possible to create a pipeline that works on one block (or more) by using the composite data pipeline and vtkExtractBlock filter.</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration_Guide&diff=46491VTK/VTK 6 Migration Guide2012-04-06T19:38:05Z<p>Berk: </p>
<hr />
<div>[[VTK/VTK_6_Migration/Overview | Overview]]<br />
<br />
[[VTK/VTK_6_Migration/Replacement_of_SetInput | Replacement of SetInput() with SetInputData() and SetInputConnection()]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetProducerPort | Removal of GetProducerPort() from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_ GetPipelineInformation | Removal of GetPipelineInformation and GetExecutive from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_Update | Removal of Pipeline Update Methods from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_Execute | Removal of ExecuteInformation() and ExecuteData() from Algorithm Superclasses]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_SetExtentTranslator | Removal of SetExtentTranslator and GetExtentTranslator from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_CopyInformation | Removal of CopyInformation and CopyTypeSpecificInformation from vtkDataObject and vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_CopyInformationToPipeline | Removal of CopyInformationToPipeline and CopyInformationFromPipeline from vtkDataObject and sub-classes]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetEstimatedMemorySize | Removal of GetEstimatedMemorySize() method from vtkDataObject and vtkImageData ]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of SetWholeExtent | Removal of SetWholeExtent() from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of ShouldIReleaseData | Removal of ShouldIReleaseData() and ReleaseDataFlag methods from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of Methods for Manipulating Update Extent | Removal of vtkDataObject Methods for Manipulating Update Extent]]<br />
<br />
[[VTK/VTK_6_Migration/Change to Crop | Change to Crop() in vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Changes to Scalars Manipulation Functions | Changes to Scalars Manipulation Functions in vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Change to CopyOriginAndSpacingFromPipeline | Change to CopyOriginAndSpacingFromPipeline in vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Changes to SetAxisUpdateExtent | Changes to SetAxisUpdateExtent and GetAxisUpdateExtent in vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Change to AllocateOutputData | Change to AllocateOutputData() in vtkImageAlgorithm]]<br />
<br />
[[VTK/VTK_6_Migration/Changes to vtkProcrustesAlignmentFilter | Changes to vtkProcrustesAlignmentFilter]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of vtkPlot3DReader | Removal of vtkPlot3DReader]]</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration/Changes_to_vtkProcrustesAlignmentFilter&diff=46490VTK/VTK 6 Migration/Changes to vtkProcrustesAlignmentFilter2012-04-06T19:37:21Z<p>Berk: Created page with "= Changes to vtkProcrustesAlignmentFilter = In VTK 6, we changed or removed a few algorithms that previously produced variable number of outputs, which is not supported in VTK 6..."</p>
<hr />
<div>= Changes to vtkProcrustesAlignmentFilter =<br />
<br />
In VTK 6, we changed or removed a few algorithms that previously produced a variable number of outputs, which is not supported in VTK 6. Such algorithms need to produce vtkMultiBlockDataSet instead. As part of this change, we changed vtkProcrustesAlignmentFilter to take a multi-block dataset as input and produce a multi-block dataset as output.<br />
<br />
== Example 1 ==<br />
<br />
Replace:<br />
<br />
<source lang="tcl"><br />
vtkProcrustesAlignmentFilter procrustes<br />
procrustes SetNumberOfInputs 3<br />
procrustes SetInput 0 [sphere GetOutput]<br />
procrustes SetInput 1 [transformer1 GetOutput]<br />
procrustes SetInput 2 [transformer2 GetOutput]<br />
[procrustes GetLandmarkTransform] SetModeToRigidBody<br />
<br />
vtkPolyDataMapper map2a<br />
map2a SetInput [procrustes GetOutput 0]<br />
</source><br />
<br />
with:<br />
<br />
<source lang="tcl"><br />
vtkMultiBlockDataGroupFilter group<br />
group AddInputConnection [sphere GetOutputPort]<br />
group AddInputConnection [transformer1 GetOutputPort]<br />
group AddInputConnection [transformer2 GetOutputPort]<br />
<br />
vtkProcrustesAlignmentFilter procrustes<br />
procrustes SetInputConnection [group GetOutputPort]<br />
[procrustes GetLandmarkTransform] SetModeToRigidBody<br />
procrustes Update<br />
<br />
vtkPolyDataMapper map2a<br />
map2a SetInputData [[procrustes GetOutput] GetBlock 0]<br />
</source><br />
<br />
Note the use of [[procrustes GetOutput] GetBlock 0]. It is also possible to create a pipeline that works on one block (or more) by using the composite data pipeline and vtkExtractBlock filter.</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration_Guide&diff=46489VTK/VTK 6 Migration Guide2012-04-06T19:36:30Z<p>Berk: </p>
<hr />
<div>[[VTK/VTK_6_Migration/Overview | Overview]]<br />
<br />
[[VTK/VTK_6_Migration/Replacement_of_SetInput | Replacement of SetInput() with SetInputData() and SetInputConnection()]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetProducerPort | Removal of GetProducerPort() from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_ GetPipelineInformation | Removal of GetPipelineInformation and GetExecutive from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_Update | Removal of Pipeline Update Methods from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_Execute | Removal of ExecuteInformation() and ExecuteData() from Algorithm Superclasses]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_SetExtentTranslator | Removal of SetExtentTranslator and GetExtentTranslator from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_CopyInformation | Removal of CopyInformation and CopyTypeSpecificInformation from vtkDataObject and vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_CopyInformationToPipeline | Removal of CopyInformationToPipeline and CopyInformationFromPipeline from vtkDataObject and sub-classes]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetEstimatedMemorySize | Removal of GetEstimatedMemorySize() method from vtkDataObject and vtkImageData ]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of SetWholeExtent | Removal of SetWholeExtent() from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of ShouldIReleaseData | Removal of ShouldIReleaseData() and ReleaseDataFlag methods from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of Methods for Manipulating Update Extent | Removal of vtkDataObject Methods for Manipulating Update Extent]]<br />
<br />
[[VTK/VTK_6_Migration/Change to Crop | Change to Crop() in vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Changes to Scalars Manipulation Functions | Changes to Scalars Manipulation Functions in vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Change to CopyOriginAndSpacingFromPipeline | Change to CopyOriginAndSpacingFromPipeline in vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Changes to SetAxisUpdateExtent | Changes to SetAxisUpdateExtent and GetAxisUpdateExtent in vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Change to AllocateOutputData | Change to AllocateOutputData() in vtkImageAlgorithm]]<br />
<br />
[[VTK/VTK_6_Migration/Changes to vtkProcrustesAlignmentFilter | Changes to vtkProcrustesAlignmentFilter]]</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration/Change_to_AllocateOutputData&diff=46488VTK/VTK 6 Migration/Change to AllocateOutputData2012-04-06T19:35:43Z<p>Berk: Created page with "= Change to AllocateOutputData() in vtkImageAlgorithm = In VTK 6, we changed the signature of vtkImageAlgorithm’s two AllocateOutputData() methods from * void AllocateOutputD..."</p>
<hr />
<div>= Change to AllocateOutputData() in vtkImageAlgorithm =<br />
<br />
In VTK 6, we changed the signature of vtkImageAlgorithm’s two AllocateOutputData() methods from<br />
<br />
* void AllocateOutputData(vtkImageData *out, int *uExtent);<br />
* vtkImageData *AllocateOutputData(vtkDataObject *out);<br />
<br />
to<br />
<br />
* void AllocateOutputData(vtkImageData *out, vtkInformation* outInfo, int *uExtent);<br />
* vtkImageData *AllocateOutputData(vtkDataObject *out, vtkInformation *outInfo);<br />
<br />
This change was made to ensure that AllocateOutputData() has direct access to the output pipeline information, from which it extracts meta-data about the update extent, scalar type and number of scalar components. Even though the algorithm can directly access its pipeline information through the executive, this is discouraged because the executives are free to pass other information vectors to RequestData(). Note that if you overrode Execute() or ExecuteData(vtkDataObject *output), you don’t have access to the output information. In order to make this easier, we added a new virtual function that takes the output information as an argument:<br />
<br />
* ExecuteData(vtkDataObject *output, vtkInformation* outInfo);<br />
<br />
== Example 1 ==<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
void vtkImageGridSource::ExecuteData(vtkDataObject *output)<br />
{<br />
vtkImageData *data = this->AllocateOutputData(output);<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
void vtkImageGridSource::ExecuteData(vtkDataObject *output,<br />
vtkInformation* outInfo)<br />
{<br />
vtkImageData *data = this->AllocateOutputData(output, outInfo);<br />
</source></div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration_Guide&diff=46487VTK/VTK 6 Migration Guide2012-04-06T19:34:36Z<p>Berk: </p>
<hr />
<div>[[VTK/VTK_6_Migration/Overview | Overview]]<br />
<br />
[[VTK/VTK_6_Migration/Replacement_of_SetInput | Replacement of SetInput() with SetInputData() and SetInputConnection()]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetProducerPort | Removal of GetProducerPort() from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_ GetPipelineInformation | Removal of GetPipelineInformation and GetExecutive from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_Update | Removal of Pipeline Update Methods from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_Execute | Removal of ExecuteInformation() and ExecuteData() from Algorithm Superclasses]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_SetExtentTranslator | Removal of SetExtentTranslator and GetExtentTranslator from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_CopyInformation | Removal of CopyInformation and CopyTypeSpecificInformation from vtkDataObject and vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_CopyInformationToPipeline | Removal of CopyInformationToPipeline and CopyInformationFromPipeline from vtkDataObject and sub-classes]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetEstimatedMemorySize | Removal of GetEstimatedMemorySize() method from vtkDataObject and vtkImageData ]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of SetWholeExtent | Removal of SetWholeExtent() from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of ShouldIReleaseData | Removal of ShouldIReleaseData() and ReleaseDataFlag methods from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of Methods for Manipulating Update Extent | Removal of vtkDataObject Methods for Manipulating Update Extent]]<br />
<br />
[[VTK/VTK_6_Migration/Change to Crop | Change to Crop() in vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Changes to Scalars Manipulation Functions | Changes to Scalars Manipulation Functions in vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Change to CopyOriginAndSpacingFromPipeline | Change to CopyOriginAndSpacingFromPipeline in vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Changes to SetAxisUpdateExtent | Changes to SetAxisUpdateExtent and GetAxisUpdateExtent in vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Change to AllocateOutputData | Change to AllocateOutputData() in vtkImageAlgorithm]]</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration/Changes_to_SetAxisUpdateExtent&diff=46486VTK/VTK 6 Migration/Changes to SetAxisUpdateExtent2012-04-06T19:33:43Z<p>Berk: </p>
<hr />
<div>= Changes to SetAxisUpdateExtent and GetAxisUpdateExtent in vtkImageData =<br />
<br />
VTK 6 introduces a number of backwards-incompatible changes. The reasons behind these changes are described in more detail [[VTK/VTK 6 Migration/Overview | here]]. One of these changes is the removal of all pipeline related methods from vtkDataObject. Among these are several methods in vtkImageData that were used to facilitate meta-data management and memory allocation using pipeline meta-data. Since vtkImageData no longer has direct access to the pipeline information (see [[VTK/VTK 6 Migration/Removal_of_GetPipelineInformation | this document]]), these methods were changed to take the pipeline information as an argument.<br />
<br />
== Example 1 ==<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
cache->SetAxisUpdateExtent(axis, idx, idx);<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
int* updateExtent = vtkStreamingDemandDrivenPipeline::GetUpdateExtent(inInfo);<br />
<br />
…<br />
int axisUpdateExtent[6];<br />
cache->SetAxisUpdateExtent(axis, idx, idx, updateExtent, axisUpdateExtent);<br />
vtkStreamingDemandDrivenPipeline::SetUpdateExtent(inInfo, axisUpdateExtent);<br />
</source><br />
<br />
See vtkImageWriter.cxx for details<br />
<br />
== Example 2 ==<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
cache->GetAxisUpdateExtent(axis, min, max);<br />
</source><br />
with<br />
<br />
<source lang="cpp"><br />
int* updateExtent = vtkStreamingDemandDrivenPipeline::GetUpdateExtent(inInfo);<br />
cache->GetAxisUpdateExtent(axis, min, max, updateExtent);<br />
</source><br />
<br />
See vtkImageWriter.cxx for details</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration/Changes_to_SetAxisUpdateExtent&diff=46485VTK/VTK 6 Migration/Changes to SetAxisUpdateExtent2012-04-06T19:32:39Z<p>Berk: Created page with "= Changes to SetAxisUpdateExtent and GetAxisUpdateExtent in vtkImageData = VTK 6 introduces a number of backwards-incompatible changes. The reasons behind these changes are desc..."</p>
<hr />
<div>= Changes to SetAxisUpdateExtent and GetAxisUpdateExtent in vtkImageData =<br />
<br />
VTK 6 introduces a number of backwards-incompatible changes. The reasons behind these changes are described in more detail [[VTK/VTK 6 Migration/Overview | here]]. One of these changes is the removal of all pipeline related methods from vtkDataObject. Among these are several methods in vtkImageData that were used to facilitate meta-data management and memory allocation using pipeline meta-data. Since vtkImageData no longer has direct access to the pipeline information (see [[VTK/VTK 6 Migration/Removal_of_GetPipelineInformation | this document]]), these methods were changed to take the pipeline information as an argument.<br />
<br />
== Example 1 ==<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
cache->SetAxisUpdateExtent(axis, idx, idx);<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
int* updateExtent = vtkStreamingDemandDrivenPipeline::GetUpdateExtent(inInfo);<br />
<br />
…<br />
int axisUpdateExtent[6];<br />
cache->SetAxisUpdateExtent(axis, idx, idx, updateExtent, axisUpdateExtent);<br />
vtkStreamingDemandDrivenPipeline::SetUpdateExtent(inInfo, axisUpdateExtent);<br />
</source><br />
<br />
See vtkImageWriter.cxx for details<br />
<br />
== Example 2 ==<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
cache->GetAxisUpdateExtent(axis, min, max);<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
int* updateExtent = vtkStreamingDemandDrivenPipeline::GetUpdateExtent(inInfo);<br />
cache->GetAxisUpdateExtent(axis, min, max, updateExtent);<br />
</source><br />
<br />
See vtkImageWriter.cxx for details</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration_Guide&diff=46484VTK/VTK 6 Migration Guide2012-04-06T19:31:12Z<p>Berk: </p>
<hr />
<div>[[VTK/VTK_6_Migration/Overview | Overview]]<br />
<br />
[[VTK/VTK_6_Migration/Replacement_of_SetInput | Replacement of SetInput() with SetInputData() and SetInputConnection()]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetProducerPort | Removal of GetProducerPort() from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetPipelineInformation | Removal of GetPipelineInformation and GetExecutive from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_Update | Removal of Pipeline Update Methods from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_Execute | Removal of ExecuteInformation() and ExecuteData() from Algorithm Superclasses]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_SetExtentTranslator | Removal of SetExtentTranslator and GetExtentTranslator from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_CopyInformation | Removal of CopyInformation and CopyTypeSpecificInformation from vtkDataObject and vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_CopyInformationToPipeline | Removal of CopyInformationToPipeline and CopyInformationFromPipeline from vtkDataObject and sub-classes]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetEstimatedMemorySize | Removal of GetEstimatedMemorySize() method from vtkDataObject and vtkImageData ]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of SetWholeExtent | Removal of SetWholeExtent() from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of ShouldIReleaseData | Removal of ShouldIReleaseData() and ReleaseDataFlag methods from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of Methods for Manipulating Update Extent | Removal of vtkDataObject Methods for Manipulating Update Extent]]<br />
<br />
[[VTK/VTK_6_Migration/Change to Crop | Change to Crop() in vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Changes to Scalars Manipulation Functions | Changes to Scalars Manipulation Functions in vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Change to CopyOriginAndSpacingFromPipeline | Change to CopyOriginAndSpacingFromPipeline in vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Changes to SetAxisUpdateExtent | Changes to SetAxisUpdateExtent and GetAxisUpdateExtent in vtkImageData]]</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration/Change_to_Crop&diff=46483VTK/VTK 6 Migration/Change to Crop2012-04-06T19:30:26Z<p>Berk: Created page with "= Change to Crop() in vtkDataObject = VTK 6 introduces a number of backwards-incompatible changes. The reasons behind these changes are described in more detail [[VTK/VTK 6 Migr..."</p>
<hr />
<div>= Change to Crop() in vtkDataObject =<br />
<br />
VTK 6 introduces a number of backwards-incompatible changes. The reasons behind these changes are described in more detail [[VTK/VTK 6 Migration/Overview | here]]. One of these changes is the removal of all pipeline related functionality from vtkDataObject. Prior to VTK 6, vtkDataObject’s Crop() method used the update extent stored in the pipeline information. Since a data object no longer has access to the pipeline information, we change Crop() to take the update extent as an argument.<br />
<br />
== Example 1 ==<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
int vtkMyAlg::RequestData(vtkInformation*, vtkInformationVector**, <br />
vtkInformationVector* outInfoVec)<br />
{<br />
vtkImageData* output = this->GetOutput();<br />
// Do something with image data<br />
output->Crop();<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
int vtkMyAlg::RequestData(vtkInformation*, vtkInformationVector**, <br />
vtkInformationVector* outInfoVec)<br />
{<br />
vtkInformation* outInfo = outInfoVec->GetInformationObject(0);<br />
vtkImageData* output = vtkImageData::GetData(outInfo);<br />
// Do something with image data<br />
output->Crop(<br />
    outInfo->Get(vtkStreamingDemandDrivenPipeline::UPDATE_EXTENT()));<br />
</source></div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration_Guide&diff=46482VTK/VTK 6 Migration Guide2012-04-06T19:29:30Z<p>Berk: </p>
<hr />
<div>[[VTK/VTK_6_Migration/Overview | Overview]]<br />
<br />
[[VTK/VTK_6_Migration/Replacement_of_SetInput | Replacement of SetInput() with SetInputData() and SetInputConnection()]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetProducerPort | Removal of GetProducerPort() from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetPipelineInformation | Removal of GetPipelineInformation and GetExecutive from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_Update | Removal of Pipeline Update Methods from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_Execute | Removal of ExecuteInformation() and ExecuteData() from Algorithm Superclasses]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_SetExtentTranslator | Removal of SetExtentTranslator and GetExtentTranslator from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_CopyInformation | Removal of CopyInformation and CopyTypeSpecificInformation from vtkDataObject and vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_CopyInformationToPipeline | Removal of CopyInformationToPipeline and CopyInformationFromPipeline from vtkDataObject and sub-classes]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetEstimatedMemorySize | Removal of GetEstimatedMemorySize() method from vtkDataObject and vtkImageData ]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of SetWholeExtent | Removal of SetWholeExtent() from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of ShouldIReleaseData | Removal of ShouldIReleaseData() and ReleaseDataFlag methods from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of Methods for Manipulating Update Extent | Removal of vtkDataObject Methods for Manipulating Update Extent]]<br />
<br />
[[VTK/VTK_6_Migration/Change to Crop | Change to Crop() in vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Changes to Scalars Manipulation Functions | Changes to Scalars Manipulation Functions in vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Change to CopyOriginAndSpacingFromPipeline | Change to CopyOriginAndSpacingFromPipeline in vtkImageData]]</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration/Change_to_CopyOriginAndSpacingFromPipeline&diff=46481VTK/VTK 6 Migration/Change to CopyOriginAndSpacingFromPipeline2012-04-06T19:28:37Z<p>Berk: Created page with "= Change to CopyOriginAndSpacingFromPipeline in vtkImageData = VTK 6 introduces a number of backwards-incompatible changes. The reasons behind these changes are described in mor..."</p>
<hr />
<div>= Change to CopyOriginAndSpacingFromPipeline in vtkImageData =<br />
<br />
VTK 6 introduces a number of backwards-incompatible changes. The reasons behind these changes are described in more detail [[VTK/VTK 6 Migration/Overview | here]]. One of these changes is the removal of all pipeline related functionality from vtkDataObject. Prior to VTK 6, vtkDataObject’s CopyOriginAndSpacingFromPipeline() method used the origin and spacing meta-data stored in the pipeline information. Since a data object no longer has access to the pipeline information, we change CopyOriginAndSpacingFromPipeline() to take pipeline information as an argument.<br />
<br />
== Example 1 ==<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
int vtkMyAlg::RequestData(vtkInformation*, vtkInformationVector**, <br />
vtkInformationVector* outInfoVec)<br />
{<br />
vtkImageData* output = this->GetOutput();<br />
output->CopyOriginAndSpacingFromPipeline();<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
int vtkMyAlg::RequestData(vtkInformation*, vtkInformationVector**, <br />
vtkInformationVector* outInfoVec)<br />
{<br />
vtkInformation* outInfo = outInfoVec->GetInformationObject(0);<br />
vtkImageData* output = vtkImageData::GetData(outInfo);<br />
output->CopyOriginAndSpacingFromPipeline(outInfo);<br />
</source></div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration_Guide&diff=46480VTK/VTK 6 Migration Guide2012-04-06T19:27:47Z<p>Berk: </p>
<hr />
<div>[[VTK/VTK_6_Migration/Overview | Overview]]<br />
<br />
[[VTK/VTK_6_Migration/Replacement_of_SetInput | Replacement of SetInput() with SetInputData() and SetInputConnection()]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetProducerPort | Removal of GetProducerPort() from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetPipelineInformation | Removal of GetPipelineInformation and GetExecutive from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_Update | Removal of Pipeline Update Methods from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_Execute | Removal of ExecuteInformation() and ExecuteData() from Algorithm Superclasses]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_SetExtentTranslator | Removal of SetExtentTranslator and GetExtentTranslator from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_CopyInformation | Removal of CopyInformation and CopyTypeSpecificInformation from vtkDataObject and vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_CopyInformationToPipeline | Removal of CopyInformationToPipeline and CopyInformationFromPipeline from vtkDataObject and sub-classes]]<br />
<br />
[[VTK/VTK_6_Migration/Removal_of_GetEstimatedMemorySize | Removal of GetEstimatedMemorySize() method from vtkDataObject and vtkImageData ]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of SetWholeExtent | Removal of SetWholeExtent() from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of ShouldIReleaseData | Removal of ShouldIReleaseData() and ReleaseDataFlag methods from vtkDataObject]]<br />
<br />
[[VTK/VTK_6_Migration/Removal of Methods for Manipulating Update Extent | Removal of vtkDataObject Methods for Manipulating Update Extent]]<br />
<br />
[[VTK/VTK_6_Migration/Changes to Scalars Manipulation Functions | Changes to Scalars Manipulation Functions in vtkImageData]]<br />
<br />
[[VTK/VTK_6_Migration/Change to CopyOriginAndSpacingFromPipeline | Change to CopyOriginAndSpacingFromPipeline in vtkImageData]]</div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration/Removal_of_Update&diff=46479VTK/VTK 6 Migration/Removal of Update2012-04-06T19:26:47Z<p>Berk: /* Removal of Pipeline Update Methods from vtkDataObject */</p>
<hr />
<div>= Removal of Pipeline Update Methods from vtkDataObject =<br />
<br />
VTK 6 introduces a number of backwards-incompatible changes. The reasons behind these changes are described in more detail [[VTK/VTK 6 Migration/Overview|here]]. One of these changes is the removal of all pipeline related methods from vtkDataObject. Here we discuss the following update methods and provide suggestions for updating existing code.<br />
<br />
== Update() ==<br />
<br />
vtkDataObject::Update() was a convenience method that in turn called Update() on the algorithm that produced the given data object. Since the data object no longer has a reference to its producer, this function could not be maintained and was removed.<br />
<br />
=== Example 1 ===<br />
<br />
<source lang="cpp"><br />
vtkDataObject* dobj = someAlgorithm->GetOutput();<br />
dobj->Update();<br />
</source><br />
<br />
should become<br />
<br />
<source lang="cpp"><br />
someAlgorithm->Update();<br />
</source><br />
<br />
=== Example 2 ===<br />
<br />
Replace:<br />
<br />
<source lang="cpp"><br />
vtkDataObject* dobj = aFilter->GetOutput(1);<br />
dobj->Update();<br />
</source><br />
<br />
with:<br />
<br />
<source lang="cpp"><br />
aFilter->Update(1);<br />
</source><br />
<br />
== UpdateInformation() ==<br />
<br />
vtkDataObject::UpdateInformation() was a convenience method that in turn called UpdateInformation() on the algorithm that produced the given data object. Since the data object no longer has a reference to its producer, this function could not be maintained and was removed.<br />
<br />
=== Example 1 ===<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
vtkDataObject* dobj = aFilter->GetOutput();<br />
dobj->UpdateInformation();<br />
dobj->SetUpdateExtent(0 /*piece*/, 2 /*number of pieces*/);<br />
dobj->Update();<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
aFilter->UpdateInformation();<br />
vtkStreamingDemandDrivenPipeline::SetUpdateExtent(<br />
aFilter->GetOutputInformation(0 /*port number*/),<br />
0 /*piece*/,<br />
2 /*number of pieces*/,<br />
0 /*number of ghost levels*/);<br />
aFilter->Update();<br />
</source><br />
<br />
== PropagateUpdateExtent() ==<br />
<br />
vtkDataObject::PropagateUpdateExtent() was a convenience method that in turn called PropagateUpdateExtent() on the executive of the algorithm that produced the given data object. Since the data object no longer has a reference to its producer, this function could not be maintained and was removed.<br />
<br />
=== Example 1 ===<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
vtkDataObject* dobj = aFilter->GetOutput();<br />
dobj->UpdateInformation();<br />
dobj->SetUpdateExtent(0 /*piece*/, 2 /*number of pieces*/);<br />
dobj->PropagateUpdateExtent();<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
aFilter->UpdateInformation();<br />
aFilter->SetUpdateExtent(0 /*piece*/, 2 /*number of pieces*/, 0 /*ghost levels*/);<br />
aFilter->PropagateUpdateExtent();<br />
</source><br />
<br />
== TriggerAsynchronousUpdate() ==<br />
<br />
This method was no longer being used and therefore was removed from VTK as of VTK 6.<br />
<br />
== UpdateData() ==<br />
<br />
vtkDataObject::UpdateData() was a convenience method that in turn called UpdateData() on the executive of the algorithm that produced the given data object. Since the data object no longer has a reference to its producer, this function could not be maintained and was removed.<br />
<br />
=== Example 1 ===<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
vtkDataObject* dobj = aFilter->GetOutput();<br />
dobj->UpdateInformation();<br />
dobj->SetUpdateExtent(0 /*piece*/, 2 /*number of pieces*/);<br />
dobj->PropagateUpdateExtent();<br />
dobj->UpdateData();<br />
</source><br />
with<br />
<br />
<source lang="cpp"><br />
aFilter->UpdateInformation();<br />
vtkStreamingDemandDrivenPipeline::SetUpdateExtent(<br />
aFilter->GetOutputInformation(0 /*port number*/),<br />
0 /*piece*/,<br />
2 /*number of pieces*/,<br />
0 /*number of ghost levels*/);<br />
aFilter->Update();<br />
</source></div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration/Removal_of_ShouldIReleaseData&diff=46478VTK/VTK 6 Migration/Removal of ShouldIReleaseData2012-04-06T19:25:43Z<p>Berk: /* Removal of ShouldIReleaseData() and ReleaseDataFlag methods from vtkDataObject */</p>
<hr />
<div>= Removal of ShouldIReleaseData() and ReleaseDataFlag methods from vtkDataObject =<br />
<br />
VTK 6 introduces a number of backwards-incompatible changes. The reasons behind these changes are described in more detail [[VTK/VTK 6 Migration/Overview|here]]. One of these changes is the removal of all pipeline related methods from vtkDataObject. Among these are the following vtkDataObject methods:<br />
<br />
* ShouldIReleaseData()<br />
* SetReleaseDataFlag()<br />
* GetReleaseDataFlag()<br />
* ReleaseDataFlagToOn()<br />
* ReleaseDataFlagToOff()<br />
<br />
All of these methods (except ShouldIReleaseData) have always had their counterparts in vtkDemandDrivenPipeline and any code that uses them can be fixed by using the vtkDemandDrivenPipeline methods instead. ShouldIReleaseData is a convenience method that was used by pipeline executives and is not normally used outside this context.<br />
<br />
== Example 1 ==<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
vtkDataObject* dobj = anAlgorithm->GetOutput();<br />
dobj->SetReleaseDataFlag(1);<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
vtkDemandDrivenPipeline* executive =<br />
vtkDemandDrivenPipeline::SafeDownCast(<br />
anAlgorithm->GetExecutive());<br />
if (executive)<br />
{<br />
executive->SetReleaseDataFlag(0, 1); // where 0 is the port index<br />
}<br />
</source></div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration/Removal_of_Methods_for_Manipulating_Update_Extent&diff=46477VTK/VTK 6 Migration/Removal of Methods for Manipulating Update Extent2012-04-06T19:25:12Z<p>Berk: /* Removal of vtkDataObject Methods for Manipulating Update Extent */</p>
<hr />
<div>= Removal of vtkDataObject Methods for Manipulating Update Extent =<br />
<br />
VTK 6 introduces a number of backwards-incompatible changes. The reasons behind these changes are described in more detail [[VTK/VTK 6 Migration/Overview|here]]. One of these changes is the removal of all pipeline related methods from vtkDataObject. Among these methods are those that manipulated the update extent. These are<br />
<br />
* SetUpdateExtent(int piece, int numPieces, int ghostLevel)<br />
* SetUpdateExtent(int piece, int numPieces)<br />
* SetUpdateExtent(int extent[6])<br />
* SetUpdateExtent(int x0, int x1, int y0, int y1, int z0, int z1)<br />
* int* GetUpdateExtent()<br />
* GetUpdateExtent(int& x0, int& x1, int& y0, int& y1, int& z0, int& z1)<br />
* GetUpdateExtent(int extent[6])<br />
* SetUpdateExtentToWholeExtent()<br />
<br />
These were convenience functions that simply forwarded to the executive. To make the transition to VTK 6 easier, similar convenience functions were added to vtkAlgorithm. These functions are as follows.<br />
<br />
* SetUpdateExtent(int port, int connection, int piece, int numPieces, int ghostLevel);<br />
* SetUpdateExtent(int piece, int numPieces, int ghostLevel);<br />
* SetUpdateExtent(int port, int connection, int extent[6]);<br />
* SetUpdateExtent(int extent[6]);<br />
* SetUpdateExtentToWholeExtent(int port, int connection);<br />
* SetUpdateExtentToWholeExtent();<br />
<br />
== Example 1 ==<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
vtkDataObject* dobj = aFilter->GetOutput();<br />
dobj->UpdateInformation();<br />
dobj->SetUpdateExtent(0 /*piece*/, 2 /*number of pieces*/);<br />
dobj->Update();<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
aFilter->UpdateInformation();<br />
aFilter->SetUpdateExtent(0 /*piece*/, 2 /*number of pieces*/, 0 /*ghost levels*/);<br />
aFilter->Update();<br />
</source><br />
<br />
== Example 2 ==<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
vtkDataObject* dobj = aFilter->GetOutput();<br />
dobj->UpdateInformation();<br />
int updateExtent[6] = {0, 10, 0, 10, 0, 10};<br />
dobj->SetUpdateExtent(updateExtent);<br />
dobj->Update();<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
aFilter->UpdateInformation();<br />
int updateExtent[6] = {0, 10, 0, 10, 0, 10};<br />
aFilter->SetUpdateExtent(updateExtent);<br />
aFilter->Update();<br />
</source></div>Berkhttps://public.kitware.com/Wiki/index.php?title=VTK/VTK_6_Migration/Changes_to_Scalars_Manipulation_Functions&diff=46476VTK/VTK 6 Migration/Changes to Scalars Manipulation Functions2012-04-06T19:24:33Z<p>Berk: </p>
<hr />
<div>= Changes to Scalars Manipulation Functions in vtkImageData =<br />
<br />
VTK 6 introduces a number of backwards-incompatible changes. The reasons behind these changes are described in more detail [[VTK/VTK 6 Migration/Overview | here]]. Among these are changes to several vtkImageData methods that were used to facilitate meta-data management and memory allocation using pipeline meta-data. Since vtkImageData no longer has direct access to the pipeline information (see [[VTK/VTK 6 Migration/Removal_of_GetPipelineInformation | this document]]), these methods were changed to behave differently or to accept additional arguments for dealing with meta-data. These methods are as follows.<br />
* GetScalarTypeMin()<br />
* GetScalarTypeMax()<br />
* GetScalarType()<br />
* SetScalarType(int scalar_type)<br />
* GetNumberOfScalarComponents()<br />
* SetNumberOfScalarComponents(int n)<br />
* AllocateScalars()<br />
<br />
Note that these methods were all designed to work with pipeline meta-data (aka PipelineInformation). For example, SetScalarType() was implemented as follows:<br />
<br />
<source lang="cpp"><br />
void vtkImageData::SetScalarType(int type)<br />
{<br />
this->GetProducerPort();<br />
if(vtkInformation* info = this->GetPipelineInformation())<br />
{<br />
vtkDataObject::SetPointDataActiveScalarInfo(info, type, -1);<br />
}<br />
else<br />
{<br />
vtkErrorMacro("SetScalarType called with no "<br />
"executive producing this image data object.");<br />
}<br />
}<br />
</source><br />
<br />
Since data objects can no longer be used to manipulate meta-data directly, these methods were changed. Specific changes are as follows.<br />
<br />
== GetNumberOfScalarComponents(), GetScalarType(), GetScalarTypeMin() and GetScalarTypeMax() ==<br />
<br />
These methods were changed to return the number of scalar components, the scalar type, or the min/max values for the scalar type of the actual vtkImageData object. Therefore, they will no longer return the correct value if called before the scalars are allocated (in RequestInformation, for example). If you need to access the scalar type before RequestData, you can still do it by passing the pipeline information to GetScalarType().<br />
<br />
=== Example 1 ===<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
int vtkMyAlg::RequestInformation(vtkInformation*, vtkInformationVector**, <br />
vtkInformationVector* outInfoVec)<br />
{<br />
vtkImageData* output = this->GetOutput();<br />
output->GetScalarType();<br />
output->GetNumberOfScalarComponents();<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
int vtkMyAlg::RequestInformation(vtkInformation*, vtkInformationVector**, <br />
vtkInformationVector* outInfoVec)<br />
{<br />
vtkInformation* outInfo = outInfoVec->GetInformationObject(0);<br />
vtkImageData::GetScalarType(outInfo);<br />
vtkImageData::GetNumberOfScalarComponents(outInfo);<br />
</source><br />
<br />
=== Example 2 ===<br />
<br />
<source lang="cpp"><br />
int vtkMyAlg::RequestData(vtkInformation*, vtkInformationVector**, <br />
vtkInformationVector* outInfoVec)<br />
{<br />
vtkImageData* output = vtkImageData::GetData(outInfoVec);<br />
// Allocate output scalars here<br />
output->GetScalarType();<br />
output->GetNumberOfScalarComponents();<br />
</source><br />
<br />
This code does not need to be changed.<br />
<br />
== SetScalarType() and SetNumberOfScalarComponents() ==<br />
<br />
SetScalarType() and SetNumberOfScalarComponents() were previously used to populate pipeline information with scalar meta-data. In VTK 6, vtkDataObject::SetPointDataActiveScalarInfo() can be used to perform the same task.<br />
<br />
=== Example 1 ===<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
int vtkMyAlg::RequestInformation(vtkInformation*, vtkInformationVector**, <br />
vtkInformationVector* outInfoVec)<br />
{<br />
vtkImageData* output = this->GetOutput();<br />
output->SetScalarType(VTK_UNSIGNED_CHAR);<br />
output->SetNumberOfScalarComponents(3);<br />
return 1;<br />
}<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
int vtkMyAlg::RequestInformation(vtkInformation*, vtkInformationVector**, <br />
vtkInformationVector* outInfoVec)<br />
{<br />
vtkInformation* outInfo = outInfoVec->GetInformationObject(0);<br />
vtkDataObject::SetPointDataActiveScalarInfo(<br />
outInfo, VTK_UNSIGNED_CHAR, 3);<br />
return 1;<br />
}<br />
</source><br />
<br />
== AllocateScalars() ==<br />
<br />
Before VTK 6, AllocateScalars() was used in conjunction with SetScalarType() and SetNumberOfScalarComponents(). The latter two stored meta-data in the pipeline information and AllocateScalars() allocated data by using this meta-data. Since AllocateScalars() can no longer access the pipeline information, it needs to be told what scalar type and how many components to allocate.<br />
<br />
=== Example 1 ===<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
// set the extent of the image data first<br />
imageData->SetScalarTypeToFloat();<br />
imageData->SetNumberOfScalarComponents(3);<br />
imageData->AllocateScalars();<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
// set the extent of the image data first<br />
imageData->AllocateScalars(VTK_FLOAT, 3);<br />
</source><br />
<br />
=== Example 2 ===<br />
<br />
Replace<br />
<br />
<source lang="cpp"><br />
int vtkMyAlg::RequestInformation(vtkInformation*, vtkInformationVector**, <br />
vtkInformationVector* outInfoVec)<br />
{<br />
vtkImageData* output = this->GetOutput();<br />
output->SetScalarType(VTK_UNSIGNED_CHAR);<br />
output->SetNumberOfScalarComponents(3);<br />
return 1;<br />
}<br />
<br />
int vtkMyAlg::RequestData(vtkInformation*, vtkInformationVector**, <br />
vtkInformationVector* outInfoVec)<br />
{<br />
vtkImageData* output = this->GetOutput();<br />
output->AllocateScalars();<br />
</source><br />
<br />
with<br />
<br />
<source lang="cpp"><br />
int vtkMyAlg::RequestInformation(vtkInformation*, vtkInformationVector**, <br />
vtkInformationVector* outInfoVec)<br />
{<br />
vtkInformation* outInfo = outInfoVec->GetInformationObject(0);<br />
vtkDataObject::SetPointDataActiveScalarInfo(<br />
outInfo, VTK_UNSIGNED_CHAR, 3);<br />
return 1;<br />
}<br />
<br />
int vtkMyAlg::RequestData(vtkInformation*, vtkInformationVector**, <br />
vtkInformationVector* outInfoVec)<br />
{<br />
vtkInformation* outInfo = outInfoVec->GetInformationObject(0);<br />
vtkImageData* output = vtkImageData::GetData(outInfoVec);<br />
output->AllocateScalars(outInfo);<br />
</source></div>Berk