VTK/Composite Data Redesign (revision of 2011-06-19T14:28:02Z by Pratikm, KitwarePublic wiki)
<hr />
<div>== Composite dataset re-architecture ==<br />
<br />
=== Current design ===<br />
<center><br />
[[Image:Old_composite_design.png]]<br />
</center><br />
=== Issues with the current design ===<br />
<br />
* Most functionality is based on vtkMultiGroupDataSet instead of vtkCompositeDataSet. For example, most algorithms (and the executives) use vtkMultiGroupDataSet API to iterate. This makes it impossible to add new sub-classes of vtkCompositeDataSet without writing new executives.<br />
* The concept of sub-block is confusing. vtkMultiGroupDataSet stores a vector of vectors of datasets. When this concept is mapped to the multi-block (or temporal) datasets, each block ends up having multiple sub-blocks. Furthermore, the convention that these sub-block ids map to the process ids is very confusing.<br />
* Algorithms that want to pass blanking have to downcast to vtkHierarchicalBoxDataSet and copy blanking explicitly.<br />
* vtkCompositeDataPipeline is a mess.<br />
<br />
=== Suggested design ===<br />
<center><br />
[[Image:New_composite_design.png]]<br />
</center><br />
* Get rid of vtkMultiGroupDataSet. Any code shared between subclasses of vtkCompositeDataSet can be shared using helper implementation objects.<br />
* Improve the iterators so that it is not necessary to use vtkMultiGroupDataSet API to iterate over blocks.<br />
* Add a vtkMultiPieceDataSet class that can be used to group multiple pieces together. Example: when loading a dataset with multiple partitions on 1 processor, vtkMultiPieceDataSet can be used instead of appending datasets together. vtkMultiPieceDataSet would have additional meta-data about things like whole extent for structured datasets.<br />
* Clean up vtkCompositeDataPipeline.<br />
* Improve ghost level support for composite datasets.<br />
<br />
==== Iterators ====<br />
<br />
In the current architecture, the most common thing to do is the following:<br />
<br />
<br />
<source lang="cpp"><br />
unsigned int numGroups = mbInput->GetNumberOfGroups();<br />
output->SetNumberOfGroups(numGroups);<br />
for (unsigned int groupId=0; groupId<numGroups; groupId++)<br />
{<br />
  unsigned int numBlocks = mbInput->GetNumberOfDataSets(groupId);<br />
  output->SetNumberOfDataSets(groupId, numBlocks);<br />
  for (unsigned int blockId=0; blockId<numBlocks; blockId++)<br />
  {<br />
    vtkDataObject* block = mbInput->GetDataSet(groupId, blockId);<br />
<br />
    // do something with block to get an outBlock<br />
<br />
    output->SetDataSet(groupId, blockId, outBlock);<br />
  }<br />
}<br />
</source><br />
<br />
<br />
As mentioned above, the problem with this approach is that it assumes that<br />
the composite dataset is a vtkMultiGroupDataSet. With appropriate changes to<br />
the composite data iterators and composite datasets, the code above can be<br />
rewritten as:<br />
<br />
<br />
<source lang="cpp"><br />
output->CopyStructure(mbInput);<br />
<br />
vtkCompositeDataIterator* iter = mbInput->NewIterator();<br />
iter->GoToFirstItem();<br />
while (!iter->IsDoneWithTraversal())<br />
{<br />
  vtkDataObject* block = iter->GetCurrentDataObject();<br />
  // Note that the iterator will only visit the leaf nodes by default.<br />
<br />
  // do something with block to get outBlock<br />
<br />
  // copy the meta-data<br />
  outBlock->CopyInformation(block);<br />
<br />
  output->SetDataSet(iter, outBlock);<br />
  iter->GoToNextItem();<br />
}<br />
iter->Delete();<br />
</source><br />
<br />
<br />
The implementation above requires two additional methods: CopyStructure()<br />
and SetDataSet(iter, dataObject). The task of CopyStructure() is to create<br />
a tree structure on the output composite data object identical to that of<br />
the input. In the case of hierarchical datasets, this means same number of<br />
levels and same number of datasets on all levels. In the case of<br />
multi-block datasets, this means an identical tree. This may look like<br />
this:<br />
<center><br />
[[Image:Multiblock_tree.png]]<br />
</center><br />
After CopyStructure(), the output will have the same hierarchy, except that<br />
all vtkPolyData leaf nodes will be replaced by null pointers. CopyStructure()<br />
should also copy things like refinement ratios, as well as the meta-data<br />
(information) of all non-leaf nodes. We are likely to use things like group<br />
names when dealing with multi-block datasets.<br />
<br />
<em>Note on vtkHierarchicalBoxDataSet: Currently, a vtkHierarchicalBoxDataSet is converted to a vtkMultiGroupDataSet when it is processed by a simple algorithm or a vtkMultiGroupDataAlgorithm. We should think about this. Maybe when a vtkHierarchicalBoxDataSet is processed by a vtkDataSetAlgorithm, the output should be vtkHierarchicalBoxDataSet too?</em><br />
<br />
The task of SetDataSet(iter, dataObject) is to add a leaf dataset at the exact<br />
same position that the iterator is pointing to on the input. This will<br />
require changing the iterators so that they keep track of their position in a<br />
composite dataset using some sort of index. The easiest way of doing this is<br />
to use two integers (level, index) for hierarchical datasets, and a vector of<br />
integers whose length equals the current tree depth for multi-block datasets.<br />
<br />
==== vtkMultiPieceDataSet ====<br />
<br />
A multi-piece dataset groups multiple data pieces together. For example,<br />
say that a simulation broke a volume into 16 pieces so that each piece could<br />
be processed by one process in parallel. We want to load this volume on a<br />
visualization cluster of 4 nodes. Each node will get 4 pieces, not<br />
necessarily forming a whole rectangular piece. In this case, it is not<br />
possible to append the 4 pieces together into a vtkImageData. Instead, these<br />
4 pieces can be collected together using a vtkMultiPieceDataSet. Although it<br />
is possible to use a vtkMultiBlockDataSet for this purpose, a<br />
vtkMultiPieceDataSet makes it clear that these are pieces of one whole<br />
dataset that have been collected together. Given this information,<br />
applications like ParaView can treat them in a special way. For example,<br />
meta-data about the whole extent of the dataset can be displayed,<br />
neighborhood information can be obtained, ghost levels can be generated, and<br />
so on.<br />
<br />
<em>Note: The use of vtkMultiPieceDataSet is not yet very clear to me but I think it will be necessary.</em> <br />
<br />
==== vtkCompositeDataPipeline cleanup ====<br />
<br />
There will be a list of changes to vtkCompositeDataPipeline here. The<br />
executive is a mess right now due to all the use cases it supports and<br />
because it grew organically. We need to take a step back and clean it up,<br />
possibly rewriting portions of it.<br />
<br />
==== Ghost level support ====<br />
<br />
Currently, ghost level requests are passed up the pipeline but are pretty<br />
much ignored by it. This will not do, especially once we improve D3 to<br />
support multi-block datasets. Getting unstructured and dataset algorithms to<br />
work with ghost levels is fairly straightforward; getting structured data<br />
filters working is a little trickier.<br />
<br />
<em>Note: Realistically, readers do not produce more than 1 ghost level. We may want to take this into account.</em><br />
<br />
=Implementation=<br />
<br />
The implementation is based on the above design with some notable differences:<br />
<br />
* vtkHierarchicalDataSet has been deprecated due to the lack of use cases for creating an AMR-like hierarchy with unstructured data. Applications can implement the same behavior using vtkMultiBlockDataSet, which supports meta-data associated with each node in the tree, making it possible to attach level information to blocks.<br />
<br />
<center><br />
<graphviz><br />
digraph G {<br />
node [shape = "record"]<br />
edge <br />
[<br />
arrowtail="empty" <br />
dir="back"<br />
arrowsize="2.0"<br />
]<br />
vtkDataObject []<br />
vtkCompositeDataSet [ ]<br />
vtkMultiBlockDataSet [ ]<br />
vtkTemporalDataSet [ ]<br />
vtkHierarchicalBoxDataSet [ ]<br />
vtkMultiPieceDataSet [ ]<br />
<br />
<br />
vtkDataObject->vtkCompositeDataSet<br />
vtkCompositeDataSet->vtkMultiBlockDataSet<br />
vtkCompositeDataSet->vtkTemporalDataSet<br />
vtkCompositeDataSet->vtkHierarchicalBoxDataSet<br />
vtkCompositeDataSet->vtkMultiPieceDataSet<br />
}<br />
</graphviz><br />
'''Class Hierarchy: Class hierarchy for current implementation of composite datasets'''<br />
</center><br />
==vtkCompositeDataSet==<br />
<br />
vtkCompositeDataSet is the abstract superclass for all composite datasets. It implements a full tree structure in which nodes can be datasets or other composite datasets. However, the API to access the tree directly is protected. Each subclass can build and maintain this tree as per its requirements; e.g. vtkHierarchicalBoxDataSet builds trees one level deep, with the first-level nodes being vtkMultiPieceDataSet instances that correspond to a ''level'' in the hierarchical dataset. One can obtain a vtkCompositeDataIterator instance from the vtkCompositeDataSet to iterate over the tree structure. vtkCompositeDataSet provides public API to get/set data objects and meta-data using the iterator. The important API is listed below:<br />
<br />
<source lang="cpp"><br />
// Description:<br />
// Return a new iterator (the iterator has to be deleted by user).<br />
virtual vtkCompositeDataIterator* NewIterator();<br />
<br />
// Description:<br />
// Copies the tree structure from the input. All pointers to non-composite<br />
// data objects are initialized to NULL. This also shallow copies the meta-data<br />
// associated with all the nodes.<br />
virtual void CopyStructure(vtkCompositeDataSet* input);<br />
<br />
// Description:<br />
// Sets the data set at the location pointed by the iterator.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be any composite dataset with a similar structure (achieved by using<br />
// CopyStructure).<br />
virtual void SetDataSet(vtkCompositeDataIterator* iter, vtkDataObject* dataObj);<br />
<br />
// Description:<br />
// Returns the dataset located at the position pointed to by the iterator.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be an iterator for composite dataset with similar structure (achieved by<br />
// using CopyStructure).<br />
virtual vtkDataObject* GetDataSet(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Returns the meta-data associated with the position pointed by the iterator.<br />
// This will create a new vtkInformation object if none already exists. Use<br />
// HasMetaData to avoid creating the vtkInformation object unnecessarily.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be an iterator for composite dataset with similar structure (achieved by<br />
// using CopyStructure).<br />
virtual vtkInformation* GetMetaData(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Returns whether any meta-data is associated with the position pointed to by the iterator.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be an iterator for composite dataset with similar structure (achieved by<br />
// using CopyStructure).<br />
virtual int HasMetaData(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Shallow and Deep copy.<br />
virtual void ShallowCopy(vtkDataObject *src);<br />
virtual void DeepCopy(vtkDataObject *src);<br />
</source><br />
<br />
==vtkTemporalDataSet==<br />
<br />
vtkTemporalDataSet is used to hold multiple timesteps.<br />
<br />
<br />
<source lang="cpp"><br />
// Description:<br />
// Set the number of time steps in this dataset.<br />
void SetNumberOfTimeSteps(unsigned int numSteps);<br />
<br />
// Description:<br />
// Returns the number of time steps.<br />
unsigned int GetNumberOfTimeSteps();<br />
<br />
// Description:<br />
// Set a data object as a timestep. The object cannot itself be a vtkTemporalDataSet.<br />
void SetTimeStep(unsigned int timestep, vtkDataObject* dobj);<br />
<br />
// Description:<br />
// Get a timestep.<br />
vtkDataObject* GetTimeStep(unsigned int timestep);<br />
<br />
// Description:<br />
// Get timestep meta-data.<br />
vtkInformation* GetMetaData(unsigned int timestep);<br />
<br />
// Description:<br />
// Returns if timestep meta-data is present.<br />
int HasMetaData(unsigned int timestep);<br />
</source><br />
<br />
==vtkMultiBlockDataSet==<br />
<br />
vtkMultiBlockDataSet is a vtkCompositeDataSet in which the child nodes can be either vtkDataSet subclasses or other vtkMultiBlockDataSet instances. This is used when full trees are required. Meta-data can be associated with both leaf and non-leaf nodes in the tree.<br />
<br />
<source lang="cpp"><br />
// Description:<br />
// Set the number of blocks. This will cause allocation if the new number of<br />
// blocks is greater than the current size. All new blocks are initialized to<br />
// null.<br />
void SetNumberOfBlocks(unsigned int numBlocks);<br />
<br />
// Description:<br />
// Returns the number of blocks.<br />
unsigned int GetNumberOfBlocks();<br />
<br />
// Description:<br />
// Returns the block at the given index. It is recommended that one uses the<br />
// iterators to iterate over composite datasets rather than using this API.<br />
vtkDataObject* GetBlock(unsigned int blockno);<br />
<br />
// Description:<br />
// Sets the data object as the given block. The total number of blocks will<br />
// be resized to fit the requested block number. The only vtkCompositeDataSet<br />
// subclass that can be added as a block is vtkMultiBlockDataSet; an error is<br />
// raised otherwise.<br />
void SetBlock(unsigned int blockno, vtkDataObject* block);<br />
<br />
// Description:<br />
// Returns true if meta-data is available for a given block.<br />
int HasMetaData(unsigned int blockno);<br />
<br />
// Description:<br />
// Returns the meta-data for the block. If none is already present, a new<br />
// vtkInformation object will be allocated. Use HasMetaData to avoid<br />
// allocating vtkInformation objects.<br />
vtkInformation* GetMetaData(unsigned int blockno);<br />
</source><br />
==vtkMultiPieceDataSet==<br />
<br />
A vtkMultiPieceDataSet groups multiple data pieces together.<br />
For example, say that a simulation broke a volume into 16 pieces so that<br />
each piece could be processed by one process in parallel. We want to load<br />
this volume on a visualization cluster of 4 nodes. Each node will get 4<br />
pieces, not necessarily forming a whole rectangular piece. In this case,<br />
it is not possible to append the 4 pieces together into a vtkImageData;<br />
instead, these 4 pieces can be collected together using a<br />
vtkMultiPieceDataSet.<br />
Note that vtkMultiPieceDataSet is intended to be included in other composite<br />
datasets, e.g. vtkMultiBlockDataSet or vtkHierarchicalBoxDataSet; hence the<br />
lack of algorithms producing vtkMultiPieceDataSet.<br />
<br />
<source lang="cpp"><br />
// Description:<br />
// Set the number of pieces. This will cause allocation if the new number of<br />
// pieces is greater than the current size. All new pieces are initialized to<br />
// null.<br />
void SetNumberOfPieces(unsigned int numpieces);<br />
<br />
// Description:<br />
// Returns the number of pieces.<br />
unsigned int GetNumberOfPieces();<br />
<br />
// Description:<br />
// Returns the piece at the given index. <br />
vtkDataSet* GetPiece(unsigned int pieceno);<br />
<br />
// Description:<br />
// Sets the data object as the given piece. The total number of pieces will <br />
// be resized to fit the requested piece no.<br />
void SetPiece(unsigned int pieceno, vtkDataSet* piece);<br />
<br />
// Description:<br />
// Returns true if meta-data is available for a given piece.<br />
int HasMetaData(unsigned int piece);<br />
<br />
// Description:<br />
// Returns the meta-data for the piece. If none is already present, a new<br />
// vtkInformation object will be allocated. Use HasMetaData to avoid<br />
// allocating vtkInformation objects.<br />
vtkInformation* GetMetaData(unsigned int pieceno);<br />
</source><br />
<br />
==vtkHierarchicalBoxDataSet==<br />
<br />
vtkHierarchicalBoxDataSet is a hierarchical dataset of uniform grids. It is designed for AMR (adaptive mesh refinement) datasets. The structure consists of ''levels'', with each level containing datasets. The dataset type is restricted to vtkUniformGrid. Each dataset has an associated vtkAMRBox that represents its region (similar to an extent) in space. Internally, each level in a vtkHierarchicalBoxDataSet is nothing but a vtkMultiPieceDataSet.<br />
<br />
<br />
<source lang="cpp"><br />
// Description:<br />
// Set the number of refinement levels. This call might cause<br />
// allocation if the new number of levels is larger than the<br />
// current one.<br />
void SetNumberOfLevels(unsigned int numLevels);<br />
<br />
// Description:<br />
// Returns the number of levels.<br />
unsigned int GetNumberOfLevels();<br />
<br />
// Description:<br />
// Set the number of data sets at a given level.<br />
void SetNumberOfDataSets(unsigned int level, unsigned int numdatasets);<br />
<br />
// Description:<br />
// Returns the number of data sets available at a given level.<br />
unsigned int GetNumberOfDataSets(unsigned int level);<br />
<br />
// Description:<br />
// Set the dataset pointer for a given node. This will resize the number of<br />
// levels and the number of datasets at the level to fit the requested (level, id).<br />
void SetDataSet(unsigned int level, unsigned int id, <br />
vtkAMRBox& box, vtkUniformGrid* dataSet);<br />
<br />
// Description:<br />
// Get a dataset given a level and an id.<br />
vtkUniformGrid* GetDataSet(unsigned int level,<br />
unsigned int id,<br />
vtkAMRBox& box);<br />
<br />
// Description:<br />
// Get meta-data associated with a level. This may allocate a new<br />
// vtkInformation object if none is already present. Use HasLevelMetaData to<br />
// avoid unnecessary allocations.<br />
vtkInformation* GetLevelMetaData(unsigned int level);<br />
<br />
// Description:<br />
// Returns if meta-data exists for a given level.<br />
int HasLevelMetaData(unsigned int level);<br />
<br />
// Description:<br />
// Get meta-data associated with a dataset. This may allocate a new<br />
// vtkInformation object if none is already present. Use HasMetaData to<br />
// avoid unnecessary allocations.<br />
vtkInformation* GetMetaData(unsigned int level, unsigned int index);<br />
<br />
// Description:<br />
// Returns if meta-data exists for a given dataset under a given level.<br />
int HasMetaData(unsigned int level, unsigned int index);<br />
<br />
// Description:<br />
// Sets the refinement of a given level. The spacing at level<br />
// level+1 is defined as spacing(level+1) = spacing(level)/refRatio(level).<br />
// Note that currently this is not enforced by this class; however,<br />
// some algorithms might not function properly if the spacing in<br />
// the blocks (vtkUniformGrid) does not match the one described<br />
// by the refinement ratio.<br />
void SetRefinementRatio(unsigned int level, int refRatio);<br />
<br />
// Description:<br />
// Returns the refinement of a given level.<br />
int GetRefinementRatio(unsigned int level);<br />
<br />
// Description:<br />
// Returns the AMR box for the location pointed to by the iterator.<br />
vtkAMRBox GetAMRBox(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Returns the refinement ratio for the position pointed by the iterator.<br />
int GetRefinementRatio(vtkCompositeDataIterator* iter);<br />
</source><br />
<br />
==vtkCompositeDataIterator==<br />
<br />
vtkCompositeDataIterator is used to iterate over composite datasets. <br />
<br />
<br />
<source lang="cpp"><br />
// Description:<br />
// Set the composite dataset this iterator is iterating over. <br />
// Must be set before traversal begins.<br />
virtual void SetDataSet(vtkCompositeDataSet* ds);<br />
vtkGetObjectMacro(DataSet, vtkCompositeDataSet);<br />
<br />
// Description:<br />
// Begin iterating over the composite dataset structure.<br />
virtual void InitTraversal();<br />
<br />
// Description:<br />
// Begin iterating over the composite dataset structure in reverse order.<br />
virtual void InitReverseTraversal();<br />
<br />
// Description:<br />
// Move the iterator to the beginning of the collection.<br />
virtual void GoToFirstItem();<br />
<br />
// Description:<br />
// Move the iterator to the next item in the collection.<br />
virtual void GoToNextItem();<br />
<br />
// Description:<br />
// Test whether the iterator is currently pointing to a valid item. Returns 1<br />
// for yes, and 0 for no.<br />
virtual int IsDoneWithTraversal();<br />
<br />
// Description:<br />
// Returns the current item. Valid only when IsDoneWithTraversal() returns 0.<br />
virtual vtkDataObject* GetCurrentDataObject();<br />
<br />
// Description:<br />
// Returns the meta-data associated with the current item. This will allocate<br />
// a new vtkInformation object if none is already present. Use<br />
// HasCurrentMetaData to avoid unnecessary creation of vtkInformation objects.<br />
virtual vtkInformation* GetCurrentMetaData();<br />
<br />
// Description:<br />
// Returns whether a meta-data information object is present for the current<br />
// item. Returns 1 if present, 0 otherwise.<br />
virtual int HasCurrentMetaData();<br />
<br />
// Description:<br />
// If VisitOnlyLeaves is true, the iterator will only visit nodes<br />
// (sub-datasets) that are not composite. If it encounters a composite<br />
// data set, it will automatically traverse that composite dataset until<br />
// it finds non-composite datasets (see also TraverseSubTree).<br />
// With this option, it is possible to visit all non-composite datasets in a<br />
// tree of composite datasets (composite of composite of composite, for<br />
// example). If VisitOnlyLeaves is false, GetCurrentDataObject() may return a<br />
// vtkCompositeDataSet. By default, VisitOnlyLeaves is 1.<br />
vtkSetMacro(VisitOnlyLeaves, int);<br />
vtkGetMacro(VisitOnlyLeaves, int);<br />
vtkBooleanMacro(VisitOnlyLeaves, int);<br />
<br />
// Description:<br />
// If TraverseSubTree is set to true, the iterator will visit the entire tree<br />
// structure, otherwise it only visits the first level children. Set to 1 by<br />
// default.<br />
vtkSetMacro(TraverseSubTree, int);<br />
vtkGetMacro(TraverseSubTree, int);<br />
vtkBooleanMacro(TraverseSubTree, int);<br />
<br />
// Description:<br />
// If SkipEmptyNodes is true, then NULL datasets will be skipped. Default is<br />
// true.<br />
vtkSetMacro(SkipEmptyNodes, int);<br />
vtkGetMacro(SkipEmptyNodes, int);<br />
vtkBooleanMacro(SkipEmptyNodes, int);<br />
<br />
// Description:<br />
// Flat index is an index obtained by traversing the tree in preorder.<br />
// This can be used to uniquely identify nodes in the tree.<br />
// Not valid if IsDoneWithTraversal() returns true.<br />
vtkGetMacro(CurrentFlatIndex, unsigned int);<br />
<br />
</source><br />
<br />
===Examples===<br />
====Copy all non-empty leaf nodes====<br />
<br />
<source lang="cpp"><br />
// This can be very easily done with a ShallowCopy, but we use the iterators for illustration.<br />
vtkCompositeDataSet* CreateLeafCopy(vtkCompositeDataSet* src)<br />
{<br />
vtkCompositeDataSet* output = src->NewInstance();<br />
// Copy the structure as well as the meta-data associated with all nodes in the composite tree.<br />
output->CopyStructure(src);<br />
<br />
vtkCompositeDataIterator* iter = src->NewIterator();<br />
for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())<br />
{<br />
output->SetDataSet(iter, iter->GetCurrentDataObject());<br />
}<br />
iter->Delete();<br />
return output;<br />
}<br />
<br />
</source><br />
<br />
====Iterate over immediate child nodes of a composite dataset====<br />
<br />
<source lang="cpp"><br />
vtkCompositeDataSet* input = ...<br />
vtkCompositeDataIterator* iter = input->NewIterator();<br />
iter->TraverseSubTreeOff(); // we are only interested in immediate children.<br />
iter->VisitOnlyLeavesOff(); // we want all immediate children, including composite dataset child nodes.<br />
// To not skip empty children, simply call iter->SkipEmptyNodesOff();<br />
for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())<br />
{<br />
...<br />
}<br />
</source><br />
<br />
====Flat Index====<br />
Iterators can be used to determine what we refer to as the '''flat index''' of any node. The flat index is the index of a node in the pre-order traversal of the tree; e.g. the following diagram shows a tree structure and the flat index of each node (rectangular nodes are composite datasets, while circular nodes are vtkDataSet subclasses). The flat index for the current location can be obtained from the iterator using ''GetCurrentFlatIndex''.<br />
<center><br />
<graphviz><br />
digraph G {<br />
node [shape = "record"]<br />
edge <br />
[<br />
arrowtail="ediamond"<br />
arrowhead="none"<br />
]<br />
A [label="A (0)"]<br />
B [label="B (1)" shape="circle"]<br />
C [label="C (2)"]<br />
D [label="D (3)"]<br />
E [label="E (4)" shape="circle"]<br />
F [label="F (5)" shape="circle"]<br />
<br />
A->B<br />
A->C<br />
C->D<br />
D->E<br />
D->F<br />
<br />
}<br />
</graphviz><br />
</center><br />
=Changes from VTK 5.0=<br />
==vtkCompositeDataPipeline==<br />
This executive is used to iteratively execute a non-composite-data-aware filter over all the leaves in a composite dataset. In VTK 5.0, a vtkHierarchicalBoxDataSet was always converted to a vtkMultiBlockDataSet when a non-composite-aware filter was present in the pipeline. This is no longer the case. vtkCompositeDataPipeline now verifies whether the non-composite-aware algorithm can produce a vtkUniformGrid given a vtkUniformGrid as input. If so, for a vtkHierarchicalBoxDataSet input, the output is a vtkHierarchicalBoxDataSet; otherwise it is a vtkMultiBlockDataSet. Even when the vtkHierarchicalBoxDataSet is converted to a vtkMultiBlockDataSet, the composite data tree structure is preserved. In other words, since vtkHierarchicalBoxDataSet has vtkMultiPieceDataSet instances for each level, the converted vtkMultiBlockDataSet will also have vtkMultiPieceDataSet instances as the child blocks of the root node.<br />
<br />
==Class Names==<br />
A few class names have changed, and a few others are no longer available. This table lists each old class name and an equivalent class in the new design.<br />
<br />
{| border="1"<br />
|+ '''Class Name Changes''' ['*' -- no longer applicable ]<br />
! Old Class !! Equivalent Class<br />
|- <br />
| vtkHierarchicalDataInformation || *<br />
|- <br />
| vtkHierarchicalDataIterator || vtkCompositeDataIterator<br />
|-<br />
| vtkHierarchicalDataSet || *<br />
|-<br />
| vtkHierarchicalDataSetAlgorithm || *<br />
|- <br />
| vtkMultiGroupDataInformation || *<br />
|- <br />
| vtkMultiGroupDataIterator || vtkCompositeDataIterator<br />
|- <br />
| vtkMultiGroupDataSet || vtkCompositeDataSet<br />
|- <br />
| vtkMultiGroupDataSetAlgorithm || vtkCompositeAlgorithm<br />
|- <br />
| vtkHierarchicalDataGroupFilter || vtkMultiBlockDataGroupFilter<br />
|- <br />
| vtkMultiGroupDataExtractDataSets || vtkExtractDataSets<br />
|- <br />
| vtkMultiGroupDataExtractGroup || vtkExtractBlock, vtkExtractLevel<br />
|- <br />
| vtkMultiGroupDataGeometryFilter || vtkCompositeDataGeometryFilter<br />
|- <br />
| vtkMultiGroupDataGroupFilter || vtkMultiBlockDataGroupFilter<br />
|- <br />
| vtkMultiGroupDataGroupIdScalars || vtkBlockIdScalars, vtkLevelIdScalars<br />
|- <br />
| vtkMultiGroupProbeFilter || vtkCompositeDataProbeFilter<br />
|- <br />
| vtkXMLHierarchicalDataReader || *<br />
|- <br />
| vtkXMLMultiGroupDataReader || vtkXMLCompositeDataReader,<br />
|- <br />
| || vtkXMLHierarchicalBoxDataReader,<br />
|- <br />
| || vtkXMLMultiBlockDataReader<br />
|- <br />
| vtkXMLMultiGroupDataWriter || vtkXMLCompositeDataWriter,<br />
|- <br />
| || vtkXMLHierarchicalBoxDataWriter,<br />
|- <br />
| || vtkXMLMultiBlockDataWriter<br />
|- <br />
| vtkMultiGroupDataExtractPiece || vtkExtractPiece<br />
|- <br />
| vtkXMLPMultiGroupDataWriter || vtkXMLPMultiBlockDataWriter,<br />
|- <br />
| || vtkXMLPHierarchicalBoxDataWriter<br />
|- <br />
| vtkMultiGroupPolyDataMapper || vtkCompositePolyDataMapper<br />
|}</div>VTK/Tutorials (revision of 2011-06-19T14:21:38Z by Pratikm)
<hr />
<div>On this page, we hope to gather a collection of tutorials on specific topics that are not clearly elucidated elsewhere.<br />
== Introduction to VTK==<br />
A catalog of several external tutorials (from courses, slides, etc around the world) can be found [[VTK/Tutorials/External_Tutorials|here]]. These will help the absolute beginner learn the basics of VTK.<br />
<br />
== <center>Advanced Tutorials</center> ==<br />
{| border="0" align="center" width="98%" valign="top" cellspacing="7" cellpadding="2"<br />
|-<br />
! width="45%"|<br />
! |<br />
! width="60%"|<br />
|- <br />
|valign="top"|<br />
<br />
==System Configuration/General Information==<br />
<br />
* [[VTK/Git | Obtaining VTK using Git]]<br />
* [[VTK/Tutorials/CMakeListsFile | Typical CMakeLists.txt file]]<br />
* [[VTK/Tutorials/CMakeListsFileForQt4 | Typical CMakeLists.txt file for Qt4]]<br />
* [[VTK/Tutorials/LinuxEnvironmentSetup | Linux Environment Setup]]<br />
* [[VTK/Tutorials/WindowsEnvironmentSetup | Microsoft Windows Environment Setup]]<br />
* [[VTK/Tutorials/PythonEnvironmentSetup | Python environment setup]]<br />
* [[VTK/Tutorials/JavaEnvironmentSetup | Java environment setup]]<br />
* [[VTK/Tutorials/SQLSetup | SQL setup]]<br />
<br />
==Basics==<br />
<br />
* [[VTK/Tutorials/SmartPointers | Smart pointers]]<br />
* [[VTK/Tutorials/VtkIdType | vtkIdType]]<br />
* [[VTK/Tutorials/3DDataTypes | 3D Data Types]] - A brief outline of the data types that VTK offers for 3D data storage.<br />
* [[VTK/Tutorials/MemberVariables | Non-SmartPointer Template Member Variable]]<br />
* [[VTK/Tutorials/Extents | Extents]] - A powerful indexing method.<br />
* [[VTK/Tutorials/GeometryTopology | Geometry & Topology]] - Demonstrates 0, 1, and 2D topology on a triangle geometry.<br />
* [[VTK/Tutorials/VTK Terminology | Terminology]] - What is a mapper? What is an actor? What is a filter? What is a source?<br />
* [[VTK/Tutorials/DataStorage | Field data, cell data, and point data]] - What are these? When should I use them?<br />
|bgcolor="#CCCCCC"|<br />
|valign="top"|<br />
<br />
==Tutorials==<br />
=== VTK Pipeline===<br />
* [[VTK/Tutorials/New_Pipeline | The New VTK Pipeline]]- The New Pipeline model and VTK Executives<br />
* [[VTK/Information Keys | VTK Information Keys ]] and their significance<br />
* [[VTK/Tutorials/Composite Datasets|Composite Datasets]] in VTK <br />
* [[VTK/Streaming | Streaming data]] in VTK<br />
<br />
=== General Topics ===<br />
* [[VTK/Tutorials/RevisionMacros | New and Revision Macros]]<br />
* [[VTK/Tutorials/Callbacks | Callbacks]] - Handling events produced by VTK<br />
* [[VTK/Tutorials/InteractorStyleSubclass | vtkInteractorStyle subclass]] - Handling user events in a render window.<br />
* [[VTK/Tutorials/Widgets | Widgets]]<br />
* [[VTK/Tutorials/SavingVideos | Saving videos]]<br />
* [[VTK/Writing_VTK_files_using_python | Writing VTK files using python]]<br />
* [[VTK/mesh quality | Geometric mesh quality]]<br />
* [[VTK_XML_Formats | VTK XML Format Details]]<br />
*[http://www.imtek.uni-freiburg.de/simulation/mathematica/IMSweb/ IMTEK Mathematica Supplement (IMS)] - an open-source Mathematica supplement that interfaces with VTK and generates ParaView batch scripts<br />
<br />
=== Wrapping ===<br />
<br />
* [[VTK/Java Wrapping|Java]]<br />
*[[VTK/Java Code Samples|Java code samples]]<br />
* [[VTK/Python Wrapping FAQ|Python]]<br />
* [[VTK/Python Wrapper Enhancement|Python wrapper enhancements]]<br />
<br />
<br />
<br />
|}</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials&diff=40917VTK/Tutorials2011-06-19T14:20:03Z<p>Pratikm: /* Introduction to VTK */</p>
<hr />
<div>On this page, we hope to gather a collection of tutorials on specific topics that are not clearly elucidated elsewhere.<br />
== Introduction to VTK==<br />
A catalog of several external tutorials (from courses, slides, etc., from around the world) can be found [[VTK/Tutorials/External_Tutorials|here]]. These will help the absolute beginner learn the basics of VTK.<br />
<br />
== <center>Advanced Tutorials</center> ==<br />
{| border="0" align="center" width="98%" valign="top" cellspacing="7" cellpadding="2"<br />
|-<br />
! width="45%"|<br />
! |<br />
! width="60%"|<br />
|- <br />
|valign="top"|<br />
<br />
==System Configuration/General Information==<br />
<br />
* [[VTK/Git | Obtaining VTK using Git]]<br />
* [[VTK/Tutorials/CMakeListsFile | Typical CMakeLists.txt file]]<br />
* [[VTK/Tutorials/CMakeListsFileForQt4 | Typical CMakeLists.txt file for Qt4]]<br />
* [[VTK/Tutorials/LinuxEnvironmentSetup | Linux Environment Setup]]<br />
* [[VTK/Tutorials/WindowsEnvironmentSetup | Microsoft Windows Environment Setup]]<br />
* [[VTK/Tutorials/PythonEnvironmentSetup | Python environment setup]]<br />
* [[VTK/Tutorials/JavaEnvironmentSetup | Java environment setup]]<br />
* [[VTK/Tutorials/SQLSetup | SQL setup]]<br />
<br />
==Basics==<br />
<br />
* [[VTK/Tutorials/SmartPointers | Smart pointers]]<br />
* [[VTK/Tutorials/VtkIdType | vtkIdType]]<br />
* [[VTK/Tutorials/3DDataTypes | 3D Data Types]] - A brief outline of the data types that VTK offers for 3D data storage.<br />
* [[VTK/Tutorials/MemberVariables | Non-SmartPointer Template Member Variable]]<br />
* [[VTK/Tutorials/Extents | Extents]] - A powerful indexing method.<br />
* [[VTK/Tutorials/GeometryTopology | Geometry & Topology]] - Demonstrates 0, 1, and 2D topology on a triangle geometry.<br />
* [[VTK/Tutorials/VTK Terminology | Terminology]] - What is a mapper? What is an actor? What is a filter? What is a source?<br />
* [[VTK/Tutorials/DataStorage | Field data, cell data, and point data]] - What are these? When should I use them?<br />
|bgcolor="#CCCCCC"|<br />
|valign="top"|<br />
<br />
==Tutorials==<br />
=== VTK Pipeline===<br />
* [[VTK/Tutorials/New_Pipeline | The New VTK Pipeline]] - The New Pipeline model and VTK Executives<br />
* [[VTK/Information Keys | VTK Information Keys ]] and their significance<br />
* [[VTK/Tutorials/Composite Datasets|Composite Datasets]] in VTK <br />
* [[VTK/Streaming | Streaming data]] in VTK<br />
<br />
=== General Topics ===<br />
* [[VTK/Tutorials/Callbacks | Callbacks]] - Handling events produced by VTK<br />
* [[VTK/Tutorials/InteractorStyleSubclass | vtkInteractorStyle subclass]] - Handling user events in a render window.<br />
* [[VTK/Tutorials/Widgets | Widgets]]<br />
* [[VTK/Tutorials/SavingVideos | Saving videos]]<br />
* [[VTK/Writing_VTK_files_using_python | Writing VTK files using python]]<br />
* [[VTK/mesh quality | Geometric mesh quality]]<br />
* [[VTK_XML_Formats | VTK XML Format Details]]<br />
*[http://www.imtek.uni-freiburg.de/simulation/mathematica/IMSweb/ IMTEK Mathematica Supplement (IMS)] - an open-source Mathematica supplement that interfaces with VTK and generates ParaView batch scripts<br />
<br />
== Wrapping ==<br />
<br />
* [[VTK/Java Wrapping|Java]]<br />
** [[VTK/Java Code Samples|Java code samples]]<br />
* [[VTK/Python Wrapping FAQ|Python]]<br />
** [[VTK/Python Wrapper Enhancement|Python wrapper enhancements]]<br />
<br />
===Tutorials for Developers===<br />
* [[VTK/Tutorials/RevisionMacros | New and Revision Macros]]<br />
<br />
<br />
|}</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials&diff=40916VTK/Tutorials2011-06-19T14:18:48Z<p>Pratikm: /* Tutorials */</p>
<hr />
<div>On this page, we hope to gather a collection of tutorials on specific topics that are not clearly elucidated elsewhere.<br />
== Introduction to VTK==<br />
A catalog of several external tutorials (from courses, slides, etc., from around the world) can be found here: [[VTK/Tutorials/External_Tutorials| External Tutorials]]. These will help the absolute beginner learn the basics of VTK.<br />
== <center>Advanced Tutorials</center> ==<br />
{| border="0" align="center" width="98%" valign="top" cellspacing="7" cellpadding="2"<br />
|-<br />
! width="45%"|<br />
! |<br />
! width="60%"|<br />
|- <br />
|valign="top"|<br />
<br />
==System Configuration/General Information==<br />
<br />
* [[VTK/Git | Obtaining VTK using Git]]<br />
* [[VTK/Tutorials/CMakeListsFile | Typical CMakeLists.txt file]]<br />
* [[VTK/Tutorials/CMakeListsFileForQt4 | Typical CMakeLists.txt file for Qt4]]<br />
* [[VTK/Tutorials/LinuxEnvironmentSetup | Linux Environment Setup]]<br />
* [[VTK/Tutorials/WindowsEnvironmentSetup | Microsoft Windows Environment Setup]]<br />
* [[VTK/Tutorials/PythonEnvironmentSetup | Python environment setup]]<br />
* [[VTK/Tutorials/JavaEnvironmentSetup | Java environment setup]]<br />
* [[VTK/Tutorials/SQLSetup | SQL setup]]<br />
<br />
==Basics==<br />
<br />
* [[VTK/Tutorials/SmartPointers | Smart pointers]]<br />
* [[VTK/Tutorials/VtkIdType | vtkIdType]]<br />
* [[VTK/Tutorials/3DDataTypes | 3D Data Types]] - A brief outline of the data types that VTK offers for 3D data storage.<br />
* [[VTK/Tutorials/MemberVariables | Non-SmartPointer Template Member Variable]]<br />
* [[VTK/Tutorials/Extents | Extents]] - A powerful indexing method.<br />
* [[VTK/Tutorials/GeometryTopology | Geometry & Topology]] - Demonstrates 0, 1, and 2D topology on a triangle geometry.<br />
* [[VTK/Tutorials/VTK Terminology | Terminology]] - What is a mapper? What is an actor? What is a filter? What is a source?<br />
* [[VTK/Tutorials/DataStorage | Field data, cell data, and point data]] - What are these? When should I use them?<br />
|bgcolor="#CCCCCC"|<br />
|valign="top"|<br />
<br />
==Tutorials==<br />
=== VTK Pipeline===<br />
* [[VTK/Tutorials/New_Pipeline | The New VTK Pipeline]] - The New Pipeline model and VTK Executives<br />
* [[VTK/Information Keys | VTK Information Keys ]] and their significance<br />
* [[VTK/Tutorials/Composite Datasets|Composite Datasets]] in VTK <br />
* [[VTK/Streaming | Streaming data]] in VTK<br />
<br />
=== General Topics ===<br />
* [[VTK/Tutorials/Callbacks | Callbacks]] - Handling events produced by VTK<br />
* [[VTK/Tutorials/InteractorStyleSubclass | vtkInteractorStyle subclass]] - Handling user events in a render window.<br />
* [[VTK/Tutorials/Widgets | Widgets]]<br />
* [[VTK/Tutorials/SavingVideos | Saving videos]]<br />
* [[VTK/Writing_VTK_files_using_python | Writing VTK files using python]]<br />
* [[VTK/mesh quality | Geometric mesh quality]]<br />
* [[VTK_XML_Formats | VTK XML Format Details]]<br />
*[http://www.imtek.uni-freiburg.de/simulation/mathematica/IMSweb/ IMTEK Mathematica Supplement (IMS)] - an open-source Mathematica supplement that interfaces with VTK and generates ParaView batch scripts<br />
<br />
== Wrapping ==<br />
<br />
* [[VTK/Java Wrapping|Java]]<br />
** [[VTK/Java Code Samples|Java code samples]]<br />
* [[VTK/Python Wrapping FAQ|Python]]<br />
** [[VTK/Python Wrapper Enhancement|Python wrapper enhancements]]<br />
<br />
===Tutorials for Developers===<br />
* [[VTK/Tutorials/RevisionMacros | New and Revision Macros]]<br />
<br />
<br />
|}</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials&diff=40915VTK/Tutorials2011-06-19T14:15:29Z<p>Pratikm: </p>
<hr />
<div>On this page, we hope to gather a collection of tutorials on specific topics that are not clearly elucidated elsewhere.<br />
== Introduction to VTK==<br />
A catalog of several external tutorials (from courses, slides, etc., from around the world) can be found here: [[VTK/Tutorials/External_Tutorials| External Tutorials]]. These will help the absolute beginner learn the basics of VTK.<br />
== <center>Advanced Tutorials</center> ==<br />
{| border="0" align="center" width="98%" valign="top" cellspacing="7" cellpadding="2"<br />
|-<br />
! width="45%"|<br />
! |<br />
! width="60%"|<br />
|- <br />
|valign="top"|<br />
<br />
==System Configuration/General Information==<br />
<br />
* [[VTK/Git | Obtaining VTK using Git]]<br />
* [[VTK/Tutorials/CMakeListsFile | Typical CMakeLists.txt file]]<br />
* [[VTK/Tutorials/CMakeListsFileForQt4 | Typical CMakeLists.txt file for Qt4]]<br />
* [[VTK/Tutorials/LinuxEnvironmentSetup | Linux Environment Setup]]<br />
* [[VTK/Tutorials/WindowsEnvironmentSetup | Microsoft Windows Environment Setup]]<br />
* [[VTK/Tutorials/PythonEnvironmentSetup | Python environment setup]]<br />
* [[VTK/Tutorials/JavaEnvironmentSetup | Java environment setup]]<br />
* [[VTK/Tutorials/SQLSetup | SQL setup]]<br />
<br />
==Basics==<br />
<br />
* [[VTK/Tutorials/SmartPointers | Smart pointers]]<br />
* [[VTK/Tutorials/VtkIdType | vtkIdType]]<br />
* [[VTK/Tutorials/3DDataTypes | 3D Data Types]] - A brief outline of the data types that VTK offers for 3D data storage.<br />
* [[VTK/Tutorials/MemberVariables | Non-SmartPointer Template Member Variable]]<br />
* [[VTK/Tutorials/Extents | Extents]] - A powerful indexing method.<br />
* [[VTK/Tutorials/GeometryTopology | Geometry & Topology]] - Demonstrates 0, 1, and 2D topology on a triangle geometry.<br />
* [[VTK/Tutorials/VTK Terminology | Terminology]] - What is a mapper? What is an actor? What is a filter? What is a source?<br />
* [[VTK/Tutorials/DataStorage | Field data, cell data, and point data]] - What are these? When should I use them?<br />
|bgcolor="#CCCCCC"|<br />
|valign="top"|<br />
<br />
==Tutorials==<br />
<br />
* [[VTK/Tutorials/New_Pipeline | The New VTK Pipeline]] - The New Pipeline model and VTK Executives<br />
* [[VTK/Tutorials/Callbacks | Callbacks]] - Handling events produced by VTK<br />
* [[VTK/Tutorials/InteractorStyleSubclass | vtkInteractorStyle subclass]] - Handling user events in a render window.<br />
* [[VTK/Tutorials/Widgets | Widgets]]<br />
* [[VTK/Tutorials/SavingVideos | Saving videos]]<br />
* [[VTK/Writing_VTK_files_using_python | Writing VTK files using python]]<br />
* [[VTK/mesh quality | Geometric mesh quality]]<br />
* [[VTK_XML_Formats | VTK XML Format Details]]<br />
* [[VTK/Information Keys | VTK Information Keys and their significance]]<br />
* [[VTK/Tutorials/Composite Datasets|Composite Datasets]] in VTK <br />
* [[VTK/Streaming | Streaming data in VTK]]<br />
*[http://www.imtek.uni-freiburg.de/simulation/mathematica/IMSweb/ IMTEK Mathematica Supplement (IMS)] - an open-source Mathematica supplement that interfaces with VTK and generates ParaView batch scripts<br />
== Wrapping ==<br />
<br />
* [[VTK/Java Wrapping|Java]]<br />
** [[VTK/Java Code Samples|Java code samples]]<br />
* [[VTK/Python Wrapping FAQ|Python]]<br />
** [[VTK/Python Wrapper Enhancement|Python wrapper enhancements]]<br />
<br />
===Tutorials for Developers===<br />
* [[VTK/Tutorials/RevisionMacros | New and Revision Macros]]<br />
<br />
<br />
|}</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials&diff=40914VTK/Tutorials2011-06-19T14:14:23Z<p>Pratikm: </p>
<hr />
<div>On this page, we hope to gather a collection of tutorials on specific topics that are not clearly elucidated elsewhere.<br />
== Introduction to VTK==<br />
A catalog of several external tutorials (from courses, slides, etc., from around the world) can be found here: [[VTK/Tutorials/External_Tutorials| External Tutorials]]. These will help the absolute beginner learn the basics of VTK.<br />
== <center>Advanced Tutorials</center> ==<br />
{| border="0" align="center" width="98%" valign="top" cellspacing="7" cellpadding="2"<br />
|-<br />
! width="50%"|<br />
! |<br />
! width="50%"|<br />
|- <br />
|valign="top"|<br />
<br />
==System Configuration/General Information==<br />
<br />
* [[VTK/Git | Obtaining VTK using Git]]<br />
* [[VTK/Tutorials/CMakeListsFile | Typical CMakeLists.txt file]]<br />
* [[VTK/Tutorials/CMakeListsFileForQt4 | Typical CMakeLists.txt file for Qt4]]<br />
* [[VTK/Tutorials/LinuxEnvironmentSetup | Linux Environment Setup]]<br />
* [[VTK/Tutorials/WindowsEnvironmentSetup | Microsoft Windows Environment Setup]]<br />
* [[VTK/Tutorials/PythonEnvironmentSetup | Python environment setup]]<br />
* [[VTK/Tutorials/JavaEnvironmentSetup | Java environment setup]]<br />
* [[VTK/Tutorials/SQLSetup | SQL setup]]<br />
<br />
==Basics==<br />
<br />
* [[VTK/Tutorials/SmartPointers | Smart pointers]]<br />
* [[VTK/Tutorials/VtkIdType | vtkIdType]]<br />
* [[VTK/Tutorials/3DDataTypes | 3D Data Types]] - A brief outline of the data types that VTK offers for 3D data storage.<br />
* [[VTK/Tutorials/MemberVariables | Non-SmartPointer Template Member Variable]]<br />
* [[VTK/Tutorials/Extents | Extents]] - A powerful indexing method.<br />
* [[VTK/Tutorials/GeometryTopology | Geometry & Topology]] - Demonstrates 0, 1, and 2D topology on a triangle geometry.<br />
* [[VTK/Tutorials/VTK Terminology | Terminology]] - What is a mapper? What is an actor? What is a filter? What is a source?<br />
* [[VTK/Tutorials/DataStorage | Field data, cell data, and point data]] - What are these? When should I use them?<br />
|bgcolor="#CCCCCC"|<br />
|valign="top"|<br />
<br />
==Tutorials==<br />
<br />
* [[VTK/Tutorials/New_Pipeline | The New VTK Pipeline]] - The New Pipeline model and VTK Executives<br />
* [[VTK/Tutorials/Callbacks | Callbacks]] - Handling events produced by VTK<br />
* [[VTK/Tutorials/InteractorStyleSubclass | vtkInteractorStyle subclass]] - Handling user events in a render window.<br />
* [[VTK/Tutorials/Widgets | Widgets]]<br />
* [[VTK/Tutorials/SavingVideos | Saving videos]]<br />
* [[VTK/Writing_VTK_files_using_python | Writing VTK files using python]]<br />
* [[VTK/mesh quality | Geometric mesh quality]]<br />
* [[VTK_XML_Formats | VTK XML Format Details]]<br />
* [[VTK/Information Keys | VTK Information Keys and their significance]]<br />
* [[VTK/Tutorials/Composite Datasets|Composite Datasets]] in VTK <br />
* [[VTK/Streaming | Streaming data in VTK]]<br />
*[http://www.imtek.uni-freiburg.de/simulation/mathematica/IMSweb/ IMTEK Mathematica Supplement (IMS)] - an open-source Mathematica supplement that interfaces with VTK and generates ParaView batch scripts<br />
== Wrapping ==<br />
<br />
* [[VTK/Java Wrapping|Java]]<br />
** [[VTK/Java Code Samples|Java code samples]]<br />
* [[VTK/Python Wrapping FAQ|Python]]<br />
** [[VTK/Python Wrapper Enhancement|Python wrapper enhancements]]<br />
<br />
===Tutorials for Developers===<br />
* [[VTK/Tutorials/RevisionMacros | New and Revision Macros]]<br />
<br />
<br />
|}</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials&diff=40913VTK/Tutorials2011-06-19T14:00:33Z<p>Pratikm: /* Tutorials */</p>
<hr />
<div>==System Configuration/General Information==<br />
* [[VTK/Git | Obtaining VTK using Git]]<br />
* [[VTK/Tutorials/CMakeListsFile | Typical CMakeLists.txt file]]<br />
* [[VTK/Tutorials/CMakeListsFileForQt4 | Typical CMakeLists.txt file for Qt4]]<br />
* [[VTK/Tutorials/LinuxEnvironmentSetup | Linux Environment Setup]]<br />
* [[VTK/Tutorials/WindowsEnvironmentSetup | Microsoft Windows Environment Setup]]<br />
* [[VTK/Tutorials/PythonEnvironmentSetup | Python environment setup]]<br />
* [[VTK/Tutorials/JavaEnvironmentSetup | Java environment setup]]<br />
* [[VTK/Tutorials/SQLSetup | SQL setup]]<br />
<br />
==Basics==<br />
* [[VTK/Tutorials/SmartPointers | Smart pointers]]<br />
* [[VTK/Tutorials/VtkIdType | vtkIdType]]<br />
* [[VTK/Tutorials/3DDataTypes | 3D Data Types]] - A brief outline of the data types that VTK offers for 3D data storage.<br />
* [[VTK/Tutorials/MemberVariables | Non-SmartPointer Template Member Variable]]<br />
* [[VTK/Tutorials/Extents | Extents]] - A powerful indexing method.<br />
* [[VTK/Tutorials/GeometryTopology | Geometry & Topology]] - Demonstrates 0, 1, and 2D topology on a triangle geometry.<br />
* [[VTK/Tutorials/VTK Terminology | Terminology]] - What is a mapper? What is an actor? What is a filter? What is a source?<br />
* [[VTK/Tutorials/DataStorage | Field data, cell data, and point data]] - What are these? When should I use them?<br />
<br />
==Tutorials==<br />
* [[VTK/Tutorials/New_Pipeline | The New VTK Pipeline]] - The New Pipeline model and VTK Executives<br />
* [[VTK/Tutorials/Callbacks | Callbacks]] - Handling events produced by VTK<br />
* [[VTK/Tutorials/InteractorStyleSubclass | vtkInteractorStyle subclass]] - Handling user events in a render window.<br />
* [[VTK/Tutorials/Widgets | Widgets]]<br />
* [[VTK/Tutorials/SavingVideos | Saving videos]]<br />
* [[VTK/Writing_VTK_files_using_python | Writing VTK files using python]]<br />
* [[VTK/mesh quality | Geometric mesh quality]]<br />
* [[VTK_XML_Formats | VTK XML Format Details]]<br />
* [[VTK/Information Keys | VTK Information Keys and their significance]]<br />
* [[VTK/Tutorials/Composite Datasets|Composite Datasets]] in VTK <br />
* [[VTK/Streaming | Streaming data in VTK]]<br />
*[http://www.imtek.uni-freiburg.de/simulation/mathematica/IMSweb/ IMTEK Mathematica Supplement (IMS)] - an open-source Mathematica supplement that interfaces with VTK and generates ParaView batch scripts<br />
=== Wrapping ===<br />
* [[VTK/Java Wrapping|Java]]<br />
** [[VTK/Java Code Samples|Java code samples]]<br />
* [[VTK/Python Wrapping FAQ|Python]]<br />
** [[VTK/Python Wrapper Enhancement|Python wrapper enhancements]]<br />
<br />
===Tutorials for Developers===<br />
* [[VTK/Tutorials/RevisionMacros | New and Revision Macros]]<br />
<br />
===External Tutorials===<br />
Pratik was nice enough to catalog several external tutorials (from courses, slides, etc., from around the world) here: [[VTK/Tutorials/External_Tutorials| External Tutorials]]</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/Composite_Datasets&diff=40912VTK/Tutorials/Composite Datasets2011-06-19T13:58:59Z<p>Pratikm: </p>
<hr />
<div>VTK 5.0 introduced composite datasets. Composite datasets are simply datasets composed of other datasets. This notion is useful for defining complex structures built from smaller components, e.g. an unstructured grid for a car made of separate grids for the tires, chassis, seats, etc. It is also used for representing datasets with adaptive mesh refinement (AMR). AMR refers to the technique of automatically refining certain regions of the physical domain during a numerical simulation.<br />
<br />
The October 2006 Kitware Source included an article describing composite datasets in VTK and how to use them. Since then, the implementation of composite datasets in VTK has undergone some major rework. The main goal was to make their use simple and intuitive. This article describes these changes, which should make it into VTK 5.2.<br />
<br />
A rough summary of the design changes can be found [[VTK/Composite_Data_Redesign|here]].<br />
<br />
=Composite Datasets=<br />
The new class hierarchy for composite datasets is as follows:<br />
<center><br />
[[Image:Composite1.png|400px]]<br />
</center><br />
As the above diagram shows, there are three concrete subclasses of vtkCompositeDataSet. vtkMultiBlockDataSet is a dataset composed of blocks. Each block can be a non-composite vtkDataObject subclass (a leaf) or an instance of vtkMultiBlockDataSet itself. This makes it possible to build full trees. vtkHierarchicalBoxDataSet is used for AMR datasets, which comprise refinement levels and uniform grid datasets at each refinement level. vtkMultiPieceDataSet can be thought of as a specialization of vtkMultiBlockDataSet in which none of the blocks can be a composite dataset; it is used to group multiple pieces of a dataset together.<br />
<br />
vtkCompositeDataSet is the abstract base class for all composite datasets. It provides an implementation for a tree data structure. All subclasses of composite datasets are basically trees of vtkDataSet instances with certain restrictions. Hence, vtkCompositeDataSet provides the internal tree implementation with protected API for the subclasses to access this internal tree, while leaving it to the subclasses to provide public API to populate the dataset. The only public API that this class provides relates to iterators. <br />
<br />
Iterators are used to access nodes in the composite dataset. Here’s an example showing the use of an iterator to iterate over non-empty, non-composite dataset nodes. <br />
<br />
<br />
<source lang="cpp"><br />
vtkCompositeDataIterator* iter = compositeData->NewIterator();<br />
for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())<br />
  {<br />
  vtkDataObject* dObj = iter->GetCurrentDataObject();<br />
  cout << dObj->GetClassName() << endl;<br />
  }<br />
iter->Delete(); // the iterator was created with NewIterator() and must be released<br />
</source><br />
<br />
<br />
As we can see, accessing nodes within a composite dataset hasn’t really changed. However, the generic API provided by vtkCompositeDataSet for setting datasets using an iterator makes it possible to create composite dataset trees with identical structures without having to downcast to a concrete type. The following is an example of an outline filter that applies the standard outline filter to each leaf dataset within the composite tree. The output is also a composite tree, with each node replaced by the output of the outline filter.<br />
<br />
<br />
<source lang="cpp"><br />
vtkCompositeDataSet* input = …<br />
vtkCompositeDataSet* output = input->NewInstance();<br />
output->CopyStructure(input);<br />
vtkCompositeDataIterator* iter = input->NewIterator();<br />
for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())<br />
  {<br />
  vtkDataSet* inputDS = vtkDataSet::SafeDownCast(iter->GetCurrentDataObject());<br />
  vtkPolyData* outputDS = this->NewOutline(inputDS);<br />
  output->SetDataSet(iter, outputDS);<br />
  outputDS->Delete();<br />
  }<br />
iter->Delete();<br />
</source><br />
<br />
<br />
By default, the iterator visits only leaf nodes, i.e. non-composite datasets within the composite tree. This can be changed by toggling the VisitOnlyLeaves flag. The default behavior of skipping empty nodes can be disabled by setting the SkipEmptyNodes flag to false. Similarly, to visit only the first-level children instead of traversing the entire sub-tree, set the TraverseSubTree flag to false.<br />
<br />
To make it possible to address a particular node within a composite tree, the iterator also provides a flat index for each node. The flat index of a node is its index in a preorder traversal of the tree. In the following diagram, the preorder traversal of the tree yields A, B, D, E, C; hence the flat index of A is 0, while that of C is 4. Filters such as vtkExtractBlock use the flat index to identify nodes.<br />
<center><br />
[[Image:Composite2.png|400px]]<br />
</center><br />
==MultiPiece Dataset==<br />
vtkMultiPieceDataSet is the simplest of all composite datasets. It is used to combine a collection of non-composite datasets. It is typically used to hold pieces of a dataset partitioned among processes, hence the name. To reiterate, a piece in a multi-piece dataset cannot be a composite dataset.<br />
<br />
==Multi-Block Dataset==<br />
A vtkMultiBlockDataSet is a composite dataset composed of blocks. It provides API to set/access blocks, such as SetBlock, GetBlock, GetNumberOfBlocks, etc. A block can be an instance of vtkMultiBlockDataSet or any other subclass of vtkDataObject that is not a vtkCompositeDataSet. Multiblock datasets no longer support the notion of sub-datasets within a block. To achieve the same effect, one can add a vtkMultiPieceDataSet as the block and then put the sub-datasets as pieces in the vtkMultiPieceDataSet.<br />
<br />
==Hierarchical-Box Dataset==<br />
vtkHierarchicalBoxDataSet is used to represent AMR datasets. It comprises refinement levels and the datasets associated with each level. The datasets at each level are restricted to vtkUniformGrid, which is vtkImageData with blanking support for cells and points. Internally, vtkHierarchicalBoxDataSet creates a vtkMultiPieceDataSet instance for each level; all datasets at a level are added as pieces to that multi-piece dataset.<br />
vtkHierarchicalDataSet has been deprecated and is no longer supported, since it is not much different from a vtkMultiBlockDataSet.<br />
<br />
==Pipeline Execution==<br />
It is possible to create mixed pipelines of filters that may or may not handle composite datasets. For filters that are not composite-data aware, vtkCompositeDataPipeline executes the filter for each leaf node in the composite dataset to produce an output similar in structure to the input composite dataset. In the previous implementation of this executive, the output would always be a generic superclass of the concrete composite datasets. For example, if a vtkCellDataToPointData filter was inserted into a composite data pipeline with a vtkHierarchicalBoxDataSet input, the output would still be a vtkMultiGroupDataSet. This has been changed to try to preserve the input data type. Since vtkCellDataToPointData does not change the data type of the input datasets, if the input is a vtkHierarchicalBoxDataSet, the output will now also be a vtkHierarchicalBoxDataSet. However, for filters such as vtkContourFilter, where the output type is not a vtkUniformGrid, the output will be a vtkMultiBlockDataSet with a structure similar to that of the input vtkHierarchicalBoxDataSet.<br />
<br />
=Extraction Filters=<br />
A few new extraction filters have been added that enable extracting component datasets from a composite dataset.<br />
<br />
==Extract Block==<br />
The vtkExtractBlock filter is used to extract a set of blocks from a vtkMultiBlockDataSet. The blocks to extract are identified by their flat indices. If PruneOutput is true, the output is pruned to remove empty branches and redundant vtkMultiBlockDataSet nodes, i.e. a vtkMultiBlockDataSet node whose single child is also a vtkMultiBlockDataSet. The output of this filter is always a vtkMultiBlockDataSet, even if only a single leaf node is selected for extraction.<br />
<br />
==Extract Level==<br />
vtkExtractLevel is used to extract a set of levels from a vtkHierarchicalBoxDataSet. It simply removes the datasets from all levels except the ones chosen to be extracted. It always produces a vtkHierarchicalBoxDataSet as the output.<br />
<br />
==Extract Datasets==<br />
vtkExtractDataSets is used to extract datasets from a vtkHierarchicalBoxDataSet. The user identifies the datasets to extract using their level number and the dataset index within that level. The output is a vtkHierarchicalBoxDataSet with the same structure as the input, containing only the selected datasets.<br />
<br />
=Conclusion=<br />
With the redesign, composite datasets now use a full tree data structure to store the datasets rather than the table-of-tables approach used earlier. This makes it easier to build and parse the structure. Iterators have been made more powerful and can now be used for getting as well as setting datasets in the composite tree, thus minimizing the need to downcast to concrete subclasses for simple filters.</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/Composite_Datasets&diff=40911VTK/Tutorials/Composite Datasets2011-06-19T13:58:19Z<p>Pratikm: /* Composite Datasets */</p>
<hr />
<div>VTK 5.0 introduced composite datasets. Composite datasets are nothing but datasets comprising of other datasets. This notion is useful in defining complex structures comprising of other smaller components e.g. an unstructured grid for a car made of grids for the tires, chassis, seats etc. It is also used for representing datasets with adaptive mesh refinement (AMR). AMR refers to the technique of automatically refining certain regions of the physical domain during a numerical simulation. <br />
<br />
The October 2006 Kitware Source included an article describing the composite datasets in VTK and how to use them. Since then, the implementation of composite datasets in VTK has undergone some major rework. The main goal was to make the use simple and intuitive. This article describes these changes. These changes should make it into VTK 5.2.<br />
<br />
A rough summary of the design can be found [[VTK/Composite_Data_Redesign|here]]<br />
<br />
=Composite Datasets=<br />
The new class hierarchy for composite datasets is as follows:<br />
<center><br />
[[Image:Composite1.png|400px]]<br />
</center><br />
As is obvious from the above diagram, we have 3 concrete subclasses of vtkCompositeDataSet. vtkMultiBlockDataSet is a dataset comprising of blocks. Each block can be a non-composite vtkDataObject subclass (or a leaf) or an instance of vtkMultiBlockDataSet itself. This makes is possible to build full trees. vtkHierarchicalBoxDataSet is used for AMR datasets which comprises of refinement levels and uniform grid datasets at each refinement level. vtkMultiPieceDataSet can be thought of as a specialization of vtkMultiBlockDataSet where none of the blocks can be a composite dataset. vtkMultiPieceDataSet is used to group multiple pieces of a dataset together.<br />
<br />
vtkCompositeDataSet is the abstract base class for all composite datasets. It provides an implementation for a tree data structure. All subclasses of composite datasets are basically trees of vtkDataSet instances with certain restrictions. Hence, vtkCompositeDataSet provides the internal tree implementation with protected API for the subclasses to access this internal tree, while leaving it to the subclasses to provide public API to populate the dataset. The only public API that this class provides relates to iterators. <br />
<br />
Iterators are used to access nodes in the composite dataset. Here’s an example showing the use of an iterator to iterate over non-empty, non-composite dataset nodes. <br />
<br />
<br />
<source lang="cpp"><br />
vtkCompositeDataIterator* iter = compositeData->NewIterator();<br />
for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())<br />
{<br />
vtkDataObject* dObj = iter->GetCurrentDataObject();<br />
cout << dObj->GetClassName() <<endl;<br />
}<br />
</source><br />
<br />
<br />
As we see, accessing nodes within a composite dataset hasn’t really changed. However, the generic API provided by vtkCompositeDataSet for setting datasets using an iterator makes it possible to create composite dataset trees with identical structures without having to downcast to a concrete type. Following is an example of an outline filter that applies the standard outline filter to each leaf dataset with the composite tree. The output is also a composite tree with each node replaced by the output of the outline filter.<br />
<br />
<br />
<source lang="cpp"><br />
vtkCompositeDataSet* input = …<br />
vtkCompositeDataSet* output = input->NewInstance();<br />
output->CopyStructure(input);<br />
vtkCompositeDataIterator* iter = input->NewIterator();<br />
for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())<br />
  {<br />
  vtkDataSet* inputDS = vtkDataSet::SafeDownCast(iter->GetCurrentDataObject());<br />
  vtkPolyData* outputDS = this->NewOutline(inputDS);<br />
  output->SetDataSet(iter, outputDS);<br />
  outputDS->Delete();<br />
  }<br />
iter->Delete();<br />
</source><br />
<br />
<br />
By default, the iterator visits only leaf nodes, i.e. non-composite datasets within the composite tree; this can be changed by toggling the VisitOnlyLeaves flag. Empty nodes are skipped by default; to visit them as well, set the SkipEmptyNodes flag to false. Similarly, to visit only the first-level children instead of traversing the entire sub-tree, set the TraverseSubTree flag to false.<br />
<br />
To make it possible to address a particular node within a composite tree, the iterator also provides a flat index for each node. The flat index of a node is its index in a preorder traversal of the tree. In the following diagram, the preorder traversal of the tree yields A, B, D, E, C; hence the flat index of A is 0, while that of C is 4. Filters such as vtkExtractBlock use the flat index to identify nodes.<br />
<center><br />
[[Image:Composite2.png|400px]]<br />
</center><br />
==MultiPiece Dataset==<br />
vtkMultiPieceDataSet is the simplest of all composite datasets. It is used to group several non-composite datasets together. It is useful for holding the pieces of a dataset partitioned among processes, hence the name. To reiterate, a piece in a multi-piece dataset cannot be a composite dataset.<br />
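As a rough sketch, grouping locally loaded partitions might look like the following; this assumes SetNumberOfPieces/SetPiece accessors, whose exact signatures may differ between VTK versions.<br />

```cpp
#include "vtkMultiPieceDataSet.h"
#include "vtkPolyData.h"

// Sketch: collect four locally loaded partitions into one multi-piece
// dataset. Assumes the SetNumberOfPieces/SetPiece accessors; exact
// signatures may differ between VTK versions.
void GroupPieces()
{
  vtkMultiPieceDataSet* mp = vtkMultiPieceDataSet::New();
  mp->SetNumberOfPieces(4);
  for (unsigned int i = 0; i < 4; ++i)
    {
    vtkPolyData* piece = vtkPolyData::New(); // stand-in for a loaded partition
    mp->SetPiece(i, piece);
    piece->Delete();
    }
  mp->Delete();
}
```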
<br />
==Multi-Block Dataset==<br />
A vtkMultiBlockDataSet is a composite dataset composed of blocks. It provides API to set and access blocks, such as SetBlock, GetBlock, and GetNumberOfBlocks. A block can be an instance of vtkMultiBlockDataSet or any other subclass of vtkDataObject that is not a vtkCompositeDataSet. Multi-block datasets no longer support the notion of sub-datasets within a block. To achieve the same effect, one can add a vtkMultiPieceDataSet as the block and then put the sub-datasets as pieces in the vtkMultiPieceDataSet.<br />
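The nesting described above might be built as follows; this is a sketch written against the API names mentioned in this article.<br />

```cpp
#include "vtkMultiBlockDataSet.h"
#include "vtkMultiPieceDataSet.h"
#include "vtkPolyData.h"

// Sketch: a two-block dataset in which the second block is a
// vtkMultiPieceDataSet, replacing the old "sub-datasets within a block".
void BuildNestedBlocks()
{
  vtkMultiBlockDataSet* mb = vtkMultiBlockDataSet::New();
  mb->SetNumberOfBlocks(2);

  vtkPolyData* leaf = vtkPolyData::New();
  mb->SetBlock(0, leaf); // a plain leaf block
  leaf->Delete();

  vtkMultiPieceDataSet* pieces = vtkMultiPieceDataSet::New();
  mb->SetBlock(1, pieces); // pieces play the role of the old sub-datasets
  pieces->Delete();

  mb->Delete();
}
```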
<br />
==Hierarchical-Box Dataset==<br />
vtkHierarchicalBoxDataSet is used to represent AMR datasets. It consists of refinement levels and the datasets associated with each level. The datasets at each level are restricted to vtkUniformGrid, which is a vtkImageData with blanking support for cells and points. Internally, vtkHierarchicalBoxDataSet creates a vtkMultiPieceDataSet instance for each level; all datasets at a level are added as pieces to that multi-piece dataset.<br />
vtkHierarchicalDataSet has been deprecated and is no longer supported; it differed little from vtkMultiBlockDataSet.<br />
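Because each level is stored internally as a multi-piece node, the generic iterator shown earlier can visit every vtkUniformGrid in an AMR dataset; for example:<br />

```cpp
#include "vtkCompositeDataIterator.h"
#include "vtkHierarchicalBoxDataSet.h"
#include "vtkUniformGrid.h"

// Sketch: visit every uniform grid in an AMR dataset using only the
// generic iterator API described in this article.
void VisitGrids(vtkHierarchicalBoxDataSet* amr)
{
  vtkCompositeDataIterator* iter = amr->NewIterator();
  for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())
    {
    vtkUniformGrid* grid =
      vtkUniformGrid::SafeDownCast(iter->GetCurrentDataObject());
    if (grid)
      {
      // process the grid, e.g. honoring its cell/point blanking
      }
    }
  iter->Delete();
}
```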
<br />
==Pipeline Execution==<br />
It is possible to create mixed pipelines of filters that can and cannot handle composite datasets. For filters that are not composite-data aware, vtkCompositeDataPipeline executes the filter once for each leaf node in the composite dataset to produce an output similar in structure to the input composite dataset. In the previous implementation of this executive, the output was always a generic superclass of the concrete composite datasets: if a vtkCellDataToPointData filter was inserted into a composite data pipeline with a vtkHierarchicalBoxDataSet input, the output would be a vtkMultiGroupDataSet. This has been changed to try to preserve the input data type. Since vtkCellDataToPointData does not change the data type of the input datasets, if the input is a vtkHierarchicalBoxDataSet, the output will now also be a vtkHierarchicalBoxDataSet. However, for filters such as vtkContourFilter, where the output type is not a vtkUniformGrid, the output will be a vtkMultiBlockDataSet with structure similar to the input vtkHierarchicalBoxDataSet.<br />
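For illustration, a non-composite-aware filter might be inserted into a composite pipeline as sketched below; "reader" is a hypothetical source producing a vtkHierarchicalBoxDataSet, and installing the composite executive as the default prototype is one common way to enable this looping behavior.<br />

```cpp
#include "vtkAlgorithm.h"
#include "vtkCellDataToPointData.h"
#include "vtkCompositeDataPipeline.h"

// Sketch: run a non-composite-aware filter over a composite input.
// "reader" is a hypothetical source producing a vtkHierarchicalBoxDataSet;
// installing vtkCompositeDataPipeline as the default executive prototype is
// one way to make simple filters loop over the leaf nodes.
void Connect(vtkAlgorithm* reader)
{
  vtkCompositeDataPipeline* prototype = vtkCompositeDataPipeline::New();
  vtkAlgorithm::SetDefaultExecutivePrototype(prototype);
  prototype->Delete();

  vtkCellDataToPointData* c2p = vtkCellDataToPointData::New();
  c2p->SetInputConnection(reader->GetOutputPort());
  c2p->Update();
  // Since vtkCellDataToPointData preserves the dataset type, the output
  // here is again a vtkHierarchicalBoxDataSet.
  c2p->Delete();
}
```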
<br />
=Extraction Filters=<br />
A few new extraction filters have been added that enable extracting component datasets from a composite dataset.<br />
<br />
==Extract Block==<br />
The vtkExtractBlock filter is used to extract a set of blocks from a vtkMultiBlockDataSet. The blocks to extract are identified by their flat indices. If PruneOutput is true, the output is pruned to remove empty branches and redundant vtkMultiBlockDataSet nodes, i.e. a vtkMultiBlockDataSet node whose single child is itself a vtkMultiBlockDataSet. The output of this filter is always a vtkMultiBlockDataSet, even if a single leaf node is selected for extraction.<br />
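A sketch of typical usage, assuming the AddIndex accessor and PruneOutput flag described above ("source" is a hypothetical upstream algorithm):<br />

```cpp
#include "vtkAlgorithm.h"
#include "vtkExtractBlock.h"

// Sketch: extract the nodes with flat indices 2 and 5 ("source" is a
// hypothetical upstream algorithm producing a vtkMultiBlockDataSet).
void ExtractBlocks(vtkAlgorithm* source)
{
  vtkExtractBlock* extract = vtkExtractBlock::New();
  extract->SetInputConnection(source->GetOutputPort());
  extract->AddIndex(2); // flat indices, as described above
  extract->AddIndex(5);
  extract->PruneOutputOn(); // drop empty branches and redundant nodes
  extract->Update();
  extract->Delete();
}
```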
<br />
==Extract Level==<br />
vtkExtractLevel is used to extract a set of levels from a vtkHierarchicalBoxDataSet. It simply removes the datasets from all levels except the ones chosen to be extracted. It always produces a vtkHierarchicalBoxDataSet as the output.<br />
<br />
==Extract Datasets==<br />
vtkExtractDataSets is used to extract datasets from a vtkHierarchicalBoxDataSet. The user identifies the datasets to extract by their level number and the dataset index within that level. The output is a vtkHierarchicalBoxDataSet with the same structure as the input, containing only the selected datasets.<br />
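A sketch of typical usage, assuming an AddDataSet(level, index) accessor as implied above ("source" is a hypothetical upstream algorithm):<br />

```cpp
#include "vtkAlgorithm.h"
#include "vtkExtractDataSets.h"

// Sketch: keep only dataset 0 of level 1 ("source" is a hypothetical
// upstream algorithm producing a vtkHierarchicalBoxDataSet).
void ExtractAMRDataSets(vtkAlgorithm* source)
{
  vtkExtractDataSets* extract = vtkExtractDataSets::New();
  extract->SetInputConnection(source->GetOutputPort());
  extract->AddDataSet(1, 0); // (level, dataset index within that level)
  extract->Update();
  extract->Delete();
}
```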
<br />
=Conclusion=<br />
With the redesign, composite datasets now use a full tree data structure to store the datasets rather than the table-of-tables approach used earlier. This makes it easier to build and parse the structure. Iterators have been extended and can now be used for getting as well as setting datasets in the composite tree, minimizing the need to downcast to concrete subclasses in simple filters.</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/Composite_Datasets&diff=40910VTK/Tutorials/Composite Datasets2011-06-19T13:56:21Z<p>Pratikm: Created page with "VTK 5.0 introduced composite datasets. Composite datasets are nothing but datasets comprising of other datasets. This notion is useful in defining complex structures comprising o..."</p>
<hr />
<div>VTK 5.0 introduced composite datasets: datasets composed of other datasets. This notion is useful for defining complex structures built from smaller components, e.g. an unstructured grid for a car made of grids for the tires, chassis, seats, etc. It is also used for representing datasets with adaptive mesh refinement (AMR), the technique of automatically refining certain regions of the physical domain during a numerical simulation.<br />
<br />
The October 2006 Kitware Source included an article describing composite datasets in VTK and how to use them. Since then, the implementation of composite datasets in VTK has undergone a major rework, with the main goal of making their use simpler and more intuitive. This article describes these changes, which should make it into VTK 5.2.<br />
<br />
A rough summary of the design can be found [[VTK/Composite_Data_Redesign|here]]<br />
<br />
=Composite Datasets=<br />
The new class hierarchy for composite datasets is as follows:<br />
<br />
[[Image:Composite1.png|400px]]<br />
<br />
</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=Composite_Datasets_in_VTK&diff=40909Composite Datasets in VTK2011-06-19T13:55:57Z<p>Pratikm: Redirected page to VTK/Tutorials/Composite Datasets</p>
<hr />
<div>#REDIRECT [[VTK/Tutorials/Composite Datasets]]</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials&diff=40908VTK/Tutorials2011-06-19T13:45:38Z<p>Pratikm: /* Tutorials */</p>
<hr />
<div>==System Configuration/General Information==<br />
* [[VTK/Git | Obtaining VTK using Git]]<br />
* [[VTK/Tutorials/CMakeListsFile | Typical CMakeLists.txt file]]<br />
* [[VTK/Tutorials/CMakeListsFileForQt4 | Typical CMakeLists.txt file for Qt4]]<br />
* [[VTK/Tutorials/LinuxEnvironmentSetup | Linux Environment Setup]]<br />
* [[VTK/Tutorials/WindowsEnvironmentSetup | Microsoft Windows Environment Setup]]<br />
* [[VTK/Tutorials/PythonEnvironmentSetup | Python environment setup]]<br />
* [[VTK/Tutorials/JavaEnvironmentSetup | Java environment setup]]<br />
* [[VTK/Tutorials/SQLSetup | SQL setup]]<br />
<br />
==Basics==<br />
* [[VTK/Tutorials/SmartPointers | Smart pointers]]<br />
* [[VTK/Tutorials/VtkIdType | vtkIdType]]<br />
* [[VTK/Tutorials/3DDataTypes | 3D Data Types]] - A brief outline of the data types that VTK offers for 3D data storage.<br />
* [[VTK/Tutorials/MemberVariables | Non-SmartPointer Template Member Variable]]<br />
* [[VTK/Tutorials/Extents | Extents]] - A powerful indexing method.<br />
* [[VTK/Tutorials/GeometryTopology | Geometry & Topology]] - Demonstrates 0, 1, and 2D topology on a triangle geometry.<br />
* [[VTK/Tutorials/VTK Terminology | Terminology]] - What is a mapper? What is an actor? What is a filter? What is a source?<br />
* [[VTK/Tutorials/DataStorage | Field data, cell data, and point data]] - What are these? When should I use them?<br />
<br />
==Tutorials==<br />
* [[VTK/Tutorials/New_Pipeline | The New VTK Pipeline]]- The New Pipeline model and VTK Executives<br />
* [[VTK/Tutorials/Callbacks | Callbacks]] - Handling events produced by VTK<br />
* [[VTK/Tutorials/InteractorStyleSubclass | vtkInteractorStyle subclass]] - Handling user events in a render window.<br />
* [[VTK/Tutorials/Widgets | Widgets]]<br />
* [[VTK/Tutorials/SavingVideos | Saving videos]]<br />
* [[VTK/Writing_VTK_files_using_python | Writing VTK files using python]]<br />
* [[VTK/mesh quality | Geometric mesh quality]]<br />
* [[VTK_XML_Formats | VTK XML Format Details]]<br />
* [[VTK/Information Keys | VTK Information Keys and their significance]]<br />
* [[VTK/Composite_Data_Redesign |Composite Data Redesign]] <br />
* [[VTK/Streaming | Streaming data in VTK]]<br />
*[http://www.imtek.uni-freiburg.de/simulation/mathematica/IMSweb/ IMTEK Mathematica Supplement (IMS)], the Open Source IMTEK Mathematica Supplement (IMS) interfaces VTK and generates ParaView batch scripts<br />
=== Wrapping ===<br />
* [[VTK/Java Wrapping|Java]]<br />
** [[VTK/Java Code Samples|Java code samples]]<br />
* [[VTK/Python Wrapping FAQ|Python]]<br />
** [[VTK/Python Wrapper Enhancement|Python wrapper enhancements]]<br />
<br />
===Tutorials for Developers===<br />
* [[VTK/Tutorials/RevisionMacros | New and Revision Macros]]<br />
<br />
===External Tutorials===<br />
Pratik was nice enough to catalog several external tutorials (from courses, slides, etc around the world) here: [[VTK/Tutorials/External_Tutorials| External Tutorials]]</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Composite_Data_Redesign&diff=40907VTK/Composite Data Redesign2011-06-19T13:41:01Z<p>Pratikm: /* Iterators */</p>
<hr />
<div>== Composite dataset re-architecture ==<br />
<br />
=== Current design ===<br />
<center><br />
[[Image:Old_composite_design.png]]<br />
</center><br />
=== Issues with the current design ===<br />
<br />
* Most functionality is based on vtkMultiGroupDataSet instead of vtkCompositeDataSet. For example, most algorithms (and the executives) use vtkMultiGroupDataSet API to iterate. This makes it impossible to add new sub-classes of vtkCompositeDataSet without writing new executives.<br />
* The concept of sub-block is confusing. vtkMultiGroupDataSet stores a vector of vectors of datasets. When this concept is mapped to the multi-block (or temporal) datasets, each block ends up having multiple sub-blocks. Furthermore, the convention that these sub-block ids map to the process ids is very confusing.<br />
* Algorithms that want to pass blanking have to downcast to vtkHierarchicalBoxDataSet and copy blanking explicitly.<br />
* vtkCompositeDataPipeline is a mess.<br />
<br />
=== Suggested design ===<br />
<center><br />
[[Image:New_composite_design.png]]<br />
</center><br />
* Get rid of vtkMultiGroupDataSet. Any code shared between subclasses of vtkCompositeDataPipeline can be shared using helper implementation objects.<br />
* Improve the iterators so that it is not necessary to use vtkMultiGroupDataSet API to iterate over blocks.<br />
* Add a vtkMultiPieceDataSet class that can be used to group multiple pieces together. Example: when loading a dataset with multiple partitions on 1 processor, vtkMultiPieceDataSet can be used instead of appending datasets together. vtkMultiPieceDataSet would have additional meta-data about things like whole extent for structured datasets.<br />
* Clean up vtkCompositeDataPipeline.<br />
* Improve ghost level support for composite datasets.<br />
<br />
==== Iterators ====<br />
<br />
In the current architecture, the most common thing to do is the following:<br />
<br />
<br />
<source lang="cpp"><br />
unsigned int numGroups = mbInput->GetNumberOfGroups();<br />
output->SetNumberOfGroups(numGroups);<br />
for (unsigned int groupId=0; groupId<numGroups; groupId++)<br />
  {<br />
  unsigned int numBlocks = mbInput->GetNumberOfDataSets(groupId);<br />
  output->SetNumberOfDataSets(groupId, numBlocks);<br />
  for (unsigned int blockId=0; blockId<numBlocks; blockId++)<br />
    {<br />
    vtkDataObject* block = mbInput->GetDataSet(groupId, blockId);<br />
<br />
    // do something with block to get an outBlock<br />
<br />
    output->SetDataSet(groupId, blockId, outBlock);<br />
    }<br />
  }<br />
</source><br />
<br />
<br />
As mentioned above, the problem with this approach is that it assumes that<br />
the composite dataset is a vtkMultiGroupDataSet. With the appropriate changes<br />
to the composite data iterators and composite datasets, the code above can<br />
be rewritten as:<br />
<br />
<br />
<source lang="cpp"><br />
output->CopyStructure(mbInput);<br />
<br />
vtkCompositeDataIterator* iter = mbInput->NewIterator();<br />
iter->GoToFirstItem();<br />
while (!iter->IsDoneWithTraversal())<br />
  {<br />
  vtkDataObject* block = iter->GetCurrentDataObject();<br />
  // Note that the iterator will only visit the leaf nodes by default.<br />
<br />
  // do something with block to get outBlock<br />
<br />
  // copy the meta-data<br />
  outBlock->CopyInformation(block);<br />
<br />
  output->SetDataSet(iter, outBlock);<br />
  iter->GoToNextItem();<br />
  }<br />
iter->Delete();<br />
</source><br />
<br />
<br />
The implementation above requires two additional methods: CopyStructure()<br />
and SetDataSet(iter, dataObject). The task of CopyStructure() is to create<br />
a tree structure on the output composite data object identical to that of<br />
the input. For hierarchical datasets, this means the same number of<br />
levels and the same number of datasets at each level. For<br />
multi-block datasets, this means an identical tree, which may look like<br />
this:<br />
<center><br />
[[Image:Multiblock_tree.png]]<br />
</center><br />
After CopyStructure(), the output will have the same hierarchy except all<br />
vtkPolyData leaf nodes will be replaced by null pointers. CopyStructure()<br />
should also copy things like refinement ratios etc. This should also<br />
include all of the meta-data (information) of all non-leaf nodes. We are<br />
likely to use things like names for groups etc. when dealing with<br />
multi-block datasets.<br />
<br />
<em>Note on vtkHierarchicalBoxDataSet: Currently, a vtkHierarchicalBoxDataSet is converted to a vtkMultiGroupDataSet when it is processed by a simple algorithm or a vtkMultiGroupDataAlgorithm. We should think about this. Maybe when a vtkHierarchicalBoxDataSet is processed by a vtkDataSetAlgorithm, the output should be vtkHierarchicalBoxDataSet too?</em><br />
<br />
The task of SetDataSet(iter, dataObject) is to add a leaf dataset at the exact<br />
same position that the iterator is pointing at on the input. This will<br />
require changing iterators such that they are keeping track of their<br />
position in a composite dataset by some sort of index. The easiest way of<br />
doing this is to use two integers for hierarchical datasets (level, index)<br />
and a vector of integers of length equal to the current tree level for the<br />
multi-block datasets.<br />
<br />
==== vtkMultiPieceDataSet ====<br />
<br />
A multi-piece dataset groups multiple data pieces together. For example,<br />
say that a simulation broke a volume into 16 pieces so that each piece can<br />
be processed by one process in parallel. We want to load this volume on a<br />
visualization cluster of 4 nodes. Each node will get 4 pieces, not<br />
necessarily forming a whole rectangular piece. In that case, it is not<br />
possible to append the 4 pieces together into a vtkImageData; instead,<br />
these 4 pieces can be collected together using a<br />
vtkMultiPieceDataSet. Although it is possible to use a vtkMultiBlockDataSet<br />
for this purpose, a vtkMultiPieceDataSet makes it clear that these are<br />
pieces of one whole dataset that are collected together. Given this<br />
information, applications like ParaView can treat them in a special<br />
way. For example, meta-data about the whole extent of the dataset can be<br />
displayed, neighborhood information can be obtained, ghost levels can be<br />
generated, etc.<br />
<br />
<em>Note: The use of vtkMultiPieceDataSet is not yet very clear to me but I think it will be necessary.</em> <br />
<br />
==== vtkCompositeDataPipeline cleanup ====<br />
<br />
There will be a list of changes to vtkCompositeDataPipeline here. The<br />
executive is a mess right now due to all the use cases it supports and<br />
because it grew organically. We need to take a step back and clean it up,<br />
possibly rewriting portions of it.<br />
<br />
==== Ghost level support ====<br />
<br />
Currently, ghost level requests are passed up the pipeline but they are<br />
pretty much ignored by the pipeline. This will not do, specially when we<br />
improve D3 to support multi-block datasets. Getting unstructured and<br />
dataset algorithms to work with ghost levels is pretty<br />
straightforward. Getting structured data filters working is a little<br />
trickier.<br />
<br />
<em>Note: Realistically, readers do not produce more than 1 ghost level. We may want to take this into account.</em><br />
<br />
=Implementation=<br />
<br />
The implementation is based on the above design with some notable differences:<br />
<br />
* vtkHierarchicalDataSet is deprecated. Due to the lack of use-cases for creating an AMR-like hierarchy with unstructured data, this class was deprecated. Applications can implement the same behavior using vtkMultiBlockDataSet, which provides meta-data associated with each node in the tree, making it possible for applications to attach level information to blocks.<br />
<br />
----<br />
<graphviz><br />
digraph G {<br />
node [shape = "record"]<br />
edge <br />
[<br />
arrowtail="empty" <br />
dir="back"<br />
arrowsize="2.0"<br />
]<br />
vtkDataObject []<br />
vtkCompositeDataSet [ ]<br />
vtkMultiBlockDataSet [ ]<br />
vtkTemporalDataSet [ ]<br />
vtkHierarchicalBoxDataSet [ ]<br />
vtkMultiPieceDataSet [ ]<br />
<br />
<br />
vtkDataObject->vtkCompositeDataSet<br />
vtkCompositeDataSet->vtkMultiBlockDataSet<br />
vtkCompositeDataSet->vtkTemporalDataSet<br />
vtkCompositeDataSet->vtkHierarchicalBoxDataSet<br />
vtkCompositeDataSet->vtkMultiPieceDataSet<br />
}<br />
</graphviz><br />
<br />
'''Class Hierarchy: Class hierarchy for current implementation of composite datasets'''<br />
----<br />
==vtkCompositeDataSet==<br />
<br />
vtkCompositeDataSet is the abstract superclass for all composite datasets. It implements a full tree structure in which nodes can be datasets or other composite datasets. However, the API to access the tree directly is protected. Each subclass can build and maintain this tree as per its requirements, e.g. vtkHierarchicalBoxDataSet builds trees one level deep whose first-level nodes are vtkMultiBlockDataSet instances, each corresponding to a ''level'' in the hierarchical dataset. One can obtain a vtkCompositeDataIterator instance from the vtkCompositeDataSet to iterate over the tree structure. vtkCompositeDataSet provides public API to get/set data objects and meta-data using the iterator. Important API is listed below:<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Return a new iterator (the iterator has to be deleted by user).<br />
virtual vtkCompositeDataIterator* NewIterator();<br />
<br />
// Description:<br />
// Copies the tree structure from the input. All pointers to non-composite<br />
// data objects are initialized to NULL. This also shallow copies the meta-data<br />
// associated with all the nodes.<br />
virtual void CopyStructure(vtkCompositeDataSet* input);<br />
<br />
// Description:<br />
// Sets the data set at the location pointed by the iterator.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be any composite dataset with similar structure (achieved by using<br />
// CopyStructure).<br />
virtual void SetDataSet(vtkCompositeDataIterator* iter, vtkDataObject* dataObj);<br />
<br />
// Description:<br />
// Returns the dataset located at the position pointed to by the iterator.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be an iterator for composite dataset with similar structure (achieved by<br />
// using CopyStructure).<br />
virtual vtkDataObject* GetDataSet(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Returns the meta-data associated with the position pointed by the iterator.<br />
// This will create a new vtkInformation object if none already exists. Use<br />
// HasMetaData to avoid creating the vtkInformation object unnecessarily.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be an iterator for composite dataset with similar structure (achieved by<br />
// using CopyStructure).<br />
virtual vtkInformation* GetMetaData(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Returns whether any meta-data is associated with the position pointed to by the iterator.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be an iterator for composite dataset with similar structure (achieved by<br />
// using CopyStructure).<br />
virtual int HasMetaData(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Shallow and Deep copy.<br />
virtual void ShallowCopy(vtkDataObject *src);<br />
virtual void DeepCopy(vtkDataObject *src);<br />
</pre><br />
</font><br />
<br />
==vtkTemporalDataSet==<br />
<br />
vtkTemporalDataSet is used to hold multiple timesteps.<br />
<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the number of time steps in this dataset.<br />
void SetNumberOfTimeSteps(unsigned int numLevels);<br />
<br />
// Description:<br />
// Returns the number of time steps.<br />
unsigned int GetNumberOfTimeSteps();<br />
<br />
// Description:<br />
// Set a data object as a timestep. Cannot be vtkTemporalDataSet.<br />
void SetTimeStep(unsigned int timestep, vtkDataObject* dobj);<br />
<br />
// Description:<br />
// Get a timestep.<br />
vtkDataObject* GetTimeStep(unsigned int timestep);<br />
<br />
// Description:<br />
// Get timestep meta-data.<br />
vtkInformation* GetMetaData(unsigned int timestep);<br />
<br />
// Description:<br />
// Returns if timestep meta-data is present.<br />
int HasMetaData(unsigned int timestep);<br />
</pre><br />
</font><br />
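A minimal usage sketch of the API listed above:<br />

```cpp
#include "vtkPolyData.h"
#include "vtkTemporalDataSet.h"

// Sketch: store two timesteps using the accessors listed above.
void BuildTemporal()
{
  vtkTemporalDataSet* temporal = vtkTemporalDataSet::New();
  temporal->SetNumberOfTimeSteps(2);
  for (unsigned int t = 0; t < 2; ++t)
    {
    vtkPolyData* step = vtkPolyData::New(); // stand-in for per-timestep data
    temporal->SetTimeStep(t, step);
    step->Delete();
    }
  temporal->Delete();
}
```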
<br />
==vtkMultiBlockDataSet==<br />
<br />
vtkMultiBlockDataSet is a vtkCompositeDataSet in which the child nodes can either be vtkDataSet subclasses or vtkMultiBlockDataSet. This is used when full trees are required. Meta-data can be associated with leaf nodes as well as non-leaf nodes in the tree.<br />
<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the number of blocks. This will cause allocation if the new number of<br />
// blocks is greater than the current size. All new blocks are initialized to<br />
// null.<br />
void SetNumberOfBlocks(unsigned int numBlocks);<br />
<br />
// Description:<br />
// Returns the number of blocks.<br />
unsigned int GetNumberOfBlocks();<br />
<br />
// Description:<br />
// Returns the block at the given index. It is recommended that one uses the<br />
// iterators to iterate over composite datasets rather than using this API.<br />
vtkDataObject* GetBlock(unsigned int blockno);<br />
<br />
// Description:<br />
// Sets the data object as the given block. The total number of blocks will<br />
// be resized to fit the requested block number. The only vtkCompositeDataSet<br />
// subclass that can be added as a block is a vtkMultiBlockDataSet;<br />
// an error is raised otherwise.<br />
void SetBlock(unsigned int blockno, vtkDataObject* block);<br />
<br />
// Description:<br />
// Returns true if meta-data is available for a given block.<br />
int HasMetaData(unsigned int blockno);<br />
<br />
// Description:<br />
// Returns the meta-data for the block. If none is already present, a new<br />
// vtkInformation object will be allocated. Use HasMetaData to avoid<br />
// allocating vtkInformation objects.<br />
vtkInformation* GetMetaData(unsigned int blockno);<br />
</pre><br />
</font><br />
<br />
==vtkMultiPieceDataSet==<br />
<br />
A vtkMultiPieceDataSet groups multiple data pieces together.<br />
For example, say that a simulation broke a volume into 16 pieces so that<br />
each piece can be processed by one process in parallel. We want to load<br />
this volume on a visualization cluster of 4 nodes. Each node will get 4<br />
pieces, not necessarily forming a whole rectangular piece. In this case,<br />
it is not possible to append the 4 pieces together into a vtkImageData;<br />
instead, the 4 pieces can be collected together using a<br />
vtkMultiPieceDataSet.<br />
Note that vtkMultiPieceDataSet is intended to be included in other composite<br />
datasets, e.g. vtkMultiBlockDataSet or vtkHierarchicalBoxDataSet. Hence the lack<br />
of algorithms producing vtkMultiPieceDataSet.<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the number of pieces. This will cause allocation if the new number of<br />
// pieces is greater than the current size. All new pieces are initialized to<br />
// null.<br />
void SetNumberOfPieces(unsigned int numpieces);<br />
<br />
// Description:<br />
// Returns the number of pieces.<br />
unsigned int GetNumberOfPieces();<br />
<br />
// Description:<br />
// Returns the piece at the given index. <br />
vtkDataSet* GetPiece(unsigned int pieceno);<br />
<br />
// Description:<br />
// Sets the data object as the given piece. The total number of pieces will <br />
// be resized to fit the requested piece no.<br />
void SetPiece(unsigned int pieceno, vtkDataSet* piece);<br />
<br />
// Description:<br />
// Returns true if meta-data is available for a given piece.<br />
int HasMetaData(unsigned int pieceno);<br />
<br />
// Description:<br />
// Returns the meta-data for the piece. If none is already present, a new<br />
// vtkInformation object will be allocated. Use HasMetaData to avoid<br />
// allocating vtkInformation objects.<br />
vtkInformation* GetMetaData(unsigned int pieceno);<br />
</pre><br />
</font><br />
<br />
==vtkHierarchicalBoxDataSet==<br />
<br />
vtkHierarchicalBoxDataSet is a hierarchical dataset of uniform grids. It is designed for AMR (adaptive mesh refinement) datasets. The structure consists of ''levels'', with each level containing datasets. The dataset type is restricted to vtkUniformGrid. Each dataset has an associated vtkAMRBox that represents its region (similar to an extent) in space. Internally, each level in a vtkHierarchicalBoxDataSet is nothing but a vtkMultiPieceDataSet. <br />
<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the number of refinement levels. This call might cause<br />
// allocation if the new number of levels is larger than the<br />
// current one.<br />
void SetNumberOfLevels(unsigned int numLevels);<br />
<br />
// Description:<br />
// Returns the number of levels.<br />
unsigned int GetNumberOfLevels();<br />
<br />
// Description:<br />
// Set the number of data sets at a given level.<br />
void SetNumberOfDataSets(unsigned int level, unsigned int numdatasets);<br />
<br />
// Description:<br />
// Returns the number of data sets available at a given level.<br />
unsigned int GetNumberOfDataSets(unsigned int level);<br />
<br />
// Description:<br />
// Set the dataset pointer for a given node. This will resize the number of<br />
// levels and the number of datasets in the level to fit the requested level and id.<br />
void SetDataSet(unsigned int level, unsigned int id, <br />
vtkAMRBox& box, vtkUniformGrid* dataSet);<br />
<br />
// Description:<br />
// Get a dataset given a level and an id.<br />
vtkUniformGrid* GetDataSet(unsigned int level,<br />
unsigned int id,<br />
vtkAMRBox& box);<br />
<br />
// Description:<br />
// Get meta-data associated with a level. This may allocate a new<br />
// vtkInformation object if none is already present. Use HasLevelMetaData to<br />
// avoid unnecessary allocations.<br />
vtkInformation* GetLevelMetaData(unsigned int level);<br />
<br />
// Description:<br />
// Returns if meta-data exists for a given level.<br />
int HasLevelMetaData(unsigned int level);<br />
<br />
// Description:<br />
// Get meta-data associated with a dataset. This may allocate a new<br />
// vtkInformation object if none is already present. Use HasMetaData to<br />
// avoid unnecessary allocations.<br />
vtkInformation* GetMetaData(unsigned int level, unsigned int index);<br />
<br />
// Description:<br />
// Returns if meta-data exists for a given dataset under a given level.<br />
int HasMetaData(unsigned int level, unsigned int index);<br />
<br />
// Description:<br />
// Sets the refinement of a given level. The spacing at level<br />
// level+1 is defined as spacing(level+1) = spacing(level)/refRatio(level).<br />
// Note that currently this is not enforced by this class; however,<br />
// some algorithms might not function properly if the spacing in<br />
// the blocks (vtkUniformGrid) does not match the one described<br />
// by the refinement ratio.<br />
void SetRefinementRatio(unsigned int level, int refRatio);<br />
<br />
// Description:<br />
// Returns the refinement of a given level.<br />
int GetRefinementRatio(unsigned int level);<br />
<br />
// Description:<br />
// Returns the AMR box for the location pointed to by the iterator.<br />
vtkAMRBox GetAMRBox(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Returns the refinement ratio for the position pointed by the iterator.<br />
int GetRefinementRatio(vtkCompositeDataIterator* iter);<br />
</pre><br />
</font><br />
<br />
==vtkCompositeDataIterator==<br />
<br />
vtkCompositeDataIterator is used to iterate over composite datasets. <br />
<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the composite dataset this iterator is iterating over. <br />
// Must be set before traversal begins.<br />
virtual void SetDataSet(vtkCompositeDataSet* ds);<br />
vtkGetObjectMacro(DataSet, vtkCompositeDataSet);<br />
<br />
// Description:<br />
// Begin iterating over the composite dataset structure.<br />
virtual void InitTraversal();<br />
<br />
// Description:<br />
// Begin iterating over the composite dataset structure in reverse order.<br />
virtual void InitReverseTraversal();<br />
<br />
// Description:<br />
// Move the iterator to the beginning of the collection.<br />
virtual void GoToFirstItem();<br />
<br />
// Description:<br />
// Move the iterator to the next item in the collection.<br />
virtual void GoToNextItem();<br />
<br />
// Description:<br />
// Test whether the iterator is currently pointing to a valid item. Returns 1<br />
// for yes, and 0 for no.<br />
virtual int IsDoneWithTraversal();<br />
<br />
// Description:<br />
// Returns the current item. Valid only when IsDoneWithTraversal() returns 0.<br />
virtual vtkDataObject* GetCurrentDataObject();<br />
<br />
// Description:<br />
// Returns the meta-data associated with the current item. This will allocate<br />
// a new vtkInformation object if none is already present. Use<br />
// HasCurrentMetaData to avoid unnecessary creation of vtkInformation objects.<br />
virtual vtkInformation* GetCurrentMetaData();<br />
<br />
// Description:<br />
// Returns if a meta-data information object is present for the current<br />
// item. Returns 1 if present, 0 otherwise.<br />
virtual int HasCurrentMetaData();<br />
<br />
// Description:<br />
// If VisitOnlyLeaves is true, the iterator will only visit nodes<br />
// (sub-datasets) that are not composite. If it encounters a composite<br />
// dataset, it will automatically traverse that composite dataset until<br />
// it finds non-composite datasets (see also TraverseSubTree).<br />
// With this option, it is possible to visit all non-composite datasets<br />
// in a tree of composite datasets (a composite of composites of<br />
// composites, for example). If VisitOnlyLeaves is false,<br />
// GetCurrentDataObject() may return a vtkCompositeDataSet.<br />
// By default, VisitOnlyLeaves is 1.<br />
vtkSetMacro(VisitOnlyLeaves, int);<br />
vtkGetMacro(VisitOnlyLeaves, int);<br />
vtkBooleanMacro(VisitOnlyLeaves, int);<br />
<br />
// Description:<br />
// If TraverseSubTree is set to true, the iterator will visit the entire tree<br />
// structure, otherwise it only visits the first level children. Set to 1 by<br />
// default.<br />
vtkSetMacro(TraverseSubTree, int);<br />
vtkGetMacro(TraverseSubTree, int);<br />
vtkBooleanMacro(TraverseSubTree, int);<br />
<br />
// Description:<br />
// If SkipEmptyNodes is true, then NULL datasets will be skipped. Default is<br />
// true.<br />
vtkSetMacro(SkipEmptyNodes, int);<br />
vtkGetMacro(SkipEmptyNodes, int);<br />
vtkBooleanMacro(SkipEmptyNodes, int);<br />
<br />
// Description:<br />
// Flat index is an index obtained by traversing the tree in preorder.<br />
// This can be used to uniquely identify nodes in the tree.<br />
// Not valid if IsDoneWithTraversal() returns true.<br />
vtkGetMacro(CurrentFlatIndex, unsigned int);<br />
<br />
</pre><br />
</font><br />
<br />
===Examples===<br />
====Copy all non-empty leaf nodes====<br />
<font color="blue"><br />
<pre><br />
// This can be very easily done with a ShallowCopy, but we use the iterators for illustration.<br />
vtkCompositeDataSet* CreateLeafCopy(vtkCompositeDataSet* src)<br />
{<br />
vtkCompositeDataSet* output = src->NewInstance();<br />
// Copy the structure as well as the meta-data associated with all nodes in the composite tree.<br />
output->CopyStructure(src);<br />
<br />
vtkCompositeDataIterator* iter = src->NewIterator();<br />
for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())<br />
  {<br />
  output->SetDataSet(iter, iter->GetCurrentDataObject());<br />
  }<br />
iter->Delete();<br />
return output;<br />
}<br />
<br />
</pre><br />
</font><br />
<br />
====Iterate over immediate child nodes of a composite dataset====<br />
<font color="blue"><br />
<pre><br />
vtkCompositeDataSet* input = ...<br />
vtkCompositeDataIterator* iter = input->NewIterator();<br />
iter->TraverseSubTreeOff(); // we are only interested in immediate children.<br />
iter->VisitOnlyLeavesOff(); // we want all immediate children, including composite dataset child nodes.<br />
// To not skip empty children, simply call iter->SkipEmptyNodesOff();<br />
for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())<br />
  {<br />
  ...<br />
  }<br />
</pre><br />
</font><br />
<br />
====Flat Index====<br />
Iterators can be used to determine what we refer to as the '''flat-index''' of any node. The flat index is the index of a node in a pre-order traversal of the tree, e.g. the following diagram shows a tree structure and the flat-index of each node (rectangular nodes are composite datasets, while circular nodes are vtkDataSet subclasses). The flat-index for the current location can be obtained from the iterator using ''GetCurrentFlatIndex''.<br />
<br />
<graphviz><br />
digraph G {<br />
node [shape = "record"]<br />
edge <br />
[<br />
arrowtail="ediamond"<br />
arrowhead="none"<br />
]<br />
A [label="A (0)"]<br />
B [label="B (1)" shape="circle"]<br />
C [label="C (2)"]<br />
D [label="D (3)"]<br />
E [label="E (4)" shape="circle"]<br />
F [label="F (5)" shape="circle"]<br />
<br />
A->B<br />
A->C<br />
C->D<br />
D->E<br />
D->F<br />
<br />
}<br />
</graphviz><br />
<br />
=Changes from VTK 5.0=<br />
==vtkCompositeDataPipeline==<br />
This executive is used to iteratively execute a non-composite-data-aware filter over all the leaves in a composite dataset. In VTK 5.0, a vtkHierarchicalBoxDataSet was always converted to a vtkMultiBlockDataSet when a non-composite-aware filter was present in the pipeline. This is no longer the case. vtkCompositeDataPipeline now verifies whether the non-composite-aware algorithm can produce a vtkUniformGrid given a vtkUniformGrid as input. If so, for a vtkHierarchicalBoxDataSet input, the output is a vtkHierarchicalBoxDataSet; otherwise it is a vtkMultiBlockDataSet. Even when the vtkHierarchicalBoxDataSet is converted to a vtkMultiBlockDataSet, the composite data tree structure is preserved. In other words, since vtkHierarchicalBoxDataSet has vtkMultiPieceDataSet instances for each level, the converted vtkMultiBlockDataSet will also have vtkMultiPieceDataSet instances as the child blocks of the root node.<br />
<br />
==Class Names==<br />
A few class names have changed, and a few others are no longer available. This table lists each old class name and an equivalent class in the new design.<br />
<br />
{| border="1"<br />
|+ '''Class Name Changes''' ['*' -- no longer applicable ]<br />
! Old Class !! Equivalent Class<br />
|- <br />
| vtkHierarchicalDataInformation || *<br />
|- <br />
| vtkHierarchicalDataIterator || vtkCompositeDataIterator<br />
|-<br />
| vtkHierarchicalDataSet || *<br />
|-<br />
| vtkHierarchicalDataSetAlgorithm || *<br />
|- <br />
| vtkMultiGroupDataInformation || *<br />
|- <br />
| vtkMultiGroupDataIterator || vtkCompositeDataIterator<br />
|- <br />
| vtkMultiGroupDataSet || vtkCompositeDataSet<br />
|- <br />
| vtkMultiGroupDataSetAlgorithm || vtkCompositeAlgorithm<br />
|- <br />
| vtkHierarchicalDataGroupFilter || vtkMultiBlockDataGroupFilter<br />
|- <br />
| vtkMultiGroupDataExtractDataSets || vtkExtractDataSets<br />
|- <br />
| vtkMultiGroupDataExtractGroup || vtkExtractBlock, vtkExtractLevel<br />
|- <br />
| vtkMultiGroupDataGeometryFilter || vtkCompositeDataGeometryFilter<br />
|- <br />
| vtkMultiGroupDataGroupFilter || vtkMultiBlockDataGroupFilter<br />
|- <br />
| vtkMultiGroupDataGroupIdScalars || vtkBlockIdScalars, vtkLevelIdScalars<br />
|- <br />
| vtkMultiGroupProbeFilter || vtkCompositeDataProbeFilter<br />
|- <br />
| vtkXMLHierarchicalDataReader || *<br />
|- <br />
| vtkXMLMultiGroupDataReader || vtkXMLCompositeDataReader,<br />
|- <br />
| || vtkXMLHierarchicalBoxDataReader,<br />
|- <br />
| || vtkXMLMultiBlockDataReader<br />
|- <br />
| vtkXMLMultiGroupDataWriter || vtkXMLCompositeDataWriter,<br />
|- <br />
| || vtkXMLHierarchicalBoxDataWriter,<br />
|- <br />
| || vtkXMLMultiBlockDataWriter<br />
|- <br />
| vtkMultiGroupDataExtractPiece || vtkExtractPiece<br />
|- <br />
| vtkXMLPMultiGroupDataWriter || vtkXMLPMultiBlockDataWriter,<br />
|- <br />
| || vtkXMLPHierarchicalBoxDataWriter<br />
|- <br />
| vtkMultiGroupPolyDataMapper || vtkCompositePolyDataMapper<br />
|}</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Composite_Data_Redesign&diff=40906VTK/Composite Data Redesign2011-06-19T13:39:20Z<p>Pratikm: /* Suggested design */</p>
<hr />
<div>== Composite dataset re-architecture ==<br />
<br />
=== Current design ===<br />
<center><br />
[[Image:Old_composite_design.png]]<br />
</center><br />
=== Issues with the current design ===<br />
<br />
* Most functionality is based on vtkMultiGroupDataSet instead of vtkCompositeDataSet. For example, most algorithms (and the executives) use vtkMultiGroupDataSet API to iterate. This makes it impossible to add new sub-classes of vtkCompositeDataSet without writing new executives.<br />
* The concept of sub-block is confusing. vtkMultiGroupDataSet stores a vector of vectors of datasets. When this concept is mapped to the multi-block (or temporal) datasets, each block ends up having multiple sub-blocks. Furthermore, the convention that these sub-block ids map to the process ids is very confusing.<br />
* Algorithms that want to pass blanking have to downcast to vtkHierarchicalBoxDataSet and copy blanking explicitely.<br />
* vtkCompositeDataPipeline is a mess.<br />
<br />
=== Suggested design ===<br />
<center><br />
[[Image:New_composite_design.png]]<br />
</center><br />
* Get rid of vtkMultiGroupDataSet. Any code shared between subclasses of vtkCompositeDataPipeline can be shared using helper implementation objects.<br />
* Improve the iterators so that it is not necessary to use vtkMultiGroupDataSet API to iterate over blocks.<br />
* Add a vtkMultiPieceDataSet class that can be used to group multiple pieces together. Example: when loading a dataset with multiple partitions on 1 processor, vtkMultiPieceDataSet can be used instead of appending datasets together. vtkMultiPieceDataSet would have additional meta-data about things like whole extent for structured datasets.<br />
* Clean up vtkCompositeDataPipeline.<br />
* Improve ghost level support for composite datasets.<br />
<br />
==== Iterators ====<br />
<br />
In the current architecture, the most common thing to do is the following:<br />
<br />
<pre><br />
unsigned int numGroups = mbInput->GetNumberOfGroups();<br />
output->SetNumberOfGroups(numGroups);<br />
for (unsigned int groupId=0; groupId<numGroups; groupId++)<br />
  {<br />
  unsigned int numBlocks = mbInput->GetNumberOfDataSets(groupId);<br />
  output->SetNumberOfDataSets(groupId, numBlocks);<br />
  for (unsigned int blockId=0; blockId<numBlocks; blockId++)<br />
    {<br />
    vtkDataObject* block = mbInput->GetDataSet(groupId, blockId);<br />
<br />
    // do something with block to get an outBlock<br />
<br />
    output->SetDataSet(groupId, blockId, outBlock);<br />
    }<br />
  }<br />
</pre><br />
<br />
As mentioned above, the problem with this approach is that it assumes that the<br />
composite dataset is a vtkMultiGroupDataSet. With the appropriate changes<br />
to the composite data iterators and composite datasets, the code above can<br />
be rewritten as<br />
<br />
<pre><br />
output->CopyStructure(mbInput);<br />
<br />
vtkCompositeDataIterator* iter = mbInput->NewIterator();<br />
iter->GoToFirstItem();<br />
while (!iter->IsDoneWithTraversal())<br />
  {<br />
  vtkDataObject* block = iter->GetCurrentDataObject();<br />
  // Note that the iterator will only visit the leaf nodes by default.<br />
<br />
  // do something with block to get outBlock<br />
<br />
  // copy the meta-data<br />
  outBlock->CopyInformation(block);<br />
<br />
  output->SetDataSet(iter, outBlock);<br />
  iter->GoToNextItem();<br />
  }<br />
iter->Delete();<br />
</pre><br />
<br />
The implementation above requires two additional methods: CopyStructure()<br />
and SetDataSet(iter, dataObject). The task of CopyStructure() is to create<br />
a tree structure on the output composite data object identical to that of<br />
the input. In the case of hierarchical datasets, this means same number of<br />
levels and same number of datasets on all levels. In the case of<br />
multi-block datasets, this means an identical tree. This may look like<br />
this:<br />
<br />
[[Image:Multiblock_tree.png]]<br />
<br />
After CopyStructure(), the output will have the same hierarchy except all<br />
vtkPolyData leaf nodes will be replaced by null pointers. CopyStructure()<br />
should also copy things like refinement ratios etc. This should also<br />
include all of the meta-data (information) of all non-leaf nodes. We are<br />
likely to use things like names for groups etc. when dealing with<br />
multi-block datasets.<br />
<br />
<em>Note on vtkHierarchicalBoxDataSet: Currently, a vtkHierarchicalBoxDataSet is converted to a vtkMultiGroupDataSet when it is processed by a simple algorithm or a vtkMultiGroupDataAlgorithm. We should think about this. Maybe when a vtkHierarchicalBoxDataSet is processed by a vtkDataSetAlgorithm, the output should be vtkHierarchicalBoxDataSet too?</em><br />
<br />
The task of SetDataSet(iter, dataObject) is to add a leaf dataset at the exact<br />
same position that the iterator is pointing at on the input. This will<br />
require changing iterators such that they are keeping track of their<br />
position in a composite dataset by some sort of index. The easiest way of<br />
doing this is to use two integers for hierarchical datasets (level, index)<br />
and a vector of integers of length equal to the current tree level for the<br />
multi-block datasets.<br />
<br />
==== vtkMultiPieceDataSet ====<br />
<br />
A multi-piece dataset groups multiple data pieces together. For example,<br />
say that a simulation broke a volume into 16 pieces so that each piece can<br />
be processed by one process in parallel. We want to load this volume on a<br />
visualization cluster of 4 nodes. Each node will get 4 pieces, not<br />
necessarily forming a whole rectangular piece. In this case, it is not<br />
possible to append the 4 pieces together into a vtkImageData; instead,<br />
these 4 pieces can be collected together using a<br />
vtkMultiPieceDataSet. Although it is possible to use a vtkMultiBlockDataSet<br />
for this purpose, a vtkMultiPieceDataSet makes it clear that these are<br />
pieces of one whole dataset that are collected together. Given this<br />
information, applications like ParaView can treat these in a special<br />
way. For example, meta-data about the whole extent of the dataset can be<br />
displayed, neighborhood information can be obtained, ghost levels can be<br />
generated, etc.<br />
<br />
<em>Note: The use of vtkMultiPieceDataSet is not yet very clear to me but I think it will be necessary.</em> <br />
<br />
==== vtkCompositeDataPipeline cleanup ====<br />
<br />
There will be a list of changes to vtkCompositeDataPipeline here. The<br />
executive is a mess right now due to all the use cases it supports and<br />
because it grew organically. We need to take a step back and clean it up,<br />
possibly rewriting portions of it.<br />
<br />
==== Ghost level support ====<br />
<br />
Currently, ghost level requests are passed up the pipeline but they are<br />
pretty much ignored by the pipeline. This will not do, especially when we<br />
improve D3 to support multi-block datasets. Getting unstructured and<br />
dataset algorithms to work with ghost levels is pretty<br />
straightforward. Getting structured data filters working is a little<br />
trickier.<br />
<br />
<em>Note: Realistically, readers do not produce more than 1 ghost level. We may want to take this into account.</em><br />
<br />
=Implementation=<br />
<br />
The implementation is based on the above design with some notable differences:<br />
<br />
* vtkHierarchicalDataSet is deprecated. Due to the lack of use-cases for creating an AMR-like hierarchy with unstructured data, this class was dropped. Applications can implement the same behavior using vtkMultiBlockDataSet, which provides meta-data associated with each node in the tree, thus making it possible for applications to attach level information to blocks.<br />
<br />
----<br />
<graphviz><br />
digraph G {<br />
node [shape = "record"]<br />
edge <br />
[<br />
arrowtail="empty" <br />
dir="back"<br />
arrowsize="2.0"<br />
]<br />
vtkDataObject []<br />
vtkCompositeDataSet [ ]<br />
vtkMultiBlockDataSet [ ]<br />
vtkTemporalDataSet [ ]<br />
vtkHierarchicalBoxDataSet [ ]<br />
vtkMultiPieceDataSet [ ]<br />
<br />
<br />
vtkDataObject->vtkCompositeDataSet<br />
vtkCompositeDataSet->vtkMultiBlockDataSet<br />
vtkCompositeDataSet->vtkTemporalDataSet<br />
vtkCompositeDataSet->vtkHierarchicalBoxDataSet<br />
vtkCompositeDataSet->vtkMultiPieceDataSet<br />
}<br />
</graphviz><br />
<br />
'''Class Hierarchy: Class hierarchy for current implementation of composite datasets'''<br />
----<br />
==vtkCompositeDataSet==<br />
<br />
vtkCompositeDataSet is the abstract superclass for all composite datasets. It implements a full tree structure in which nodes can be datasets or other composite datasets. However, the API to access the tree directly is protected. Each subclass can build and maintain this tree as per its requirements, e.g. vtkHierarchicalBoxDataSet builds trees one level deep whose first-level nodes are vtkMultiPieceDataSet instances, each corresponding to a ''level'' in the hierarchical dataset. One can obtain a vtkCompositeDataIterator instance from the vtkCompositeDataSet to iterate over the tree structure. vtkCompositeDataSet provides public API to get/set data objects and meta-data using the iterator. Important API is listed below:<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Return a new iterator (the iterator has to be deleted by the user).<br />
virtual vtkCompositeDataIterator* NewIterator();<br />
<br />
// Description:<br />
// Copies the tree structure from the input. All pointers to non-composite<br />
// data objects are initialized to NULL. This also shallow copies the meta data<br />
// associated with all the nodes.<br />
virtual void CopyStructure(vtkCompositeDataSet* input);<br />
<br />
// Description:<br />
// Sets the data set at the location pointed by the iterator.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be an iterator for any composite dataset with similar structure (achieved by using<br />
// CopyStructure).<br />
virtual void SetDataSet(vtkCompositeDataIterator* iter, vtkDataObject* dataObj);<br />
<br />
// Description:<br />
// Returns the dataset located at the position pointed to by the iterator.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be an iterator for a composite dataset with similar structure (achieved by<br />
// using CopyStructure).<br />
virtual vtkDataObject* GetDataSet(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Returns the meta-data associated with the position pointed by the iterator.<br />
// This will create a new vtkInformation object if none already exists. Use<br />
// HasMetaData to avoid creating the vtkInformation object unnecessarily.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be an iterator for a composite dataset with similar structure (achieved by<br />
// using CopyStructure).<br />
virtual vtkInformation* GetMetaData(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Returns if any meta-data is associated with the position pointed to by the iterator.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be an iterator for a composite dataset with similar structure (achieved by<br />
// using CopyStructure).<br />
virtual int HasMetaData(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Shallow and Deep copy.<br />
virtual void ShallowCopy(vtkDataObject *src);<br />
virtual void DeepCopy(vtkDataObject *src);<br />
</pre><br />
</font><br />
<br />
==vtkTemporalDataSet==<br />
<br />
vtkTemporalDataSet is used to hold multiple timesteps.<br />
<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the number of time steps in this dataset.<br />
void SetNumberOfTimeSteps(unsigned int numSteps);<br />
<br />
// Description:<br />
// Returns the number of time steps.<br />
unsigned int GetNumberOfTimeSteps();<br />
<br />
// Description:<br />
// Set a data object as a timestep. Cannot be vtkTemporalDataSet.<br />
void SetTimeStep(unsigned int timestep, vtkDataObject* dobj);<br />
<br />
// Description:<br />
// Get a timestep.<br />
vtkDataObject* GetTimeStep(unsigned int timestep);<br />
<br />
// Description:<br />
// Get timestep meta-data.<br />
vtkInformation* GetMetaData(unsigned int timestep);<br />
<br />
// Description:<br />
// Returns if timestep meta-data is present.<br />
int HasMetaData(unsigned int timestep);<br />
</pre><br />
</font><br />
<br />
==vtkMultiBlockDataSet==<br />
<br />
vtkMultiBlockDataSet is a vtkCompositeDataSet in which the child nodes can either be vtkDataSet subclasses or vtkMultiBlockDataSet. This is used when full trees are required. Meta-data can be associated with leaf nodes as well as non-leaf nodes in the tree.<br />
<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the number of blocks. This will cause allocation if the new number of<br />
// blocks is greater than the current size. All new blocks are initialized to<br />
// null.<br />
void SetNumberOfBlocks(unsigned int numBlocks);<br />
<br />
// Description:<br />
// Returns the number of blocks.<br />
unsigned int GetNumberOfBlocks();<br />
<br />
// Description:<br />
// Returns the block at the given index. It is recommended that one uses the<br />
// iterators to iterate over composite datasets rather than using this API.<br />
vtkDataObject* GetBlock(unsigned int blockno);<br />
<br />
// Description:<br />
// Sets the data object as the given block. The total number of blocks will<br />
// be resized to fit the requested block number. The only vtkCompositeDataSet<br />
// subclass that can be added as a block is a vtkMultiBlockDataSet;<br />
// an error is raised otherwise.<br />
void SetBlock(unsigned int blockno, vtkDataObject* block);<br />
<br />
// Description:<br />
// Returns true if meta-data is available for a given block.<br />
int HasMetaData(unsigned int blockno);<br />
<br />
// Description:<br />
// Returns the meta-data for the block. If none is already present, a new<br />
// vtkInformation object will be allocated. Use HasMetaData to avoid<br />
// allocating vtkInformation objects.<br />
vtkInformation* GetMetaData(unsigned int blockno);<br />
</pre><br />
</font><br />
<br />
==vtkMultiPieceDataSet==<br />
<br />
A vtkMultiPieceDataSet groups multiple data pieces together.<br />
For example, say that a simulation broke a volume into 16 pieces so that<br />
each piece can be processed by one process in parallel. We want to load<br />
this volume on a visualization cluster of 4 nodes. Each node will get 4<br />
pieces, not necessarily forming a whole rectangular piece. In this case,<br />
it is not possible to append the 4 pieces together into a vtkImageData;<br />
instead, the 4 pieces can be collected together using a<br />
vtkMultiPieceDataSet.<br />
Note that vtkMultiPieceDataSet is intended to be included in other composite<br />
datasets, e.g. vtkMultiBlockDataSet or vtkHierarchicalBoxDataSet. Hence the lack<br />
of algorithms producing vtkMultiPieceDataSet.<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the number of pieces. This will cause allocation if the new number of<br />
// pieces is greater than the current size. All new pieces are initialized to<br />
// null.<br />
void SetNumberOfPieces(unsigned int numpieces);<br />
<br />
// Description:<br />
// Returns the number of pieces.<br />
unsigned int GetNumberOfPieces();<br />
<br />
// Description:<br />
// Returns the piece at the given index. <br />
vtkDataSet* GetPiece(unsigned int pieceno);<br />
<br />
// Description:<br />
// Sets the data object as the given piece. The total number of pieces will <br />
// be resized to fit the requested piece no.<br />
void SetPiece(unsigned int pieceno, vtkDataSet* piece);<br />
<br />
// Description:<br />
// Returns true if meta-data is available for a given piece.<br />
int HasMetaData(unsigned int piece);<br />
<br />
// Description:<br />
// Returns the meta-data for the piece. If none is already present, a new<br />
// vtkInformation object will be allocated. Use HasMetaData to avoid<br />
// allocating vtkInformation objects.<br />
vtkInformation* GetMetaData(unsigned int pieceno);<br />
</pre><br />
</font><br />
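As an illustrative sketch only (''piece'' is assumed to be an array of vtkImageData pointers obtained from a reader), the 4 partitions assigned to one node can be grouped as follows:<br />
<font color="blue"><br />
<pre><br />
vtkMultiPieceDataSet* mp = vtkMultiPieceDataSet::New();<br />
// Reserve slots for the 4 partitions assigned to this node.<br />
mp->SetNumberOfPieces(4);<br />
for (unsigned int cc=0; cc < 4; cc++)<br />
  {<br />
  // Each piece is a partition of the same whole dataset.<br />
  mp->SetPiece(cc, piece[cc]);<br />
  }<br />
</pre><br />
</font><br />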
<br />
==vtkHierarchicalBoxDataSet==<br />
<br />
vtkHierarchicalBoxDataSet is a hierarchical dataset of uniform grids. It is designed for AMR (adaptive mesh refinement) datasets. The structure consists of ''levels'', with each level containing datasets. The dataset type is restricted to vtkUniformGrid. Each dataset has an associated vtkAMRBox that represents its region (similar to an extent) in space. Internally, each level in a vtkHierarchicalBoxDataSet is simply a vtkMultiPieceDataSet. <br />
<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the number of refinement levels. This call might cause<br />
// allocation if the new number of levels is larger than the<br />
// current one.<br />
void SetNumberOfLevels(unsigned int numLevels);<br />
<br />
// Description:<br />
// Returns the number of levels.<br />
unsigned int GetNumberOfLevels();<br />
<br />
// Description:<br />
// Set the number of data sets at a given level.<br />
void SetNumberOfDataSets(unsigned int level, unsigned int numdatasets);<br />
<br />
// Description:<br />
// Returns the number of data sets at a given level.<br />
unsigned int GetNumberOfDataSets(unsigned int level);<br />
<br />
// Description:<br />
// Set the dataset pointer for a given node. This will resize the number of<br />
// levels and the number of datasets in the level to fit the requested level and id.<br />
void SetDataSet(unsigned int level, unsigned int id, <br />
vtkAMRBox& box, vtkUniformGrid* dataSet);<br />
<br />
// Description:<br />
// Get a dataset given a level and an id.<br />
vtkUniformGrid* GetDataSet(unsigned int level,<br />
unsigned int id,<br />
vtkAMRBox& box);<br />
<br />
// Description:<br />
// Get meta-data associated with a level. This may allocate a new<br />
// vtkInformation object if none is already present. Use HasLevelMetaData to<br />
// avoid unnecessary allocations.<br />
vtkInformation* GetLevelMetaData(unsigned int level);<br />
<br />
// Description:<br />
// Returns if meta-data exists for a given level.<br />
int HasLevelMetaData(unsigned int level);<br />
<br />
// Description:<br />
// Get meta-data associated with a dataset. This may allocate a new<br />
// vtkInformation object if none is already present. Use HasMetaData to<br />
// avoid unnecessary allocations.<br />
vtkInformation* GetMetaData(unsigned int level, unsigned int index);<br />
<br />
// Description:<br />
// Returns if meta-data exists for a given dataset under a given level.<br />
int HasMetaData(unsigned int level, unsigned int index);<br />
<br />
// Description:<br />
// Sets the refinement ratio of a given level. The spacing at level<br />
// level+1 is defined as spacing(level+1) = spacing(level)/refRatio(level).<br />
// Note that currently this is not enforced by this class; however,<br />
// some algorithms might not function properly if the spacing in<br />
// the blocks (vtkUniformGrid) does not match the one described<br />
// by the refinement ratio.<br />
void SetRefinementRatio(unsigned int level, int refRatio);<br />
<br />
// Description:<br />
// Returns the refinement ratio of a given level.<br />
int GetRefinementRatio(unsigned int level);<br />
<br />
// Description:<br />
// Returns the AMR box for the location pointed to by the iterator.<br />
vtkAMRBox GetAMRBox(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Returns the refinement ratio for the position pointed by the iterator.<br />
int GetRefinementRatio(vtkCompositeDataIterator* iter);<br />
</pre><br />
</font><br />
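For illustration only, a two-level AMR dataset might be assembled as follows (''coarseGrid'', ''fineGrid'' and the vtkAMRBox instances ''coarseBox'' and ''fineBox'' are assumed to be produced by a reader):<br />
<font color="blue"><br />
<pre><br />
vtkHierarchicalBoxDataSet* amr = vtkHierarchicalBoxDataSet::New();<br />
amr->SetNumberOfLevels(2);<br />
// Spacing at level 1 is half of the spacing at level 0.<br />
amr->SetRefinementRatio(0, 2);<br />
amr->SetDataSet(0, 0, coarseBox, coarseGrid);<br />
amr->SetDataSet(1, 0, fineBox, fineGrid);<br />
</pre><br />
</font><br />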
<br />
==vtkCompositeDataIterator==<br />
<br />
vtkCompositeDataIterator is used to iterate over composite datasets. <br />
<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the composite dataset this iterator is iterating over. <br />
// Must be set before traversal begins.<br />
virtual void SetDataSet(vtkCompositeDataSet* ds);<br />
vtkGetObjectMacro(DataSet, vtkCompositeDataSet);<br />
<br />
// Description:<br />
// Begin iterating over the composite dataset structure.<br />
virtual void InitTraversal();<br />
<br />
// Description:<br />
// Begin iterating over the composite dataset structure in reverse order.<br />
virtual void InitReverseTraversal();<br />
<br />
// Description:<br />
// Move the iterator to the beginning of the collection.<br />
virtual void GoToFirstItem();<br />
<br />
// Description:<br />
// Move the iterator to the next item in the collection.<br />
virtual void GoToNextItem();<br />
<br />
// Description:<br />
// Test whether the iterator is currently pointing to a valid item. Returns 1<br />
// for yes, and 0 for no.<br />
virtual int IsDoneWithTraversal();<br />
<br />
// Description:<br />
// Returns the current item. Valid only when IsDoneWithTraversal() returns 0.<br />
virtual vtkDataObject* GetCurrentDataObject();<br />
<br />
// Description:<br />
// Returns the meta-data associated with the current item. This will allocate<br />
// a new vtkInformation object if none is already present. Use<br />
// HasCurrentMetaData to avoid unnecessary creation of vtkInformation objects.<br />
virtual vtkInformation* GetCurrentMetaData();<br />
<br />
// Description:<br />
// Returns whether a meta-data information object is present for the current<br />
// item. Returns 1 if present, 0 otherwise.<br />
virtual int HasCurrentMetaData();<br />
<br />
// Description:<br />
// If VisitOnlyLeaves is true, the iterator will only visit nodes<br />
// (sub-datasets) that are not composite. If it encounters a composite<br />
// dataset, it will automatically traverse that composite dataset until<br />
// it finds non-composite datasets (see also TraverseSubTree).<br />
// With this option, it is possible to visit all non-composite datasets<br />
// in a tree of composite datasets (a composite of composites of<br />
// composites, for example). If VisitOnlyLeaves is false,<br />
// GetCurrentDataObject() may return a vtkCompositeDataSet. By default,<br />
// VisitOnlyLeaves is 1.<br />
vtkSetMacro(VisitOnlyLeaves, int);<br />
vtkGetMacro(VisitOnlyLeaves, int);<br />
vtkBooleanMacro(VisitOnlyLeaves, int);<br />
<br />
// Description:<br />
// If TraverseSubTree is set to true, the iterator will visit the entire tree<br />
// structure, otherwise it only visits the first level children. Set to 1 by<br />
// default.<br />
vtkSetMacro(TraverseSubTree, int);<br />
vtkGetMacro(TraverseSubTree, int);<br />
vtkBooleanMacro(TraverseSubTree, int);<br />
<br />
// Description:<br />
// If SkipEmptyNodes is true, then NULL datasets will be skipped. Default is<br />
// true.<br />
vtkSetMacro(SkipEmptyNodes, int);<br />
vtkGetMacro(SkipEmptyNodes, int);<br />
vtkBooleanMacro(SkipEmptyNodes, int);<br />
<br />
// Description:<br />
// Flat index is an index obtained by traversing the tree in preorder.<br />
// This can be used to uniquely identify nodes in the tree.<br />
// Not valid if IsDoneWithTraversal() returns true.<br />
vtkGetMacro(CurrentFlatIndex, unsigned int);<br />
<br />
</pre><br />
</font><br />
<br />
===Examples===<br />
====Copy all non-empty leaf nodes====<br />
<font color="blue"><br />
<pre><br />
// This can be done easily with a ShallowCopy, but we use the iterators for illustration.<br />
vtkCompositeDataSet* CreateLeafCopy(vtkCompositeDataSet* src)<br />
{<br />
  vtkCompositeDataSet* output = src->NewInstance();<br />
  // Copy the structure as well as the meta-data associated with all nodes in the composite tree.<br />
  output->CopyStructure(src);<br />
<br />
  vtkCompositeDataIterator* iter = src->NewIterator();<br />
  for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())<br />
    {<br />
    output->SetDataSet(iter, iter->GetCurrentDataObject());<br />
    }<br />
  iter->Delete();<br />
  return output;<br />
}<br />
<br />
</pre><br />
</font><br />
<br />
====Iterate over immediate child nodes of a composite dataset====<br />
<font color="blue"><br />
<pre><br />
vtkCompositeDataSet* input = ...<br />
vtkCompositeDataIterator* iter = input->NewIterator();<br />
iter->TraverseSubTreeOff(); // we are only interested in immediate children.<br />
iter->VisitOnlyLeavesOff(); // we want all immediate children, including composite dataset child nodes.<br />
// To not skip empty children, simply call iter->SkipEmptyNodesOff();<br />
for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())<br />
{<br />
...<br />
}<br />
</pre><br />
</font><br />
<br />
====Flat Index====<br />
Iterators can be used to determine what we refer to as the '''flat index''' of any node. The flat index is the index of a node in the pre-order traversal of the tree, e.g. the following diagram shows the tree structure and the flat index of each node (rectangular nodes are composite datasets, while circular nodes are vtkDataSet subclasses). The flat index for the current location can be obtained from the iterator using ''GetCurrentFlatIndex''.<br />
<br />
<graphviz><br />
digraph G {<br />
node [shape = "record"]<br />
edge <br />
[<br />
arrowtail="ediamond"<br />
arrowhead="none"<br />
]<br />
A [label="A (0)"]<br />
B [label="B (1)" shape="circle"]<br />
C [label="C (2)"]<br />
D [label="D (3)"]<br />
E [label="E (4)" shape="circle"]<br />
F [label="F (5)" shape="circle"]<br />
<br />
A->B<br />
A->C<br />
C->D<br />
D->E<br />
D->F<br />
<br />
}<br />
</graphviz><br />
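For example, the flat index of each visited node can be printed with the iterator; VisitOnlyLeaves is turned off so that composite nodes are visited as well (a sketch, with ''input'' being any composite dataset):<br />
<font color="blue"><br />
<pre><br />
vtkCompositeDataIterator* iter = input->NewIterator();<br />
iter->VisitOnlyLeavesOff(); // visit composite nodes too<br />
for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())<br />
  {<br />
  cout << "Flat index: " << iter->GetCurrentFlatIndex() << endl;<br />
  }<br />
iter->Delete();<br />
</pre><br />
</font><br />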
<br />
=Changes from VTK 5.0=<br />
==vtkCompositeDataPipeline==<br />
This executive is used to iteratively execute a non-composite-data-aware filter over all the leaves in a composite dataset. In VTK 5.0, a vtkHierarchicalBoxDataSet was always converted to a vtkMultiBlockDataSet when a non-composite-aware filter was present in the pipeline. This is no longer the case. vtkCompositeDataPipeline now verifies whether the non-composite-aware algorithm can produce a vtkUniformGrid given a vtkUniformGrid as input. If so, for a vtkHierarchicalBoxDataSet input the output is a vtkHierarchicalBoxDataSet; otherwise it is a vtkMultiBlockDataSet. Even when the vtkHierarchicalBoxDataSet is converted to a vtkMultiBlockDataSet, the composite data tree structure is preserved. In other words, since vtkHierarchicalBoxDataSet has vtkMultiPieceDataSet instances for each level, the converted vtkMultiBlockDataSet will also have vtkMultiPieceDataSet instances as the child blocks of the root node.<br />
<br />
==Class Names==<br />
A few class names have changed; a few others are no longer available. This table lists each old class name and an equivalent class in the new design.<br />
<br />
{| border="1"<br />
|+ '''Class Name Changes''' ['*' -- no longer applicable ]<br />
! Old Class !! Equivalent Class<br />
|- <br />
| vtkHierarchicalDataInformation || *<br />
|- <br />
| vtkHierarchicalDataIterator || vtkCompositeDataIterator<br />
|-<br />
| vtkHierarchicalDataSet || *<br />
|-<br />
| vtkHierarchicalDataSetAlgorithm || *<br />
|- <br />
| vtkMultiGroupDataInformation || *<br />
|- <br />
| vtkMultiGroupDataIterator || vtkCompositeDataIterator<br />
|- <br />
| vtkMultiGroupDataSet || vtkCompositeDataSet<br />
|- <br />
| vtkMultiGroupDataSetAlgorithm || vtkCompositeAlgorithm<br />
|- <br />
| vtkHierarchicalDataGroupFilter || vtkMultiBlockDataGroupFilter<br />
|- <br />
| vtkMultiGroupDataExtractDataSets || vtkExtractDataSets<br />
|- <br />
| vtkMultiGroupDataExtractGroup || vtkExtractBlock, vtkExtractLevel<br />
|- <br />
| vtkMultiGroupDataGeometryFilter || vtkCompositeDataGeometryFilter<br />
|- <br />
| vtkMultiGroupDataGroupFilter || vtkMultiBlockDataGroupFilter<br />
|- <br />
| vtkMultiGroupDataGroupIdScalars || vtkBlockIdScalars, vtkLevelIdScalars<br />
|- <br />
| vtkMultiGroupProbeFilter || vtkCompositeDataProbeFilter<br />
|- <br />
| vtkXMLHierarchicalDataReader || *<br />
|- <br />
| vtkXMLMultiGroupDataReader || vtkXMLCompositeDataReader,<br />
|- <br />
| || vtkXMLHierarchicalBoxDataReader,<br />
|- <br />
| || vtkXMLMultiBlockDataReader<br />
|- <br />
| vtkXMLMultiGroupDataWriter || vtkXMLCompositeDataWriter,<br />
|- <br />
| || vtkXMLHierarchicalBoxDataWriter,<br />
|- <br />
| || vtkXMLMultiBlockDataWriter<br />
|- <br />
| vtkMultiGroupDataExtractPiece || vtkExtractPiece<br />
|- <br />
| vtkXMLPMultiGroupDataWriter || vtkXMLPMultiBlockDataWriter,<br />
|- <br />
| || vtkXMLPHierarchicalBoxDataWriter<br />
|- <br />
| vtkMultiGroupPolyDataMapper || vtkCompositePolyDataMapper<br />
|}</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Composite_Data_Redesign&diff=40905VTK/Composite Data Redesign2011-06-19T13:38:23Z<p>Pratikm: /* Composite dataset re-architecture */</p>
<hr />
<div>== Composite dataset re-architecture ==<br />
<br />
=== Current design ===<br />
<center><br />
[[Image:Old_composite_design.png]]<br />
</center><br />
=== Issues with the current design ===<br />
<br />
* Most functionality is based on vtkMultiGroupDataSet instead of vtkCompositeDataSet. For example, most algorithms (and the executives) use vtkMultiGroupDataSet API to iterate. This makes it impossible to add new sub-classes of vtkCompositeDataSet without writing new executives.<br />
* The concept of sub-block is confusing. vtkMultiGroupDataSet stores a vector of vectors of datasets. When this concept is mapped to the multi-block (or temporal) datasets, each block ends up having multiple sub-blocks. Furthermore, the convention that these sub-block ids map to the process ids is very confusing.<br />
* Algorithms that want to pass blanking have to downcast to vtkHierarchicalBoxDataSet and copy blanking explicitly.<br />
* vtkCompositeDataPipeline is a mess.<br />
<br />
=== Suggested design ===<br />
<br />
[[Image:New_composite_design.png]]<br />
<br />
* Get rid of vtkMultiGroupDataSet. Any code shared between subclasses of vtkCompositeDataPipeline can be shared using helper implementation objects.<br />
* Improve the iterators so that it is not necessary to use vtkMultiGroupDataSet API to iterate over blocks.<br />
* Add a vtkMultiPieceDataSet class that can be used to group multiple pieces together. Example: when loading a dataset with multiple partitions on 1 processor, vtkMultiPieceDataSet can be used instead of appending datasets together. vtkMultiPieceDataSet would have additional meta-data about things like whole extent for structured datasets.<br />
* Clean up vtkCompositeDataPipeline.<br />
* Improve ghost level support for composite datasets.<br />
<br />
==== Iterators ====<br />
<br />
In the current architecture, the most common thing to do is the following:<br />
<br />
<pre><br />
unsigned int numGroups = mbInput->GetNumberOfGroups();<br />
output->SetNumberOfGroups(numGroups);<br />
for (unsigned int groupId=0; groupId<numGroups; groupId++)<br />
  {<br />
  unsigned int numBlocks = mbInput->GetNumberOfDataSets(groupId);<br />
  output->SetNumberOfDataSets(groupId, numBlocks);<br />
  for (unsigned int blockId=0; blockId<numBlocks; blockId++)<br />
    {<br />
    vtkDataObject* block = mbInput->GetDataSet(groupId, blockId);<br />
<br />
    // do something with block to get an outBlock<br />
<br />
    output->SetDataSet(groupId, blockId, outBlock);<br />
    }<br />
  }<br />
</pre><br />
<br />
As mentioned above, the problem with this approach is that it assumes the<br />
composite dataset is a vtkMultiGroupDataSet. With the appropriate changes<br />
to the composite data iterators and composite datasets, the code above can<br />
be rewritten as:<br />
<br />
<pre><br />
output->CopyStructure(mbInput);<br />
<br />
vtkCompositeDataIterator* iter = mbInput->NewIterator();<br />
iter->GoToFirstItem();<br />
while (!iter->IsDoneWithTraversal())<br />
  {<br />
  vtkDataObject* block = iter->GetCurrentDataObject();<br />
  // Note that the iterator will only visit the leaf nodes by default.<br />
<br />
  // do something with block to get outBlock<br />
<br />
  // copy the meta-data<br />
  outBlock->CopyInformation(block);<br />
<br />
  output->SetDataSet(iter, outBlock);<br />
  iter->GoToNextItem();<br />
  }<br />
iter->Delete();<br />
</pre><br />
<br />
The implementation above requires two additional methods: CopyStructure()<br />
and SetDataSet(iter, dataObject). The task of CopyStructure() is to create<br />
a tree structure on the output composite data object identical to that of<br />
the input. In the case of hierarchical datasets, this means the same number of<br />
levels and the same number of datasets on each level. In the case of<br />
multi-block datasets, this means an identical tree. This may look like<br />
this:<br />
<br />
[[Image:Multiblock_tree.png]]<br />
<br />
After CopyStructure(), the output will have the same hierarchy except that all<br />
vtkPolyData leaf nodes will be replaced by null pointers. CopyStructure()<br />
should also copy things like refinement ratios. It should also<br />
include all of the meta-data (information) of all non-leaf nodes. We are<br />
likely to use things like names for groups when dealing with<br />
multi-block datasets.<br />
<br />
<em>Note on vtkHierarchicalBoxDataSet: Currently, a vtkHierarchicalBoxDataSet is converted to a vtkMultiGroupDataSet when it is processed by a simple algorithm or a vtkMultiGroupDataAlgorithm. We should think about this. Maybe when a vtkHierarchicalBoxDataSet is processed by a vtkDataSetAlgorithm, the output should be vtkHierarchicalBoxDataSet too?</em><br />
<br />
The task of SetDataSet(iter, dataObject) is to add a leaf dataset at the exact<br />
same position that the iterator is pointing at on the input. This will<br />
require changing iterators such that they are keeping track of their<br />
position in a composite dataset by some sort of index. The easiest way of<br />
doing this is to use two integers for hierarchical datasets (level, index)<br />
and a vector of integers of length equal to the current tree level for the<br />
multi-block datasets.<br />
<br />
==== vtkMultiPieceDataSet ====<br />
<br />
A multi-piece dataset groups multiple data pieces together. For example,<br />
say that a simulation broke a volume into 16 pieces so that each piece could<br />
be processed by one process in parallel. We want to load this volume on a<br />
visualization cluster of 4 nodes. Each node will get 4 pieces, not<br />
necessarily forming a whole rectangular region, so it is not<br />
possible to append the 4 pieces together into a vtkImageData. Instead,<br />
these 4 pieces can be collected together using a<br />
vtkMultiPieceDataSet. Although it is possible to use a vtkMultiBlockDataSet<br />
for this purpose, a vtkMultiPieceDataSet makes it clear that these are<br />
pieces of one whole dataset that are collected together. Given this<br />
information, applications like ParaView can treat them in a special<br />
way. For example, meta-data about the whole extent of the dataset can be<br />
displayed, neighborhood information can be obtained, ghost levels can be<br />
generated, etc.<br />
<br />
<em>Note: The use of vtkMultiPieceDataSet is not yet very clear to me but I think it will be necessary.</em> <br />
<br />
==== vtkCompositeDataPipeline cleanup ====<br />
<br />
There will be a list of changes to vtkCompositeDataPipeline here. The<br />
executive is a mess right now due to all the use cases it supports and<br />
because it grew organically. We need to take a step back and clean it up,<br />
possibly rewriting portions of it.<br />
<br />
==== Ghost level support ====<br />
<br />
Currently, ghost level requests are passed up the pipeline but they are<br />
pretty much ignored by the pipeline. This will not do, especially when we<br />
improve D3 to support multi-block datasets. Getting unstructured and<br />
dataset algorithms to work with ghost levels is pretty<br />
straightforward; getting structured data filters working is a little<br />
trickier.<br />
<br />
<em>Note: Realistically, readers do not produce more than 1 ghost level. We may want to take this into account.</em><br />
<br />
=Implementation=<br />
<br />
The implementation is based on the above design with some notable differences:<br />
<br />
* vtkHierarchicalDataSet is deprecated due to the lack of use-cases for creating an AMR-like hierarchy with unstructured data. Applications can implement the same behavior using vtkMultiBlockDataSet, which provides for meta-data associated with each node in the tree, thus making it possible for applications to attach level information to blocks.<br />
<br />
----<br />
<graphviz><br />
digraph G {<br />
node [shape = "record"]<br />
edge <br />
[<br />
arrowtail="empty" <br />
dir="back"<br />
arrowsize="2.0"<br />
]<br />
vtkDataObject []<br />
vtkCompositeDataSet [ ]<br />
vtkMultiBlockDataSet [ ]<br />
vtkTemporalDataSet [ ]<br />
vtkHierarchicalBoxDataSet [ ]<br />
vtkMultiPieceDataSet [ ]<br />
<br />
<br />
vtkDataObject->vtkCompositeDataSet<br />
vtkCompositeDataSet->vtkMultiBlockDataSet<br />
vtkCompositeDataSet->vtkTemporalDataSet<br />
vtkCompositeDataSet->vtkHierarchicalBoxDataSet<br />
vtkCompositeDataSet->vtkMultiPieceDataSet<br />
}<br />
</graphviz><br />
<br />
'''Class Hierarchy: Class hierarchy for current implementation of composite datasets'''<br />
----<br />
==vtkCompositeDataSet==<br />
<br />
vtkCompositeDataSet is the abstract superclass for all composite datasets. It implements a full tree structure in which nodes can be datasets or other composite datasets. However, the API to access the tree directly is protected. Each subclass can build and maintain this tree as per its requirements, e.g. vtkHierarchicalBoxDataSet builds trees that are one level deep, with the first-level nodes being vtkMultiPieceDataSet instances that correspond to a ''level'' in the hierarchical dataset. One can obtain a vtkCompositeDataIterator instance from a vtkCompositeDataSet to iterate over the tree structure. vtkCompositeDataSet provides public API to get/set data objects and meta-data using the iterator. The important API is listed below:<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Return a new iterator (the iterator has to be deleted by user).<br />
virtual vtkCompositeDataIterator* NewIterator();<br />
<br />
// Description:<br />
// Copies the tree structure from the input. All pointers to non-composite<br />
// data objects are initialized to NULL. This also shallow copies the meta-data<br />
// associated with all the nodes.<br />
virtual void CopyStructure(vtkCompositeDataSet* input);<br />
<br />
// Description:<br />
// Sets the data set at the location pointed by the iterator.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be any composite dataset with a similar structure (achieved by using<br />
// CopyStructure).<br />
virtual void SetDataSet(vtkCompositeDataIterator* iter, vtkDataObject* dataObj);<br />
<br />
// Description:<br />
// Returns the dataset located at the position pointed to by the iterator.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be an iterator for composite dataset with similar structure (achieved by<br />
// using CopyStructure).<br />
virtual vtkDataObject* GetDataSet(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Returns the meta-data associated with the position pointed to by the iterator.<br />
// This will create a new vtkInformation object if none already exists. Use<br />
// HasMetaData to avoid creating the vtkInformation object unnecessarily.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be an iterator for composite dataset with similar structure (achieved by<br />
// using CopyStructure).<br />
virtual vtkInformation* GetMetaData(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Returns whether any meta-data is associated with the position pointed to by the iterator.<br />
// The iterator does not need to be iterating over this dataset itself. It can<br />
// be an iterator for composite dataset with similar structure (achieved by<br />
// using CopyStructure).<br />
virtual int HasMetaData(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Shallow and Deep copy.<br />
virtual void ShallowCopy(vtkDataObject *src);<br />
virtual void DeepCopy(vtkDataObject *src);<br />
</pre><br />
</font><br />
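Because this API is type-agnostic, an algorithm can, for instance, count the leaf datasets of any composite input without downcasting to a concrete subclass (a sketch, with ''input'' being any vtkCompositeDataSet):<br />
<font color="blue"><br />
<pre><br />
vtkCompositeDataIterator* iter = input->NewIterator();<br />
int numLeaves = 0;<br />
for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())<br />
  {<br />
  // By default, only non-empty leaf nodes are visited.<br />
  numLeaves++;<br />
  }<br />
iter->Delete();<br />
</pre><br />
</font><br />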
<br />
==vtkTemporalDataSet==<br />
<br />
vtkTemporalDataSet is used to hold multiple timesteps.<br />
<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the number of time steps in this dataset.<br />
void SetNumberOfTimeSteps(unsigned int numSteps);<br />
<br />
// Description:<br />
// Returns the number of time steps.<br />
unsigned int GetNumberOfTimeSteps();<br />
<br />
// Description:<br />
// Set a data object as a timestep. Cannot be vtkTemporalDataSet.<br />
void SetTimeStep(unsigned int timestep, vtkDataObject* dobj);<br />
<br />
// Description:<br />
// Get a timestep.<br />
vtkDataObject* GetTimeStep(unsigned int timestep);<br />
<br />
// Description:<br />
// Get timestep meta-data.<br />
vtkInformation* GetMetaData(unsigned int timestep);<br />
<br />
// Description:<br />
// Returns if timestep meta-data is present.<br />
int HasMetaData(unsigned int timestep);<br />
</pre><br />
</font><br />
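As an illustrative sketch (''meshAtT0'' and ''meshAtT1'' are assumed to be datasets produced elsewhere), two time steps can be grouped as:<br />
<font color="blue"><br />
<pre><br />
vtkTemporalDataSet* temporal = vtkTemporalDataSet::New();<br />
temporal->SetNumberOfTimeSteps(2);<br />
temporal->SetTimeStep(0, meshAtT0);<br />
temporal->SetTimeStep(1, meshAtT1);<br />
</pre><br />
</font><br />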
<br />
==vtkMultiBlockDataSet==<br />
<br />
vtkMultiBlockDataSet is a vtkCompositeDataSet in which the child nodes can either be vtkDataSet subclasses or vtkMultiBlockDataSet. This is used when full trees are required. Meta-data can be associated with leaf nodes as well as non-leaf nodes in the tree.<br />
<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the number of blocks. This will cause allocation if the new number of<br />
// blocks is greater than the current size. All new blocks are initialized to<br />
// null.<br />
void SetNumberOfBlocks(unsigned int numBlocks);<br />
<br />
// Description:<br />
// Returns the number of blocks.<br />
unsigned int GetNumberOfBlocks();<br />
<br />
// Description:<br />
// Returns the block at the given index. It is recommended that one uses the<br />
// iterators to iterate over composite datasets rather than using this API.<br />
vtkDataObject* GetBlock(unsigned int blockno);<br />
<br />
// Description:<br />
// Sets the data object as the given block. The total number of blocks will<br />
// be resized to fit the requested block number. The only vtkCompositeDataSet<br />
// subclass that can be added as a block is vtkMultiBlockDataSet;<br />
// an error is raised otherwise.<br />
void SetBlock(unsigned int blockno, vtkDataObject* block);<br />
<br />
// Description:<br />
// Returns true if meta-data is available for a given block.<br />
int HasMetaData(unsigned int blockno);<br />
<br />
// Description:<br />
// Returns the meta-data for the block. If none is already present, a new<br />
// vtkInformation object will be allocated. Use HasMetaData to avoid<br />
// allocating vtkInformation objects.<br />
vtkInformation* GetMetaData(unsigned int blockno);<br />
</pre><br />
</font><br />
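A sketch of building a two-level tree (''image'' and ''poly'' are assumed to be existing vtkImageData and vtkPolyData instances):<br />
<font color="blue"><br />
<pre><br />
vtkMultiBlockDataSet* root = vtkMultiBlockDataSet::New();<br />
vtkMultiBlockDataSet* child = vtkMultiBlockDataSet::New();<br />
child->SetNumberOfBlocks(1);<br />
child->SetBlock(0, poly);   // leaf node<br />
root->SetNumberOfBlocks(2);<br />
root->SetBlock(0, image);   // leaf node<br />
root->SetBlock(1, child);   // nested composite node<br />
child->Delete();<br />
</pre><br />
</font><br />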
<br />
==vtkMultiPieceDataSet==<br />
<br />
A vtkMultiPieceDataSet groups multiple data pieces together. <br />
For example, say that a simulation broke a volume into 16 pieces so that <br />
each piece could be processed by one process in parallel. We want to load <br />
this volume on a visualization cluster of 4 nodes. Each node will get 4 <br />
pieces, not necessarily forming a whole rectangular region, so it is not <br />
possible to append the 4 pieces together into a single vtkImageData. <br />
Instead, these 4 pieces can be collected together using a <br />
vtkMultiPieceDataSet. <br />
Note that vtkMultiPieceDataSet is intended to be included in other composite<br />
datasets, e.g. vtkMultiBlockDataSet or vtkHierarchicalBoxDataSet; hence the lack<br />
of algorithms producing vtkMultiPieceDataSet.<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the number of pieces. This will cause allocation if the new number of<br />
// pieces is greater than the current size. All new pieces are initialized to<br />
// null.<br />
void SetNumberOfPieces(unsigned int numpieces);<br />
<br />
// Description:<br />
// Returns the number of pieces.<br />
unsigned int GetNumberOfPieces();<br />
<br />
// Description:<br />
// Returns the piece at the given index. <br />
vtkDataSet* GetPiece(unsigned int pieceno);<br />
<br />
// Description:<br />
// Sets the data object as the given piece. The total number of pieces will<br />
// be resized to fit the requested piece number.<br />
void SetPiece(unsigned int pieceno, vtkDataSet* piece);<br />
<br />
// Description:<br />
// Returns true if meta-data is available for a given piece.<br />
int HasMetaData(unsigned int piece);<br />
<br />
// Description:<br />
// Returns the meta-data for the piece. If none is already present, a new<br />
// vtkInformation object will be allocated. Use HasMetaData to avoid<br />
// allocating vtkInformation objects.<br />
vtkInformation* GetMetaData(unsigned int pieceno);<br />
</pre><br />
</font><br />
<br />
==vtkHierarchicalBoxDataSet==<br />
<br />
vtkHierarchicalBoxDataSet is a hierarchical dataset of uniform grids. It is designed for AMR (adaptive mesh refinement) datasets. The structure consists of ''levels'', with each level containing datasets. The dataset type is restricted to vtkUniformGrid. Each dataset has an associated vtkAMRBox that represents its region (similar to an extent) in space. Internally, each level in a vtkHierarchicalBoxDataSet is simply a vtkMultiPieceDataSet. <br />
<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the number of refinement levels. This call might cause<br />
// allocation if the new number of levels is larger than the<br />
// current one.<br />
void SetNumberOfLevels(unsigned int numLevels);<br />
<br />
// Description:<br />
// Returns the number of levels.<br />
unsigned int GetNumberOfLevels();<br />
<br />
// Description:<br />
// Set the number of data sets at a given level.<br />
void SetNumberOfDataSets(unsigned int level, unsigned int numdatasets);<br />
<br />
// Description:<br />
// Returns the number of data sets at a given level.<br />
unsigned int GetNumberOfDataSets(unsigned int level);<br />
<br />
// Description:<br />
// Set the dataset pointer for a given node. This will resize the number of<br />
// levels and the number of datasets in the level to fit the requested level and id.<br />
void SetDataSet(unsigned int level, unsigned int id, <br />
vtkAMRBox& box, vtkUniformGrid* dataSet);<br />
<br />
// Description:<br />
// Get a dataset given a level and an id.<br />
vtkUniformGrid* GetDataSet(unsigned int level,<br />
unsigned int id,<br />
vtkAMRBox& box);<br />
<br />
// Description:<br />
// Get meta-data associated with a level. This may allocate a new<br />
// vtkInformation object if none is already present. Use HasLevelMetaData to<br />
// avoid unnecessary allocations.<br />
vtkInformation* GetLevelMetaData(unsigned int level);<br />
<br />
// Description:<br />
// Returns if meta-data exists for a given level.<br />
int HasLevelMetaData(unsigned int level);<br />
<br />
// Description:<br />
// Get meta-data associated with a dataset. This may allocate a new<br />
// vtkInformation object if none is already present. Use HasMetaData to<br />
// avoid unnecessary allocations.<br />
vtkInformation* GetMetaData(unsigned int level, unsigned int index);<br />
<br />
// Description:<br />
// Returns if meta-data exists for a given dataset under a given level.<br />
int HasMetaData(unsigned int level, unsigned int index);<br />
<br />
// Description:<br />
// Sets the refinement ratio of a given level. The spacing at level<br />
// level+1 is defined as spacing(level+1) = spacing(level)/refRatio(level).<br />
// Note that this is currently not enforced by this class; however,<br />
// some algorithms might not function properly if the spacing in<br />
// the blocks (vtkUniformGrid) does not match the one described<br />
// by the refinement ratio.<br />
void SetRefinementRatio(unsigned int level, int refRatio);<br />
<br />
// Description:<br />
// Returns the refinement ratio of a given level.<br />
int GetRefinementRatio(unsigned int level);<br />
<br />
// Description:<br />
// Returns the AMR box for the location pointed to by the iterator.<br />
vtkAMRBox GetAMRBox(vtkCompositeDataIterator* iter);<br />
<br />
// Description:<br />
// Returns the refinement ratio for the position pointed to by the iterator.<br />
int GetRefinementRatio(vtkCompositeDataIterator* iter);<br />
</pre><br />
</font><br />
<br />
==vtkCompositeDataIterator==<br />
<br />
vtkCompositeDataIterator is used to iterate over composite datasets. <br />
<br />
<font color="green"><br />
<pre><br />
// Description:<br />
// Set the composite dataset this iterator is iterating over. <br />
// Must be set before traversal begins.<br />
virtual void SetDataSet(vtkCompositeDataSet* ds);<br />
vtkGetObjectMacro(DataSet, vtkCompositeDataSet);<br />
<br />
// Description:<br />
// Begin iterating over the composite dataset structure.<br />
virtual void InitTraversal();<br />
<br />
// Description:<br />
// Begin iterating over the composite dataset structure in reverse order.<br />
virtual void InitReverseTraversal();<br />
<br />
// Description:<br />
// Move the iterator to the beginning of the collection.<br />
virtual void GoToFirstItem();<br />
<br />
// Description:<br />
// Move the iterator to the next item in the collection.<br />
virtual void GoToNextItem();<br />
<br />
// Description:<br />
// Test whether the iterator is currently pointing to a valid item. Returns 1<br />
// for yes, and 0 for no.<br />
virtual int IsDoneWithTraversal();<br />
<br />
// Description:<br />
// Returns the current item. Valid only when IsDoneWithTraversal() returns 0.<br />
virtual vtkDataObject* GetCurrentDataObject();<br />
<br />
// Description:<br />
// Returns the meta-data associated with the current item. This will allocate<br />
// a new vtkInformation object if none is already present. Use<br />
// HasCurrentMetaData to avoid unnecessary creation of vtkInformation objects.<br />
virtual vtkInformation* GetCurrentMetaData();<br />
<br />
// Description:<br />
// Returns whether a meta-data information object is present for the current<br />
// item. Returns 1 if present, 0 otherwise.<br />
virtual int HasCurrentMetaData();<br />
<br />
// Description:<br />
// If VisitOnlyLeaves is true, the iterator will only visit nodes<br />
// (sub-datasets) that are not composite. If it encounters a composite<br />
// data set, it will automatically traverse that composite dataset until<br />
// it finds non-composite datasets (see also TraverseSubTree). <br />
// With this option, it is possible to<br />
// visit all non-composite datasets in a tree of composite datasets<br />
// (composite of composite of composite for example :-) ) If<br />
// VisitOnlyLeaves is false, GetCurrentDataObject() may return<br />
// vtkCompositeDataSet. By default, VisitOnlyLeaves is 1.<br />
vtkSetMacro(VisitOnlyLeaves, int);<br />
vtkGetMacro(VisitOnlyLeaves, int);<br />
vtkBooleanMacro(VisitOnlyLeaves, int);<br />
<br />
// Description:<br />
// If TraverseSubTree is set to true, the iterator will visit the entire tree<br />
// structure, otherwise it only visits the first level children. Set to 1 by<br />
// default.<br />
vtkSetMacro(TraverseSubTree, int);<br />
vtkGetMacro(TraverseSubTree, int);<br />
vtkBooleanMacro(TraverseSubTree, int);<br />
<br />
// Description:<br />
// If SkipEmptyNodes is true, then NULL datasets will be skipped. Default is<br />
// true.<br />
vtkSetMacro(SkipEmptyNodes, int);<br />
vtkGetMacro(SkipEmptyNodes, int);<br />
vtkBooleanMacro(SkipEmptyNodes, int);<br />
<br />
// Description:<br />
// Flat index is an index obtained by traversing the tree in preorder.<br />
// This can be used to uniquely identify nodes in the tree.<br />
// Not valid if IsDoneWithTraversal() returns true.<br />
vtkGetMacro(CurrentFlatIndex, unsigned int);<br />
<br />
</pre><br />
</font><br />
<br />
===Examples===<br />
====Copy all non-empty leaf nodes====<br />
<font color="blue"><br />
<pre><br />
// This can be very easily done with a ShallowCopy, but we use the iterators for illustration.<br />
vtkCompositeDataSet* CreateLeafCopy(vtkCompositeDataSet* src)<br />
{<br />
vtkCompositeDataSet* output = src->NewInstance();<br />
// Copy the structure as well as the meta-data associated with all nodes in the composite tree.<br />
output->CopyStructure(src);<br />
<br />
vtkCompositeDataIterator* iter = src->NewIterator();<br />
for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())<br />
{<br />
output->SetDataSet(iter, iter->GetCurrentDataObject());<br />
}<br />
iter->Delete();<br />
return output;<br />
}<br />
<br />
</pre><br />
</font><br />
<br />
====Iterate over immediate child nodes of a composite dataset====<br />
<font color="blue"><br />
<pre><br />
vtkCompositeDataSet* input = ...<br />
vtkCompositeDataIterator* iter = input->NewIterator();<br />
iter->TraverseSubTreeOff(); // we are only interested in immediate children.<br />
iter->VisitOnlyLeavesOff(); // we want all immediate children, including composite dataset child nodes.<br />
// To not skip empty children, simply call iter->SkipEmptyNodesOff();<br />
for (iter->InitTraversal(); !iter->IsDoneWithTraversal(); iter->GoToNextItem())<br />
{<br />
...<br />
}<br />
</pre><br />
</font><br />
<br />
====Flat Index====<br />
Iterators can be used to determine what we refer to as the '''flat index''' of any node. The flat index is the position of a node in a pre-order traversal of the tree; e.g., the following diagram shows the tree structure and the flat index of each node (rectangular nodes are composite datasets, while circular nodes are vtkDataSet subclasses). The flat index for the current location can be obtained from the iterator using ''GetCurrentFlatIndex''.<br />
<br />
<graphviz><br />
digraph G {<br />
node [shape = "record"]<br />
edge <br />
[<br />
arrowtail="ediamond"<br />
arrowhead="none"<br />
]<br />
A [label="A (0)"]<br />
B [label="B (1)" shape="circle"]<br />
C [label="C (2)"]<br />
D [label="D (3)"]<br />
E [label="E (4)" shape="circle"]<br />
F [label="F (5)" shape="circle"]<br />
<br />
A->B<br />
A->C<br />
C->D<br />
D->E<br />
D->F<br />
<br />
}<br />
</graphviz><br />
<br />
=Changes from VTK 5.0=<br />
==vtkCompositeDataPipeline==<br />
This executive is used to iteratively execute a non-composite-data-aware filter over all the leaves in a composite dataset. In VTK 5.0, a vtkHierarchicalBoxDataSet was always converted to a vtkMultiBlockDataSet when a non-composite-aware filter was present in the pipeline. This is no longer the case. vtkCompositeDataPipeline now verifies whether the non-composite-aware algorithm can produce a vtkUniformGrid given a vtkUniformGrid as input. If so, for a vtkHierarchicalBoxDataSet input, the output is a vtkHierarchicalBoxDataSet; otherwise it is a vtkMultiBlockDataSet. Even when the vtkHierarchicalBoxDataSet is converted to a vtkMultiBlockDataSet, the composite data tree structure is preserved. In other words, since vtkHierarchicalBoxDataSet has vtkMultiPieceDataSet instances for each level, the converted vtkMultiBlockDataSet will also have vtkMultiPieceDataSet instances as the child blocks of the root node.<br />
<br />
==Class Names==<br />
A few class names have changed, a few others are no longer available. This table lists the old class name and an equivalent class in the new design.<br />
<br />
{| border="1"<br />
|+ '''Class Name Changes''' ['*' -- no longer applicable ]<br />
! Old Class !! Equivalent Class<br />
|- <br />
| vtkHierarchicalDataInformation || *<br />
|- <br />
| vtkHierarchicalDataIterator || vtkCompositeDataIterator<br />
|-<br />
| vtkHierarchicalDataSet || *<br />
|-<br />
| vtkHierarchicalDataSetAlgorithm || *<br />
|- <br />
| vtkMultiGroupDataInformation || *<br />
|- <br />
| vtkMultiGroupDataIterator || vtkCompositeDataIterator<br />
|- <br />
| vtkMultiGroupDataSet || vtkCompositeDataSet<br />
|- <br />
| vtkMultiGroupDataSetAlgorithm || vtkCompositeAlgorithm<br />
|- <br />
| vtkHierarchicalDataGroupFilter || vtkMultiBlockDataGroupFilter<br />
|- <br />
| vtkMultiGroupDataExtractDataSets || vtkExtractDataSets<br />
|- <br />
| vtkMultiGroupDataExtractGroup || vtkExtractBlock, vtkExtractLevel<br />
|- <br />
| vtkMultiGroupDataGeometryFilter || vtkCompositeDataGeometryFilter<br />
|- <br />
| vtkMultiGroupDataGroupFilter || vtkMultiBlockDataGroupFilter<br />
|- <br />
| vtkMultiGroupDataGroupIdScalars || vtkBlockIdScalars, vtkLevelIdScalars<br />
|- <br />
| vtkMultiGroupProbeFilter || vtkCompositeDataProbeFilter<br />
|- <br />
| vtkXMLHierarchicalDataReader || *<br />
|- <br />
| vtkXMLMultiGroupDataReader || vtkXMLCompositeDataReader,<br />
|- <br />
| || vtkXMLHierarchicalBoxDataReader,<br />
|- <br />
| || vtkXMLMultiBlockDataReader<br />
|- <br />
| vtkXMLMultiGroupDataWriter || vtkXMLCompositeDataWriter,<br />
|- <br />
| || vtkXMLHierarchicalBoxDataWriter,<br />
|- <br />
| || vtkXMLMultiBlockDataWriter<br />
|- <br />
| vtkMultiGroupDataExtractPiece || vtkExtractPiece<br />
|- <br />
| vtkXMLPMultiGroupDataWriter || vtkXMLPMultiBlockDataWriter,<br />
|- <br />
| || vtkXMLPHierarchicalBoxDataWriter<br />
|- <br />
| vtkMultiGroupPolyDataMapper || vtkCompositePolyDataMapper<br />
|}</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=ParaView/Plugin_HowTo&diff=40903ParaView/Plugin HowTo2011-06-19T09:52:43Z<p>Pratikm: </p>
<hr />
<div>ParaView comes with a plethora of functionality bundled in: several readers, a multitude of filters, quite a few different types of views, etc. However, it is not uncommon for developers to want to add new functionality to ParaView, e.g. to add support for a new file format or to incorporate a new filter. ParaView makes it possible to add new functionality by using an extensive plugin mechanism. <br />
<br />
Plugins can be used to extend ParaView in several ways:<br />
* Add new readers, writers, filters <br />
* Add custom GUI components such as toolbar buttons to perform common tasks<br />
* Add new views for displaying data<br />
<br />
Examples for different types of plugins are provided with the ParaView source under '''Examples/Plugins/'''.<br />
<br />
This document has two major sections:<br />
* The first section covers how to use existing plugins in ParaView.<br />
* The second section contains information for developers about writing new plugins for ParaView.<br />
<br />
== Using Plugins ==<br />
<br />
Plugins are distributed as shared libraries (*.so on Unix, *.dylib on Mac, *.dll on Windows, etc.). For a plugin to be loadable in ParaView, it must be built with the same version of ParaView that it is expected to be deployed with. Plugins can be classified into two broad categories:<br />
* Server-side plugins<br />
: These are plugins that extend the algorithmic capabilities of ParaView, e.g. new filters, readers, writers, etc. Since ParaView processes data on the server side, these plugins need to be loaded on the server.<br />
* Client-side plugins<br />
: These are plugins that extend the ParaView GUI, e.g. property panels for new filters, toolbars, views, etc. These plugins need to be loaded on the client.<br />
<br />
Oftentimes a plugin has both server-side and client-side components, e.g. a plugin that adds a new filter along with a property panel for that filter. Such plugins need to be loaded on both the server and the client. <br />
<br />
Generally, users don't have to worry about whether a plugin is a server-side or client-side plugin; simply load the plugin on both the server and the client. ParaView will include the relevant components from the plugin on each of the processes.<br />
<br />
There are three ways for loading plugins:<br />
<br />
* Using the GUI ('''Plugin Manager''')<br />
: Plugins can be loaded into ParaView using the '''Plugin Manager''' accessible from '''Tools | Manage Plugins/Extensions''' menu. The Plugin Manager has two sections for loading local plugins and remote plugins (enabled only when connected to a server). To load a plugin on the local as well as remote side, simply browse to the plugin shared library. If the loading is successful, the plugin will appear in the list of loaded plugins. The Plugin manager also lists the paths it searched to load plugins automatically.<br />
: The Plugin Manager remembers all loaded plugins, so the next time you want to load a plugin, simply locate it in the list and click the "Load Selected" button. <br />
: You can set up ParaView to automatically load the plugin at startup (in case of client-side plugins) or on connecting to the server (in case of server-side plugins) by checking the "Auto Load" checkbox on a loaded plugin.<br />
<table><br />
<tr><br />
<td><br />
[[Image:LocalPlugin_Manager.png|thumb|300px|'''Figure 1:''' Plugin Manager when not connected to a remote server, showing loaded plugins on the local site.''']]<br />
</td><br />
<td><br />
[[Image:RemotePlugin_Manager.png|thumb|300px|'''Figure 2:''' Plugin Manager when connected to a server showing loaded plugins on the local as well as remote sites.''']]<br />
</td><br />
</table><br />
* Using environment variable (Auto-loading plugins)<br />
: If one wants ParaView to automatically load a set of plugins on startup, one can use the '''PV_PLUGIN_PATH''' environment variable. '''PV_PLUGIN_PATH''' can be used to list a set of directories (separated by colon (:) or semi-colon (;)) which ParaView will search on startup to load plugins. This environment variable needs to be set on both the client node, to load local plugins, and the remote server, to load remote plugins. Note that plugins in PV_PLUGIN_PATH are always auto-loaded irrespective of the status of the "Auto Load" checkbox in the Plugin Manager.<br />
* Placing the plugins in a recognized location. Recognized locations are:<br />
** A plugins subdirectory beneath the directory containing the paraview client or server executables. This can be a system-wide location if installed as such.<br />
** A Plugins subdirectory in the user's home area. On Unix/Linux/Mac, $HOME/.config/ParaView/ParaView<version>/Plugins. On Windows, %APPDATA%\ParaView\ParaView<version>\Plugins.<br />
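A hedged shell sketch of the two non-GUI loading mechanisms above (the paths and the version string are illustrative assumptions; adjust them to your installation):<br />

```shell
# 1) Auto-loading via the environment variable: a colon-separated list of
#    directories (semicolons on Windows) searched for plugins at startup.
export PV_PLUGIN_PATH="$HOME/paraview-plugins:/opt/paraview/plugins"

# 2) The per-user recognized location on Unix/Linux; plugins placed here
#    are found without setting any variable ("3.10" is an example version).
PLUGIN_DIR="$HOME/.config/ParaView/ParaView3.10/Plugins"
mkdir -p "$PLUGIN_DIR"
# cp libMyPlugin.so "$PLUGIN_DIR"   # copy your built plugin here
```
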
<br />
==Debugging Plugins==<br />
If plugin loading fails, try setting the '''PV_PLUGIN_DEBUG''' environment variable for all processes that you were trying to load the plugin on. ParaView will then try to print verbose information about each step and the causes of failure, as shown below.<br />
<br />
----<br />
<br />
<source lang="text"><br />
<br />
***************************************************<br />
Attempting to load /home/utkarsh/Kitware/ParaView3/ParaView3Bin/bin/libSurfaceLIC.so<br />
Loaded shared library successfully. Now trying to validate that it's a ParaView plugin.<br />
Plugin's signature: paraviewplugin|GNU|3.7<br />
Plugin signature verification successful. This is definitely a ParaView plugin compiled with correct compiler for correct ParaView version.<br />
Updating Shared Library Paths: /home/utkarsh/Kitware/ParaView3/ParaView3Bin/bin<br />
Plugin instance located successfully. Now loading components from the plugin instance based on the interfaces it implements.<br />
----------------------------------------------------------------<br />
Plugin Information: <br />
Name : SurfaceLIC<br />
Version : 1.0<br />
ReqOnServer : 1<br />
ReqOnClient : 1<br />
ReqPlugins : <br />
ServerManager Plugin : Yes<br />
Python Plugin : No<br />
</source><br />
<br />
----<br />
<br />
<font color="magenta">Plugin debug information is not available for ParaView 3.6 or earlier</font><br />
<br />
== Writing Plugins ==<br />
This section covers writing and compiling different types of plugins. To create a plugin, one must have one's own build of ParaView3. Binaries downloaded from www.paraview.org do not include the necessary header files or import libraries (where applicable) for compiling plugins.<br />
<br />
The beginning of a CMakeLists.txt file contains<br />
<font color="violet">FIND_PACKAGE</font>(ParaView <font color="purple">REQUIRED</font>)<br />
<font color="violet">INCLUDE</font>(${PARAVIEW_USE_FILE})<br />
Where CMake will ask for the ParaView_DIR which you point to your ParaView build. The PARAVIEW_USE_FILE includes build parameters and macros for building plugins.<br />
<br />
=== Adding a Filter ===<br />
<br />
In this plugin, we want to add a new filter to ParaView. The filter has to be a VTK-based algorithm, written following the standard procedures for writing VTK algorithms. Generally, for such cases where we are adding a new VTK class to ParaView (be it a filter, reader or writer), we need to do the following tasks:<br />
* Write a '''Server Manager Configuration XML''' which describes the ''Proxy'' interface for the new VTK class. Basically, this defines the interface for the client to create and modify instances of the new class on the server side. Please refer to the [http://www.kitware.com/products/paraview.html ParaView Guide] for details about writing these server-manager xmls.<br />
* Write a configuration XML for the GUI to make the ParaView GUI aware of this new class, if applicable. For filters, this is optional, since ParaView automatically recognizes filters added through plugins and lists them in the '''Alphabetical''' sub-menu. One may use the GUI configuration XML to add the new filter to a specific category in the ''Filters'' menu, to add a new category, etc. For readers and writers, this is required, since the ParaView GUI needs to know which file extensions your reader/writer supports.<br />
<br />
==== Enabling an existing VTK filter ====<br />
<br />
Sometimes, the filter that one wants to add to ParaView is already available in VTK; it's just not exposed through the ParaView GUI. This is the easiest type of plugin to create. There are two options: 1) set up the plugin using only an XML file, or 2) actually compile the plugin into a shared library. The first option is the easiest, but the second option will prepare you for creating a custom filter in the future, as the process is nearly identical. <br />
<br />
===== XML Only =====<br />
If you have not built ParaView from source, using an XML plugin is your only option.<br />
<br />
We need to write the server manager configuration xml for the filter describing its API. The GUI xml to add the filter to any specific category is optional. <br />
<br />
For example, let's say we simply want to expose the '''vtkCellDerivatives''' filter in VTK. Then first, we'll write the server manager configuration XML (call it CellDerivatives.xml), similar to what we would have done for adding a new filter. <br />
<div class="MainPageBG" style="border: 1px solid #ffc9c9; color: #000; background-color: #fff3f3"><br />
<br />
<br />
<source lang="xml"><br />
<ServerManagerConfiguration><br />
<ProxyGroup name="filters"><br />
<SourceProxy name="MyCellDerivatives" class="vtkCellDerivatives" label="My Cell Derivatives"><br />
<Documentation<br />
     long_help="Compute derivatives of scalars and vectors on a dataset."<br />
     short_help="Compute cell derivatives."><br />
</Documentation><br />
<InputProperty<br />
name="Input"<br />
command="SetInputConnection"><br />
<ProxyGroupDomain name="groups"><br />
<Group name="sources"/><br />
<Group name="filters"/><br />
</ProxyGroupDomain><br />
<DataTypeDomain name="input_type"><br />
<DataType value="vtkDataSet"/><br />
</DataTypeDomain><br />
</InputProperty><br />
<br />
</SourceProxy><br />
</ProxyGroup><br />
</ServerManagerConfiguration><br />
</source><br />
<br />
<br />
</div><br />
<br />
At this point, we can stop and use the plugin in ParaView by loading the XML file directly into the plugin manager.<br />
<br />
===== Compiling into a Shared Library =====<br />
If you have built ParaView from source, it is possible to compile the plugin into a shared library. To do this, we can use the following CMakeLists.txt:<br />
<br />
<font color="violet">FIND_PACKAGE</font>(ParaView <font color="purple">REQUIRED</font>)<br />
<font color="violet">INCLUDE</font>(${PARAVIEW_USE_FILE})<br />
<br />
<font color="violet">ADD_PARAVIEW_PLUGIN</font>(CellDerivatives "1.0"<br />
<font color="purple">SERVER_MANAGER_XML</font> CellDerivatives.xml)<br />
<br />
We can now load the plugin through the plugin manager by selecting the .so file.<br />
<br />
Similarly, compiled Qt resources (*.bqrc) can be loaded at runtime. A *.bqrc is a binary file containing resources, which can include icons, the GUI configuration XMLs for adding categories, etc. A .bqrc can be made from a .qrc by running the rcc utility provided by Qt:<br />
<source lang="text"><br />
rcc -binary -o myfile.bqrc myfile.qrc<br />
</source><br />
<br />
==== Adding a new VTK filter ====<br />
<br />
For this example, refer to '''Examples/Plugins/Filter''' in the ParaView source. Let's say we have written a new vtkMyElevationFilter (vtkMyElevationFilter.h|cxx), which extends the functionality of the vtkElevationFilter and we want to package that as a plugin for ParaView. For starters, we simply want to use this filter in ParaView (not doing anything fancy with Filters menu categories etc.). As described, we need to write the server manager configuration XML (MyElevationFilter.xml). Once that's done, we write a CMakeLists.txt file to package this into a plugin. <br />
<br />
This CMakeLists.txt simply needs to include the following lines:<br />
<br />
<font color="green"># Locate ParaView build and then import CMake configuration, <br />
# macros etc. from it.</font><br />
<font color="violet">FIND_PACKAGE</font>(ParaView <font color="purple">REQUIRED</font>)<br />
<font color="violet">INCLUDE</font>(${PARAVIEW_USE_FILE})<br />
<br />
<font color="green"># Use the ADD_PARAVIEW_PLUGIN macro to build a plugin</font><br />
<font color="violet">ADD_PARAVIEW_PLUGIN</font>(<br />
MyElevation <font color="green">#<--Name for the plugin</font><br />
"1.0" <font color="green">#<--Version string</font><br />
<font color="purple">SERVER_MANAGER_XML</font> MyElevationFilter.xml <font color="green">#<-- server manager xml</font><br />
<font color="purple">SERVER_MANAGER_SOURCES</font> vtkMyElevationFilter.cxx <font color="green">#<-- source files for the new classes</font><br />
)<br />
<br />
Then, using CMake and a build system, one can build a plugin for this new filter. Once this plugin is loaded, the filter will appear under the "Alphabetical" list in the Filters menu.<br />
<br />
<br />
===== Filters with Multiple Input Ports =====<br />
If your filter requires multiple input ports, you have two options: 1) create helper functions in the VTK filter, such as SetYourInputName, that deal with addressing the VTK pipeline in the C++ code, or 2) address the input connections by number in the XML. The port_index attribute specifies which input port the particular input will be connected to. The SetInputConnection function is the command that will actually be called with this port_index to set up the pipeline.<br />
<br />
An example XML file for a filter with multiple inputs is below. The filter takes three vtkPolyData inputs.<br />
<div class="MainPageBG" style="border: 1px solid #ffc9c9; color: #000; background-color: #fff3f3"><br />
<br />
<br />
<source lang="xml"><br />
<ServerManagerConfiguration><br />
<ProxyGroup name="filters"><br />
<!-- ================================================================== --><br />
<SourceProxy name="LandmarkTransformFilter" class="vtkLandmarkTransformFilter" label="LandmarkTransformFilter"><br />
<Documentation<br />
long_help="Align two point sets using vtkLandmarkTransform to compute the best transformation between the two point sets."<br />
short_help="vtkLandmarkTransformFilter."><br />
</Documentation><br />
<br />
<InputProperty<br />
name="SourceLandmarks"<br />
port_index="0"<br />
command="SetInputConnection"><br />
<ProxyGroupDomain name="groups"><br />
<Group name="sources"/><br />
<Group name="filters"/><br />
</ProxyGroupDomain><br />
<DataTypeDomain name="input_type"><br />
<DataType value="vtkPolyData"/><br />
</DataTypeDomain><br />
<Documentation><br />
Set the source data set. This data set that will move towards the target data set.<br />
</Documentation><br />
</InputProperty><br />
<br />
<InputProperty<br />
name="TargetLandmarks"<br />
port_index="1"<br />
command="SetInputConnection"><br />
<ProxyGroupDomain name="groups"><br />
<Group name="sources"/><br />
<Group name="filters"/><br />
</ProxyGroupDomain><br />
<DataTypeDomain name="input_type"><br />
<DataType value="vtkPolyData"/><br />
</DataTypeDomain><br />
<Documentation><br />
Set the target data set. This data set will stay stationary.<br />
</Documentation><br />
</InputProperty><br />
<br />
<InputProperty<br />
name="SourceDataSet"<br />
port_index="2"<br />
command="SetInputConnection"><br />
<ProxyGroupDomain name="groups"><br />
<Group name="sources"/><br />
<Group name="filters"/><br />
</ProxyGroupDomain><br />
<DataTypeDomain name="input_type"><br />
<DataType value="vtkPolyData"/><br />
</DataTypeDomain><br />
<Documentation><br />
Set the source data set landmark points.<br />
</Documentation><br />
</InputProperty><br />
<br />
</SourceProxy><br />
<!-- End LandmarkTransformFilter --><br />
</ProxyGroup><br />
<!-- End Filters Group --><br />
</ServerManagerConfiguration><br />
</source><br />
<br />
<br />
</div><br />
<br />
To set the inputs in ParaView, simply select one of the inputs in the Pipeline Browser and then select the filter from the Filters menu. This will open a dialog box that allows you to specify which object to connect to each input port.<br />
<br />
==== Adding ''Categories'' to the Filters Menu ====<br />
<br />
Now suppose we want to add a new category to the Filters menu, called "Extensions" and then show this filter in that submenu. In that case, we'll need a GUI configuration xml to tell the ParaView GUI to create the category. This GUI configuration xml will look as such:<br />
<br />
<source lang="xml"><br />
<ParaViewFilters><br />
<Category name="Extensions" menu_label="&amp;Extensions"><br />
<!-- adds a new category and then adds our filter to it --><br />
<Filter name="MyElevationFilter" /><br />
</Category><br />
</ParaViewFilters><br />
</source><br />
<br />
If the name of the category is the same as that of an already existing category, e.g. ''Data Analysis'', then the filter gets added to the existing category.<br />
<br />
The CMakeLists.txt must change to include this new xml (let's call it MyElevationGUI.xml) as follows:<br />
<br />
<font color="violet">FIND_PACKAGE</font>(ParaView <font color="purple">REQUIRED</font>)<br />
<font color="violet">INCLUDE</font>(${PARAVIEW_USE_FILE})<br />
<br />
<font color="violet">ADD_PARAVIEW_PLUGIN</font>(MyElevation "1.0"<br />
<font color="purple">SERVER_MANAGER_XML</font> MyElevationFilter.xml <br />
<font color="purple">SERVER_MANAGER_SOURCES</font> vtkMyElevationFilter.cxx<br />
<font color="purple">GUI_RESOURCE_FILES</font> MyElevationGUI.xml)<br />
<br />
==== Adding Icons ====<br />
You can see that some filters in the Filters menu (e.g. Clip) have icons associated with them. It's possible for a plugin to add icons for the filters it adds as well. For that, you need to write a Qt resource file (say MyElevation.qrc) as follows:<br />
<br />
<source lang="xml"><br />
<RCC><br />
<qresource prefix="/MyIcons" ><br />
<file>MyElevationIcon.png</file><br />
</qresource><br />
</RCC><br />
</source><br />
<br />
The GUI configuration xml now refers to the icon provided by this resource as follows:<br />
<source lang="xml"><br />
<ParaViewFilters><br />
<Category name="Extensions" menu_label="&amp;Extensions"><br />
<!-- adds a new category and then adds our filter to it --><br />
<Filter name="MyElevationFilter" icon=":/MyIcons/MyElevationIcon.png" /><br />
</Category><br />
</ParaViewFilters><br />
</source><br />
<br />
Finally, the CMakeLists.txt file must change to include our MyElevation.qrc file as follows:<br />
<br />
<font color="violet">FIND_PACKAGE</font>(ParaView <font color="purple">REQUIRED</font>)<br />
<font color="violet">INCLUDE</font>(${PARAVIEW_USE_FILE})<br />
<br />
<font color="violet">ADD_PARAVIEW_PLUGIN</font>(MyElevation "1.0"<br />
<font color="purple">SERVER_MANAGER_XML</font> MyElevationFilter.xml <br />
<font color="purple">SERVER_MANAGER_SOURCES</font> vtkMyElevationFilter.cxx<br />
<font color="purple">GUI_RESOURCES</font> MyElevation.qrc<br />
<font color="purple">GUI_RESOURCE_FILES</font> MyElevationGUI.xml)<br />
<br />
==== Adding GUI Parameters ====<br />
Simply add properties like the following to the server manager XML to expose parameters of the filter to the ParaView user.<br />
===== Integer property =====<br />
This property appears as a text box.<br />
<source lang="xml"><br />
<IntVectorProperty<br />
name="bStartByMatchingCentroids"<br />
command="SetbStartByMatchingCentroids"<br />
number_of_elements="1"<br />
default_values="1"><br />
</IntVectorProperty><br />
</source><br />
<br />
===== Boolean property =====<br />
This property appears as a check box control. A boolean property uses the IntVectorProperty with an extra line (BooleanDomain...) indicating this should be a check box rather than a text field.<br />
<source lang="xml"><br />
<IntVectorProperty<br />
name="bStartByMatchingCentroids"<br />
command="SetbStartByMatchingCentroids"<br />
number_of_elements="1"<br />
default_values="1"><br />
<BooleanDomain name="bool"/><br />
</IntVectorProperty><br />
</source><br />
<br />
===== String property =====<br />
This property appears as a text box.<br />
<source lang="xml"><br />
<StringVectorProperty<br />
name="YourStringVariable"<br />
command="SetYourStringVariable"<br />
number_of_elements="1"<br />
default_values="1"><br />
</StringVectorProperty><br />
</source><br />
<br />
===== Double property =====<br />
This property appears as a text box.<br />
<source lang="xml"><br />
<DoubleVectorProperty<br />
name="YourDoubleVariable"<br />
command="SetYourDoubleVariable"<br />
number_of_elements="1"<br />
default_values="1"><br />
</DoubleVectorProperty><br />
</source><br />
<br />
===== Multi-Value Double property =====<br />
This property appears as a text box.<br />
<source lang="xml"><br />
<DoubleVectorProperty<br />
name="YourDoubleVectorVariable"<br />
command="SetYourDoubleVectorVariable"<br />
number_of_elements="3"<br />
default_values="1.0 0.0 0.0"><br />
</DoubleVectorProperty><br />
</source><br />
<br />
===== Double property slider =====<br />
This creates a slider that ranges from 0.0 to 1.0<br />
<source lang="xml"><br />
<DoubleVectorProperty<br />
name="PercentToRemove"<br />
command="SetPercentToRemove"<br />
number_of_elements="1"<br />
default_values="0.1"><br />
<DoubleRangeDomain name="range" min="0.0" max="1.0" /><br />
</DoubleVectorProperty><br />
</source><br />
<br />
===== Drop down list =====<br />
This creates a drop down list with 3 choices. The values associated with the choices are specified.<br />
<source lang="xml"><br />
<br />
<IntVectorProperty<br />
name="TransformMode"<br />
command="SetTransformMode"<br />
number_of_elements="1"<br />
default_values="1"><br />
<EnumerationDomain name="enum"><br />
<Entry value="6" text="RigidBody"/><br />
<Entry value="7" text="Similarity"/><br />
<Entry value="12" text="Affine"/><br />
</EnumerationDomain><br />
<Documentation><br />
This property indicates which transform mode will be used.<br />
</Documentation><br />
</IntVectorProperty><br />
</source><br />
<br />
=== Adding a Reader ===<br />
<br />
Adding a new reader to a plugin is similar to adding a filter, except that instead of the GUI configuration XML describing categories in the filter menu, we require the XML to define which file extensions this reader can handle. This XML (MyReaderGUI.xml) looks like this:<br />
<br />
<source lang="xml"><br />
<ParaViewReaders><br />
<Reader name="MyPNGReader" extensions="png"<br />
file_description="My PNG Files"><br />
</Reader><br />
</ParaViewReaders><br />
</source><br />
<br />
An example MyPNGReader.xml is shown below. In almost all cases you must have a property whose command is a SetFileName function. You are free to have other properties as well, just as with a standard (non-reader) filter.<br />
<source lang="xml"><br />
<ServerManagerConfiguration><br />
<ProxyGroup name="sources"><br />
<!-- ================================================================== --><br />
<SourceProxy name="MyPNGReader" class="vtkMyPNGReader" label="PNGReader"><br />
<Documentation<br />
long_help="Read a PNG file."<br />
short_help="Read a PNG file."><br />
</Documentation><br />
<StringVectorProperty<br />
name="FileName"<br />
animateable="0"<br />
command="SetFileName"<br />
number_of_elements="1"><br />
<FileListDomain name="files"/><br />
<Documentation><br />
This property specifies the file name for the PNG reader.<br />
</Documentation><br />
</StringVectorProperty><br />
<br />
<Hints><br />
<ReaderFactory extensions="png"<br />
file_description="PNG File Format" /><br />
</Hints><br />
</SourceProxy><br />
<!-- End MyPNGReader --><br />
</ProxyGroup><br />
<!-- End Filters Group --><br />
</ServerManagerConfiguration><br />
<br />
</source><br />
<br />
And the CMakeLists.txt looks as follows, where vtkMyPNGReader.cxx is the source for the reader, MyPNGReader.xml is the server manager configuration XML, and MyReaderGUI.xml is the GUI configuration XML described above:<br />
<br />
<font color="violet">FIND_PACKAGE</font>(ParaView <font color="purple">REQUIRED</font>)<br />
<font color="violet">INCLUDE</font>(${PARAVIEW_USE_FILE})<br />
<font color="violet">ADD_PARAVIEW_PLUGIN</font>(MyReader "1.0" <br />
<font color="purple">SERVER_MANAGER_XML</font> MyPNGReader.xml<br />
<font color="purple">SERVER_MANAGER_SOURCES</font> vtkMyPNGReader.cxx <br />
<font color="purple">GUI_RESOURCE_FILES</font> MyReaderGUI.xml)<br />
<br />
If you want your reader to work correctly with a file series, please refer to [[Animating legacy VTK file series#Making custom readers work with file series|file series animation]] for details.<br />
<br />
Once you generate the project using CMake and compile it, go to "Tools->Manage Plugins/Extensions" in ParaView. Under "Local Plugins", click "Load New" and browse for the shared library file you just created. You should now see your new file type in the "Files of type" list in the "Open file" dialog.<br />
<br />
=== Adding a Writer ===<br />
<br />
Similar to a reader, for a writer we need to tell ParaView what extensions this writer supports. This can be done using the GUI XML as follows:<br />
<br />
<source lang="xml"><br />
<ParaViewWriters><br />
<Writer name="MyTIFFWriter"<br />
extensions="tif"<br />
file_description="My Tiff Files"><br />
</Writer><br />
</ParaViewWriters><br />
</source><br />
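<br />
The corresponding CMakeLists.txt mirrors the reader case. The names below (vtkMyTIFFWriter.cxx for the writer source, MyTIFFWriter.xml for the server manager configuration XML, and MyWriterGUI.xml for the GUI XML above) are placeholders:<br />
<br />
 <font color="violet">FIND_PACKAGE</font>(ParaView <font color="purple">REQUIRED</font>)<br />
 <font color="violet">INCLUDE</font>(${PARAVIEW_USE_FILE})<br />
 <font color="violet">ADD_PARAVIEW_PLUGIN</font>(MyWriter "1.0"<br />
  <font color="purple">SERVER_MANAGER_XML</font> MyTIFFWriter.xml<br />
  <font color="purple">SERVER_MANAGER_SOURCES</font> vtkMyTIFFWriter.cxx<br />
  <font color="purple">GUI_RESOURCE_FILES</font> MyWriterGUI.xml)<br />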
<br />
=== Adding a Toolbar ===<br />
<br />
Filters, readers, and writers are by far the most common ways of extending ParaView. However, ParaView plugin functionality goes far beyond that. The following sections cover some of the more advanced plugins that can be written.<br />
<br />
Applications use toolbars to provide easy access to commonly used functionality. It is possible to have plugins that add new toolbars to ParaView. The plugin developer implements their own C++ code to handle the callback for each button on the toolbar. Hence, with some understanding of the ParaView server manager framework and the ParaView GUI components, one can perform virtually any operation from a toolbar plugin. <br />
<br />
Please refer to '''Examples/Plugins/SourceToolbar''' for this section. There we are adding a toolbar with two buttons to create a sphere and a cylinder source. To add a toolbar, one needs to implement a subclass of [http://doc.trolltech.com/4.3/qactiongroup.html QActionGroup] which adds a [http://doc.trolltech.com/4.3/qaction.html QAction] for each of the toolbar buttons and then implements the handlers for the callbacks when the user clicks any of the buttons. In the example, '''SourceToolbarActions.h|cxx''' is the QActionGroup subclass that adds the two tool buttons.<br />
<br />
To build the plugin, the CMakeLists.txt file is:<br />
<br />
<font color="green"># We need to wrap for Qt stuff such as signals/slots etc. to work correctly.</font><br />
QT4_WRAP_CPP(MOC_SRCS SourceToolbarActions.h)<br />
<br />
<font color="green"># This is a macro for adding QActionGroup subclasses automatically as toolbars.</font><br />
<font color="violet">ADD_PARAVIEW_ACTION_GROUP</font>(IFACES IFACE_SRCS <br />
<font color="purple">CLASS_NAME</font> SourceToolbarActions<br />
<font color="purple">GROUP_NAME</font> "ToolBar/SourceToolbar")<br />
<br />
<font color="green"># Now create a plugin for the toolbar. Here we pass IFACES and IFACE_SRCS<br />
# which are filled up by the above macro with relevant entries</font><br />
<font color="violet">ADD_PARAVIEW_PLUGIN</font>(SourceToolbar "1.0"<br />
<font color="purple">GUI_INTERFACES</font> ${IFACES}<br />
<font color="purple">SOURCES</font> ${MOC_SRCS} ${IFACE_SRCS} <br />
SourceToolbarActions.cxx)<br />
<br />
For the GROUP_NAME, we are using '''ToolBar/SourceToolbar'''; here '''ToolBar''' is a keyword which implies that the action group is a toolbar (and shows up under '''View | Toolbars''' menu) with the name '''SourceToolbar'''. When the plugin is loaded, this toolbar will show up with two buttons.<br />
<br />
<br />
=== Adding a Menu ===<br />
<br />
Adding a menu to the menu bar of the main window is almost identical to [[#Adding a Toolbar]]. The only difference is that you use the keyword '''MenuBar''' in lieu of '''ToolBar''' in the GROUP_NAME of the action group. So if you change the ADD_PARAVIEW_ACTION_GROUP command above to the following, the plugin will add a menu titled MyActions to the menu bar.<br />
<br />
<font color="violet">ADD_PARAVIEW_ACTION_GROUP</font>(IFACES IFACE_SRCS <br />
<font color="purple">CLASS_NAME</font> SourceToolbarActions<br />
<font color="purple">GROUP_NAME</font> "MenuBar/MyActions")<br />
<br />
If you give the name of an existing menu, then the commands will be added to that menu rather than creating a new one. So, for example, if the GROUP_NAME is '''MenuBar/File''', the commands will be added to the bottom of the File menu.<br />
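<br />
For example, to append the same actions to the bottom of the existing '''File''' menu:<br />
<br />
 <font color="violet">ADD_PARAVIEW_ACTION_GROUP</font>(IFACES IFACE_SRCS <br />
  <font color="purple">CLASS_NAME</font> SourceToolbarActions<br />
  <font color="purple">GROUP_NAME</font> "MenuBar/File")<br />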
<br />
=== Adding an object panel ===<br />
Object Panels are the panels for editing object properties.<br />
<br />
ParaView3 contains automatic panel generation code which is suitable for most objects. If you find your object doesn't have a good auto-generated panel, you can make your own.<br />
<br />
To make your own, see the explanation on [[CustomObjectPanels]].<br />
<br />
Now let's say we have our own panel we want to make for a ConeSource. In this example, we'll just add a simple label saying that this panel came from the plugin. In ConePanel.h:<br />
<br />
<source lang="cpp"><br />
#include "pqAutoGeneratedObjectPanel.h"<br />
#include <QLabel><br />
#include <QLayout><br />
<br />
class ConePanel : public pqAutoGeneratedObjectPanel<br />
{<br />
Q_OBJECT<br />
public:<br />
ConePanel(pqProxy* pxy, QWidget* p)<br />
: pqAutoGeneratedObjectPanel(pxy, p)<br />
{<br />
this->layout()->addWidget(new QLabel("This is from a plugin", this));<br />
}<br />
};<br />
</source><br />
<br />
Then in our CMakeLists.txt file:<br />
<font color="violet">FIND_PACKAGE</font>(ParaView <font color="purple">REQUIRED</font>)<br />
<font color="violet">INCLUDE</font>(${PARAVIEW_USE_FILE})<br />
QT4_WRAP_CPP(MOC_SRCS ConePanel.h)<br />
<font color="violet">ADD_PARAVIEW_OBJECT_PANEL</font>(IFACES IFACE_SRCS <br />
<font color="purple">CLASS_NAME</font> ConePanel<br />
<font color="purple">XML_NAME</font> ConeSource <font color="purple">XML_GROUP</font> sources)<br />
<font color="violet">ADD_PARAVIEW_PLUGIN</font>(GUIConePanel "1.0"<br />
<font color="purple">GUI_INTERFACES</font> ${IFACES}<br />
<font color="purple">SOURCES</font> ${MOC_SRCS} ${IFACE_SRCS})<br />
<br />
=== Adding components to Display Panel (decorating display panels) ===<br />
The display panel is the panel shown on the '''Display''' tab in the '''Object Inspector'''. It is possible to add GUI components to existing [http://www.paraview.org/ParaView3/Doc/Nightly/html/classpqDisplayPanel.html display panels].<br />
<br />
In this example, we want to add a GUI element to the display panel shown for the spreadsheet view to control the size of data that is fetched to the client at one time, referred to as the ''Block Size''.<br />
<br />
For that, we write the implementation in a QObject subclass (say, MySpreadsheetDecorator) with a constructor that takes the pqDisplayPanel to be decorated.<br />
<br />
<source lang="cpp"><br />
...<br />
class MySpreadsheetDecorator : public QObject<br />
{<br />
...<br />
public:<br />
MySpreadsheetDecorator(pqDisplayPanel* panel);<br />
virtual ~MySpreadsheetDecorator();<br />
...<br />
};<br />
</source><br />
<br />
In the constructor, we have access to the panel, hence we can get the ''layout'' from it and add custom widgets to it. In this case, it would be a spin-box or a line edit to enter the block size. <br />
''pqDisplayPanel::getRepresentation()'' provides access to the representation being shown on the panel. We can use [http://www.paraview.org/ParaView3/Doc/Nightly/html/classpqPropertyLinks.html pqPropertyLinks] to link the "BlockSize" property on the representation with the spin-box for the block size so that when the widget is changed by the user, the property changes and vice-versa.<br />
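<br />
The constructor might look like the following. This is only a sketch against the ParaView 3.x API: the ''Links'' member is an assumed pqPropertyLinks instance added to the class, and the spin-box range is arbitrary.<br />
<source lang="cpp"><br />
#include "MySpreadsheetDecorator.h"<br />
#include "pqDisplayPanel.h"<br />
#include "pqRepresentation.h"<br />
#include "vtkSMProxy.h"<br />
#include <QSpinBox><br />
<br />
MySpreadsheetDecorator::MySpreadsheetDecorator(pqDisplayPanel* panel)<br />
  : QObject(panel)<br />
{<br />
  // Add a spin-box for the block size to the panel's existing layout.<br />
  QSpinBox* blockSize = new QSpinBox(panel);<br />
  blockSize->setRange(1, 100000); // arbitrary limits for this sketch<br />
  panel->layout()->addWidget(blockSize);<br />
<br />
  // Keep the widget and the "BlockSize" property on the representation<br />
  // proxy in sync, in both directions.<br />
  vtkSMProxy* proxy = panel->getRepresentation()->getProxy();<br />
  this->Links.addPropertyLink(blockSize, "value", SIGNAL(valueChanged(int)),<br />
    proxy, proxy->GetProperty("BlockSize"));<br />
}<br />
</source><br />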
<br />
Now the CMakeLists.txt to package this plugin looks as follows:<br />
<br />
QT4_WRAP_CPP(MOC_SRCS MySpreadsheetDecorator.h)<br />
<br />
<font color="green"># This is the macro to add a display panel decorator.<br />
# It needs the class name, and the panel types we are decorating. It fills up <br />
# IFACES and IFACE_SRCS with proper values as needed by ADD_PARAVIEW_PLUGIN macro.</font><br />
<font color="violet">ADD_PARAVIEW_DISPLAY_PANEL_DECORATOR</font>(<br />
IFACES IFACE_SRCS <br />
<font color="purple">CLASS_NAME</font> MySpreadsheetDecorator<br />
<font color="purple">PANEL_TYPES</font> pqSpreadSheetDisplayEditor <br />
<font color="green"># <-- This identifies the panel type(s) to decorate<br />
# Our decorator will only be instantiated for the panel types indicated here</font><br />
)<br />
<br />
<font color="green"># create a plugin</font><br />
<font color="violet">ADD_PARAVIEW_PLUGIN</font>(MySpreadsheetDecorator "1.0" <br />
<font color="purple">GUI_INTERFACES</font> ${IFACES} <br />
<font color="purple">SOURCES</font> MySpreadsheetDecorator.cxx ${MOC_SRCS} ${IFACE_SRCS})<br />
<br />
An example panel decorator is available under '''Examples/Plugins/DisplayPanelDecorator''' in the ParaView source.<br />
<br />
=== Autostart Plugins ===<br />
This refers to a plugin which needs to be notified when ParaView starts up or when the plugin is loaded, whichever happens later, and then notified when ParaView quits. An example is in '''Examples/Plugins/Autostart''' in the ParaView source. For such a plugin, we need to provide a QObject subclass (pqMyApplicationStarter) with methods that need to be called on startup and shutdown.<br />
<br />
<source lang="cpp"><br />
...<br />
class pqMyApplicationStarter : public QObject<br />
{<br />
...<br />
public:<br />
// Callback for startup.<br />
// This cannot take any arguments<br />
void onStartup();<br />
<br />
// Callback for shutdown.<br />
// This cannot take any arguments<br />
void onShutdown();<br />
...<br />
};<br />
</source><br />
<br />
The CMakeLists.txt looks as follows:<br />
<br />
<font color="violet">FIND_PACKAGE</font>(ParaView <font color="purple">REQUIRED</font>)<br />
<font color="violet">INCLUDE</font>(${PARAVIEW_USE_FILE})<br />
<br />
QT4_WRAP_CPP(MOC_SRCS pqMyApplicationStarter.h)<br />
<br />
<font color="green"># Macro for auto-start plugins. We specify the class name<br />
# and the methods to call on startup and shutdown on an instance of that class.<br />
# It fills IFACES and IFACE_SRCS with proper values as needed by ADD_PARAVIEW_PLUGIN macro.</font><br />
<font color="violet">ADD_PARAVIEW_AUTO_START</font>(IFACES IFACE_SRCS <br />
<font color="purple">CLASS_NAME</font> pqMyApplicationStarter <font color="green"># the class name for our class</font><br />
<font color="purple">STARTUP</font> onStartup <font color="green"># specify the method to call on startup</font><br />
<font color="purple">SHUTDOWN</font> onShutdown <font color="green"># specify the method to call on shutdown</font><br />
)<br />
<br />
<font color="green"># Create a plugin for this starter </font><br />
<font color="violet">ADD_PARAVIEW_PLUGIN</font>(Autostart "1.0" <br />
<font color="purple">GUI_INTERFACES</font> ${IFACES} <br />
<font color="purple">SOURCES</font> pqMyApplicationStarter.cxx ${MOC_SRCS} ${IFACE_SRCS})<br />
<br />
=== Adding a custom view ===<br />
ParaView contains a render view for rendering 3D images. It also contains chart views for visualizing data as line charts and histogram charts. You may want to create a custom view that presents the data in your own way.<br />
<br />
For this example, we'll just make a simple Qt widget with labels showing the displays that have been added to the view.<br />
<br />
To make a custom view, we need both client and server side plugins.<br />
<br />
For our server side, we simply have:<br />
<source lang="xml"><br />
<ServerManagerConfiguration><br />
<ProxyGroup name="displays"><br />
<GenericViewDisplayProxy name="MyDisplay"<br />
base_proxygroup="displays" base_proxyname="GenericViewDisplay"><br />
</GenericViewDisplayProxy><br />
</ProxyGroup><br />
<ProxyGroup name="views"><br />
<ViewModuleProxy name="MyViewViewModule"<br />
base_proxygroup="rendermodules" base_proxyname="ViewModule"<br />
display_name="MyDisplay"><br />
</ViewModuleProxy><br />
</ProxyGroup><br />
<ProxyGroup name="filters"><br />
<SourceProxy name="MyExtractEdges" class="vtkExtractEdges"<br />
label="My Extract Edges"><br />
<InputProperty<br />
name="Input"<br />
command="SetInputConnection"><br />
<ProxyGroupDomain name="groups"><br />
<Group name="sources"/><br />
<Group name="filters"/><br />
</ProxyGroupDomain><br />
<DataTypeDomain name="input_type"><br />
<DataType value="vtkDataSet"/><br />
</DataTypeDomain><br />
</InputProperty><br />
<Hints><br />
<View type="MyView"/><br />
</Hints><br />
</SourceProxy><br />
</ProxyGroup><br />
</ServerManagerConfiguration><br />
</source><br />
<br />
We define "MyDisplay" as a simple display proxy, and "MyViewViewModule" as a simple view module.<br />
We have our own filter "MyExtractEdges" with a hint saying it prefers to be shown in a view of type "MyView." So if we create a MyExtractEdges in ParaView3, it'll automatically be shown in our custom view.<br />
<br />
We build the server plugin with a CMakeLists.txt file as:<br />
<font color="violet">FIND_PACKAGE</font>(ParaView REQUIRED)<br />
<font color="violet">INCLUDE</font>(${PARAVIEW_USE_FILE})<br />
<font color="violet">ADD_PARAVIEW_PLUGIN</font>(SMMyView "1.0" <font color="purple">SERVER_MANAGER_XML</font> MyViewSM.xml)<br />
<br />
<br />
Our client-side plugin will contain an extension of pqGenericViewModule.<br />
We can let ParaView give us a display panel for these displays, or we can make our own, derived from pqDisplayPanel. In this example, we'll make a simple display panel.<br />
<br />
We implement MyView in MyView.h:<br />
<source lang="cpp"><br />
#include "pqGenericViewModule.h"<br />
#include <QMap><br />
#include <QLabel><br />
#include <QVBoxLayout><br />
#include <vtkSMProxy.h><br />
#include <pqDisplay.h><br />
#include <pqServer.h><br />
#include <pqPipelineSource.h><br />
<br />
/// a simple view that shows a QLabel with the display's name in the view<br />
class MyView : public pqGenericViewModule<br />
{<br />
Q_OBJECT<br />
public:<br />
MyView(const QString& viewtypemodule, const QString& group, const QString& name,<br />
vtkSMAbstractViewModuleProxy* viewmodule, pqServer* server, QObject* p)<br />
: pqGenericViewModule(viewtypemodule, group, name, viewmodule, server, p)<br />
{<br />
this->MyWidget = new QWidget;<br />
new QVBoxLayout(this->MyWidget);<br />
<br />
// connect to display creation so we can show them in our view<br />
this->connect(this, SIGNAL(displayAdded(pqDisplay*)),<br />
SLOT(onDisplayAdded(pqDisplay*)));<br />
this->connect(this, SIGNAL(displayRemoved(pqDisplay*)),<br />
SLOT(onDisplayRemoved(pqDisplay*)));<br />
<br />
}<br />
~MyView()<br />
{<br />
delete this->MyWidget;<br />
}<br />
<br />
/// we don't support save images<br />
bool saveImage(int, int, const QString& ) { return false; }<br />
vtkImageData* captureImage(int) { return NULL; }<br />
<br />
/// return the QWidget to give to ParaView's view manager<br />
QWidget* getWidget()<br />
{<br />
return this->MyWidget;<br />
}<br />
/// returns whether this view can display the given source<br />
bool canDisplaySource(pqPipelineSource* source) const<br />
{<br />
if(!source ||<br />
this->getServer()->GetConnectionID() != source->getServer()->GetConnectionID() ||<br />
QString("MyExtractEdges") != source->getProxy()->GetXMLName())<br />
{<br />
return false;<br />
}<br />
return true;<br />
}<br />
<br />
protected slots:<br />
void onDisplayAdded(pqDisplay* d)<br />
{<br />
QString text = QString("Display (%1)").arg(d->getProxy()->GetSelfIDAsString());<br />
QLabel* label = new QLabel(text, this->MyWidget);<br />
this->MyWidget->layout()->addWidget(label);<br />
this->Labels.insert(d, label);<br />
}<br />
<br />
void onDisplayRemoved(pqDisplay* d)<br />
{<br />
QLabel* label = this->Labels.take(d);<br />
if(label)<br />
{<br />
this->MyWidget->layout()->removeWidget(label);<br />
delete label;<br />
}<br />
}<br />
<br />
protected:<br />
<br />
QWidget* MyWidget;<br />
QMap<pqDisplay*, QLabel*> Labels;<br />
<br />
};<br />
</source><br />
<br />
And MyDisplay.h is:<br />
<source lang="cpp"><br />
#include "pqDisplayPanel.h"<br />
#include <QVBoxLayout><br />
#include <QLabel><br />
<br />
class MyDisplay : public pqDisplayPanel<br />
{<br />
Q_OBJECT<br />
public:<br />
MyDisplay(pqDisplay* display, QWidget* p)<br />
: pqDisplayPanel(display, p)<br />
{<br />
QVBoxLayout* l = new QVBoxLayout(this);<br />
l->addWidget(new QLabel("From Plugin", this));<br />
}<br />
};<br />
</source><br />
<br />
The CMakeLists.txt file to build the client plugin would be:<br />
<font color="violet">FIND_PACKAGE</font>(ParaView <font color="purple">REQUIRED</font>)<br />
<font color="violet">INCLUDE</font>(${PARAVIEW_USE_FILE})<br />
<br />
QT4_WRAP_CPP(MOC_SRCS MyView.h MyDisplay.h)<br />
<br />
<font color="violet">ADD_PARAVIEW_VIEW_MODULE</font>(IFACES IFACE_SRCS <br />
<font color="purple">VIEW_TYPE</font> MyView <font color="purple">VIEW_XML_GROUP</font> views<br />
<font color="purple">DISPLAY_XML</font> MyDisplay <font color="purple">DISPLAY_PANEL</font> MyDisplay)<br />
<br />
<font color="violet">ADD_PARAVIEW_PLUGIN</font>(GUIMyView "1.0" <font color="purple">GUI_INTERFACES</font> ${IFACES}<br />
<font color="purple">SOURCES</font> ${MOC_SRCS} ${IFACE_SRCS})<br />
<br />
We load the plugins in ParaView, create something like a Cone, and then create a "My Extract Edges" filter. The multiview manager will create a new view showing a label such as "Display (151)".<br />
<br />
In ParaView 3.4, there is also a macro, ADD_PARAVIEW_VIEW_OPTIONS(), which allows adding options pages for the custom view, accessible from Edit -> View Settings. The example in ParaView3/Examples/Plugins/GUIView demonstrates this (until more information is put here).<br />
<br />
=== Adding new Representations for 3D View using Plugins <font color="green"> * new in version 3.7</font> ===<br />
<br />
ParaView's 3D view is the most commonly used view for showing polygonal or volumetric data. By default, ParaView provides representation types for showing the dataset as surface, wireframe, points, etc. It is possible to extend this set of available representation types using plugins.<br />
<br />
Before we start looking at how to write such a plugin, we need to gain some understanding of the 3D view and its representations. The 3D view uses three basic representation proxies for rendering all types of data:<br />
* (representations, UnstructuredGridRepresentation) – for vtkUnstructuredGrid or a composite dataset consisting of vtkUnstructuredGrids.<br />
* (representations, UniformGridRepresentation) – for vtkImageData or a composite dataset consisting of vtkImageData.<br />
* (representations, GeometryRepresentation) – for all other data types.<br />
<br />
Each of these representation proxies is basically a composite-representation proxy that uses other representation proxies to do the actual rendering; e.g., GeometryRepresentation uses SurfaceRepresentation for rendering the data as wireframe, points, surface, and surface-with-edges, and OutlineRepresentation for rendering an outline of the data. Each of the three composite-representation proxies provides a property named '''Representation''' which allows the user to pick the representation type to show the data as. The composite-representation proxy has logic to enable one of its internal representations based on the type chosen by the user.<br />
<br />
These three composite representation types are fixed and cannot be changed by plugins. What plugins can do is add more internal representations to any of these three composite representations to support new representation types, which the user can choose using the representation-type combo box on the Display tab or in the toolbar.<br />
<br />
[[Image:Representationplugin.png|800px|Figure: Representation type combo-box allowing user to choose the sub-representation to use]]<br />
<br />
==== Using a new Mapper ====<br />
In this example, we see how to integrate a special poly-data mapper written in VTK into ParaView. Let's say the mapper is called vtkMySpecialPolyDataMapper, which is simply a subclass of vtkPainterPolyDataMapper. In practice, vtkMySpecialPolyDataMapper can internally use different painters to perform special rendering tasks.<br />
<br />
To integrate this mapper into ParaView, we first need to create a vtkSMRepresentationProxy subclass that uses this mapper. In this example, since the mapper is a simple replacement for the standard vtkPainterPolyDataMapper, we can define our representation proxy as a specialization of the "SurfaceRepresentation" as follows:<br />
<br />
<source lang="xml"><br />
<ServerManagerConfiguration><br />
<ProxyGroup name="representations"><br />
<RepresentationProxy name="MySpecialRepresentation"<br />
class="vtkMySpecialRepresentation"<br />
processes="client|renderserver|dataserver"<br />
base_proxygroup="representations"<br />
base_proxyname="SurfaceRepresentation"><br />
<Documentation><br />
This is the new representation type we are adding. This is identical to<br />
the SurfaceRepresentation except that we are overriding the mapper with<br />
our mapper.<br />
</Documentation><br />
<br />
<!-- End of MySpecialRepresentation --><br />
</RepresentationProxy><br />
</ProxyGroup><br />
<br />
</ServerManagerConfiguration><br />
</source><br />
<br />
vtkMySpecialRepresentation is a subclass of vtkGeometryRepresentationWithFaces where in the constructor we simply override the mappers as follows:<br />
<br />
<source lang="cpp"><br />
//----------------------------------------------------------------------------<br />
vtkMySpecialRepresentation::vtkMySpecialRepresentation()<br />
{<br />
// Replace the mappers created by the superclass.<br />
this->Mapper->Delete();<br />
this->LODMapper->Delete();<br />
<br />
this->Mapper = vtkMySpecialPolyDataMapper::New();<br />
this->LODMapper = vtkMySpecialPolyDataMapper::New();<br />
<br />
// Since we replaced the mappers, we need to call SetupDefaults() to ensure<br />
// the pipelines are setup correctly.<br />
this->SetupDefaults();<br />
}<br />
</source><br />
<br />
<br />
Next, we need to register this new type with any (or all) of the three standard composite representations so that it becomes available for the user to choose in the representation-type combo box.<br />
To decide which of the three composite representations to add our representation to, think of the input data types our representation supports. If it can support any type of dataset, then we can add our representation to all three (as is the case with this example). However, if we were adding a representation for volume rendering of vtkUnstructuredGrid, then we would add it only to the UnstructuredGridRepresentation. This is done using the Extension XML tag, which simply means that we are extending the original XML for the proxy definition with the specified additions. To make this representation available as a type to the user, we use the <RepresentationType /> element, where "text" is the text shown for the type in the combo box and "subproxy" specifies the name of the representation subproxy to activate when the user chooses the specified type. Optionally, one can also specify the "subtype" attribute which, if present, is the value set on a property named "Representation" on the subproxy when the type is chosen. This allows a subproxy to provide more than one representation type.<br />
<br />
<source lang="xml"><br />
<ServerManagerConfiguration><br />
<ProxyGroup name="representations"><br />
<br />
<Extension name="GeometryRepresentation"><br />
<Documentation><br />
Extends standard GeometryRepresentation by adding<br />
MySpecialRepresentation as a new type of representation.<br />
</Documentation><br />
<br />
<!-- this adds to what is already defined in PVRepresentationBase --><br />
<RepresentationType subproxy="MySpecialRepresentation"<br />
text="Special Mapper" subtype="1" /><br />
<br />
<SubProxy><br />
<Proxy name="MySpecialRepresentation"<br />
proxygroup="representations" proxyname="MySpecialRepresentation"><br />
</Proxy><br />
<ShareProperties subproxy="SurfaceRepresentation"><br />
<Exception name="Input" /><br />
<Exception name="Visibility" /><br />
<Exception name="Representation" /><br />
</ShareProperties><br />
</SubProxy><br />
</Extension><br />
<br />
</ProxyGroup><br />
</ServerManagerConfiguration><br />
</source><br />
<br />
The CMakeLists.txt file is not much different from the one used for adding a simple filter or reader:<br />
<font color="violet">FIND_PACKAGE</font>(ParaView <font color="purple">REQUIRED</font>)<br />
<font color="violet">INCLUDE</font>(${PARAVIEW_USE_FILE})<br />
<br />
<font color="violet">ADD_PARAVIEW_PLUGIN</font>(Representation "1.0"<br />
<font color="purple">SERVER_MANAGER_XML</font> Representation.xml<br />
<font color="purple">SERVER_MANAGER_SOURCES</font> vtkMySpecialPolyDataMapper.cxx vtkMySpecialRepresentation.cxx<br />
)<br />
<br />
<br />
Source code for this example is available under '''Examples/Plugins/Representation''' in the ParaView source directory.<br />
<br />
==== Using Hardware Shaders ====<br />
One common use case for adding new representations is to employ specialized hardware shaders, written in shading languages such as GLSL or Cg, to perform specialized rendering. Such rendering algorithms can be encapsulated in a special mapper, or in a vtkPainter subclass together with a special mapper that uses the painter.<br />
<br />
In this example, we have a new vtkPainter subclass, vtkVisibleLinesPainter, that uses shaders to prune hidden lines from a wireframe rendering. Following is the CMakeLists.txt:<br />
<font color="violet">FIND_PACKAGE</font>(ParaView <font color="purple">REQUIRED</font>)<br />
<font color="violet">INCLUDE</font>(${PARAVIEW_USE_FILE})<br />
<font color="green"><br />
# Compile in all GLSL files as strings.<br />
# const char* strings with the same names as the files then become available<br />
# for use.</font><br />
<font color="violet">encode_files_as_strings</font>(ENCODED_STRING_FILES<br />
vtkPVLightingHelper_s.glsl<br />
vtkPVColorMaterialHelper_vs.glsl<br />
vtkVisibleLinesPainter_fs.glsl<br />
vtkVisibleLinesPainter_vs.glsl<br />
)<br />
<br />
<font color="violet">add_paraview_plugin</font>(<br />
HiddenLinesRemoval "1.0"<br />
<font color="purple">SERVER_MANAGER_XML</font><br />
HiddenLinesRemovalPlugin.xml<br />
<br />
<font color="purple">SERVER_MANAGER_SOURCES</font><br />
vtkVisibleLinesPolyDataMapper.cxx<br />
<br />
<font color="purple">SOURCES</font> vtkPVColorMaterialHelper.cxx<br />
vtkPVLightingHelper.cxx<br />
vtkVisibleLinesPainter.cxx<br />
${ENCODED_STRING_FILES}<br />
)<br />
<br />
vtkVisibleLinesPolyDataMapper is simply a vtkPainterPolyDataMapper subclass, like the previous example, which inserts the vtkVisibleLinesPainter at the appropriate location in the painter chain. The server manager configuration XML does not look much different from the one in the "Using a new Mapper" example, except that we replace the mapper with vtkVisibleLinesPolyDataMapper.<br />
<br />
Source code for this example is available under Examples/Plugins/HiddenLineRemoval in the ParaView source directory.<br />
<br />
=== Embedding Python Source as Modules ===<br />
<br />
Embedding Python source was first available in ParaView 3.6. Also be aware that you need Python 2.3 or greater to be able to load a plugin with embedded Python source.<br />
<br />
It is possible to take a Python module written in Python source code and embed it into a ParaView plugin. Once the plugin is loaded, the Python interpreter within the ParaView client (or pvpython or pvbatch) can access your module using the Python <tt>import</tt> command. Of course, Python has its own way of distributing modules; however, if your Python source relies on, say, a filter defined in the plugin, or if something else in the plugin, such as a toolbar, relies on executing your Python module, then it can be more convenient to distribute and load everything together by wrapping it all into a single plugin.<br />
<br />
Let us say that you have a file named helloworld.py with the following contents.<br />
<br />
<source lang="python"><br />
def hello():<br />
print "Hello world"<br />
</source><br />
<br />
You can add this to a plugin by simply listing the file in the <tt>PYTHON_MODULES</tt> option of <tt>ADD_PARAVIEW_PLUGIN</tt>. Note that the file must be located in the same directory as the CMakeLists.txt file (more on that later).<br />
<br />
<font color="violet">ADD_PARAVIEW_PLUGIN</font>(MyPythonModules "1.0"<br />
<font color="purple">PYTHON_MODULES</font> helloworld.py<br />
)<br />
<br />
Once you load this plugin into ParaView (no matter how you do it), you can then access this source code by importing the helloworld module.<br />
<br />
<source lang="python"><br />
>>> paraview.servermanager.LoadPlugin('libPythonTest.dylib')<br />
>>> import helloworld<br />
>>> helloworld.hello()<br />
Hello world<br />
</source><br />
<br />
Note that if you are using the ParaView client GUI, you can load the plugin through the GUI's Plugin Manager or by autoloading the plugin (as described in [[#Using Plugins]]) instead of using the <tt>LoadPlugin</tt> Python command. You do, however, need the <tt>import</tt> command.<br />
<br />
It is also possible to have multiple modules and to embed packages with their own submodules (with an arbitrary depth of packages). You can set this up by simply arranging your Python source in directories representing the packages, in the same way you would set them up if you were loading them directly from Python (in fact, that might simplify debugging your Python code). If there is a file named __init__.py, it is taken to be the implementation of the package represented by the directory containing it. This is the same behavior as Python itself.<br />
<br />
<font color="violet">ADD_PARAVIEW_PLUGIN</font>(MyPythonModules "1.0"<br />
<font color="purple">PYTHON_MODULES</font> helloworld.py <font color="green"># Becomes module helloworld</font><br />
hello/__init__.py <font color="green"># Becomes package hello</font><br />
hello/world.py <font color="green"># Becomes module hello.world</font><br />
)<br />
<br />
Note that when Python imports a module, it first imports all packages in which it is contained. The upshot is that if you define a module in a package within your plugin, you must also make sure that the package itself is defined somewhere. In the example above, if you removed the hello/__init__.py source file, you would not be able to load the hello/world.py file. Thus, it is best to include a __init__.py in every package directory you make, even if it is empty.<br />
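This package mechanism can be demonstrated outside ParaView with plain Python. The sketch below (standard library only; the <tt>hello</tt>/<tt>world</tt> names simply mirror the example layout above) builds the same directory structure on disk and shows that importing <tt>hello.world</tt> first imports the <tt>hello</tt> package:<br />

```python
import importlib
import os
import sys
import tempfile

# Recreate the plugin example's layout in a throwaway directory:
#   hello/__init__.py  -> package "hello"
#   hello/world.py     -> module "hello.world"
root = tempfile.mkdtemp()
pkg = os.path.join(root, "hello")
os.mkdir(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()  # empty, but required
with open(os.path.join(pkg, "world.py"), "w") as f:
    f.write("GREETING = 'Hello world'\n")

sys.path.insert(0, root)
importlib.invalidate_caches()

import hello.world  # importing the submodule imports the package first

print("hello" in sys.modules)   # True: the parent package got imported too
print(hello.world.GREETING)     # Hello world
```

If <tt>hello/__init__.py</tt> is deleted, the <tt>import hello.world</tt> line fails, which is exactly why every package directory in the plugin needs one.<br />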
<br />
== Examples ==<br />
<br />
The ParaView CVS repository contains many examples in the Plugins directory. Additional examples are available on this wiki at the [[Plugin Examples]] entry.<br />
<br />
== Plugins in Static Applications ==<br />
<br />
<font color="magenta">This functionality is new in ParaView 3.8</font><br />
<br />
It is possible to import plugins into a ParaView-based application at compile time. When building ParaView-based applications statically, this is the only option to bring in components from plugins. To import a plugin you'd need to use the PV_PLUGIN_IMPORT_INIT and PV_PLUGIN_IMPORT macros defined in vtkPVPlugin.h as follows:<br />
<br />
<source lang="cpp"><br />
#include "vtkPVPlugin.h"<br />
<br />
// Adds required externs.<br />
PV_PLUGIN_IMPORT_INIT(MyFilterPlugin)<br />
PV_PLUGIN_IMPORT_INIT(MyReaderPlugin)<br />
<br />
class MyMainWindow : public QMainWindow<br />
{<br />
// ....<br />
};<br />
<br />
MyMainWindow::MyMainWindow(...)<br />
{<br />
// ... after initialization ...<br />
<br />
// Calls relevant callbacks to load the plugins and update the <br />
// GUI/Server-Manager <br />
PV_PLUGIN_IMPORT(MyFilterPlugin);<br />
PV_PLUGIN_IMPORT(MyReaderPlugin);<br />
<br />
}<br />
</source><br />
<br />
Don't forget to use TARGET_LINK_LIBRARIES() in the CMakeLists.txt file to link to the plugins; otherwise you will get linking errors.<br />
<br />
== Pitfalls ==<br />
=== Tools->Manage Plugins is not visible! ===<br />
Plugins can only be loaded dynamically when ParaView is built with shared libraries. You must recompile ParaView with BUILD_SHARED_LIBS set to ON.<br />
<br />
=== SYNTAX ERROR found in parsing the header file ===<br />
When writing a VTK reader, filter, or writer for use with ParaView, any variable declaration in header files involving VTK classes or your own derived data type has to be wrapped in a "//BTX" "//ETX" pair of comments to tell the parser (ParaView's vtkWrapClientServer) to ignore these lines. The following is an example based on ParaView/Examples/Plugins/Filter/vtkMyElevationFilter.h:<br />
<source lang="cpp"><br />
class VTK_EXPORT vtkMyElevationFilter : public vtkElevationFilter<br />
{<br />
private:<br />
vtkMyElevationFilter(const vtkMyElevationFilter&);<br />
void operator=(const vtkMyElevationFilter&);<br />
<br />
//BTX<br />
vtkSmartPointer<vtkPolyData> Source;<br />
vtkSmartPointer<vtkPolyData> Target;<br />
//ETX<br />
};<br />
</source><br />
<br />
If these tags are omitted, building the plugin will fail with an error message like the following:<br />
<source lang="text"><br />
*** SYNTAX ERROR found in parsing the header file <something>.h before line <line number> ***<br />
</source><br />
<br />
=== Compile error "invalid conversion from ‘vtkYourFiltersSuperClass*’ to ‘vtkYourFilter*’" ===<br />
Any VTK object that needs to be treated as a filter or source has to be a vtkAlgorithm subclass. The particular superclass a filter is derived from has to be given not only in the standard C++ way<br />
<source lang="cpp"><br />
class VTK_EXPORT vtkMyElevationFilter : public vtkElevationFilter<br />
</source><br />
<br />
but additionally declared with the help of the "vtkTypeRevisionMacro". For the example given above:<br />
<source lang="cpp"><br />
class VTK_EXPORT vtkMyElevationFilter : public vtkElevationFilter<br />
{<br />
public:<br />
vtkTypeRevisionMacro(vtkMyElevationFilter, vtkElevationFilter);<br />
}<br />
</source><br />
<br />
Otherwise, compiling the filter will fail with a variety of error messages (depending on superclass) like<br />
<source lang="cpp"><br />
vtkMyElevationFilter.cxx:19: error: no 'void vtkMyElevationFilter::CollectRevisions(std::ostream&)'<br />
member function declared in class 'vtkMyElevationFilter'<br />
</source><br />
or<br />
<source lang="cpp"><br />
vtkMyElevationFilterClientServer.cxx:97: error: invalid conversion from ‘vtkPolyDataAlgorithm*’ to<br />
‘vtkICPFilter*’<br />
</source><br />
<br />
=== Plugin loaded, but invalid ELF header ===<br />
What would cause this?<br />
<br />
=== Undefined symbol _ZTV12vtkYourFilter ===<br />
When you load your plugin, if you see a yellow ! warning triangle that says "undefined symbol....", you need to add<br />
<source lang="cpp"><br />
vtkCxxRevisionMacro(vtkYourFilter, "$Revision$");<br />
</source><br />
to your header file and recompile the plugin.<br />
<br />
=== Mysterious Segmentation Faults in plugins that use custom VTK classes ===<br />
<br />
This primarily concerns plugins that make calls to your own custom "vtkMy" (or whatever you called it) library of VTK extensions.<br />
<br />
Symptoms:<br />
* The plugin will load, but causes a segfault when you try to use it.<br />
* If you use a debugger you may notice that in some cases when your code calls vtkClassA.MethodB, what actually gets called is vtkClassC.MethodD, where MethodB is a virtual member function. This occurs because of different vtable entries in the ParaView-internal versions of the VTK libraries.<br />
<br />
The solution is to make sure that your vtkMy library is compiled against ParaView's internal VTK libraries. Even if you compiled VTK and ParaView using the same VTK sources, you *must not* link against the external VTK libraries. (The linker won't complain, because it will find all the symbols it needs, but this leads to unexpected behaviour.)<br />
<br />
To be explicit, when compiling your vtkMy library, you must set the CMake variable VTK_DIR to point to the 'VTK' subdirectory in the directory in which you built ParaView. (On my system, CMake automatically finds VTK at /usr/lib/vtk-5.2, and I must change VTK_DIR to ~/source/ParaView3/build/VTK.)<br />
<br />
=== "Is not a valid Qt plugin" in Windows ===<br />
<br />
Make sure that all the DLLs that your plugin depends on are on the PATH. If in doubt, try placing your plugin and all its dependent DLLs in the bin dir of your build and load it from there.<br />
<br />
<br />
{{ParaView/Template/Footer}}</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/External_Tutorials&diff=40898VTK/Tutorials/External Tutorials2011-06-18T17:29:28Z<p>Pratikm: /* Examples, Presentations, Seminars, Talks, Tutorials */</p>
<hr />
<div>This page was based on Sebastien Barre's [http://www.barre.nom.fr/vtk/links-examples.html VTK Links: Examples] page. However, it is kept more up to date, and new resources will be regularly added. Note that to understand VTK internals, you might still have to buy the VTK User's Guide (although be warned that it does not cover everything). If you find a nice tutorial/webpage that explains VTK, please add it here! <br />
<br />
== Examples, Presentations, Seminars, Talks, Tutorials ==<br />
# There is a tutorial in the VTK distribution (in /Examples/Tutorial).<br />
#[http://www.cs.uic.edu/~jbell/CS526/Tutorial/Tutorial.html Visualization Toolkit Tutorial]: A very nice introduction to VTK, with code for actual projects included (John Bell). <br />
#[http://www.rug.nl/cit/hpcv/visualisation/VTK/index.html Visualization examples for the The VISUALIZATION TOOLKIT], used in a VTK workshop given at the University of Groningen (RuG): Worth a look, lots of examples.<br />
#[http://www.mcs.anl.gov/~disz/cs-341/colorvis/colorvis.PPT Introduction to Visualization with VTK]: PowerPoint presentation; no code, very nice illustrations (T. L. Disz, Univ. of Chicago)<br />
#[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.138.514&rep=rep1&type=pdf Visualizing with VTK : A Tutorial]: A tutorial given by the writers of the VTK Book (W. Schroeder, L. Avila, W. Hoffman)<br />
#[http://www.bu.edu/tech/research/training/tutorials/vtk/ Using VTK to Visualize Scientific Data (online tutorial)]: Nice Introduction with serious examples in Tcl (BU)<br />
#[http://www.ncsa.illinois.edu/~semeraro/PPT/VTK_TUTORIAL/v3_document.htm VTK Tutorial: How to Create Visualization Applications with VTK]: Explains basic VTK objects, code snippets are present (Dave Semeraro)<br />
#[http://www.osc.edu/supercomputing/training/vtk/vtk_0505.pdf Scientific Visualization with VTK, The Visualization Toolkit]: A hands-on introduction; uses examples that ship with VTK to illustrate VTK concepts (Ohio Supercomputer Center, OSC)<br />
#[http://www.csc.kth.se/utbildning/kth/kurser/DD2257/visual09/VTK-090325-GT.pdf Introduction to VTK]: Nice exposition of basic VTK internals, but very brief (Gustav Taxen)<br />
#[http://vizisaw.dubmun.com/index.html Vizi-SAW VTK Tutorial]<br />
<br />
== VTK Pipeline ==<br />
#[http://www.ci-ra.org/Documents/Seminaires/05-2009/ParaView.Introduction.pdf ParaView Visualization Course: The VTK Visualization Pipeline and ParaView]: Introduction to ParaView, but includes a nice discussion of VTK pipeline (Jean M. Favre)<br />
#[http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 Upgrading to and Understanding the new VTK Pipeline]: Explains the concepts of vtkInformation etc.<br />
#[http://www.vtk.org/Wiki/images/8/80/Pipeline.pdf Proposal for the New VTK Pipeline]: This new pipeline is currently in use; the document uses the term 'vtkProcessor'; just replace this by 'vtkAlgorithm' throughout<br />
<br />
== Computer Graphics Courses that teach VTK ==<br />
# [http://web.cs.wpi.edu/~matt/courses/cs563 CS563:Advanced Topics in Computer Graphics Home Page]<br />
# [http://www.cs.rpi.edu/~cutler/classes/visualization/F10/ RPI CSCI-4972 Introduction to Visualization Fall 2010] - slides available [https://docs.google.com/leaf?id=0B8yIfGqnlfSoYmMzNGNmMjQtZWUzYS00MGIwLWI2ODItMzBiNzkwNTY5MTY1&hl=en_US here]<br />
# [http://www.inf.ed.ac.uk/teaching/courses/vis/ Visualization Module Home Page] <br />
<br />
== Books ==<br />
Apart from the [http://www.vtk.org/VTK/help/book.html VTK Book] and the [http://www.vtk.org/VTK/help/book.html User's Guide], the following books may be of interest for VTK users/developers:<br />
#[http://www.bioimagesuite.org/vtkbook/index.html Introduction to Programming for Image Analysis with VTK]: A book on image analysis using VTK, free PDF available at the site. Very nice introduction to VTK, uses Tcl in the beginning but C++ at the end. Includes an example of writing your own VTK filter. Overall, a highly recommended read (Xenophon Papadmetris)<br />
<br />
<!--<br />
No links could be found; if they are found, then please:<br />
1) include the links<br />
2) make them visible by shifting the comment tags<br />
<br />
== Advanced Computer Graphics and Data Visualization - Term Projects ==<br />
(T. D. Citriniti, Rensselaer Polytechnic Institute, NY)<br />
" This is a collection of Term Project completed by the students in "Advanced Computer Graphics and Data Visualization" 35-6961 for the Fall 1995 semester." And these of Spring 97. Very interesting projects, but NO code (should ask for it ?).<br />
Problem: No link found<br />
== Jumping Jack Icon/Glyph - Scientific Visualization Project ==<br />
(M. Srdanovic)<br />
"The project is to combine 6 MRI images using some form of glyph. I decided to use a glyph that resembles a person doing "jumping jacks". Each limb of the icon will be oriented based on the intensity of pixels from each of the 6 images.". Part of the Scientific Visualization (91.541) course (Dr. G.G. Grinstein) at UMass Lowell Computer Science Department.<br />
Problem: No link found<br />
== CS 5630 - Scientific Visualization ==<br />
(C. Johnson , University of Utah)<br />
formerly known as CS 523 : various solutions (+ code) : Class assignments - see 1 to 4 (T. Robbins), and also Y.-K. Yang assignments.<br />
Problem: No link found<br />
==(J. Kaandorp, R. Belleman & Z. Zhao, Univ. of Amsterdam)==<br />
Note : "If you plan to use any of the material provided by our invited speakers for your own course, please contact me so that I can check for an OK with the authors.".<br />
Problem: No link found<br />
--><br />
<br />
<br />
{{VTK/Template/Footer}}</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/VTK_Terminology&diff=40897VTK/Tutorials/VTK Terminology2011-06-18T17:27:01Z<p>Pratikm: </p>
<hr />
<div><small>Please see [http://www.cs.uic.edu/~jbell/CS526/Tutorial/Tutorial.html this] for a very nice introduction to VTK, and especially the terminology</small><br />
<br />
VTK conceptually divides the "pipeline" into two segments: a data processing segment and a rendering segment.<br />
<br />
==Data Processing Segment==<br />
The data processing segment consists of data sources and data filters. These objects give the user direct access to the data.<br />
<br />
===Filter===<br />
A filter is an object that takes one or many inputs, operates on the inputs, and produces one or many outputs.<br />
<br />
===Source===<br />
A source is a special case of a filter where there are 0 inputs.<br />
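The two definitions above can be sketched in a few lines of plain Python. This is a toy model for illustration only, not the VTK API; it just shows that a source is nothing more than a filter with zero inputs:<br />

```python
# Toy model of the data processing segment (illustration only, not the VTK API).
class Filter:
    """Takes zero or more inputs, operates on them, produces an output."""
    def __init__(self, *inputs):
        self.inputs = list(inputs)

    def execute(self):
        upstream = [f.execute() for f in self.inputs]  # pull from inputs
        return self.operate(upstream)

    def operate(self, upstream):
        raise NotImplementedError

class Source(Filter):
    """A source is just a filter with zero inputs."""
    def __init__(self, value):
        super().__init__()  # no upstream connections
        self.value = value

    def operate(self, upstream):
        return self.value

class Doubler(Filter):
    def operate(self, upstream):
        return [2 * x for x in upstream[0]]

pipeline = Doubler(Source([1, 2, 3]))
print(pipeline.execute())  # [2, 4, 6]
```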
<br />
==Rendering Segment==<br />
The rendering segment is essentially an OpenGL wrapper. In this segment of the pipeline, the user does not have direct access to the data - it is in a form that is most efficient for the graphics hardware. The rendering segment consists of mappers, actors, renderers, and render windows. <br />
<br />
===Mapper===<br />
A mapper serves as the transition between the two segments of the pipeline. It can be thought of as a filter that converts data from a manipulable form (e.g. something from the data processing segment) to a form that can be rendered.<br />
<br />
===Actor===<br />
An actor is an "OpenGL object". That is, if you have a geometric object or a model of some sort, you will have to create an actor for it to give to the renderer.<br />
<br />
===Render Window===<br />
This is the actual window that the operating system opens.<br />
<br />
===Renderer===<br />
A render window can contain one or many renderers. The renderer is where the data is actually drawn to the screen.</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Learning_VTK&diff=40869VTK/Learning VTK2011-06-17T19:11:08Z<p>Pratikm: </p>
<hr />
<div>VTK is an extremely powerful and versatile tool, but has a reputation for having a steep learning curve. Here, we hope to organize the resources that may be of use to both VTK users and developers in understanding and using VTK effectively. The key to successfully using VTK are:<br />
# understanding the structure of the object-oriented hierarchy and <br />
# understanding its pipeline architecture.<br />
<br />
If you are new to VTK, you might want to check out:<br />
<br />
*[[VTK/Tutorials/External_Tutorials|External Tutorials]] - presentations on VTK by users and developers around the world. This should be helpful to 'get started' with VTK.<br />
<br />
Once you have done that, the best way to learn VTK in detail may be:<br />
#Get the [http://kitware.com/products/books/vtkbook.html VTK Textbook] and the [http://kitware.com/products/books/vtkguide.html VTK Users Guide].<br />
#Actually see/write VTK code. This is a great way of learning VTK, and if there is something you wish to know in greater detail, have a look at the very well organized and commented source code for VTK. The following resources may also help you: <br />
##[[VTK/Examples|Examples]] - Short code examples demonstrating how many of the VTK classes can be used. This is quite an extensive collection.<br />
##[[VTK/Tutorials|Tutorials]] - Slightly longer than the examples, these tutorials explain concepts and demonstrate more complicated tasks in VTK. Note that these cover only certain topics that are not clearly elucidated in the [http://kitware.com/products/books/vtkguide.html VTK Users Guide] and are not meant as a 'getting started' manual for absolute beginners.<br />
#More resources available to help the user are:<br />
##[http://www.vtk.org/doc/nightly/html/ The VTK Doxygen Man Pages]<br />
##[http://www.vtk.org/mailman/listinfo/vtkusers The VTK Users Mailing List]<br />
##[[VTK FAQ|Frequently asked questions (FAQ)]]</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/New_Pipeline&diff=40852VTK/Tutorials/New Pipeline2011-06-17T03:32:38Z<p>Pratikm: </p>
<hr />
<div><small>The introductory section has been blatantly copied from [http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 here]</small><br />
<br />
This section will introduce the classes and concepts used in the new VTK<br />
pipeline. You should be familiar with the very basic design of the old<br />
pipeline before delving into this. The new pipeline was designed to reduce complexity<br />
while at the same time provide more flexibility. In the old pipeline the pipeline<br />
functionality and mechanics were contained in the data object and filters. In the new<br />
pipeline this functionality is contained in a new class (and its subclasses) called<br />
'''vtkExecutive'''. <br />
<br />
==Introduction==<br />
There are four key classes that make up the new pipeline. They are:<br />
<br />
*[http://www.vtk.org/doc/nightly/html/classvtkInformation.html '''vtkInformation''']: <br />
provides the flexibility for the pipeline to grow. Most of the methods and meta-information storage make use of this class. vtkInformation is a map-based data structure that supports heterogeneous key-value operations with compile-time type checking. There is also a vtkInformationVector class for storing vectors of<br />
information objects. When passing information up or down the pipeline (or from<br />
the executive to the algorithm) this is the class to use.<br />
*[http://www.vtk.org/doc/nightly/html/classvtkDataObject.html '''vtkDataObject''']:<br />
in the past this class both stored data and handled some of the<br />
pipeline logic. In the new pipeline this class is only supposed to store data. In<br />
practice there are some pipeline methods in the class for backwards compatibility<br />
so that all of VTK doesn’t break, but the goal is that vtkDataObject should only<br />
be about storing data. vtkDataObject has an instance of vtkInformation that can be<br />
used to store key-value pairs in. For example the current extent of the data object<br />
is stored in there but the whole extent is not, because that is a pipeline attribute<br />
containing information about a specific pipeline topology.<br />
*[http://www.vtk.org/doc/nightly/html/classvtkAlgorithm.html '''vtkAlgorithm''']:<br />
an algorithm is the new superclass for all filters/sources/sinks in<br />
VTK. It is basically the replacement for vtkSource. Like vtkDataObject,<br />
vtkAlgorithm should know nothing about the pipeline and should only be an<br />
algorithm/function/filter. Call it with the correct arguments and it will produce<br />
results. It also has a vtkInformation instance that describes the properties of the<br />
algorithm and it has information objects that describe its input and output port<br />
characteristics. The main method of an algorithm is '''ProcessRequest'''.<br />
*[http://www.vtk.org/doc/nightly/html/classvtkExecutive.html '''vtkExecutive''']:<br />
contains the logic of how to connect and execute a pipeline. This<br />
class is the superclass of all executives. Executives are distributed (as opposed to<br />
centralized) and each filter/algorithm has its own executive that communicates<br />
with other executives. vtkExecutive has a subclass called<br />
[http://www.vtk.org/doc/nightly/html/classvtkDemandDrivenPipeline.html vtkDemandDrivenPipeline] which in turn has a subclass called<br />
[http://www.vtk.org/doc/nightly/html/classvtkStreamingDemandDrivenPipeline.html vtkStreamingDemandDrivenPipeline]. vtkStreamingDemandDrivenPipeline<br />
provides most of the functionality that was found in the old VTK pipeline and is<br />
the default executive for all algorithms if you do not specify one.<br />
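As a rough illustration of the vtkInformation idea, here is a toy model in Python. This is not the VTK API: VTK's keys are C++ objects whose value types are checked at compile time, while this sketch checks at run time. It only shows the shape of the concept, a map keyed by typed key objects:<br />

```python
# Toy model of vtkInformation-style typed keys (illustration only, not the
# VTK API): each key names a value and the type that value must have.
class Key:
    def __init__(self, name, value_type):
        self.name = name
        self.value_type = value_type

class Information:
    def __init__(self):
        self._map = {}

    def set(self, key, value):
        if not isinstance(value, key.value_type):
            raise TypeError(f"{key.name} expects a {key.value_type.__name__}")
        self._map[key.name] = value

    def get(self, key):
        return self._map[key.name]

SCALAR_TYPE = Key("SCALAR_TYPE", str)  # hypothetical key for illustration

info = Information()
info.set(SCALAR_TYPE, "double")
print(info.get(SCALAR_TYPE))  # double
```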
<br />
<br />
<br />
Let us first look at the vtkAlgorithm class. It may seem odd that a class with no notion of<br />
a pipeline has one key method called ProcessRequest. At its simplest, an algorithm has a<br />
basic function to take input data and produce output data. This is a downstream request (specifically REQUEST_DATA) that all algorithms should implement. (Requests for information or data flowing from input to output are called '''downstream requests'''; requests for information or data flowing from output to input are called '''upstream requests'''.) But algorithms<br />
can do more than just produce data; they also have characteristics or metadata that they<br />
can provide. For example, an algorithm can provide information about what type of<br />
output it will produce when you execute it. An imaging algorithm might only be capable<br />
of producing double results. The algorithm can specify this by responding to another<br />
down-stream request called REQUEST_INFORMATION. Consider the following code<br />
fragment:<br />
<source lang="cpp"><br />
int vtkMyAlgorithm::ProcessRequest(<br />
  vtkInformation* request,<br />
  vtkInformationVector** inputVector,<br />
  vtkInformationVector* outputVector)<br />
{<br />
  if (request->Has(vtkDemandDrivenPipeline::REQUEST_INFORMATION()))<br />
    {<br />
    // Specify that the output (only one for this filter) will be double.<br />
    vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
    outInfo->Set(vtkDataObject::SCALAR_TYPE(), VTK_DOUBLE);<br />
    return 1;<br />
    }<br />
  return this->Superclass::ProcessRequest(request, inputVector, outputVector);<br />
}<br />
</source><br />
This method takes three information objects as input. The first is the request which<br />
specifies what you are asking the algorithm to do. Typically this is just one key such as<br />
REQUEST_INFORMATION. The next two arguments are information vectors one for<br />
the inputs to this algorithm and one for the outputs of this algorithm. In the above<br />
example no input information was used. The output information vector was used to get<br />
the information object associated with the first output of this algorithm. Into that<br />
information was placed a key-value pair specifying that it would produce results of type<br />
double. Any requests that the algorithm doesn’t handle should be forwarded to the<br />
superclass.<br />
<br />
The pipeline topology in the new pipeline is a little different from the old one. In the new<br />
pipeline you connect the output port of one algorithm to the input port of another<br />
algorithm. For example,<br />
<source lang ="cpp"><br />
alg1->SetInputConnection(inPort, alg2->GetOutputPort(outPort));<br />
</source><br />
Using this terminology a port is like a pin on an integrated circuit. An algorithm has some<br />
input ports and some output ports. A connection is a “connection” between two ports. So<br />
to connect two algorithms you make a connection between the output port of one<br />
algorithm and the input port of another algorithm. Any input port in the new pipeline can<br />
be specified as 'repeatable' and/or 'optional':<br />
#'''Repeatable''' means that more than one connection can be made to this input port (such as for append filters). <br />
#'''Optional''' means that the input is not required for execution. <br />
<br />
While ports can be repeatable, there is still a<br />
need for multiple ports. Your algorithm should have multiple ports when the concept,<br />
data type, or semantics of a port are different. So AppendFilter only needs one repeatable<br />
input port because it treats all of its inputs the same. Glyph in contrast should have two<br />
ports, one for the input points and one for the glyph model because these are two distinct<br />
concepts. Of course the old style of SetInput, GetOutput connections will work with<br />
existing algorithms as well. <br />
<br />
So in the new pipeline outputs are referred to by port number<br />
while inputs are referred to by both their port number and connection number (because a<br />
single input port can have more than one connection).<br />
<br />
==Typical Pipeline Execution==<br />
Let us take a quick look at the typical execution of a pipeline in VTK. <br />
* First the user instantiates an algorithm such as vtkImageGradient. When the algorithm is instantiated, it will automatically create a default executive and connect the algorithm and executive together. The algorithm will also call '''FillInputPortInformation''' and '''FillOutputPortInformation''' on each input and output port respectively. These two methods should set up the static characteristics of the input and output ports, such as data type requirements and whether the port is optional or repeatable. For example, by default all subclasses of vtkImageAlgorithm are assumed to take a vtkImageData as input:<br />
<br />
<br />
<source lang="cpp"><br />
int vtkImageAlgorithm::FillInputPortInformation(int vtkNotUsed(port), vtkInformation* info)<br />
{<br />
  info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkImageData");<br />
  return 1;<br />
}<br />
</source><br />
<br />
<br />
This can be overridden by subclasses as required. <br />
* Once the algorithm and executive are instantiated they will be connected to other algorithm executive pairs. If the new SetInputConnection signature is used (and it should be used) then this just stores the connectivity information. The old VTK pipeline used the data objects to store connectivity and thus required that the data objects be instantiated prior to calling GetOutput. <br />
* Once the entire pipeline is instantiated and connected, it will be executed (typically as the result of a Render() call). Typically the first request an algorithm will see upon execution is REQUEST_DATA_OBJECT (see [http://www.vtk.org/doc/nightly/html/classvtkExecutive.html vtkExecutive], [http://www.vtk.org/doc/nightly/html/classvtkDataObject.html vtkDataObject] and their subclasses for all the possible keys and requests). This request asks the algorithm to create an instance of vtkDataObject (or an appropriate subclass) for each of its output ports. If you handle this request you can set the output port's output data to be whatever type you want (see [http://www.vtk.org/doc/nightly/html/classvtkDataSetAlgorithm.html vtkDataSetAlgorithm] for an example). If the algorithm does not handle the request then by default the executive will look at the output port information to see what type the output port has been set to (from FillOutputPortInformation). If this is a concrete type then the executive will instantiate an instance of that type and use it; if it isn’t concrete then the output cannot be instantiated automatically (I can’t remember if it just fails with an error or has another fallback position).<br />
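The REQUEST_DATA_OBJECT fallback described in the last bullet can be sketched as follows. This is a toy Python model, not the VTK API; the class and key names just echo the text above:<br />

```python
# Toy model of the REQUEST_DATA_OBJECT fallback (illustration only, not the
# VTK API): if the algorithm did not create its output, the executive looks
# at the concrete type declared on the output port and instantiates it.
class ImageData: pass
class PolyData: pass

DATA_TYPES = {"vtkImageData": ImageData, "vtkPolyData": PolyData}

def check_data_object(port_info, output):
    if output is not None:               # the algorithm already made one
        return output
    type_name = port_info.get("DATA_TYPE_NAME")
    if type_name in DATA_TYPES:          # a concrete type was declared
        return DATA_TYPES[type_name]()
    raise RuntimeError("output port has no concrete data type")

out = check_data_object({"DATA_TYPE_NAME": "vtkImageData"}, None)
print(type(out).__name__)  # ImageData
```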
<br />
===Request Information===<br />
At this point the pipeline is instantiated, connected, and the data objects have been<br />
instantiated to store the data. The next request an algorithm will typically see is<br />
REQUEST_INFORMATION. This request asks the algorithm to provide as much<br />
information as it can about what the output data will look like once the algorithm has<br />
generated it. Typically an algorithm will look at the information provided about its inputs<br />
and try to specify what it can about its outputs. In many image processing filters quite a<br />
bit can be specified such as the WHOLE_EXTENT, SCALAR_TYPE,<br />
SCALAR_NUMBER_OF_COMPONENTS, ORIGIN, SPACING, etc. The rule here is to<br />
''provide or compute as much information as you can without actually executing'' (or<br />
reading in the entire data file) ''and without taking up significant CPU time''. For example<br />
an image reader should read the header information from the file to get what information<br />
it can out of it, but it should not read in the entire image so that it can compute the scalar<br />
range of the data. When providing information about an output, an algorithm is not<br />
limited to the current information keys (such as WHOLE_EXTENT) that are provided by<br />
VTK. Part of the new pipeline design is that it can be easily extended. You can define<br />
your own keys and then in the REQUEST_INFORMATION request you can add those<br />
keys and their values to the output information objects. You can also specify that you<br />
want those keys to be passed down (or up) the pipeline by adding them to the<br />
KEYS_TO_COPY. That way you could have a specialized reader that populates the<br />
information with special keys and then have a writer or mapper downstream that uses<br />
those keys. <br />
<br />
By default executives will first copy the input’s information to the output’s<br />
information. You only need to handle the cases where it is different.<br />
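This copy-then-override behavior can be sketched as follows (a toy Python model, not the VTK API; the key names are borrowed from the text above):<br />

```python
# Sketch of the executive's default REQUEST_INFORMATION behavior
# (illustration only, not the VTK API): input information is copied to the
# output first, and the algorithm overrides only the keys that change.
def request_information(input_info, overrides):
    output_info = dict(input_info)   # executive's default: copy input to output
    output_info.update(overrides)    # algorithm handles only what differs
    return output_info

in_info = {"WHOLE_EXTENT": (0, 99, 0, 99, 0, 0), "SCALAR_TYPE": "unsigned char"}
out_info = request_information(in_info, {"SCALAR_TYPE": "double"})
print(out_info["WHOLE_EXTENT"])  # passed through unchanged
print(out_info["SCALAR_TYPE"])   # overridden by the algorithm
```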
<br />
===Request Update Extent===<br />
The next request is typically REQUEST_UPDATE_EXTENT. To fulfill this request an<br />
algorithm should take the update extents in its output’s information and then set the<br />
correct update extents in its input’s information. As with REQUEST_INFORMATION<br />
there is a default behavior by the executive and you only need to handle cases where it<br />
changes. (see [http://www.vtk.org/doc/nightly/html/classvtkImageGradient.html vtkImageGradient] for an example)<br />
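As a toy one-dimensional illustration of the idea (plain Python, not the VTK API): a gradient-like filter that needs one layer of neighboring samples would pad the extent requested of its output, clamped to the input's whole extent, and ask its input for the padded extent:<br />

```python
# Toy 1-D version of REQUEST_UPDATE_EXTENT handling (illustration only, not
# the VTK API): pad the requested output extent by one ghost layer, clamped
# to the whole extent of the input.
def request_update_extent(output_extent, whole_extent, ghost=1):
    lo, hi = output_extent
    lo_all, hi_all = whole_extent
    return (max(lo_all, lo - ghost), min(hi_all, hi + ghost))

print(request_update_extent((10, 20), (0, 99)))  # (9, 21)
print(request_update_extent((0, 20), (0, 99)))   # (0, 21): clamped at 0
```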
<br />
===Request Data===<br />
Finally REQUEST_DATA will be called and the algorithm should fill in the output ports<br />
data objects.<br />
<br />
==Executives==<br />
<br />
The class hierarchy for the mainstream VTK executives is as follows.<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=BT<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
vtkCompositeDataPipeline -> vtkStreamingDemandDrivenPipeline -> vtkDemandDrivenPipeline -> vtkExecutive;<br />
} <br />
<br />
</graphviz><br />
</center><br />
Below, we provide some information about these executives that is not necessarily found in VTK User's Guide. If you are not familiar with VTK's pipeline execution model, you should read the Managing Pipeline Execution chapter in the book.<br />
<br />
=== vtkExecutive ===<br />
This class is the superclass for all executives. It is pretty abstract and provides little functionality. Important ones:<br />
* An executive has an algorithm.<br />
* An executive manages (i.e. has) the input and output information objects of the algorithm. This means that the pipeline graph is stored by a set of executives, not algorithms.<br />
<br />
The most important function in vtkExecutive is ProcessRequest(). This method does two things:<br />
# Forward the request upstream (if the FORWARD_DIRECTION is vtkExecutive::RequestUpstream) or downstream (if the FORWARD_DIRECTION is vtkExecutive::RequestDownstream; note that this is not implemented).<br />
# Pass the request to the algorithm by calling CallAlgorithm(), which calls ProcessRequest() on the algorithm. This can happen before and/or after the request is forwarded, depending on whether ALGORITHM_BEFORE_FORWARD and/or ALGORITHM_AFTER_FORWARD is set.<br />
<br />
CallAlgorithm() calls CopyDefaultInformation() before passing the request to the algorithm. The goal of this function is to copy certain information (such as update requests or meta-information) from output to input (when the algorithm is invoked before forwarding - for example, in REQUEST_UPDATE_EXTENT) or from input to output (when the algorithm is invoked after forwarding - for example in REQUEST_INFORMATION).<br />
<br />
'''Note on centralized executives:''' This implementation does not allow us to use centralized executives (one executive that manages more than one algorithm) because the executive has an algorithm AND the executive stores the pipeline graph. However, it is possible to create a meta-executive (an executive to rule them all) that is centralized. This executive would have to manage the flow of information itself. This can be done by subclassing the executive class to disable forwarding. The centralized executive can the delegate the handling of each pass to the distributed executives but manage forwarding itself. <br />
<br />
=== vtkDemandDrivenPipeline ===<br />
<br />
This executive implements a demand-driven (pull) pipeline. It recognizes 3 passes in ProcessRequest():<br />
<br />
# REQUEST_DATA_OBJECT: This is where the algorithm is supposed to create its output data objects. It is a ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteDataObject() (a virtual member function). This first calls CallAlgorithm() and then CheckDataObject(). If CallAlgorithm() does not create output data objects, CheckDataObject() tries to create them based on a vtkDataObject::DATA_TYPE_NAME defined in the output port information.<br />
# REQUEST_INFORMATION: This is where the algorithm is supposed to provide meta-data. It is a ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteInformation() (a virtual member function). Note: For backwards compatibility purposes, ExecuteInformation() calls CopyInformationToPipeline() on the output data object. In the old pipeline, the meta-information was provided by setting it on the output data object. This copies such meta-information from the data objects to the output information.<br />
# REQUEST_DATA: This is where the algorithm is supposed to provide (heavy) data. It is a ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteData() (a virtual member function). ExecuteData() calls ExecuteDataStart(), CallAlgorithm() and ExecuteDataEnd(). ExecuteDataStart() takes case of initializing the outputs whereas ExecuteDataEnd() performs finalization such as marking the outputs as generated by calling DataHasBeenGenerated().<br />
<br />
Note that all of these passes make sure to skip execution if the required information was already generated and if nothing upstream changed.<br />
<br />
=== vtkStreamingDemandDrivenPipeline ===<br />
<br />
This executive adds support for [[VTK/Streaming | streaming]] to vtkDemandDrivenPipeline. Streaming is usually performed by processing a subset of a dataset in each invocation of the pipeline and accumulating the results. Currently, vtkStreamingDemandDrivenPipeline supports 3 ways of subsetting the data:<br />
<br />
# '''Extents''': Extents are only applicable to structured datasets. An extent is a logical subset of a structured dataset defined by providing min and max IJK values (VOI).<br />
# '''Pieces''': Pieces are only applicable to unstructured datasets. A piece is a subset of an unstructured dataset. What a "piece" means is determined by the producer of such dataset.<br />
# '''Time steps''': The pipeline can request a particular time step at each invocation for time series data.<br />
<br />
vtkStreamingDemandDrivenPipeline adds 2 new pipeline passes to support streaming:<br />
<br />
# REQUEST_UPDATE_EXTENT: This pass is where the consumer asks for a particular subset from its input. It is a ALGORITHM_BEFORE_FORWARD pass. This is where algorithms receive extent/piece and time step request from their consumer and copy or modify this request upstream. In this pass, if the algorithms produces unstructured data but consumes structured data, the executive uses an extent translator to automatically convert the piece request to an extent request.<br />
# REQUEST_UPDATE_EXTENT_INFORMATION: This optional pass is where meta-information specific to the particular subset is requested. It is a ALGORITHM_AFTER_FORWARD pass. This pass is commonly used for dynamic streaming where the consumer fetches meta-information such as bounds or scalar range for a particular piece before deciding whether to update it.<br />
<br />
=== vtkCompositeDataPipeline ===<br />
<br />
This executive adds support for iterating over multiple blocks and/or time steps. This is best described using an example pipeline. For example,<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> Contour -> Mapper;<br />
} <br />
</graphviz><br />
</center><br />
Here, the Ensight reader always produces a multi-block dataset whereas the contour filter can only handle vtkDataSet and subclasses. As a result, this pipeline would produce a run-time error if the executive is vtkStreamingDemandDrivenPipeline. vtkCompositeDataPipeline deals with this issue by looping over the leaf nodes of the multi-block dataset and performing a full pipeline invocation of the contour filter for each block. Similarly, when the pipeline is something like the following<br />
<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> "Particle tracer" -> Mapper;<br />
} <br />
</graphviz><br />
</center><br />
the executive knows to invoke the reader for 2 time steps (because the particle tracer always need 2 time steps for interpolation) and gather the result in a vtkTemporalDataSet.<br />
<br />
vtkCompositeDataPipeline does all of this by overriding a significant portion of the execution mechanism when it needs to iterate over blocks and/or time steps.<br />
<br />
==Converting an Existing Filter to the New Pipeline==<br />
The best approach at this point is to find a VTK filter similar to your filter and then copy<br />
that. An alternate approach is to follow the instructions below. The new pipeline<br />
implementation does include a backwards compatibility layer for old filters. Specifically<br />
vtkProcessObject, vtkSource, and their related subclasses are still present and working.<br />
Most filters should work with the new pipeline without any changes. The most common<br />
problems with the backwards compatibility layer involve filters that manipulate the<br />
pipeline. If your filter overrides UpdateData or UpdateInformation you will probably<br />
have to make some changes. If your filter uses an internal pipeline then you might need<br />
to make some changes, otherwise you should be OK.<br />
Now ideally you would convert your filter to the new pipeline. There is a script that you<br />
can run that will help you to convert your filter. The script doesn’t do everything but it<br />
will help get you going in the right direction. You can run the script on an existing class<br />
as follows:<br />
cd VTK/MYClasses<br />
cmake –D CLASS=vtkMyClass –P ../Utilities/Upgrading/NewPipeConvert.cmake<br />
One of the effects this script might have is to change the superclass of your class. There<br />
are some convenience superclasses to make writing algorithms a little easier. In the old<br />
pipeline there were classes such as vtkImageToImageFilter. Some of the classes designed<br />
for the new pipeline include:<br />
vtkPolyDataAlgorithm: for algorithms that produce vtkPolyData<br />
vtkImageAlgorithm: for algorithms that take and or produce vtkImageData<br />
vtkThreadedImageAlgorithm: a subclass of vtkImageAlgorithm that implements<br />
multithreading<br />
These classes have some defaults that can be easily changed in your subclass. The first<br />
default is that the subclass will take on input and produce one output. This is typically<br />
specified in the constructor using SetNumberOfInputPorts(1) and<br />
SetNumberOfOutputPorts(1). If your subclass doesn’t take an input then in its constructor<br />
just call SetNumberOfInputPorts(0). Another assumption that is made is that all the input<br />
port and output ports take vtkPolyData (for vtkPolyDataAlgorithm, vtkImageData for<br />
vtkImageAlgorithm etc). Again your subclass can override this by providing its own<br />
implementation of FillInputPortInformation or FillOutputPortInformation.<br />
These superclasses typically provide an implementation of ProcessRequest that handles<br />
REQUEST_INFORMATION and REQUEST_DATA request by invoking virtual<br />
functions called RequestData and RequestInformation. They also typically provide<br />
default implementations of RequestData that call the older style ExecuteData functions to<br />
make converting your old filters easier.</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/New_Pipeline&diff=40851VTK/Tutorials/New Pipeline2011-06-17T03:28:50Z<p>Pratikm: /* Introduction */</p>
<hr />
<div><small>The introductory section has been blatantly copied from [http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 here]</small><br />
<br />
This section will introduce the classes and concepts used in the new VTK<br />
pipeline. You should be familiar with the very basic design of the old<br />
pipeline before delving into this. The new pipeline was designed to reduce complexity<br />
while at the same time provide more flexibility. In the old pipeline the pipeline<br />
functionality and mechanics were contained in the data object and filters. In the new<br />
pipeline this functionality is contained in a new class (and its subclasses) called<br />
'''vtkExecutive'''. <br />
<br />
==Introduction==<br />
There are four key classes that make up the new pipeline. They are:<br />
<br />
*[http://www.vtk.org/doc/nightly/html/classvtkInformation.html '''vtkInformation''']: <br />
the class that provides the new pipeline's flexibility to grow. Most of the pipeline methods and<br />
meta-information storage make use of this class. vtkInformation is a map-based data<br />
structure that supports heterogeneous key-value operations with compile-time<br />
type checking. There is also a vtkInformationVector class for storing vectors of<br />
information objects. When passing information up or down the pipeline (or from<br />
the executive to the algorithm) this is the class to use.<br />
*[http://www.vtk.org/doc/nightly/html/classvtkDataObject.html '''vtkDataObject''']:<br />
in the past this class both stored data and handled some of the<br />
pipeline logic. In the new pipeline this class is only supposed to store data. In<br />
practice there are some pipeline methods in the class for backwards compatibility<br />
so that all of VTK doesn’t break, but the goal is that vtkDataObject should only<br />
be about storing data. vtkDataObject has an instance of vtkInformation that can be<br />
used to store key-value pairs. For example, the current extent of the data object<br />
is stored there, but the whole extent is not, because the whole extent is a pipeline attribute<br />
tied to a specific pipeline topology.<br />
*[http://www.vtk.org/doc/nightly/html/classvtkAlgorithm.html '''vtkAlgorithm''']:<br />
an algorithm is the new superclass for all filters/sources/sinks in<br />
VTK. It is basically the replacement for vtkSource. Like vtkDataObject,<br />
vtkAlgorithm should know nothing about the pipeline and should only be an<br />
algorithm/function/filter. Call it with the correct arguments and it will produce<br />
results. It also has a vtkInformation instance that describes the properties of the<br />
algorithm and it has information objects that describe its input and output port<br />
characteristics. The main method of an algorithm is '''ProcessRequest'''.<br />
*[http://www.vtk.org/doc/nightly/html/classvtkExecutive.html '''vtkExecutive''']:<br />
contains the logic of how to connect and execute a pipeline. This<br />
class is the superclass of all executives. Executives are distributed (as opposed to<br />
centralized) and each filter/algorithm has its own executive that communicates<br />
with other executives. vtkExecutive has a subclass called<br />
[http://www.vtk.org/doc/nightly/html/classvtkDemandDrivenPipeline.html vtkDemandDrivenPipeline] which in turn has a subclass called<br />
[http://www.vtk.org/doc/nightly/html/classvtkStreamingDemandDrivenPipeline.html vtkStreamingDemandDrivenPipeline]. vtkStreamingDemandDrivenPipeline<br />
provides most of the functionality that was found in the old VTK pipeline and is<br />
the default executive for all algorithms if you do not specify one.<br />
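The vtkInformation design described above — a heterogeneous map whose typed key objects give compile-time type checking — can be sketched in a few lines of plain C++. This is an illustrative toy (the names TypedKey and ToyInformation are invented here), not VTK's actual implementation:<br />

```cpp
#include <any>
#include <cassert>
#include <map>
#include <string>

// Toy sketch: a key object carries the value type, so Set/Get are
// type-checked at compile time even though the underlying storage is
// heterogeneous. Names are hypothetical, not VTK API.
template <typename T>
struct TypedKey {
  explicit TypedKey(std::string name) : Name(std::move(name)) {}
  std::string Name;
};

class ToyInformation {
public:
  template <typename T>
  void Set(const TypedKey<T>& key, const T& value) {
    Entries[key.Name] = value;
  }
  template <typename T>
  T Get(const TypedKey<T>& key) const {
    return std::any_cast<T>(Entries.at(key.Name));
  }
  template <typename T>
  bool Has(const TypedKey<T>& key) const {
    return Entries.count(key.Name) != 0;
  }

private:
  std::map<std::string, std::any> Entries;  // heterogeneous values
};

// Keys are declared once and shared, in the spirit of
// vtkDataObject::SCALAR_TYPE() and similar VTK keys.
static const TypedKey<int>    SCALAR_TYPE{"SCALAR_TYPE"};
static const TypedKey<double> ORIGIN_X{"ORIGIN_X"};
```

Real vtkInformation keys work the same way in spirit: the key such as vtkDataObject::SCALAR_TYPE() both names the entry and fixes the type of value it may hold.<br />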
<br />
<br />
<br />
Let us first look at the vtkAlgorithm class. It may seem odd that a class with no notion of<br />
a pipeline has one key method called ProcessRequest. At its simplest, an algorithm has a<br />
basic function: take input data and produce output data. This is a downstream request<br />
(specifically REQUEST_DATA) that all algorithms should implement. (Requests for information or data<br />
flowing from input to output are called downstream requests; requests for information or data flowing from output to<br />
input are called upstream requests.) But algorithms<br />
can do more than just produce data; they also have characteristics or metadata that they<br />
can provide. For example, an algorithm can provide information about what type of<br />
output it will produce when you execute it. An imaging algorithm might only be capable<br />
of producing double results. The algorithm can specify this by responding to another<br />
down-stream request called REQUEST_INFORMATION. Consider the following code<br />
fragment:<br />
<source lang="cpp"><br />
int vtkMyAlgorithm::ProcessRequest(<br />
vtkInformation *request,<br />
vtkInformationVector **inputVector,<br />
vtkInformationVector *outputVector)<br />
{<br />
// respond to the REQUEST_INFORMATION request<br />
if(request->Has(vtkDemandDrivenPipeline::REQUEST_INFORMATION()))<br />
{<br />
// specify that the output (only one for this filter) will be double<br />
vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
outInfo->Set(vtkDataObject::SCALAR_TYPE(),VTK_DOUBLE);<br />
return 1;<br />
}<br />
return this->Superclass::ProcessRequest(request, inputVector,outputVector);<br />
}<br />
</source><br />
This method takes three information objects as input. The first is the request which<br />
specifies what you are asking the algorithm to do. Typically this is just one key such as<br />
REQUEST_INFORMATION. The next two arguments are information vectors: one for<br />
the inputs to this algorithm and one for the outputs of this algorithm. In the above<br />
example no input information was used. The output information vector was used to get<br />
the information object associated with the first output of this algorithm. Into that<br />
information was placed a key-value pair specifying that it would produce results of type<br />
double. Any requests that the algorithm doesn’t handle should be forwarded to the<br />
superclass.<br />
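What "forwarded to the superclass" buys you can be sketched as follows. This is a VTK-independent toy (ToyAlgorithm and the string-valued request are hypothetical stand-ins for vtkAlgorithm and vtkInformation request keys):<br />

```cpp
#include <cassert>
#include <string>

// Toy request dispatch, mimicking how vtkAlgorithm-style superclasses route
// request keys to overridable virtuals. All names here are hypothetical.
class ToyAlgorithm {
public:
  virtual ~ToyAlgorithm() = default;
  // Returns 1 on success, 0 if the request was not recognized.
  virtual int ProcessRequest(const std::string& request) {
    if (request == "REQUEST_INFORMATION") return RequestInformation();
    if (request == "REQUEST_DATA") return RequestData();
    return 0;  // unknown request: nothing handled it
  }

protected:
  virtual int RequestInformation() { return 1; }  // default: nothing to add
  virtual int RequestData() { return 1; }
};

// A concrete filter only overrides the passes it cares about.
class MyToyFilter : public ToyAlgorithm {
public:
  int InfoCalls = 0, DataCalls = 0;

protected:
  int RequestInformation() override { ++InfoCalls; return 1; }
  int RequestData() override { ++DataCalls; return 1; }
};
```

The superclass recognizes the standard requests and routes them to overridable virtuals, so a subclass handles only the requests it cares about and lets everything else fall through.<br />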
<br />
The pipeline topology in the new pipeline is a little different from the old one. In the new<br />
pipeline you connect the output port of one algorithm to the input port of another<br />
algorithm. For example,<br />
<source lang ="cpp"><br />
alg1->SetInputConnection( inPort, alg2->GetOutputPort( outPort ) );<br />
</source><br />
Using this terminology a port is like a pin on an integrated circuit. An algorithm has some<br />
input ports and some output ports. A connection is a “connection” between two ports. So<br />
to connect two algorithms you make a connection between the output port of one<br />
algorithm and the input port of another algorithm. Any input port in the new pipeline can<br />
be specified as 'repeatable' and/or 'optional':<br />
#'''Repeatable''' means that more than one connection can be made to this input port (such as for append filters). <br />
#'''Optional''' means that the input is not required for execution. <br />
<br />
While ports can be repeatable, there is still a<br />
need for multiple ports. Your algorithm should have multiple ports when the concept,<br />
data type, or semantics of a port are different. So AppendFilter only needs one repeatable<br />
input port because it treats all of its inputs the same. Glyph, in contrast, should have two<br />
ports, one for the input points and one for the glyph model because these are two distinct<br />
concepts. Of course the old style of SetInput, GetOutput connections will work with<br />
existing algorithms as well. <br />
<br />
So in the new pipeline, outputs are referred to by port number,<br />
while inputs are referred to by both their port number and connection number (because a<br />
single input port can have more than one connection).<br />
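The port/connection bookkeeping can be made concrete with a small VTK-independent sketch (the class and its methods are hypothetical, not VTK API), showing why an append-style filter needs only one repeatable port while a glyph-style filter needs two:<br />

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: an upstream output port is identified by
// (producer, output port index).
struct Connection {
  const void* producer;  // the upstream algorithm
  int outputPort;        // which of its output ports
};

// An algorithm owns a fixed set of input ports; a repeatable port may hold
// any number of connections, a non-repeatable port at most one.
class ToyPorts {
public:
  explicit ToyPorts(int numInputPorts)
    : connections(numInputPorts), repeatable(numInputPorts, false) {}

  void SetRepeatable(int port, bool r) { repeatable[port] = r; }

  // Replace the connection on a port (SetInputConnection-style).
  void SetInputConnection(int port, Connection c) {
    connections[port].assign(1, c);
  }

  // Add one more connection (AddInputConnection-style).
  bool AddInputConnection(int port, Connection c) {
    if (!repeatable[port] && !connections[port].empty())
      return false;  // non-repeatable port is already connected
    connections[port].push_back(c);
    return true;
  }

  std::size_t GetNumberOfInputConnections(int port) const {
    return connections[port].size();
  }

private:
  std::vector<std::vector<Connection>> connections;  // one list per port
  std::vector<bool> repeatable;
};
```

An append-style filter would construct itself with one repeatable port; a glyph-style filter would use two non-repeatable ports, one for the points and one for the glyph model.<br />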
<br />
==Typical Pipeline Execution==<br />
Let us take a quick look at the typical execution of a pipeline in VTK. <br />
* First the user instantiates an algorithm such as vtkImageGradient. When the algorithm is instantiated it will automatically create a default executive and connect the algorithm and executive together. The algorithm will also call '''FillInputPortInformation''' and '''FillOutputPortInformation''' on each input and output port respectively. These two methods should set up the static characteristics of the input and output ports, such as data type requirements and whether the port is optional or repeatable. For example, by default all subclasses of vtkImageAlgorithm are assumed to take a vtkImageData as input:<br />
<br />
<br />
<source lang="cpp"><br />
int vtkImageAlgorithm::FillInputPortInformation(int vtkNotUsed(port), vtkInformation* info)<br />
{<br />
info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkImageData");<br />
return 1;<br />
}<br />
</source><br />
<br />
<br />
This can be overridden by subclasses as required. <br />
* Once the algorithm and executive are instantiated, they will be connected to other algorithm/executive pairs. If the new SetInputConnection signature is used (and it should be used), then this just stores the connectivity information. The old VTK pipeline used the data objects to store connectivity and thus required that the data objects be instantiated prior to calling GetOutput.<br />
* Once the entire pipeline is instantiated and connected, it will be executed (typically as the result of a Render() call). Typically the first request an algorithm will see upon execution is REQUEST_DATA_OBJECT (see [http://www.vtk.org/doc/nightly/html/classvtkExecutive.html vtkExecutive], [http://www.vtk.org/doc/nightly/html/classvtkDataObject.html vtkDataObject] and their subclasses for all the possible keys and requests). This request asks the algorithm to create an instance of vtkDataObject (or an appropriate subclass) for each of its output ports. If you handle this request, you can set the output port's data to be whatever type you want (see [http://www.vtk.org/doc/nightly/html/classvtkDataSetAlgorithm.html vtkDataSetAlgorithm] for an example). If the algorithm does not handle the request, then by default the executive will look at the output port information to see what type the output port has been set to (from FillOutputPortInformation). If this is a concrete type, the executive will instantiate an instance of that type and use it; if it is not concrete, the executive cannot create the output data object and reports an error.<br />
<br />
===Request Information===<br />
At this point the pipeline is instantiated, connected, and the data objects have been<br />
instantiated to store the data. The next request an algorithm will typically see is<br />
REQUEST_INFORMATION. This request asks the Algorithm to provide as much<br />
information as it can about what the output data will look like once the algorithm has<br />
generated it. Typically an algorithm will look at the information provided about its inputs<br />
and try to specify what it can about its outputs. In many image processing filters quite a<br />
bit can be specified such as the WHOLE_EXTENT, SCALAR_TYPE,<br />
SCALAR_NUMBER_OF_COMPONENTS, ORIGIN, SPACING, etc. The rule here is to<br />
''provide or compute as much information as you can without actually executing'' (or<br />
reading in the entire data file) ''and without taking up significant CPU time''. For example<br />
an image reader should read the header information from the file to get what information<br />
it can out of it, but it should not read in the entire image so that it can compute the scalar<br />
range of the data. When providing information about an output, an algorithm is not<br />
limited to the current information keys (such as WHOLE_EXTENT) that are provided by<br />
VTK. Part of the new pipeline design is that it can be easily extended. You can define<br />
your own keys and then in the REQUEST_INFORMATION request you can add those<br />
keys and their values to the output information objects. You can also specify that you<br />
want those keys to be passed down (or up) the pipeline by adding them to the<br />
KEYS_TO_COPY. That way you could have a specialized reader that populates the<br />
information with special keys and then have a writer or mapper downstream that uses<br />
those keys. <br />
<br />
By default executives will first copy the input’s information to the output’s<br />
information. You only need to handle the cases where it is different.<br />
<br />
===Request Update Extent===<br />
The next request is typically REQUEST_UPDATE_EXTENT. To fulfill this request an<br />
algorithm should take the update extents in its output’s information and then set the<br />
correct update extents in its input’s information. As with REQUEST_INFORMATION,<br />
there is a default behavior provided by the executive, and you only need to handle cases where it<br />
changes (see [http://www.vtk.org/doc/nightly/html/classvtkImageGradient.html vtkImageGradient] for an example).<br />
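As a VTK-independent sketch of what such an override computes, consider a gradient-like filter that needs one extra sample on each side of the extent requested of its output (as a central-difference stencil does): it grows the update extent and clamps it to the whole extent. The function name below is invented for illustration:<br />

```cpp
#include <algorithm>
#include <cassert>

// An extent is {imin, imax, jmin, jmax, kmin, kmax}, as in VTK.
struct Extent { int e[6]; };

// Given the extent requested of our output, compute the extent we must
// request from our input: grow by one sample per axis (for the central
// difference), clamped to the data that actually exists (the whole extent).
Extent ComputeInputUpdateExtent(const Extent& requested, const Extent& whole) {
  Extent in = requested;
  for (int axis = 0; axis < 3; ++axis) {
    in.e[2 * axis]     = std::max(requested.e[2 * axis] - 1, whole.e[2 * axis]);
    in.e[2 * axis + 1] = std::min(requested.e[2 * axis + 1] + 1, whole.e[2 * axis + 1]);
  }
  return in;
}
```

In a real filter this computation would live in the REQUEST_UPDATE_EXTENT handler, reading the output's requested extent and writing the input's update extent into the input information.<br />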
<br />
===Request Data===<br />
Finally REQUEST_DATA will be called and the algorithm should fill in the output ports<br />
data objects.<br />
<br />
==Executives==<br />
<br />
The class hierarchy for the mainstream VTK executives is as follows.<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=BT<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
vtkCompositeDataPipeline -> vtkStreamingDemandDrivenPipeline -> vtkDemandDrivenPipeline -> vtkExecutive;<br />
} <br />
<br />
</graphviz><br />
</center><br />
Below, we provide some information about these executives that is not necessarily found in the VTK User's Guide. If you are not familiar with VTK's pipeline execution model, you should read the Managing Pipeline Execution chapter in that book.<br />
<br />
=== vtkExecutive ===<br />
This class is the superclass for all executives. It is fairly abstract and provides little functionality itself. Its important responsibilities are:<br />
* An executive has an algorithm.<br />
* An executive manages (i.e. owns) the input and output information objects of the algorithm. This means that the pipeline graph is stored by a set of executives, not by the algorithms.<br />
<br />
The most important function in vtkExecutive is ProcessRequest(). This method does two things:<br />
# Forward the request upstream (if the FORWARD_DIRECTION is vtkExecutive::RequestUpstream) or downstream (if the FORWARD_DIRECTION is vtkExecutive::RequestDownstream; note: downstream forwarding is not implemented).<br />
# Pass the request to the algorithm by calling CallAlgorithm(), which calls ProcessRequest() on the algorithm. This can happen before and/or after the request is forwarded, depending on whether ALGORITHM_BEFORE_FORWARD and/or ALGORITHM_AFTER_FORWARD is set.<br />
<br />
CallAlgorithm() calls CopyDefaultInformation() before passing the request to the algorithm. The goal of this function is to copy certain information (such as update requests or meta-information) from output to input (when the algorithm is invoked before forwarding - for example, in REQUEST_UPDATE_EXTENT) or from input to output (when the algorithm is invoked after forwarding - for example in REQUEST_INFORMATION).<br />
<br />
'''Note on centralized executives:''' This implementation does not allow us to use centralized executives (one executive that manages more than one algorithm) because the executive has an algorithm AND the executive stores the pipeline graph. However, it is possible to create a meta-executive (an executive to rule them all) that is centralized. This executive would have to manage the flow of information itself. This can be done by subclassing the executive class to disable forwarding. The centralized executive can then delegate the handling of each pass to the distributed executives but manage forwarding itself. <br />
<br />
=== vtkDemandDrivenPipeline ===<br />
<br />
This executive implements a demand-driven (pull) pipeline. It recognizes 3 passes in ProcessRequest():<br />
<br />
# REQUEST_DATA_OBJECT: This is where the algorithm is supposed to create its output data objects. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteDataObject() (a virtual member function). This first calls CallAlgorithm() and then CheckDataObject(). If CallAlgorithm() does not create output data objects, CheckDataObject() tries to create them based on the vtkDataObject::DATA_TYPE_NAME defined in the output port information.<br />
# REQUEST_INFORMATION: This is where the algorithm is supposed to provide meta-data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteInformation() (a virtual member function). Note: For backwards compatibility purposes, ExecuteInformation() calls CopyInformationToPipeline() on the output data object. In the old pipeline, meta-information was provided by setting it on the output data object; this call copies such meta-information from the data objects to the output information.<br />
# REQUEST_DATA: This is where the algorithm is supposed to provide (heavy) data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteData() (a virtual member function). ExecuteData() calls ExecuteDataStart(), CallAlgorithm() and ExecuteDataEnd(). ExecuteDataStart() takes care of initializing the outputs, whereas ExecuteDataEnd() performs finalization such as marking the outputs as generated by calling DataHasBeenGenerated().<br />
<br />
Note that all of these passes make sure to skip execution if the required information was already generated and if nothing upstream changed.<br />
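That up-to-date check boils down to comparing modification times, which the following VTK-independent toy demonstrates (ToySource and ToyFilter are invented names; VTK uses MTime/Modified() on vtkObject in a similar spirit):<br />

```cpp
#include <cassert>

// Sketch of demand-driven re-execution: a filter re-executes only when its
// upstream source has been modified since the filter last produced data.
struct ToySource {
  int Data = 0;
  unsigned long MTime = 1;
  void SetData(int d) { Data = d; ++MTime; }  // Modified() bumps the time
};

struct ToyFilter {
  ToySource* Input = nullptr;
  int Output = 0;
  unsigned long ExecuteTime = 0;  // input MTime when we last executed
  int ExecutionCount = 0;

  // "REQUEST_DATA": execute only if the input changed since our last run.
  int Update() {
    if (Input->MTime > ExecuteTime) {
      Output = Input->Data * 2;  // the "algorithm"
      ExecuteTime = Input->MTime;
      ++ExecutionCount;
    }
    return Output;
  }
};
```

Calling Update() twice in a row runs the algorithm once; modifying the source invalidates the cached result and forces a re-execution on the next Update().<br />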
<br />
=== vtkStreamingDemandDrivenPipeline ===<br />
<br />
This executive adds support for [[VTK/Streaming | streaming]] to vtkDemandDrivenPipeline. Streaming is usually performed by processing a subset of a dataset in each invocation of the pipeline and accumulating the results. Currently, vtkStreamingDemandDrivenPipeline supports 3 ways of subsetting the data:<br />
<br />
# '''Extents''': Extents are only applicable to structured datasets. An extent is a logical subset of a structured dataset defined by providing min and max IJK values (VOI).<br />
# '''Pieces''': Pieces are only applicable to unstructured datasets. A piece is a subset of an unstructured dataset. What a "piece" means is determined by the producer of such dataset.<br />
# '''Time steps''': The pipeline can request a particular time step at each invocation for time series data.<br />
<br />
vtkStreamingDemandDrivenPipeline adds 2 new pipeline passes to support streaming:<br />
<br />
# REQUEST_UPDATE_EXTENT: This pass is where the consumer asks for a particular subset from its input. It is an ALGORITHM_BEFORE_FORWARD pass. This is where algorithms receive extent/piece and time step requests from their consumer and copy or modify these requests upstream. In this pass, if an algorithm produces unstructured data but consumes structured data, the executive uses an extent translator to automatically convert the piece request to an extent request.<br />
# REQUEST_UPDATE_EXTENT_INFORMATION: This optional pass is where meta-information specific to the particular subset is requested. It is an ALGORITHM_AFTER_FORWARD pass. This pass is commonly used for dynamic streaming, where the consumer fetches meta-information such as bounds or scalar range for a particular piece before deciding whether to update it.<br />
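The overall streaming pattern — request one piece per pipeline invocation and accumulate the results, so only a fraction of the data is resident at once — can be sketched without VTK as follows (function names are invented for illustration):<br />

```cpp
#include <cassert>
#include <vector>

// "Producer": returns piece `piece` of `numPieces` of a dataset of n values,
// mimicking an UPDATE_PIECE_NUMBER / UPDATE_NUMBER_OF_PIECES request.
std::vector<int> ProducePiece(int n, int piece, int numPieces) {
  int begin = n * piece / numPieces;
  int end = n * (piece + 1) / numPieces;
  std::vector<int> out;
  for (int i = begin; i < end; ++i) out.push_back(i);
  return out;
}

// Consumer streams: one pipeline invocation per piece, results accumulated.
long StreamSum(int n, int numPieces) {
  long sum = 0;
  for (int piece = 0; piece < numPieces; ++piece)
    for (int v : ProducePiece(n, piece, numPieces)) sum += v;  // process piece
  return sum;
}
```

Because the pieces partition the data, the accumulated result is the same regardless of how many pieces the consumer streams over.<br />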
<br />
=== vtkCompositeDataPipeline ===<br />
<br />
This executive adds support for iterating over multiple blocks and/or time steps. This is best described using an example pipeline. For example,<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> Contour -> Mapper;<br />
} <br />
</graphviz><br />
</center><br />
Here, the EnSight reader always produces a multi-block dataset, whereas the contour filter can only handle vtkDataSet and its subclasses. As a result, this pipeline would produce a run-time error if the executive were vtkStreamingDemandDrivenPipeline. vtkCompositeDataPipeline deals with this issue by looping over the leaf nodes of the multi-block dataset and performing a full pipeline invocation of the contour filter for each block. Similarly, when the pipeline is something like the following<br />
<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> "Particle tracer" -> Mapper;<br />
} <br />
</graphviz><br />
</center><br />
the executive knows to invoke the reader for 2 time steps (because the particle tracer always needs 2 time steps for interpolation) and gathers the results in a vtkTemporalDataSet.<br />
<br />
vtkCompositeDataPipeline does all of this by overriding a significant portion of the execution mechanism when it needs to iterate over blocks and/or time steps.<br />
<br />
==Converting an Existing Filter to the New Pipeline==<br />
The best approach at this point is to find a VTK filter similar to your filter and then copy<br />
that. An alternate approach is to follow the instructions below. The new pipeline<br />
implementation does include a backwards compatibility layer for old filters. Specifically<br />
vtkProcessObject, vtkSource, and their related subclasses are still present and working.<br />
Most filters should work with the new pipeline without any changes. The most common<br />
problems with the backwards compatibility layer involve filters that manipulate the<br />
pipeline. If your filter overrides UpdateData or UpdateInformation you will probably<br />
have to make some changes. If your filter uses an internal pipeline then you might need<br />
to make some changes, otherwise you should be OK.<br />
Now ideally you would convert your filter to the new pipeline. There is a script that you<br />
can run that will help you to convert your filter. The script doesn’t do everything, but it<br />
will help get you going in the right direction. You can run the script on an existing class<br />
as follows:<br />
 cd VTK/MyClasses<br />
 cmake -D CLASS=vtkMyClass -P ../Utilities/Upgrading/NewPipeConvert.cmake<br />
One of the effects this script might have is to change the superclass of your class. There<br />
are some convenience superclasses to make writing algorithms a little easier. In the old<br />
pipeline there were classes such as vtkImageToImageFilter. Some of the classes designed<br />
for the new pipeline include:<br />
vtkPolyDataAlgorithm: for algorithms that produce vtkPolyData<br />
vtkImageAlgorithm: for algorithms that take and/or produce vtkImageData<br />
vtkThreadedImageAlgorithm: a subclass of vtkImageAlgorithm that implements<br />
multithreading<br />
These classes have some defaults that can be easily changed in your subclass. The first<br />
default is that the subclass will take one input and produce one output. This is typically<br />
specified in the constructor using SetNumberOfInputPorts(1) and<br />
SetNumberOfOutputPorts(1). If your subclass doesn’t take an input then in its constructor<br />
just call SetNumberOfInputPorts(0). Another assumption that is made is that all the input<br />
and output ports take vtkPolyData (for vtkPolyDataAlgorithm; vtkImageData for<br />
vtkImageAlgorithm, etc.). Again, your subclass can override this by providing its own<br />
implementation of FillInputPortInformation or FillOutputPortInformation.<br />
These superclasses typically provide an implementation of ProcessRequest that handles<br />
REQUEST_INFORMATION and REQUEST_DATA requests by invoking virtual<br />
functions called RequestData and RequestInformation. They also typically provide<br />
default implementations of RequestData that call the older style ExecuteData functions to<br />
make converting your old filters easier.</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Learning_VTK&diff=40850VTK/Learning VTK2011-06-17T03:08:14Z<p>Pratikm: </p>
<hr />
<div>VTK is an extremely powerful and versatile tool, but has a reputation for having a steep learning curve. Here, we hope to organize the resources that may be of use to both VTK users and developers in understanding and using VTK effectively. The keys to successfully using VTK are:<br />
# understanding the structure of the object-oriented hierarchy and <br />
# understanding its pipeline architecture.<br />
<br />
If you are new to VTK, you might want to check out:<br />
<br />
*[[VTK/Tutorials/External_Tutorials|External Tutorials]] - presentations on VTK by users and developers around the world. This should be helpful to 'get started' with VTK.<br />
<br />
Once you have done that, the best way to learn VTK in detail may be:<br />
#purchase [http://kitware.com/products/books/vtkbook.html The VTK Textbook] and [http://kitware.com/products/books/vtkguide.html The VTK Users Guide] from Kitware<br />
#actually see/write VTK code. The following resources will help you do just that: <br />
##[[VTK/Examples|Examples]] - Short code examples demonstrating how many of the VTK classes can be used. This is quite an extensive collection.<br />
##[[VTK/Tutorials|Tutorials]] - Slightly longer than the examples, these tutorials explain concepts and demonstrate more complicated tasks in VTK. Note that these cover only certain topics that are not clearly elucidated in [http://kitware.com/products/books/vtkguide.html The VTK Users Guide] and are not meant as a 'getting started' manual for absolute beginners.<br />
#More resources available to help the user are:<br />
##[http://www.vtk.org/doc/nightly/html/ The VTK Doxygen Man Pages]<br />
##[http://www.vtk.org/mailman/listinfo/vtkusers The VTK Users Mailing List]<br />
##[[VTK FAQ|Frequently asked questions (FAQ)]]</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Learning_VTK&diff=40849VTK/Learning VTK2011-06-17T03:05:42Z<p>Pratikm: </p>
<hr />
<div>VTK is an extremely powerful and versatile tool, but has a reputation for having a steep learning curve. Here, we hope to organize the resources that may be of use to both VTK users and developers in understanding and using VTK effectively. The keys to successfully using VTK are:<br />
# understanding the structure of the object-oriented hierarchy and <br />
# understanding its pipeline architecture.<br />
<br />
If you are new to VTK, you might want to check out:<br />
<br />
*[[VTK/Tutorials/External_Tutorials|External Tutorials]] - presentations on VTK by users and developers around the world. This should be helpful to 'get started' with VTK.<br />
<br />
Once you have done that, the best way to learn VTK in detail may be:<br />
#purchase [http://kitware.com/products/books/vtkbook.html The VTK Textbook] and [http://kitware.com/products/books/vtkguide.html The VTK Users Guide] from Kitware<br />
#actually see/write VTK code. The following resources will help you do just that: <br />
##[[VTK/Examples|Examples]] - Short code examples demonstrating how many of the VTK classes can be used. This is quite an extensive collection.<br />
##[[VTK/Tutorials|Tutorials]] - Slightly longer than the examples, these tutorials explain concepts and demonstrate more complicated tasks in VTK. Note that these cover only certain topics that are not clearly elucidated in [http://kitware.com/products/books/vtkguide.html The VTK Users Guide] and are not meant as an introduction for beginners.<br />
#More resources available to help the user are:<br />
##[http://www.vtk.org/doc/nightly/html/ The VTK Doxygen Man Pages]<br />
##[http://www.vtk.org/mailman/listinfo/vtkusers The VTK Users Mailing List]<br />
##[[VTK FAQ|Frequently asked questions (FAQ)]]</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Learning_VTK&diff=40848VTK/Learning VTK2011-06-17T03:04:36Z<p>Pratikm: </p>
<hr />
<div>VTK is an extremely powerful and versatile tool, but has a reputation for having a steep learning curve. Here, we hope to organize the resources that may be of use to both VTK users and developers in understanding and using VTK effectively. The keys to successfully using VTK are:<br />
# understanding the structure of the object-oriented hierarchy and <br />
# understanding its pipeline architecture.<br />
<br />
If you are new to VTK, you might want to check out: <br />
[[VTK/Tutorials/External_Tutorials|External Tutorials]] - presentations on VTK by users and developers around the world. This should be helpful to 'get started' with VTK.<br />
<br />
Once you have done that, the best way to learn VTK in detail may be:<br />
#purchase [http://kitware.com/products/books/vtkbook.html The VTK Textbook] and [http://kitware.com/products/books/vtkguide.html The VTK Users Guide] from Kitware<br />
#actually see/write VTK code. The following resources will help you do just that: <br />
##[[VTK/Examples|Examples]] - Short code examples demonstrating how many of the VTK classes can be used. This is quite an extensive collection.<br />
##[[VTK/Tutorials|Tutorials]] - Slightly longer than the examples, these tutorials explain concepts and demonstrate more complicated tasks in VTK. Note that these cover only certain topics that are not clearly elucidated in [http://kitware.com/products/books/vtkguide.html The VTK Users Guide] and are not meant as an introduction for beginners.<br />
#More resources available to help the user are:<br />
##[http://www.vtk.org/doc/nightly/html/ The VTK Doxygen Man Pages]<br />
##[http://www.vtk.org/mailman/listinfo/vtkusers The VTK Users Mailing List]<br />
##[[VTK FAQ|Frequently asked questions (FAQ)]]</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/External_Tutorials&diff=40847VTK/Tutorials/External Tutorials2011-06-17T00:08:39Z<p>Pratikm: /* Books */</p>
<hr />
<div>This page was based on Sebastien Barre's [http://www.barre.nom.fr/vtk/links-examples.html VTK Links: Examples] page. However, it is kept more up to date, and new resources will be regularly added. Note that to understand VTK internals, you might still have to buy the VTK User's Guide (although be warned that it does not cover everything). If you find a nice tutorial/webpage that explains VTK, please add it here! <br />
<br />
== Examples, Presentations, Seminars, Talks, Tutorials ==<br />
# There is a tutorial in the VTK distribution (in /Examples/Tutorial).<br />
#[http://www.rug.nl/cit/hpcv/visualisation/VTK/index.html Visualization examples for the The VISUALIZATION TOOLKIT], used in a VTK workshop given at the University of Groningen (RuG): Worth a look, lots of examples.<br />
#[http://www.mcs.anl.gov/~disz/cs-341/colorvis/colorvis.PPT Introduction to Visualization with VTK]: PowerPoint presentation, no code, very nice illustrations (T. L. Disz, Univ. of Chicago)<br />
#[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.138.514&rep=rep1&type=pdf Visualizing with VTK : A Tutorial]: A tutorial given by the writers of the VTK Book (W. Schroeder, L. Avila, W. Hoffman)<br />
#[http://www.cs.uic.edu/~jbell/CS526/Tutorial/Tutorial.html Visualization Toolkit Tutorial ]: Overview of VTK, no code, mostly pictures of projects (John Bell). <br />
#[http://www.bu.edu/tech/research/training/tutorials/vtk/ Using VTK to Visualize Scientific Data (online tutorial)]: Nice Introduction with serious examples in Tcl (BU)<br />
#[http://www.ncsa.illinois.edu/~semeraro/PPT/VTK_TUTORIAL/v3_document.htm VTK Tutorial: How to Create Visualization Applications with VTK]: Explains basic VTK objects, code snippets are present (Dave Semeraro)<br />
#[http://www.osc.edu/supercomputing/training/vtk/vtk_0505.pdf Scientific Visualization with VTK, The Visualization Toolkit]: A hands-on introduction; uses examples that ship with VTK to illustrate VTK concepts (Ohio Supercomputer Center, OSC)<br />
#[http://www.csc.kth.se/utbildning/kth/kurser/DD2257/visual09/VTK-090325-GT.pdf Introduction to VTK]: Nice exposition of basic VTK internals, but very brief (Gustav Taxen)<br />
#[http://vizisaw.dubmun.com/index.html Vizi-SAW VTK Tutorial]<br />
<br />
== VTK Pipeline ==<br />
#[http://www.ci-ra.org/Documents/Seminaires/05-2009/ParaView.Introduction.pdf ParaView Visualization Course: The VTK Visualization Pipeline and ParaView]: Introduction to ParaView, but includes a nice discussion of VTK pipeline (Jean M. Favre)<br />
#[http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 Upgrading to and Understanding the new VTK Pipeline]: Explains the concepts of vtkInformation etc.<br />
#[http://www.vtk.org/Wiki/images/8/80/Pipeline.pdf Proposal for the New VTK Pipeline]: This new pipeline is currently in use; the document uses the term 'vtkProcessor'; just replace this by 'vtkAlgorithm' throughout<br />
<br />
== Computer Graphics Courses that teach VTK ==<br />
# [http://web.cs.wpi.edu/~matt/courses/cs563 CS563:Advanced Topics in Computer Graphics Home Page]<br />
# [http://www.cs.rpi.edu/~cutler/classes/visualization/F10/ RPI CSCI-4972 Introduction to Visualization Fall 2010] - slides available [https://docs.google.com/leaf?id=0B8yIfGqnlfSoYmMzNGNmMjQtZWUzYS00MGIwLWI2ODItMzBiNzkwNTY5MTY1&hl=en_US here]<br />
# [http://www.inf.ed.ac.uk/teaching/courses/vis/ Visualization Module Home Page] <br />
<br />
== Books ==<br />
Apart from the [http://www.vtk.org/VTK/help/book.html VTK Book] and the [http://www.vtk.org/VTK/help/book.html User's Guide], the following books may be of interest for VTK users/developers:<br />
#[http://www.bioimagesuite.org/vtkbook/index.html Introduction to Programming for Image Analysis with VTK]: A book on image analysis using VTK, free PDF available at the site. Very nice introduction to VTK, uses Tcl in the beginning but C++ at the end. Includes an example of writing your own VTK filter. Overall, a highly recommended read (Xenophon Papadmetris)<br />
<br />
<!--<br />
No links could be found; if they are found, then please:<br />
1) include the links<br />
2) make them visible by shifting the comment tags<br />
<br />
== Advanced Computer Graphics and Data Visualization - Term Projects ==<br />
(T. D. Citriniti, Rensselaer Polytechnic Institute, NY)<br />
" This is a collection of Term Project completed by the students in "Advanced Computer Graphics and Data Visualization" 35-6961 for the Fall 1995 semester." And these of Spring 97. Very interesting projects, but NO code (should ask for it ?).<br />
Problem: No link found<br />
== Jumping Jack Icon/Glyph - Scientific Visualization Project ==<br />
(M. Srdanovic)<br />
"The project is to combine 6 MRI images using some form of glyph. I decided to use a glyph that resembles a person doing "jumping jacks". Each limb of the icon will be oriented based on the intensity of pixels from each of the 6 images.". Part of the Scientific Visualization (91.541) course (Dr. G.G. Grinstein) at UMass Lowell Computer Science Department.<br />
Problem: No link found<br />
== CS 5630 - Scientific Visualization ==<br />
(C. Johnson , University of Utah)<br />
formerly known as CS 523 : various solutions (+ code) : Class assignments - see 1 to 4 (T. Robbins), and also Y.-K. Yang assignments.<br />
Problem: No link found<br />
==(J. Kaandorp, R. Belleman & Z. Zhao, Univ. of Amsterdam)==<br />
Note : "If you plan to use any of the material provided by our invited speakers for your own course, please contact me so that I can check for an OK with the authors.".<br />
Problem: No link found<br />
--><br />
<br />
<br />
{{VTK/Template/Footer}}</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Learning_VTK&diff=40846VTK/Learning VTK2011-06-16T19:04:46Z<p>Pratikm: </p>
<hr />
<div>VTK is an extremely powerful and versatile tool, but has a reputation for having a steep learning curve. Here, we hope to organize the resources that may be of use to both VTK users and developers in understanding and using VTK effectively. The keys to successfully using VTK are:<br />
# understanding the structure of the object-oriented hierarchy and <br />
# understanding its pipeline architecture.<br />
<br />
If you are new to VTK, you might want to check out: <br />
* [[VTK/Tutorials/External_Tutorials|External Tutorials]] - short presentations from VTK users and developers around the world on VTK.<br />
<br />
Once you have done that, the best way to learn VTK in detail may be to actually see/write VTK code. The following resources will help you do just that: <br />
* [[VTK/Examples|Examples]] - Short code examples demonstrating how many of the VTK classes can be used.<br />
* [[VTK/Tutorials|Tutorials]] - Slightly longer than the examples, these tutorials explain concepts and demonstrate more complicated tasks in VTK.<br />
<br />
More resources available to help the user are:<br />
* [http://kitware.com/products/books/vtkbook.html The VTK Textbook]<br />
* [http://kitware.com/products/books/vtkguide.html The VTK Users Guide]<br />
* [http://www.vtk.org/doc/nightly/html/ The VTK Doxygen Man Pages]<br />
* [http://www.vtk.org/mailman/listinfo/vtkusers The VTK Users Mailing List]<br />
* [[VTK FAQ|Frequently asked questions (FAQ)]]</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Learning_VTK&diff=40845VTK/Learning VTK2011-06-16T19:04:01Z<p>Pratikm: </p>
<hr />
<div>VTK is an extremely powerful and versatile tool, but has a reputation for having a steep learning curve. Here, we hope to organize the resources that may be of use to both VTK users and developers in understanding and using VTK effectively.<br />
<br />
The keys to successfully using VTK are:<br />
# understanding the structure of the object-oriented hierarchy and <br />
# understanding its pipeline architecture.<br />
<br />
If you are new to VTK, you might want to check out: <br />
* [[VTK/Tutorials/External_Tutorials|External Tutorials]] - short presentations from VTK users and developers around the world on VTK.<br />
<br />
Once you have done that, the best way to learn VTK in detail may be to actually see/write VTK code. The following resources will help you do just that: <br />
* [[VTK/Examples|Examples]] - Short code examples demonstrating how many of the VTK classes can be used.<br />
* [[VTK/Tutorials|Tutorials]] - Slightly longer than the examples, these tutorials explain concepts and demonstrate more complicated tasks in VTK.<br />
<br />
More resources available to help the user are:<br />
* [http://kitware.com/products/books/vtkbook.html The VTK Textbook]<br />
* [http://kitware.com/products/books/vtkguide.html The VTK Users Guide]<br />
* [http://www.vtk.org/doc/nightly/html/ The VTK Doxygen Man Pages]<br />
* [http://www.vtk.org/mailman/listinfo/vtkusers The VTK Users Mailing List]<br />
* [[VTK FAQ|Frequently asked questions (FAQ)]]</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK&diff=40844VTK2011-06-16T18:30:05Z<p>Pratikm: i am moving all the learning VTK part to a new page, so i don't have to worry about tampering too much with the main page</p>
<hr />
<div><center>http://public.kitware.com/images/logos/vtk-logo2.jpg</center><br />
<br /><br />
The Visualization ToolKit (VTK) is an open source, freely available software system for 3D computer graphics, image processing, and visualization used by thousands of researchers and developers around the world. VTK consists of a C++ class library, and several interpreted interface layers including Tcl/Tk, Java, and Python. Professional support and products for VTK are provided by Kitware, Inc. ([http://www.kitware.com www.kitware.com]) VTK supports a wide variety of visualization algorithms including scalar, vector, tensor, texture, and volumetric methods; and advanced modeling techniques such as implicit modelling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation. In addition, dozens of imaging algorithms have been directly integrated to allow the user to mix 2D imaging / 3D graphics algorithms and data.<br />
<br />
<br />
== Learning VTK ==<br />
If you want to learn how to use or develop VTK, please see [[VTK/Learning_VTK | Learning VTK]]<br />
<br />
<br />
== Building VTK ==<br />
* Where can I [http://vtk.org/get-software.php download VTK]?<br />
<br />
* Where can I download a tarball of the [http://vtk.org/files/nightly/vtkNightlyDocHtml.tar.gz nightly HTML documentation]?<br />
<br />
* How do I build the [[VTK/BuildingDoxygen|Doxygen documentation]]?<br />
<br />
* [[VTK/Git|Using Git for VTK development]]<br />
<br />
* [[VTK/Build parameters | Build parameters]]<br />
<br />
* [[Making Development Environment without compiling source distribution]]<br />
<br />
== Extending VTK ==<br />
<br />
* Where can I get [[VTK Datasets]]?<br />
<br />
* [[VTK Classes|User-Contributed Classes]]<br />
<br />
* [[VTK Coding Standards]] <br />
<br />
* [[VTK cvs commit Guidelines]]<br />
<br />
* [[VTK Patch Procedure]] -- merge requests for the current release branch<br />
<br />
* [[VTK Scripts|Extending VTK with Scripts]]<br />
<br />
== Projects/ Tools that use VTK == <br />
<br />
* [[VTK Tools|VTK-Based Tools and Applications]]<br />
<br />
* What are some [[VTK Projects|projects using VTK]]?<br />
<br />
== Future VTK development ==<br />
<br />
* [[VTK 5.4 Release Planning]]<br />
<br />
* [[Proposed Changes to VTK | Proposed Changes to VTK]]<br />
<br />
== Troubleshooting ==<br />
<br />
* [[VTK FAQ|Frequently asked questions (FAQ)]]<br />
<br />
* [[VTK OpenGL|Common OpenGL troubles]]<br />
<br />
== Miscellaneous ==<br />
* [[VTK Related Job Opportunities|VTK Related Job Opportunities]]<br />
<br />
* [[VTK/Third Party Library Patrol | VTK 3rd Party Library Patrol]]<br />
<br />
* [[VTK/Meeting Minutes | Meeting Minutes]]<br />
<br />
== Releases ==<br />
<br />
* VTK 5.8: [[VTK/Release580 New Classes | New Classes]]<br />
<br />
== Projects ==<br />
<br />
==== For inclusion in VTK 5.0 ====<br />
<br />
* [[VTKWidgets | VTK Widget Redesign]]<br />
<br />
==== For inclusion in VTK 5.2 ====<br />
<br />
* [[VTK/Java Wrapping | VTK Java Wrapping]]<br />
* [[VTK/Composite Data Redesign | Composite Data Redesign]]<br />
* [[VTKShaders | Shaders in VTK]]<br />
* [[VTK/VTKMatlab | VTK with Matlab]]<br />
* [[VTK/Time_Support | VTK Time support]]<br />
* [[VTK/Graph Layout | VTK Graph Layout]]<br />
* [[VTK/Depth_Peeling | VTK Depth Peeling]]<br />
* [[VTK/Using_JRuby | Using VTK with JRuby]]<br />
* [[VTK/Painters | Painters]]<br />
<br />
==== For inclusion in VTK 5.4 ====<br />
<br />
* [[VTK/Cray XT3 Compilation| Cray XT3 Compilation]]<br />
* [[VTK/Geovis vision toolkit | Geospatial and vision visualization support ]]<br />
<br />
==== For inclusion in VTK 5.6 ====<br />
<br />
* [[VTK/MultiPass_Rendering | VTK Multi-Pass Rendering]]<br />
* [[VTK/Multicore and Streaming | Multicore and Streaming]]<br />
* [[VTK/statistics | Statistics]]<br />
* [[VTK/Array Refactoring | Array Refactoring]]<br />
* [[VTK/3DConnexion Devices Support | 3DConnexion Devices Support]]<br />
* [[VTK/Charts | New Charts API]]<br />
* [[VTK/New CellPicker | New Cell Picker and Volume Picking (start Nov 2010, finish Feb 2010)]]<br />
<br />
==== For inclusion in VTK 5.8 ====<br />
<br />
* [[VTK/Polyhedron_Support | Polyhedron cells and MVC Interpolation]]<br />
* [[VTK/Closed Surface Clipping | Clipping of closed surfaces (start Mar 26, 2010, finish Apr 22, 2010)]]<br />
* [[VTK/Wrapper Update 2010 | New wrappers (start Apr 28, 2010)]]<br />
* [[VTK/Image Stencil Improvements | Improved image stencil support (start Nov 3, 2010)]]<br />
* [[VTK/MNI File Formats | MNI file formats]]<br />
<br />
==== For inclusion in next VTK release ====<br />
<br />
* [[VTK/improved unicode support | Change unicode readers/writers to register as codecs (Proposed)]]<br />
* [[VTK/Image Rendering Classes | New image rendering classes (start Dec 15 2010, finish Mar 15 2011)]]<br />
<br />
==== For inclusion in VTK 6.0 ====<br />
<br />
* [[VTK/Remove_VTK_4_Compatibility | Remove VTK 4 compatibility layer from pipeline]]<br />
<br />
<br />
<br />
== News ==<br />
=== Development Process ===<br />
The VTK Community is [[VTK/Managing_the_Development_Process | upgrading its development process]]. We are doing this in response to the continuing and rapid growth of the toolkit. A VTK Architecture Review Board [[VTK/Architecture_Review_Board |VTK ARB]] is being put in place to provide strategic guidance to the community, and individuals are being identified as leaders in various VTK subsystems.<br />
<br />
Have a question or topic for the ARB to discuss about the future of VTK? First, please bring the topic to the [http://public.kitware.com/mailman/listinfo/vtk-developers VTK developers mailing list]. If the issue is not resolved there or needs further planning or direction, you may [[VTK/ARB/Meetings#Potential Topics|enter a suggested topic for discussion]].<br />
<br />
===[[VTK/NextGen|VTK NextGen]]=== <br />
We have started collecting works in progress as well as future ideas at [[VTK/NextGen|NextGen]]. Please add anything you are working on, would like to collaborate on, or would like to see in the future of VTK!<br />
<br />
== Developers Corner ==<br />
[[VTK/Developers Corner|Developers Corner]]<br />
<br />
<!-- <br />
== External Links ==<br />
dead link *[http://zorayasantos.tripod.com/vtk_csharp_examples VTK examples in C#] (Visual Studio 5.0 and .NET 2.0)<br />
--><br />
{{VTK/Template/Footer}}</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Learning_VTK&diff=40843VTK/Learning VTK2011-06-16T18:20:29Z<p>Pratikm: Created page with "If you are new to VTK, you might want to check out: * External Tutorials - short presentations from VTK users and developers around the worl..."</p>
<hr />
<div>If you are new to VTK, you might want to check out: <br />
* [[VTK/Tutorials/External_Tutorials|External Tutorials]] - short presentations from VTK users and developers around the world on VTK.<br />
<br />
Once you have done that, the best way to learn VTK in detail may be to actually see/write VTK code. The following resources will help you do just that: <br />
* [[VTK/Examples|Examples]] - Short code examples demonstrating how many of the VTK classes can be used.<br />
* [[VTK/Tutorials|Tutorials]] - Slightly longer than the examples, these tutorials explain concepts and demonstrate more complicated tasks in VTK.<br />
<br />
More resources available to help the user are:<br />
* [http://kitware.com/products/books/vtkbook.html The VTK Textbook]<br />
* [http://kitware.com/products/books/vtkguide.html The VTK Users Guide]<br />
* [http://www.vtk.org/doc/nightly/html/ The VTK Doxygen Man Pages]<br />
* [http://www.vtk.org/mailman/listinfo/vtkusers The VTK Users Mailing List]<br />
* [[VTK FAQ|Frequently asked questions (FAQ)]]</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/External_Tutorials&diff=40839VTK/Tutorials/External Tutorials2011-06-16T09:41:29Z<p>Pratikm: </p>
<hr />
<div>This page was based on Sebastien Barre's [http://www.barre.nom.fr/vtk/links-examples.html VTK Links: Examples] page. However, it is kept more up to date, and new resources will be regularly added. Note that to understand VTK internals, you might still have to buy the VTK User's Guide (although be warned that it does not cover everything). If you find a nice tutorial/webpage that explains VTK, please add it here! <br />
<br />
== Examples, Presentations, Seminars, Talks, Tutorials ==<br />
# There is a tutorial in the VTK distribution (in /Examples/Tutorial).<br />
#[http://www.rug.nl/cit/hpcv/visualisation/VTK/index.html Visualization examples for the The VISUALIZATION TOOLKIT], used in a VTK workshop given at the University of Groningen (RuG): Worth a look, lots of examples.<br />
#[http://www.mcs.anl.gov/~disz/cs-341/colorvis/colorvis.PPT Introduction to Visualization with VTK]: PowerPoint presentation, no code, very nice illustrations (T. L. Disz, Univ. of Chicago)<br />
#[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.138.514&rep=rep1&type=pdf Visualizing with VTK : A Tutorial]: A tutorial given by the writers of the VTK Book (W. Schroeder, L. Avila, W. Hoffman)<br />
#[http://www.cs.uic.edu/~jbell/CS526/Tutorial/Tutorial.html Visualization Toolkit Tutorial ]: Overview of VTK, no code, mostly pictures of projects (John Bell). <br />
#[http://www.bu.edu/tech/research/training/tutorials/vtk/ Using VTK to Visualize Scientific Data (online tutorial)]: Nice Introduction with serious examples in Tcl (BU)<br />
#[http://www.ncsa.illinois.edu/~semeraro/PPT/VTK_TUTORIAL/v3_document.htm VTK Tutorial: How to Create Visualization Applications with VTK]: Explains basic VTK objects, code snippets are present (Dave Semeraro)<br />
#[http://www.osc.edu/supercomputing/training/vtk/vtk_0505.pdf Scientific Visualization with VTK, The Visualization Toolkit]: A hands-on introduction; uses examples that ship with VTK to illustrate VTK concepts (Ohio Supercomputer Center, OSC)<br />
#[http://www.csc.kth.se/utbildning/kth/kurser/DD2257/visual09/VTK-090325-GT.pdf Introduction to VTK]: Nice exposition of basic VTK internals, but very brief (Gustav Taxen)<br />
#[http://vizisaw.dubmun.com/index.html Vizi-SAW VTK Tutorial]<br />
<br />
== VTK Pipeline ==<br />
#[http://www.ci-ra.org/Documents/Seminaires/05-2009/ParaView.Introduction.pdf ParaView Visualization Course: The VTK Visualization Pipeline and ParaView]: Introduction to ParaView, but includes a nice discussion of VTK pipeline (Jean M. Favre)<br />
#[http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 Upgrading to and Understanding the new VTK Pipeline]: Explains the concepts of vtkInformation etc.<br />
#[http://www.vtk.org/Wiki/images/8/80/Pipeline.pdf Proposal for the New VTK Pipeline]: This new pipeline is currently in use; the document uses the term 'vtkProcessor'; just replace this by 'vtkAlgorithm' throughout<br />
<br />
== Computer Graphics Courses that teach VTK ==<br />
# [http://web.cs.wpi.edu/~matt/courses/cs563 CS563:Advanced Topics in Computer Graphics Home Page]<br />
# [http://www.cs.rpi.edu/~cutler/classes/visualization/F10/ RPI CSCI-4972 Introduction to Visualization Fall 2010] - slides available [https://docs.google.com/leaf?id=0B8yIfGqnlfSoYmMzNGNmMjQtZWUzYS00MGIwLWI2ODItMzBiNzkwNTY5MTY1&hl=en_US here]<br />
# [http://www.inf.ed.ac.uk/teaching/courses/vis/ Visualization Module Home Page] <br />
<br />
== Books ==<br />
Apart from the [http://www.vtk.org/VTK/help/book.html VTK Book] and the [http://www.vtk.org/VTK/help/book.html User's Guide], the following books may be of interest for VTK users/developers:<br />
#[http://www.bioimagesuite.org/vtkbook/index.html Introduction to Programming for Image Analysis with VTK]: A book on medical analysis using VTK, free PDF available at the site. Includes an example of writing your own VTK filter (Xenophon Papadmetris)<br />
<br />
<!--<br />
No links could be found; if they are found, then please:<br />
1) include the links<br />
2) make them visible by shifting the comment tags<br />
<br />
== Advanced Computer Graphics and Data Visualization - Term Projects ==<br />
(T. D. Citriniti, Rensselaer Polytechnic Institute, NY)<br />
" This is a collection of Term Project completed by the students in "Advanced Computer Graphics and Data Visualization" 35-6961 for the Fall 1995 semester." And these of Spring 97. Very interesting projects, but NO code (should ask for it ?).<br />
Problem: No link found<br />
== Jumping Jack Icon/Glyph - Scientific Visualization Project ==<br />
(M. Srdanovic)<br />
"The project is to combine 6 MRI images using some form of glyph. I decided to use a glyph that resembles a person doing "jumping jacks". Each limb of the icon will be oriented based on the intensity of pixels from each of the 6 images.". Part of the Scientific Visualization (91.541) course (Dr. G.G. Grinstein) at UMass Lowell Computer Science Department.<br />
Problem: No link found<br />
== CS 5630 - Scientific Visualization ==<br />
(C. Johnson , University of Utah)<br />
formerly known as CS 523 : various solutions (+ code) : Class assignments - see 1 to 4 (T. Robbins), and also Y.-K. Yang assignments.<br />
Problem: No link found<br />
==(J. Kaandorp, R. Belleman & Z. Zhao, Univ. of Amsterdam)==<br />
Note : "If you plan to use any of the material provided by our invited speakers for your own course, please contact me so that I can check for an OK with the authors.".<br />
Problem: No link found<br />
--><br />
<br />
<br />
{{VTK/Template/Footer}}</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/New_Pipeline&diff=40837VTK/Tutorials/New Pipeline2011-06-16T02:52:58Z<p>Pratikm: /* Request Update Extent */</p>
<hr />
<div><small>The introductory section has been blatantly copied from [http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 here]</small><br />
<br />
This section will introduce the classes and concepts used in the new VTK<br />
pipeline. You should be familiar with the very basic design of the old<br />
pipeline before delving into this. The new pipeline was designed to reduce complexity<br />
while at the same time providing more flexibility. In the old pipeline the pipeline<br />
functionality and mechanics were contained in the data object and filters. In the new<br />
pipeline this functionality is contained in a new class (and its subclasses) called<br />
'''vtkExecutive'''. <br />
<br />
==Introduction==<br />
There are four key classes that make up the new pipeline. They are:<br />
<br />
*'''vtkInformation''': <br />
provides the flexibility for the pipeline to grow and change. Most of the<br />
methods and meta-information storage make use of this class. vtkInformation is a map-based data<br />
structure that supports heterogeneous key-value operations with compile time<br />
type checking. There is also a vtkInformationVector class for storing vectors of<br />
information objects. When passing information up or down the pipeline (or from<br />
the executive to the algorithm) this is the class to use.<br />
*'''vtkDataObject''':<br />
in the past this class both stored data and handled some of the<br />
pipeline logic. In the new pipeline this class is only supposed to store data. In<br />
practice there are some pipeline methods in the class for backwards compatibility<br />
so that all of VTK doesn’t break, but the goal is that vtkDataObject should only<br />
be about storing data. vtkDataObject has an instance of vtkInformation that can be<br />
used to store key-value pairs in. For example, the current extent of the data object<br />
is stored there, but the whole extent is not, because the whole extent is a pipeline attribute<br />
containing information about a specific pipeline topology.<br />
*'''vtkAlgorithm''':<br />
an algorithm is the new superclass for all filters/sources/sinks in<br />
VTK. It is basically the replacement for vtkSource. Like vtkDataObject,<br />
vtkAlgorithm should know nothing about the pipeline and should only be an<br />
algorithm/function/filter. Call it with the correct arguments and it will produce<br />
results. It also has a vtkInformation instance that describes the properties of the<br />
algorithm and it has information objects that describe its input and output port<br />
characteristics. The main method of an algorithm is '''ProcessRequest'''.<br />
*'''vtkExecutive''':<br />
contains the logic of how to connect and execute a pipeline. This<br />
class is the superclass of all executives. Executives are distributed (as opposed to<br />
centralized) and each filter/algorithm has its own executive that communicates<br />
with other executives. vtkExecutive has a subclass called<br />
vtkDemandDrivenPipeline which in turn has a subclass called<br />
vtkStreamingDemandDrivenPipeline. vtkStreamingDemandDrivenPipeline<br />
provides most of the functionality that was found in the old VTK pipeline and is<br />
the default executive for all algorithms if you do not specify one.<br />
<br />
<br />
Let us first look at the vtkAlgorithm class. It may seem odd that a class with no notion of<br />
a pipeline has one key method called ProcessRequest. At its simplest, an algorithm has a<br />
basic function to take input data and produce output data. This is a downstream request<br />
(specifically REQUEST_DATA) that all algorithms should implement. (Requests for information or data<br />
flowing from input to output are called downstream requests; requests for information or data flowing from output to<br />
input are called upstream requests.) But algorithms<br />
can do more than just produce data; they also have characteristics or metadata that they<br />
can provide. For example, an algorithm can provide information about what type of<br />
output it will produce when you execute it. An imaging algorithm might only be capable<br />
of producing double results. The algorithm can specify this by responding to another<br />
down-stream request called REQUEST_INFORMATION. Consider the following code<br />
fragment:<br />
<source lang="cpp"><br />
int vtkMyAlgorithm::ProcessRequest(<br />
  vtkInformation *request,<br />
  vtkInformationVector **inputVector,<br />
  vtkInformationVector *outputVector)<br />
{<br />
  // handle the request for meta-information<br />
  if(request->Has(vtkDemandDrivenPipeline::REQUEST_INFORMATION()))<br />
    {<br />
    // specify that the output (only one for this filter) will be double<br />
    vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
    outInfo->Set(vtkDataObject::SCALAR_TYPE(), VTK_DOUBLE);<br />
    return 1;<br />
    }<br />
  return this->Superclass::ProcessRequest(request, inputVector, outputVector);<br />
}<br />
</source><br />
This method takes three information objects as input. The first is the request, which<br />
specifies what you are asking the algorithm to do. Typically this is just one key such as<br />
REQUEST_INFORMATION. The next two arguments are information vectors, one for<br />
the inputs to this algorithm and one for the outputs of this algorithm. In the above<br />
example no input information was used. The output information vector was used to get<br />
the information object associated with the first output of this algorithm. Into that<br />
information was placed a key-value pair specifying that it would produce results of type<br />
double. Any requests that the algorithm doesn’t handle should be forwarded to the<br />
superclass.<br />
<br />
The pipeline topology in the new pipeline is a little different from the old one. In the new<br />
pipeline you connect the output port of one algorithm to the input port of another<br />
algorithm. For example,<br />
<source lang ="cpp"><br />
alg1->SetInputConnection( inPort, alg2->GetOutputPort( outPort ));<br />
</source><br />
Using this terminology a port is like a pin on an integrated circuit. An algorithm has some<br />
input ports and some output ports. A connection is a “connection” between two ports. So<br />
to connect two algorithms you make a connection between the output port of one<br />
algorithm and the input port of another algorithm. Any input port in the new pipeline can<br />
be specified as 'repeatable' and/or 'optional':<br />
#'''Repeatable''' means that more than one connection can be made to this input port (such as for append filters). <br />
#'''Optional''' means that the input is not required for execution. <br />
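As a sketch (not taken from the original text), an input port could be declared repeatable and optional in FillInputPortInformation() using the corresponding vtkAlgorithm keys; vtkMyAppendFilter is a hypothetical class name:<br />
<source lang="cpp"><br />
int vtkMyAppendFilter::FillInputPortInformation(int vtkNotUsed(port), vtkInformation* info)<br />
{<br />
  // accept any vtkDataSet on this port<br />
  info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkDataSet");<br />
  // allow more than one connection to this port<br />
  info->Set(vtkAlgorithm::INPUT_IS_REPEATABLE(), 1);<br />
  // allow execution even when nothing is connected<br />
  info->Set(vtkAlgorithm::INPUT_IS_OPTIONAL(), 1);<br />
  return 1;<br />
}<br />
</source><br />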
<br />
While ports can be repeatable there is still a<br />
need for multiple ports. Your algorithm should have multiple ports when the concept,<br />
data type, or semantics of a port are different. So AppendFilter only needs one repeatable<br />
input port because it treats all of its inputs the same. Glyph in contrast should have two<br />
ports, one for the input points and one for the glyph model because these are two distinct<br />
concepts. Of course the old style of SetInput, GetOutput connections will work with<br />
existing algorithms as well. <br />
<br />
So in the new pipeline outputs are referred to by port number<br />
while inputs are referred to by both their port number and connection number (because a<br />
single input port can have more than one connection).<br />
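For example (a hedged sketch; append, source1 and source2 are hypothetical algorithm pointers), two connections on the same repeatable port are addressed by their connection index:<br />
<source lang="cpp"><br />
// two producers feeding the same repeatable input port 0<br />
append->AddInputConnection(0, source1->GetOutputPort());<br />
append->AddInputConnection(0, source2->GetOutputPort());<br />
// input (port 0, connection 0) is source1; (port 0, connection 1) is source2<br />
</source><br />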
<br />
==Typical Pipeline Execution==<br />
Let us take a quick look at the typical execution of a pipeline in VTK. <br />
* First the user instantiates an algorithm such as vtkImageGradient. When the algorithm is instantiated, it will automatically create a default executive and connect the algorithm and executive together. The algorithm will also call '''FillInputPortInformation''' and '''FillOutputPortInformation''' on each input and output port respectively. These two methods should set up the static characteristics of the input and output ports, such as data type requirements and whether the port is optional or repeatable. For example, by default all subclasses of vtkImageAlgorithm are assumed to take a vtkImageData as input:<br />
<br />
<br />
<source lang="cpp"><br />
int vtkImageAlgorithm::FillInputPortInformation(int vtkNotUsed(port), vtkInformation*info)<br />
{<br />
info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkImageData");<br />
return 1;<br />
}<br />
</source><br />
<br />
<br />
This can be overridden by subclasses as required. <br />
* Once the algorithm and executive are instantiated they will be connected to other algorithm executive pairs. If the new SetInputConnection signature is used (and it should be used) then this just stores the connectivity information. The old VTK pipeline used the data objects to store connectivity and thus required that the data objects be instantiated prior to calling GetOutput. <br />
* Once the entire pipeline is instantiated and connected, it will be executed (typically as the result of a Render() call). Typically the first request an algorithm will see upon execution is REQUEST_DATA_OBJECT (see [http://www.vtk.org/doc/nightly/html/classvtkExecutive.html vtkExecutive], [http://www.vtk.org/doc/nightly/html/classvtkDataObject.html vtkDataObject] and their subclasses for all the possible keys and requests). This request asks the algorithm to create an instance of vtkDataObject (or an appropriate subclass) for all of its output ports. If you handle this request you can set each output port's output data to be whatever type you want (see [http://www.vtk.org/doc/nightly/html/classvtkDataSetAlgorithm.html vtkDataSetAlgorithm] for an example). If the algorithm does not handle the request, then by default the executive will look at the output port information to see what type the output port has been set to (from FillOutputPortInformation). If this is a concrete type then it will instantiate an instance of that type and use it. If it isn’t concrete then (I can’t remember if it just fails with an error or has another fallback position)<br />
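A sketch of handling REQUEST_DATA_OBJECT inside ProcessRequest, modeled loosely on what vtkDataSetAlgorithm does (not copied from it):<br />
<source lang="cpp"><br />
if (request->Has(vtkDemandDrivenPipeline::REQUEST_DATA_OBJECT()))<br />
  {<br />
  vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
  vtkDataObject* output = outInfo->Get(vtkDataObject::DATA_OBJECT());<br />
  // create the output data object only if it is missing or the wrong type<br />
  if (!output || !output->IsA("vtkPolyData"))<br />
    {<br />
    vtkPolyData* newOutput = vtkPolyData::New();<br />
    outInfo->Set(vtkDataObject::DATA_OBJECT(), newOutput);<br />
    newOutput->Delete();<br />
    }<br />
  return 1;<br />
  }<br />
</source><br />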
<br />
===Request Information===<br />
At this point the pipeline is instantiated, connected, and the data objects have been<br />
instantiated to store the data. The next request an algorithm will typically see is<br />
REQUEST_INFORMATION. This request asks the Algorithm to provide as much<br />
information as it can about what the output data will look like once the algorithm has<br />
generated it. Typically an algorithm will look at the information provided about its inputs<br />
and try to specify what it can about its outputs. In many image processing filters quite a<br />
bit can be specified such as the WHOLE_EXTENT, SCALAR_TYPE,<br />
SCALAR_NUMBER_OF_COMPONENTS, ORIGIN, SPACING, etc. The rule here is to<br />
''provide or compute as much information as you can without actually executing'' (or<br />
reading in the entire data file) ''and without taking up significant CPU time''. For example<br />
an image reader should read the header information from the file to get what information<br />
it can out of it, but it should not read in the entire image so that it can compute the scalar<br />
range of the data. When providing information about an output, an algorithm is not<br />
limited to the current information keys (such as WHOLE_EXTENT) that are provided by<br />
VTK. Part of the new pipeline design is that it can be easily extended. You can define<br />
your own keys and then in the REQUEST_INFORMATION request you can add those<br />
keys and their values to the output information objects. You can also specify that you<br />
want those keys to be passed down (or up) the pipeline by adding them to the<br />
KEYS_TO_COPY. That way you could have a specialized reader that populates the<br />
information with special keys and then have a writer or mapper downstream that uses<br />
those keys. <br />
<br />
By default executives will first copy the input’s information to the output’s<br />
information. You only need to handle the cases where it is different.<br />
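For instance, an image reader's REQUEST_INFORMATION handler might look roughly like the following (a sketch; vtkMyImageReader and its Dims/Origin/Spacing members are hypothetical):<br />
<source lang="cpp"><br />
int vtkMyImageReader::RequestInformation(<br />
  vtkInformation* vtkNotUsed(request),<br />
  vtkInformationVector** vtkNotUsed(inputVector),<br />
  vtkInformationVector* outputVector)<br />
{<br />
  // the values below would come from reading just the file header<br />
  vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
  int wholeExtent[6] = { 0, this->Dims[0]-1, 0, this->Dims[1]-1, 0, this->Dims[2]-1 };<br />
  outInfo->Set(vtkStreamingDemandDrivenPipeline::WHOLE_EXTENT(), wholeExtent, 6);<br />
  outInfo->Set(vtkDataObject::ORIGIN(), this->Origin, 3);<br />
  outInfo->Set(vtkDataObject::SPACING(), this->Spacing, 3);<br />
  return 1;<br />
}<br />
</source><br />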
<br />
===Request Update Extent===<br />
The next request is typically REQUEST_UPDATE_EXTENT. To fulfill this request an<br />
algorithm should take the update extents in its output’s information and then set the<br />
correct update extents in its input’s information. As with REQUEST_INFORMATION<br />
there is a default behavior by the executive and you only need to handle cases where it<br />
changes. (see [http://www.vtk.org/doc/nightly/html/classvtkImageGradient.html vtkImageGradient] for an example)<br />
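As an illustration of what such a handler does (a sketch only, with a hypothetical class name; vtkImageGradient's real implementation differs in detail), a filter that needs one extra layer of input samples around each requested output extent might do:<br />
<source lang="cpp"><br />
int vtkMyGradientFilter::RequestUpdateExtent(<br />
  vtkInformation* vtkNotUsed(request),<br />
  vtkInformationVector** inputVector,<br />
  vtkInformationVector* outputVector)<br />
{<br />
  vtkInformation* inInfo = inputVector[0]->GetInformationObject(0);<br />
  vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
<br />
  // start from the extent our consumer asked us to produce<br />
  int ext[6];<br />
  outInfo->Get(vtkStreamingDemandDrivenPipeline::UPDATE_EXTENT(), ext);<br />
<br />
  // grow by one sample in each direction, clamped to the input's whole extent<br />
  int whole[6];<br />
  inInfo->Get(vtkStreamingDemandDrivenPipeline::WHOLE_EXTENT(), whole);<br />
  for (int i = 0; i < 3; i++)<br />
    {<br />
    if (--ext[2*i] < whole[2*i]) { ext[2*i] = whole[2*i]; }<br />
    if (++ext[2*i+1] > whole[2*i+1]) { ext[2*i+1] = whole[2*i+1]; }<br />
    }<br />
  inInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_EXTENT(), ext, 6);<br />
  return 1;<br />
}<br />
</source><br />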
<br />
===Request Data===<br />
Finally REQUEST_DATA will be called and the algorithm should fill in the output ports<br />
data objects.<br />
<br />
==Executives==<br />
<br />
The class hierarchy for the mainstream VTK executives is as follows.<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=BT<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
vtkCompositeDataPipeline -> vtkStreamingDemandDrivenPipeline -> vtkDemandDrivenPipeline -> vtkExecutive;<br />
} <br />
<br />
</graphviz><br />
</center><br />
Below, we provide some information about these executives that is not necessarily found in the VTK User's Guide. If you are not familiar with VTK's pipeline execution model, you should read the Managing Pipeline Execution chapter in that book.<br />
<br />
=== vtkExecutive ===<br />
This class is the superclass for all executives. It is pretty abstract and provides little functionality. The important points are:<br />
* An executive has an algorithm.<br />
* An executive manages (i.e. has) the input and output information objects of the algorithm. This means that the pipeline graph is stored by a set of executives, not algorithms.<br />
<br />
The most important function in vtkExecutive is ProcessRequest(). This method does two things:<br />
# Forward the request upstream (if the FORWARD_DIRECTION is vtkExecutive::RequestUpstream) or downstream (if the FORWARD_DIRECTION is vtkExecutive::RequestDownstream; note that this is not implemented).<br />
# Pass the request to the algorithm by calling CallAlgorithm(), which calls ProcessRequest() on the algorithm. This can happen before and/or after the request is forwarded, depending on whether ALGORITHM_BEFORE_FORWARD and/or ALGORITHM_AFTER_FORWARD is set.<br />
<br />
CallAlgorithm() calls CopyDefaultInformation() before passing the request to the algorithm. The goal of this function is to copy certain information (such as update requests or meta-information) from output to input (when the algorithm is invoked before forwarding, for example in REQUEST_UPDATE_EXTENT) or from input to output (when the algorithm is invoked after forwarding, for example in REQUEST_INFORMATION).<br />
<br />
'''Note on centralized executives:''' This implementation does not allow us to use centralized executives (one executive that manages more than one algorithm) because the executive has an algorithm AND the executive stores the pipeline graph. However, it is possible to create a meta-executive (an executive to rule them all) that is centralized. This executive would have to manage the flow of information itself. This can be done by subclassing the executive class to disable forwarding. The centralized executive can then delegate the handling of each pass to the distributed executives but manage forwarding itself. <br />
<br />
=== vtkDemandDrivenPipeline ===<br />
<br />
This executive implements a demand-driven (pull) pipeline. It recognizes 3 passes in ProcessRequest():<br />
<br />
# REQUEST_DATA_OBJECT: This is where the algorithm is supposed to create its output data objects. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteDataObject() (a virtual member function). This first calls CallAlgorithm() and then CheckDataObject(). If CallAlgorithm() does not create output data objects, CheckDataObject() tries to create them based on the vtkDataObject::DATA_TYPE_NAME defined in the output port information.<br />
# REQUEST_INFORMATION: This is where the algorithm is supposed to provide meta-data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteInformation() (a virtual member function). Note: For backwards compatibility purposes, ExecuteInformation() calls CopyInformationToPipeline() on the output data object. In the old pipeline, meta-information was provided by setting it on the output data object; this call copies such meta-information from the data objects to the output information.<br />
# REQUEST_DATA: This is where the algorithm is supposed to provide the (heavy) data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteData() (a virtual member function). ExecuteData() calls ExecuteDataStart(), CallAlgorithm() and ExecuteDataEnd(). ExecuteDataStart() takes care of initializing the outputs whereas ExecuteDataEnd() performs finalization such as marking the outputs as generated by calling DataHasBeenGenerated().<br />
<br />
Note that all of these passes make sure to skip execution if the required information was already generated and if nothing upstream changed.<br />
<br />
=== vtkStreamingDemandDrivenPipeline ===<br />
<br />
This executive adds support for [[VTK/Streaming | streaming]] to vtkDemandDrivenPipeline. Streaming is usually performed by processing a subset of a dataset in each invocation of the pipeline and accumulating the results. Currently, vtkStreamingDemandDrivenPipeline supports 3 ways of subsetting the data:<br />
<br />
# '''Extents''': Extents are only applicable to structured datasets. An extent is a logical subset of a structured dataset defined by providing min and max IJK values (VOI).<br />
# '''Pieces''': Pieces are only applicable to unstructured datasets. A piece is a subset of an unstructured dataset. What a "piece" means is determined by the producer of such a dataset.<br />
# '''Time steps''': The pipeline can request a particular time step at each invocation for time series data.<br />
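As a sketch of how a consumer might drive piece-based streaming (assuming the filter's executive is a vtkStreamingDemandDrivenPipeline and the piece-based SetUpdateExtent overload is available in the VTK version in use):<br />
<source lang="cpp"><br />
// ask for piece 2 out of 4, with no ghost levels, then execute<br />
vtkStreamingDemandDrivenPipeline* sddp =<br />
  vtkStreamingDemandDrivenPipeline::SafeDownCast(filter->GetExecutive());<br />
sddp->SetUpdateExtent(0 /*output port*/, 2 /*piece*/, 4 /*number of pieces*/, 0 /*ghost levels*/);<br />
filter->Update();<br />
</source><br />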
<br />
vtkStreamingDemandDrivenPipeline adds 2 new pipeline passes to support streaming:<br />
<br />
# REQUEST_UPDATE_EXTENT: This pass is where the consumer asks for a particular subset from its input. It is an ALGORITHM_BEFORE_FORWARD pass. This is where algorithms receive extent/piece and time step requests from their consumer and copy or modify these requests upstream. In this pass, if the algorithm produces unstructured data but consumes structured data, the executive uses an extent translator to automatically convert the piece request to an extent request.<br />
# REQUEST_UPDATE_EXTENT_INFORMATION: This optional pass is where meta-information specific to the particular subset is requested. It is an ALGORITHM_AFTER_FORWARD pass. This pass is commonly used for dynamic streaming where the consumer fetches meta-information such as bounds or scalar range for a particular piece before deciding whether to update it.<br />
<br />
=== vtkCompositeDataPipeline ===<br />
<br />
This executive adds support for iterating over multiple blocks and/or time steps. This is best described with an example. Consider the following pipeline:<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> Contour -> Mapper;<br />
} <br />
</graphviz><br />
</center><br />
Here, the Ensight reader always produces a multi-block dataset whereas the contour filter can only handle vtkDataSet and subclasses. As a result, this pipeline would produce a run-time error if the executive is vtkStreamingDemandDrivenPipeline. vtkCompositeDataPipeline deals with this issue by looping over the leaf nodes of the multi-block dataset and performing a full pipeline invocation of the contour filter for each block. Similarly, when the pipeline is something like the following<br />
<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> "Particle tracer" -> Mapper;<br />
} <br />
</graphviz><br />
</center><br />
the executive knows to invoke the reader for 2 time steps (because the particle tracer always needs 2 time steps for interpolation) and gathers the results in a vtkTemporalDataSet.<br />
<br />
vtkCompositeDataPipeline does all of this by overriding a significant portion of the execution mechanism when it needs to iterate over blocks and/or time steps.<br />
<br />
==Converting an Existing Filter to the New Pipeline==<br />
The best approach at this point is to find a VTK filter similar to your filter and then copy<br />
that. An alternate approach is to follow the instructions below. The new pipeline<br />
implementation does include a backwards compatibility layer for old filters. Specifically<br />
vtkProcessObject, vtkSource, and their related subclasses are still present and working.<br />
Most filters should work with the new pipeline without any changes. The most common<br />
problems with the backwards compatibility layer involve filters that manipulate the<br />
pipeline. If your filter overrides UpdateData or UpdateInformation you will probably<br />
have to make some changes. If your filter uses an internal pipeline then you might need<br />
to make some changes, otherwise you should be OK.<br />
Now ideally you would convert your filter to the new pipeline. There is a script that you<br />
can run that will help you to convert your filter. The script doesn’t do everything but it<br />
will help get you going in the right direction. You can run the script on an existing class<br />
as follows:<br />
<source lang="bash"><br />
cd VTK/MYClasses<br />
cmake -D CLASS=vtkMyClass -P ../Utilities/Upgrading/NewPipeConvert.cmake<br />
</source><br />
One of the effects this script might have is to change the superclass of your class. There<br />
are some convenience superclasses to make writing algorithms a little easier. In the old<br />
pipeline there were classes such as vtkImageToImageFilter. Some of the classes designed<br />
for the new pipeline include:<br />
*'''vtkPolyDataAlgorithm''': for algorithms that produce vtkPolyData<br />
*'''vtkImageAlgorithm''': for algorithms that take and/or produce vtkImageData<br />
*'''vtkThreadedImageAlgorithm''': a subclass of vtkImageAlgorithm that implements multithreading<br />
These classes have some defaults that can be easily changed in your subclass. The first<br />
default is that the subclass will take one input and produce one output. This is typically<br />
specified in the constructor using SetNumberOfInputPorts(1) and<br />
SetNumberOfOutputPorts(1). If your subclass doesn’t take an input then in its constructor<br />
just call SetNumberOfInputPorts(0). Another assumption is that all input<br />
and output ports take vtkPolyData (for vtkPolyDataAlgorithm; vtkImageData for<br />
vtkImageAlgorithm, etc.). Again your subclass can override this by providing its own<br />
implementation of FillInputPortInformation or FillOutputPortInformation.<br />
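Putting these defaults together, a minimal new-style filter might look like the following sketch (vtkMyFilter is a hypothetical class that simply shallow-copies its input):<br />
<source lang="cpp"><br />
class vtkMyFilter : public vtkPolyDataAlgorithm<br />
{<br />
public:<br />
  static vtkMyFilter* New();<br />
  vtkTypeMacro(vtkMyFilter, vtkPolyDataAlgorithm);<br />
protected:<br />
  vtkMyFilter() {} // one input port and one output port are the superclass defaults<br />
  virtual int RequestData(vtkInformation* vtkNotUsed(request),<br />
                          vtkInformationVector** inputVector,<br />
                          vtkInformationVector* outputVector)<br />
  {<br />
    vtkPolyData* input = vtkPolyData::GetData(inputVector[0]);<br />
    vtkPolyData* output = vtkPolyData::GetData(outputVector);<br />
    output->ShallowCopy(input); // replace with real processing<br />
    return 1;<br />
  }<br />
};<br />
vtkStandardNewMacro(vtkMyFilter);<br />
</source><br />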
These superclasses typically provide an implementation of ProcessRequest that handles<br />
REQUEST_INFORMATION and REQUEST_DATA requests by invoking virtual<br />
functions called RequestData and RequestInformation. They also typically provide<br />
default implementations of RequestData that call the older style ExecuteData functions to<br />
make converting your old filters easier.</div>
<hr />
<div>
<br />
===Request Update Extent===<br />
The next request is typically REQUEST_UPDATE_EXTENT. To fulfill this request an<br />
algorithm should take the update extents in its output’s information and then set the<br />
correct update extents in its input’s information. As with REQUEST_INFORMATION,<br />
there is a default behavior by the executive and you only need to handle cases where it<br />
changes (see vtkImageGradient for an example).<br />
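As a concrete illustration of what such a handler computes, consider a gradient-style filter that needs one extra layer of input samples around each requested output extent. The following standalone sketch (plain C++ with illustrative names, not the VTK API) grows the requested extent by one sample per axis and clamps it to the whole extent:

```cpp
#include <algorithm>
#include <array>

// An extent is {xmin, xmax, ymin, ymax, zmin, zmax}.
using Extent = std::array<int, 6>;

// Given the update extent requested on the output and the whole extent of
// the input, compute the update extent to request upstream: grow by one
// ghost layer on each side, clamped to data that actually exists.
Extent ComputeInputUpdateExtent(const Extent& outputRequest,
                                const Extent& wholeExtent) {
  Extent in = outputRequest;
  for (int axis = 0; axis < 3; ++axis) {
    in[2 * axis]     = std::max(outputRequest[2 * axis] - 1,
                                wholeExtent[2 * axis]);
    in[2 * axis + 1] = std::min(outputRequest[2 * axis + 1] + 1,
                                wholeExtent[2 * axis + 1]);
  }
  return in;
}
```

In a real filter this result would be stored back into the input information with the UPDATE_EXTENT key during the REQUEST_UPDATE_EXTENT pass.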
===Request Data===<br />
Finally REQUEST_DATA will be called and the algorithm should fill in the output ports<br />
data objects.<br />
<br />
==Executives==<br />
<br />
The class hierarchy for the mainstream VTK executives is as follows.<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=BT<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
vtkCompositeDataPipeline -> vtkStreamingDemandDrivenPipeline -> vtkDemandDrivenPipeline -> vtkExecutive;<br />
} <br />
<br />
</graphviz><br />
</center><br />
Below, we provide some information about these executives that is not necessarily found in the VTK User's Guide. If you are not familiar with VTK's pipeline execution model, you should read the Managing Pipeline Execution chapter in the book.<br />
<br />
=== vtkExecutive ===<br />
This class is the superclass for all executives. It is fairly abstract and provides little functionality on its own. The important points are:<br />
* An executive has an algorithm<br />
* An executive manages (i.e. has) the input and output information objects of the algorithm. This means that the pipeline graph is stored by the set of executives, not the algorithms<br />
<br />
The most important function in vtkExecutive is ProcessRequest(). This method does two things:<br />
# Forward the request upstream (if FORWARD_DIRECTION is vtkExecutive::RequestUpstream) or downstream (if FORWARD_DIRECTION is vtkExecutive::RequestDownstream; note: downstream forwarding is not implemented).<br />
# Pass the request to the algorithm by calling CallAlgorithm(), which calls ProcessRequest() on the algorithm. This can happen before and/or after the request is forwarded, depending on whether ALGORITHM_BEFORE_FORWARD and/or ALGORITHM_AFTER_FORWARD is set.<br />
<br />
CallAlgorithm() calls CopyDefaultInformation() before passing the request to the algorithm. The goal of this function is to copy certain information (such as update requests or meta-information) from output to input (when the algorithm is invoked before forwarding - for example, in REQUEST_UPDATE_EXTENT) or from input to output (when the algorithm is invoked after forwarding - for example in REQUEST_INFORMATION).<br />
<br />
'''Note on centralized executives:''' This implementation does not allow us to use centralized executives (one executive that manages more than one algorithm) because the executive has an algorithm AND the executive stores the pipeline graph. However, it is possible to create a meta-executive (an executive to rule them all) that is centralized. This executive would have to manage the flow of information itself. This can be done by subclassing the executive class to disable forwarding. The centralized executive can then delegate the handling of each pass to the distributed executives but manage forwarding itself. <br />
<br />
=== vtkDemandDrivenPipeline ===<br />
<br />
This executive implements a demand-driven (pull) pipeline. It recognizes 3 passes in ProcessRequest():<br />
<br />
# REQUEST_DATA_OBJECT: This is where the algorithm is supposed to create its output data objects. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteDataObject() (a virtual member function). This first calls CallAlgorithm() and then CheckDataObject(). If CallAlgorithm() does not create output data objects, CheckDataObject() tries to create them based on the vtkDataObject::DATA_TYPE_NAME defined in the output port information.<br />
# REQUEST_INFORMATION: This is where the algorithm is supposed to provide meta-data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteInformation() (a virtual member function). Note: for backwards compatibility purposes, ExecuteInformation() calls CopyInformationToPipeline() on the output data object. In the old pipeline, meta-information was provided by setting it on the output data object; this call copies such meta-information from the data objects to the output information.<br />
# REQUEST_DATA: This is where the algorithm is supposed to provide (heavy) data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteData() (a virtual member function). ExecuteData() calls ExecuteDataStart(), CallAlgorithm() and ExecuteDataEnd(). ExecuteDataStart() takes care of initializing the outputs, whereas ExecuteDataEnd() performs finalization such as marking the outputs as generated by calling DataHasBeenGenerated().<br />
<br />
Note that all of these passes make sure to skip execution if the required information was already generated and if nothing upstream changed.<br />
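This skip-if-up-to-date behavior boils down to comparing modified times. A minimal sketch in plain C++ (hypothetical names, not the VTK implementation) of the bookkeeping that lets a pass re-execute only when something upstream changed:

```cpp
// Each pass records the pipeline modified time (MTime) it last ran at,
// and re-executes only when the current pipeline MTime is newer.
struct PassCache {
  unsigned long ExecutedAtTime = 0;  // pipeline MTime at last execution
  int Executions = 0;                // how many times the pass actually ran

  // Run `pass` only if the pipeline has been modified since the last run.
  template <typename Fn>
  void Update(unsigned long pipelineMTime, Fn pass) {
    if (pipelineMTime > ExecutedAtTime) {
      pass();
      ++Executions;
      ExecutedAtTime = pipelineMTime;
    }
  }
};
```

Calling Update() twice with the same MTime executes the pass once; bumping the MTime (e.g. after a parameter change upstream) triggers a re-execution.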
<br />
=== vtkStreamingDemandDrivenPipeline ===<br />
<br />
This executive adds support for [[VTK/Streaming | streaming]] to vtkDemandDrivenPipeline. Streaming is usually performed by processing a subset of a dataset in each invocation of the pipeline and accumulating the results. Currently, vtkStreamingDemandDrivenPipeline supports 3 ways of subsetting the data:<br />
<br />
# '''Extents''': Extents are only applicable to structured datasets. An extent is a logical subset of a structured dataset defined by providing min and max IJK values (VOI).<br />
# '''Pieces''': Pieces are only applicable to unstructured datasets. A piece is a subset of an unstructured dataset. What a "piece" means is determined by the producer of the dataset.<br />
# '''Time steps''': The pipeline can request a particular time step at each invocation for time series data.<br />
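The relationship between pieces and extents can be illustrated with a toy translator that splits the whole extent into slabs along the slowest axis. This is a simplified stand-in for what VTK's extent translator does (real translators are smarter about split direction and ghost levels); all names here are illustrative:

```cpp
#include <array>

// An extent is {xmin, xmax, ymin, ymax, zmin, zmax}.
using Extent = std::array<int, 6>;

// Convert "piece i of n" into a structured extent by slicing the whole
// extent into contiguous slabs along the K (slowest-varying) axis.
Extent PieceToExtent(int piece, int numPieces, const Extent& whole) {
  Extent e = whole;
  const int zmin = whole[4], zmax = whole[5];
  const int size = zmax - zmin + 1;
  const int base = size / numPieces;
  const int rem = size % numPieces;
  // Earlier pieces absorb the remainder, one extra slice each.
  const int start = zmin + piece * base + (piece < rem ? piece : rem);
  const int count = base + (piece < rem ? 1 : 0);
  e[4] = start;
  e[5] = start + count - 1;
  return e;
}
```

For a 10-slice volume split 3 ways this yields slabs of 4, 3, and 3 slices, which together tile the whole extent with no overlap.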
<br />
vtkStreamingDemandDrivenPipeline adds 2 new pipeline passes to support streaming:<br />
<br />
# REQUEST_UPDATE_EXTENT: This pass is where the consumer asks for a particular subset from its input. It is an ALGORITHM_BEFORE_FORWARD pass. This is where algorithms receive extent/piece and time step requests from their consumers and copy or modify these requests upstream. In this pass, if an algorithm produces unstructured data but consumes structured data, the executive uses an extent translator to automatically convert the piece request to an extent request.<br />
# REQUEST_UPDATE_EXTENT_INFORMATION: This optional pass is where meta-information specific to the particular subset is requested. It is an ALGORITHM_AFTER_FORWARD pass. This pass is commonly used for dynamic streaming, where the consumer fetches meta-information such as bounds or scalar range for a particular piece before deciding whether to update it.<br />
<br />
=== vtkCompositeDataPipeline ===<br />
<br />
This executive adds support for iterating over multiple blocks and/or time steps. This is best described using an example pipeline. For example,<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> Contour -> Mapper;<br />
} <br />
</graphviz><br />
</center><br />
Here, the Ensight reader always produces a multi-block dataset whereas the contour filter can only handle vtkDataSet and subclasses. As a result, this pipeline would produce a run-time error if the executive is vtkStreamingDemandDrivenPipeline. vtkCompositeDataPipeline deals with this issue by looping over the leaf nodes of the multi-block dataset and performing a full pipeline invocation of the contour filter for each block. Similarly, when the pipeline is something like the following<br />
<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> "Particle tracer" -> Mapper;<br />
} <br />
</graphviz><br />
</center><br />
the executive knows to invoke the reader for 2 time steps (because the particle tracer always needs 2 time steps for interpolation) and gather the results in a vtkTemporalDataSet.<br />
<br />
vtkCompositeDataPipeline does all of this by overriding a significant portion of the execution mechanism when it needs to iterate over blocks and/or time steps.<br />
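The loop-over-leaves strategy can be outlined in a few lines of plain C++. This is a schematic with made-up types, not the actual executive code: a "simple" filter that only understands one block at a time is invoked once per leaf, and the per-block results are collected into a new composite result.

```cpp
#include <functional>
#include <vector>

struct Block { double Value; };          // stand-in for a leaf dataset
using MultiBlock = std::vector<Block>;   // stand-in for a multi-block dataset
using SimpleFilter = std::function<Block(const Block&)>;

// Run `filter` once per leaf block (one full "pipeline invocation" each)
// and gather the outputs into a composite result of the same shape.
MultiBlock ExecutePerBlock(const MultiBlock& input, SimpleFilter filter) {
  MultiBlock output;
  output.reserve(input.size());
  for (const Block& leaf : input)
    output.push_back(filter(leaf));
  return output;
}
```

The real executive additionally handles nested multi-block structure, empty blocks, and information passes, but the core idea is this per-leaf loop.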
<br />
==Converting an Existing Filter to the New Pipeline==<br />
The best approach at this point is to find a VTK filter similar to your filter and then copy<br />
that. An alternate approach is to follow the instructions below. The new pipeline<br />
implementation does include a backwards compatibility layer for old filters. Specifically<br />
vtkProcessObject, vtkSource, and their related subclasses are still present and working.<br />
Most filters should work with the new pipeline without any changes. The most common<br />
problems with the backwards compatibility layer involve filters that manipulate the<br />
pipeline. If your filter overrides UpdateData or UpdateInformation you will probably<br />
have to make some changes. If your filter uses an internal pipeline then you might need<br />
to make some changes, otherwise you should be OK.<br />
Now ideally you would convert your filter to the new pipeline. There is a script that you<br />
can run that will help you to convert your filter. The script doesn’t do everything but it<br />
will help get you going in the right direction. You can run the script on an existing class<br />
as follows:<br />
cd VTK/MYClasses<br />
cmake -D CLASS=vtkMyClass -P ../Utilities/Upgrading/NewPipeConvert.cmake<br />
One of the effects this script might have is to change the superclass of your class. There<br />
are some convenience superclasses to make writing algorithms a little easier. In the old<br />
pipeline there were classes such as vtkImageToImageFilter. Some of the classes designed<br />
for the new pipeline include:<br />
* vtkPolyDataAlgorithm: for algorithms that produce vtkPolyData<br />
* vtkImageAlgorithm: for algorithms that take and/or produce vtkImageData<br />
* vtkThreadedImageAlgorithm: a subclass of vtkImageAlgorithm that implements multithreading<br />
These classes have some defaults that can be easily changed in your subclass. The first<br />
default is that the subclass will take one input and produce one output. This is typically<br />
specified in the constructor using SetNumberOfInputPorts(1) and<br />
SetNumberOfOutputPorts(1). If your subclass doesn’t take an input, then in its constructor<br />
just call SetNumberOfInputPorts(0). Another assumption is that all the input<br />
ports and output ports take vtkPolyData (for vtkPolyDataAlgorithm; vtkImageData for<br />
vtkImageAlgorithm, etc.). Again, your subclass can override this by providing its own<br />
implementation of FillInputPortInformation or FillOutputPortInformation.<br />
These superclasses typically provide an implementation of ProcessRequest that handles<br />
REQUEST_INFORMATION and REQUEST_DATA requests by invoking virtual<br />
functions called RequestData and RequestInformation. They also typically provide<br />
default implementations of RequestData that call the older-style ExecuteData functions to<br />
make converting your old filters easier.</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/Executives&diff=40764VTK/Tutorials/Executives2011-06-14T19:53:04Z<p>Pratikm: Redirected page to VTK/Tutorials/New Pipeline#Executives</p>
<hr />
<div>#REDIRECT [[VTK/Tutorials/New Pipeline#Executives]]</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/New_Pipeline&diff=40763VTK/Tutorials/New Pipeline2011-06-14T19:50:43Z<p>Pratikm: /* Executives */</p>
<hr />
<div><small>The introductory section has been blatantly copied from [http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 here]</small><br />
<br />
This section will introduce the classes and concepts used in the new VTK<br />
pipeline. You should be familiar with the very basic design of the old<br />
pipeline before delving into this. The new pipeline was designed to reduce complexity<br />
while at the same time provide more flexibility. In the old pipeline the pipeline<br />
functionality and mechanics were contained in the data object and filters. In the new<br />
pipeline this functionality is contained in a new class (and its subclasses) called<br />
'''vtkExecutive'''. <br />
<br />
==Introduction==<br />
There are four key classes that make up the new pipeline. They are:<br />
<br />
*'''vtkInformation''': <br />
provides the flexibility for the pipeline to be extended. Most of the methods and<br />
meta-information storage make use of this class. vtkInformation is a map-based data<br />
structure that supports heterogeneous key-value operations with compile time<br />
type checking. There is also a vtkInformationVector class for storing vectors of<br />
information objects. When passing information up or down the pipeline (or from<br />
the executive to the algorithm) this is the class to use.<br />
*'''vtkDataObject''':<br />
in the past this class both stored data and handled some of the<br />
pipeline logic. In the new pipeline this class is only supposed to store data. In<br />
practice there are some pipeline methods in the class for backwards compatibility<br />
so that all of VTK doesn’t break, but the goal is that vtkDataObject should only<br />
be about storing data. vtkDataObject has an instance of vtkInformation that can be<br />
used to store key-value pairs in. For example the current extent of the data object<br />
is stored in there but the whole extent is not, because that is a pipeline attribute<br />
containing information about a specific pipeline topology.<br />
*'''vtkAlgorithm''':<br />
an algorithm is the new superclass for all filters/sources/sinks in<br />
VTK. It is basically the replacement for vtkSource. Like vtkDataObject,<br />
vtkAlgorithm should know nothing about the pipeline and should only be an<br />
algorithm/function/filter. Call it with the correct arguments and it will produce<br />
results. It also has a vtkInformation instance that describes the properties of the<br />
algorithm and it has information objects that describe its input and output port<br />
characteristics. The main method of an algorithm is '''ProcessRequest'''.<br />
*'''vtkExecutive''':<br />
contains the logic of how to connect and execute a pipeline. This<br />
class is the superclass of all executives. Executives are distributed (as opposed to<br />
centralized) and each filter/algorithm has its own executive that communicates<br />
with other executives. vtkExecutive has a subclass called<br />
vtkDemandDrivenPipeline which in turn has a subclass called<br />
vtkStreamingDemandDrivenPipeline. vtkStreamingDemandDrivenPipeline<br />
provides most of the functionality that was found in the old VTK pipeline and is<br />
the default executive for all algorithms if you do not specify one.<br />
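The key idea behind vtkInformation, a map whose keys carry the type of their values, can be mimicked in a few lines of standard C++. This is a toy sketch with made-up names (the real class uses key objects such as vtkDataObject::SCALAR_TYPE() and supports many more value types):

```cpp
#include <map>
#include <string>

// A typed key: the template parameter records the value type at compile
// time, so Set/Get calls are type-checked.
template <typename T>
struct Key { const char* Name; };

// A heterogeneous key-value store, one internal map per value type.
class Information {
  std::map<std::string, int>         Ints;
  std::map<std::string, std::string> Strings;
public:
  void Set(Key<int> k, int v)                 { Ints[k.Name] = v; }
  void Set(Key<std::string> k, std::string v) { Strings[k.Name] = v; }
  int         Get(Key<int> k)         const { return Ints.at(k.Name); }
  std::string Get(Key<std::string> k) const { return Strings.at(k.Name); }
  bool Has(Key<int> k) const { return Ints.count(k.Name) != 0; }
};

// Hypothetical keys playing the role of VTK's information keys.
const Key<int>         SCALAR_TYPE{"SCALAR_TYPE"};
const Key<std::string> DATA_TYPE_NAME{"DATA_TYPE_NAME"};
```

Because the key's type selects the overload, `info.Get(SCALAR_TYPE)` returns an int and `info.Get(DATA_TYPE_NAME)` returns a string; mixing them up is a compile error rather than a runtime surprise.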
<br />
<br />
Let us first look at the vtkAlgorithm class. It may seem odd that a class with no notion of<br />
a pipeline has one key method called ProcessRequest. At its simplest, an algorithm has a<br />
basic function to take input data and produce output data. Requests for information or data<br />
flowing from input to output are called downstream requests; requests flowing from output to<br />
input are called upstream requests. Producing output data is a downstream request<br />
(specifically REQUEST_DATA) that all algorithms should implement. But algorithms<br />
can do more than just produce data; they also have characteristics or metadata that they<br />
can provide. For example, an algorithm can provide information about what type of<br />
output it will produce when you execute it. An imaging algorithm might only be capable<br />
of producing double results. The algorithm can specify this by responding to another<br />
down-stream request called REQUEST_INFORMATION. Consider the following code<br />
fragment:<br />
<source lang="cpp"><br />
int vtkMyAlgorithm::ProcessRequest(<br />
vtkInformation *request,<br />
vtkInformationVector **inputVector,<br />
vtkInformationVector *outputVector)<br />
{<br />
// generate the data<br />
if(request->Has(vtkDemandDrivenPipeline::REQUEST_INFORMATION()))<br />
{<br />
// specify that the output (only one for this filter) will be double<br />
vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
outInfo->Set(vtkDataObject::SCALAR_TYPE(),VTK_DOUBLE);<br />
return 1;<br />
}<br />
return this->Superclass::ProcessRequest(request, inputVector,outputVector);<br />
}<br />
</source><br />
This method takes three information objects as input. The first is the request which<br />
specifies what you are asking the algorithm to do. Typically this is just one key such as<br />
REQUEST_INFORMATION. The next two arguments are information vectors one for<br />
the inputs to this algorithm and one for the outputs of this algorithm. In the above<br />
example no input information was used. The output information vector was used to get<br />
the information object associated with the first output of this algorithm. Into that<br />
information was placed a key-value pair specifying that it would produce results of type<br />
double. Any requests that the algorithm doesn’t handle should be forwarded to the<br />
superclass.<br />
<br />
The pipeline topology in the new pipeline is a little different from the old one. In the new<br />
pipeline you connect the output port of one algorithm to the input port of another<br />
algorithm. For example,<br />
<source lang ="cpp"><br />
alg1->SetInputConnection( inPort, alg2->GetOutputPort( outPort ) );<br />
</source><br />
Using this terminology a port is like a pin on an integrated circuit. An algorithm has some<br />
input ports and some output ports. A connection is a “connection” between two ports. So<br />
to connect two algorithms you make a connection between the output port of one<br />
algorithm and the input port of another algorithm. Any input port in the new pipeline can<br />
be specified as 'repeatable' and/or 'optional':<br />
#'''Repeatable''' means that more than one connection can be made to this input port (such as for append filters). <br />
#'''Optional''' means that the input is not required for execution. <br />
<br />
While ports can be repeatable there is still a<br />
need for multiple ports. Your algorithm should have multiple ports when the concept,<br />
data type, or semantics of a port are different. So AppendFilter only needs one repeatable<br />
input port because it treats all of its inputs the same. Glyph in contrast should have two<br />
ports, one for the input points and one for the glyph model because these are two distinct<br />
concepts. Of course the old style of SetInput, GetOutput connections will work with<br />
existing algorithms as well. <br />
<br />
So in the new pipeline outputs are referred to by port number<br />
while inputs are referred to by both their port number and connection number (because a<br />
single input port can have more than one connection).<br />
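The port/connection bookkeeping described above might look like this in a simplified form (hypothetical types, not the actual VTK implementation, which stores this state in the executives):

```cpp
#include <cstddef>
#include <map>
#include <vector>

// A producer is identified by an algorithm and one of its output ports.
struct OutputPort { int AlgorithmId; int Port; };

class Connectivity {
  // input port number -> list of upstream (algorithm, output port) pairs;
  // a repeatable port simply accumulates several entries.
  std::map<int, std::vector<OutputPort>> Inputs;
public:
  void AddInputConnection(int inPort, OutputPort producer) {
    Inputs[inPort].push_back(producer);
  }
  std::size_t GetNumberOfInputConnections(int inPort) const {
    auto it = Inputs.find(inPort);
    return it == Inputs.end() ? 0 : it->second.size();
  }
  // An input is addressed by (port number, connection index).
  OutputPort GetInputConnection(int inPort, int index) const {
    return Inputs.at(inPort).at(index);
  }
};
```

This mirrors the asymmetry in the text: an output is named by port number alone, while an input needs both a port number and a connection index.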
<br />
==Typical Pipeline Execution==<br />
Let us take a quick look at the typical execution of a pipeline in VTK. <br />
* First the use instantiates an Algorithm such as vtkImageGradient, when the algorithm is instantiated it will automatically create a default executive and connect the algorithm and executive together. The algorithm will also call '''FillInputPortInformation''' and '''FillOutputPortInformation''' on each input and output port respectively. These two methods should setup the static characteristics of the input and output ports such as data type requirements and whether the port is optional or repeatable. For example, by default all subclasses of vtkImageAlgorithm are assumed to take an vtkImageData as input:<br />
<br />
<br />
<source lang="cpp"><br />
int vtkImageAlgorithm::FillInputPortInformation(int vtkNotUsed(port), vtkInformation*info)<br />
{<br />
info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkImageData");<br />
return 1;<br />
}<br />
</source><br />
<br />
<br />
This can be overridden by subclasses as required. <br />
* Once the algorithm and executive are instantiated they will be connected to other algorithm executive pairs. If the new SetInputConnection signature is used (and it should be used) then this just stores the connectivity information. The old VTK pipeline used the data objects to store connectivity and thus required that the data objects be instantiated prior to calling GetOutput. <br />
* Once the entire pipeline is instantiated and connected it will be executed (typically as the result of a Render() call) Typically the first request an algorithm will see is upon execution is REQUEST_DATA_OBJECT (see vtkExecutive, vtkDataObject and their subclasses for all the possible keys and requests.) This request asks the algorithm to create an instance of vtkDataObject (or appropriate subclass) for all of its output ports. If you handle this request you can set the output ports output data to be whatever type you want (see vtkDataSetAlgorithm for an example), if the algorithm does not handle the request then by default the executive will look at the output port information to see what type the output port has been set to (from FillOutputPortInformation) If this is a concrete type then it will instantiate an instance of that type and use it. If it isn’t concrete then (I can’t remember if it just fails with an error or has another fallback position)<br />
<br />
===Request Information===<br />
At this point the pipeline is instantiated, connected, and the data objects have been<br />
instantiated to store the data. The next request an algorithm will typically see is<br />
REQUEST_INFORMATION. This request asks the Algorithm to provide as much<br />
information as it can about what the output data will look like once the algorithm has<br />
generated it. Typically an algorithm will look at the information provided about its inputs<br />
and try to specify what it can about its outputs. In many image processing filters quite a<br />
bit can be specified such as the WHOLE_EXTENT, SCALAR_TYPE,<br />
SCALAR_NUMBER_OF_COMPONENTS, ORIGIN, SPACING, etc. The rule here is to<br />
''provide or compute as much information as you can without actually executing'' (or<br />
reading in the entire data file) ''and without taking up significant CPU time''. For example<br />
an image reader should read the header information from the file to get what information<br />
it can out of it, but it should not read in the entire image so that it can compute the scalar<br />
range of the data. When providing information about an output, an algorithm is not<br />
limited to the current information keys (such as WHOLE_EXTENT) that are provided by<br />
VTK. Part of the new pipeline design is that it can be easily extended. You can define<br />
your own keys and then in the REQUEST_INFORMATION request you can add those<br />
keys and their values to the output information objects. You can also specify that you<br />
want those keys to be passed down (or up) the pipeline by adding them to the<br />
KEYS_TO_COPY. That way you could have a specialized reader that populates the<br />
information with special keys and then have a writer or mapper downstream that uses<br />
those keys. <br />
<br />
By default executives will first copy the input’s information to the output’s<br />
information. You only need to handle the cases where it is different.<br />
<br />
===Request Update Extent===<br />
The next request is typically REQUEST_UPDATE_EXTENT. To fulfill this request an<br />
algorithm should take the update extents in its output’s information and then set the<br />
correct update extents in its input’s information. As with REQUEST_INFROMATION<br />
there is a default behavior by the executive and you only need to handle cases where it<br />
changes. (see vtkImageGradient for an example)<br />
===Request Data===<br />
Finally REQUEST_DATA will be called and the algorithm should fill in the output ports<br />
data objects.<br />
<br />
==Executives==<br />
<br />
The class hierarchy for the mainstream VTK executives is as follows.<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=BT<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
vtkCompositeDataPipeline -> vtkStreamingDemandDrivenPipeline -> vtkDemandDrivenPipeline -> vtkExecutive;<br />
} <br />
<br />
</graphviz><br />
</center><br />
Below, we provide some information about these executives that is not necessarily found in VTK User's Guide. If you are not familiar with VTK's pipeline execution model, you should read the Managing Pipeline Execution chapter in the book.<br />
<br />
=== vtkExecutive ===<br />
This class is the superclass for all executives. It is pretty abstract and provides little functionality. Important ones:<br />
* An executive has an algorithm<br />
* An executive manages (i.e. has) the input and output information objects of the algorithm. This means that the pipeline graph is stored by a set of executives not algorithms<br />
<br />
The most important function is vtkExecutive is ProcessRequest(). This method does two things:<br />
# Forward the request upstream (if the FORWARD_DIRECTION is vtkExecutive::RequestUpstream) or downstream (if the FORWARD_DIRECTION is vtkExecutive::RequestDownstream (note: this is not implemented).<br />
# Pass the request to the algorithm by calling CallAlgorithm() which calls ProcessRequest() on the algorithm. This can happen before and/or after the request is forwarding depending on whether ALGORITHM_BEFORE_FORWARD and/or ALGORITHM_AFTER_FORWARD is set.<br />
<br />
CallAlgorithm() calls CopyDefaultInformation() before passing the request to the algorithm. The goal of this function is to copy certain information (such as update requests or meta-information) from output to input (when the algorithm is invoked before forwarding - for example, in REQUEST_UPDATE_EXTENT) or from input to output (when the algorithm is invoked after forwarding - for example in REQUEST_INFORMATION).<br />
<br />
'''Note on centralized executives:''' This implementation does not allow us to use centralized executives (one executive that manages more than one algorithm) because the executive has an algorithm AND the executive stores the pipeline graph. However, it is possible to create a meta-executive (an executive to rule them all) that is centralized. This executive would have to manage the flow of information itself. This can be done by subclassing the executive class to disable forwarding. The centralized executive can the delegate the handling of each pass to the distributed executives but manage forwarding itself. <br />
<br />
=== vtkDemandDrivenPipeline ===<br />
<br />
This executive implements a demand-driven (pull) pipeline. It recognizes 3 passes in ProcessRequest():<br />
<br />
# REQUEST_DATA_OBJECT: This is where the algorithm is supposed to create its output data objects. It is a ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteDataObject() (a virtual member function). This first calls CallAlgorithm() and then CheckDataObject(). If CallAlgorithm() does not create output data objects, CheckDataObject() tries to create them based on a vtkDataObject::DATA_TYPE_NAME defined in the output port information.<br />
# REQUEST_INFORMATION: This is where the algorithm is supposed to provide meta-data. It is a ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteInformation() (a virtual member function). Note: For backwards compatibility purposes, ExecuteInformation() calls CopyInformationToPipeline() on the output data object. In the old pipeline, the meta-information was provided by setting it on the output data object. This copies such meta-information from the data objects to the output information.<br />
# REQUEST_DATA: This is where the algorithm is supposed to provide (heavy) data. It is a ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteData() (a virtual member function). ExecuteData() calls ExecuteDataStart(), CallAlgorithm() and ExecuteDataEnd(). ExecuteDataStart() takes case of initializing the outputs whereas ExecuteDataEnd() performs finalization such as marking the outputs as generated by calling DataHasBeenGenerated().<br />
<br />
Note that all of these passes make sure to skip execution if the required information was already generated and if nothing upstream changed.<br />
<br />
=== vtkStreamingDemandDrivenPipeline ===<br />
<br />
This executive adds support for [[VTK/Streaming | streaming]] to vtkDemandDrivenPipeline. Streaming is usually performed by processing a subset of a dataset in each invocation of the pipeline and accumulating the results. Currently, vtkStreamingDemandDrivenPipeline supports 3 ways of subsetting the data:<br />
<br />
# '''Extents''': Extents are only applicable to structured datasets. An extent is a logical subset of a structured dataset defined by providing min and max IJK values (VOI).<br />
# '''Pieces''': Pieces are only applicable to unstructured datasets. A piece is a subset of an unstructured dataset. What a "piece" means is determined by the producer of such dataset.<br />
# '''Time steps''': The pipeline can request a particular time step at each invocation for time series data.<br />
<br />
vtkStreamingDemandDrivenPipeline adds 2 new pipeline passes to support streaming:<br />
<br />
# REQUEST_UPDATE_EXTENT: This pass is where the consumer asks for a particular subset from its input. It is an ALGORITHM_BEFORE_FORWARD pass. This is where algorithms receive the extent/piece and time step requests from their consumer and copy or modify those requests before they are forwarded upstream. In this pass, if an algorithm produces unstructured data but consumes structured data, the executive uses an extent translator to automatically convert the piece request to an extent request.<br />
# REQUEST_UPDATE_EXTENT_INFORMATION: This optional pass is where meta-information specific to the particular subset is requested. It is an ALGORITHM_AFTER_FORWARD pass. This pass is commonly used for dynamic streaming, where the consumer fetches meta-information such as bounds or scalar range for a particular piece before deciding whether to update it.<br />
<br />
=== vtkCompositeDataPipeline ===<br />
<br />
This executive adds support for iterating over multiple blocks and/or time steps. This is best described using an example pipeline:<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> Contour -> Mapper;<br />
} <br />
</graphviz><br />
</center><br />
Here, the Ensight reader always produces a multi-block dataset whereas the contour filter can only handle vtkDataSet and subclasses. As a result, this pipeline would produce a run-time error if the executive is vtkStreamingDemandDrivenPipeline. vtkCompositeDataPipeline deals with this issue by looping over the leaf nodes of the multi-block dataset and performing a full pipeline invocation of the contour filter for each block. Similarly, when the pipeline is something like the following<br />
<br />
<center><br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> "Particle tracer" -> Mapper;<br />
} <br />
</graphviz><br />
</center><br />
the executive knows to invoke the reader for 2 time steps (because the particle tracer always needs 2 time steps for interpolation) and gathers the results in a vtkTemporalDataSet.<br />
<br />
vtkCompositeDataPipeline does all of this by overriding a significant portion of the execution mechanism when it needs to iterate over blocks and/or time steps.<br />
<br />
==Converting an Existing Filter to the New Pipeline==<br />
The best approach at this point is to find a VTK filter similar to your filter and copy it. An alternate approach is to follow the instructions below. The new pipeline implementation does include a backwards-compatibility layer for old filters; specifically, vtkProcessObject, vtkSource, and their related subclasses are still present and working. Most filters should work with the new pipeline without any changes. The most common problems with the backwards-compatibility layer involve filters that manipulate the pipeline. If your filter overrides UpdateData or UpdateInformation you will probably have to make some changes. If your filter uses an internal pipeline then you might also need to make some changes; otherwise you should be OK.<br />
<br />
Ideally, though, you would convert your filter to the new pipeline. There is a script that can help you convert your filter. The script doesn’t do everything, but it will get you going in the right direction. You can run the script on an existing class as follows:<br />
<source lang="bash"><br />
cd VTK/MYClasses<br />
cmake -D CLASS=vtkMyClass -P ../Utilities/Upgrading/NewPipeConvert.cmake<br />
</source><br />
One of the effects this script might have is to change the superclass of your class. There are some convenience superclasses that make writing algorithms a little easier. In the old pipeline there were classes such as vtkImageToImageFilter. Some of the classes designed for the new pipeline include:<br />
* vtkPolyDataAlgorithm: for algorithms that produce vtkPolyData<br />
* vtkImageAlgorithm: for algorithms that take and/or produce vtkImageData<br />
* vtkThreadedImageAlgorithm: a subclass of vtkImageAlgorithm that implements multithreading<br />
These classes have some defaults that can easily be changed in your subclass. The first default is that the subclass takes one input and produces one output. This is typically specified in the constructor using SetNumberOfInputPorts(1) and SetNumberOfOutputPorts(1). If your subclass doesn’t take an input, just call SetNumberOfInputPorts(0) in its constructor. Another assumption is that all input and output ports take vtkPolyData (for vtkPolyDataAlgorithm; vtkImageData for vtkImageAlgorithm, etc.). Again, your subclass can override this by providing its own implementation of FillInputPortInformation or FillOutputPortInformation.<br />
These superclasses typically provide an implementation of ProcessRequest that handles REQUEST_INFORMATION and REQUEST_DATA requests by invoking virtual functions called RequestInformation and RequestData. They also typically provide default implementations of RequestData that call the older-style ExecuteData functions to<br />
make converting your old filters easier.</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/New_Pipeline&diff=40761VTK/Tutorials/New Pipeline2011-06-14T19:43:35Z<p>Pratikm: /* Request Information */</p>
<hr />
<div><small>The introductory section has been blatantly copied from [http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 here]</small><br />
<br />
This section will introduce the classes and concepts used in the new VTK<br />
pipeline. You should be familiar with the very basic design of the old<br />
pipeline before delving into this. The new pipeline was designed to reduce complexity while at the same time providing more flexibility. In the old pipeline the pipeline<br />
functionality and mechanics were contained in the data object and filters. In the new<br />
pipeline this functionality is contained in a new class (and its subclasses) called<br />
'''vtkExecutive'''. <br />
<br />
==Introduction==<br />
There are four key classes that make up the new pipeline. They are:<br />
<br />
*'''vtkInformation''': the class that gives the new pipeline much of its flexibility to grow. Most of the methods and meta-information storage make use of this class. vtkInformation is a map-based data structure that supports heterogeneous key-value operations with compile-time type checking. There is also a vtkInformationVector class for storing vectors of information objects. When passing information up or down the pipeline (or from the executive to the algorithm), this is the class to use.<br />
*'''vtkDataObject''': in the past this class both stored data and handled some of the pipeline logic. In the new pipeline this class is only supposed to store data. In practice there are some pipeline methods in the class for backwards compatibility, so that all of VTK doesn’t break, but the goal is that vtkDataObject should only be about storing data. vtkDataObject has an instance of vtkInformation that can be used to store key-value pairs. For example, the current extent of the data object is stored there, but the whole extent is not, because the whole extent is a pipeline attribute containing information about a specific pipeline topology.<br />
*'''vtkAlgorithm''':<br />
an algorithm is the new superclass for all filters/sources/sinks in<br />
VTK. It is basically the replacement for vtkSource. Like vtkDataObject,<br />
vtkAlgorithm should know nothing about the pipeline and should only be an<br />
algorithm/function/filter. Call it with the correct arguments and it will produce<br />
results. It also has a vtkInformation instance that describes the properties of the<br />
algorithm and it has information objects that describe its input and output port<br />
characteristics. The main method of an algorithm is '''ProcessRequest'''.<br />
*'''vtkExecutive''':<br />
contains the logic of how to connect and execute a pipeline. This<br />
class is the superclass of all executives. Executives are distributed (as opposed to<br />
centralized) and each filter/algorithm has its own executive that communicates<br />
with other executives. vtkExecutive has a subclass called<br />
vtkDemandDrivenPipeline which in turn has a subclass called<br />
vtkStreamingDemandDrivenPipeline. vtkStreamingDemandDrivenPipeline<br />
provides most of the functionality that was found in the old VTK pipeline and is<br />
the default executive for all algorithms if you do not specify one.<br />
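Of the four classes above, vtkInformation's design (a map whose keys carry their value type) is the least familiar. The idea can be sketched outside VTK with only the standard library; the class and key names below (Information, InfoKey, and the SCALAR_TYPE/DATA_TYPE_NAME factory functions) are invented for illustration and are not the real VTK API:<br />

```cpp
#include <map>
#include <memory>
#include <string>

// Toy analogue of vtkInformation: a heterogeneous map whose keys carry
// their value type, so lookups are type-checked at compile time.
struct InfoValueBase {
  virtual ~InfoValueBase() = default;
};

template <typename T>
struct InfoValue : InfoValueBase {
  T Value;
  explicit InfoValue(T v) : Value(v) {}
};

// The key's template parameter fixes the type of the value it may hold.
template <typename T>
struct InfoKey {
  const char* Name;
};

class Information {
  std::map<std::string, std::shared_ptr<InfoValueBase>> Entries;

public:
  template <typename T>
  void Set(InfoKey<T> key, T value) {
    Entries[key.Name] = std::make_shared<InfoValue<T>>(value);
  }
  template <typename T>
  bool Has(InfoKey<T> key) const {
    return Entries.count(key.Name) != 0;
  }
  template <typename T>
  T Get(InfoKey<T> key) const {
    return static_cast<InfoValue<T>&>(*Entries.at(key.Name)).Value;
  }
};

// Invented keys standing in for keys like SCALAR_TYPE() or DATA_TYPE_NAME().
inline InfoKey<int> SCALAR_TYPE() { return {"SCALAR_TYPE"}; }
inline InfoKey<std::string> DATA_TYPE_NAME() { return {"DATA_TYPE_NAME"}; }
```

With typed keys, writing a string into SCALAR_TYPE() fails to compile rather than at run time, which is the point of the design.<br />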
<br />
<br />
Let us first look at the vtkAlgorithm class. It may seem odd that a class with no notion of<br />
a pipeline has one key method called ProcessRequest. At its simplest, an algorithm has a<br />
basic function to take input data and produce output data. This is a downstream request<br />
(specifically REQUEST_DATA) that all algorithms should implement. (Requests for<br />
information or data flowing from input to output are called downstream requests;<br />
requests flowing from output to input are called upstream requests.) But algorithms<br />
can do more than just produce data; they also have characteristics or metadata that they<br />
can provide. For example, an algorithm can provide information about what type of<br />
output it will produce when you execute it. An imaging algorithm might only be capable<br />
of producing double results. The algorithm can specify this by responding to another<br />
down-stream request called REQUEST_INFORMATION. Consider the following code<br />
fragment:<br />
<source lang="cpp"><br />
int vtkMyAlgorithm::ProcessRequest(<br />
  vtkInformation* request,<br />
  vtkInformationVector** inputVector,<br />
  vtkInformationVector* outputVector)<br />
{<br />
  // handle the REQUEST_INFORMATION pass<br />
  if (request->Has(vtkDemandDrivenPipeline::REQUEST_INFORMATION()))<br />
    {<br />
    // specify that the output (only one for this filter) will be double<br />
    vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
    outInfo->Set(vtkDataObject::SCALAR_TYPE(), VTK_DOUBLE);<br />
    return 1;<br />
    }<br />
  return this->Superclass::ProcessRequest(request, inputVector, outputVector);<br />
}<br />
</source><br />
This method takes three information objects as input. The first is the request which<br />
specifies what you are asking the algorithm to do. Typically this is just one key such as<br />
REQUEST_INFORMATION. The next two arguments are information vectors, one for<br />
the inputs to this algorithm and one for the outputs of this algorithm. In the above<br />
example no input information was used. The output information vector was used to get<br />
the information object associated with the first output of this algorithm. Into that<br />
information was placed a key-value pair specifying that it would produce results of type<br />
double. Any requests that the algorithm doesn’t handle should be forwarded to the<br />
superclass.<br />
<br />
The pipeline topology in the new pipeline is a little different from the old one. In the new<br />
pipeline you connect the output port of one algorithm to the input port of another<br />
algorithm. For example,<br />
<source lang="cpp"><br />
alg1->SetInputConnection(inPort, alg2->GetOutputPort(outPort));<br />
</source><br />
Using this terminology, a port is like a pin on an integrated circuit. An algorithm has some<br />
input ports and some output ports. A connection is a “connection” between two ports. So<br />
to connect two algorithms you make a connection between the output port of one<br />
algorithm and the input port of another algorithm. Any input port in the new pipeline can<br />
be specified as 'repeatable' and/or 'optional':<br />
#'''Repeatable''' means that more than one connection can be made to this input port (such as for append filters). <br />
#'''Optional''' means that the input is not required for execution. <br />
<br />
While ports can be repeatable there is still a<br />
need for multiple ports. Your algorithm should have multiple ports when the concept,<br />
data type, or semantics of a port are different. So AppendFilter only needs one repeatable<br />
input port because it treats all of its inputs the same. Glyph in contrast should have two<br />
ports, one for the input points and one for the glyph model because these are two distinct<br />
concepts. Of course the old style of SetInput, GetOutput connections will work with<br />
existing algorithms as well. <br />
<br />
So in the new pipeline outputs are referred to by port number,<br />
while inputs are referred to by both their port number and connection number (because a<br />
single input port can have more than one connection).<br />
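The port/connection bookkeeping described above can be sketched with a toy model (plain structs instead of VTK classes; ToyAlgorithm and OutputPort are invented names, not the real API). An output port is just a (producer, port number) pair, and a repeatable input port is simply a port whose connection list may hold several entries:<br />

```cpp
#include <cstddef>
#include <vector>

struct ToyAlgorithm;  // forward declaration

// An output port is identified by its producing algorithm and a port number.
struct OutputPort {
  ToyAlgorithm* Producer;
  int PortNumber;
};

struct ToyAlgorithm {
  // InputConnections[port] is the list of upstream output ports feeding it.
  std::vector<std::vector<OutputPort>> InputConnections;

  explicit ToyAlgorithm(int numInputPorts) : InputConnections(numInputPorts) {}

  OutputPort GetOutputPort(int port = 0) { return {this, port}; }

  // SetInputConnection replaces the connections on a port;
  // AddInputConnection appends (this is what repeatable ports allow).
  void SetInputConnection(int port, OutputPort upstream) {
    InputConnections[port] = {upstream};
  }
  void AddInputConnection(int port, OutputPort upstream) {
    InputConnections[port].push_back(upstream);
  }
  std::size_t GetNumberOfInputConnections(int port) const {
    return InputConnections[port].size();
  }
};
```

In this picture an append filter is one algorithm with a single repeatable port holding many connections, while a glyph filter has two ports (points and glyph source) that each hold one.<br />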
<br />
==Typical Pipeline Execution==<br />
Let us take a quick look at the typical execution of a pipeline in VTK. <br />
* First the user instantiates an Algorithm such as vtkImageGradient. When the algorithm is instantiated it will automatically create a default executive and connect the algorithm and executive together. The algorithm will also call '''FillInputPortInformation''' and '''FillOutputPortInformation''' on each input and output port respectively. These two methods should set up the static characteristics of the input and output ports, such as data type requirements and whether the port is optional or repeatable. For example, by default all subclasses of vtkImageAlgorithm are assumed to take a vtkImageData as input:<br />
<br />
<br />
<source lang="cpp"><br />
int vtkImageAlgorithm::FillInputPortInformation(int vtkNotUsed(port), vtkInformation* info)<br />
{<br />
info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkImageData");<br />
return 1;<br />
}<br />
</source><br />
<br />
<br />
This can be overridden by subclasses as required. <br />
* Once the algorithm and executive are instantiated they will be connected to other algorithm-executive pairs. If the new SetInputConnection signature is used (and it should be) then this just stores the connectivity information. The old VTK pipeline used the data objects to store connectivity and thus required that the data objects be instantiated prior to calling GetOutput. <br />
* Once the entire pipeline is instantiated and connected it will be executed (typically as the result of a Render() call). Typically the first request an algorithm will see<br />
upon execution is REQUEST_DATA_OBJECT (see vtkExecutive, vtkDataObject and their subclasses for all the possible keys and requests). This request asks the algorithm to<br />
create an instance of vtkDataObject (or an appropriate subclass) for each of its output ports. If you handle this request you can set the output port's output data to be whatever type you want (see vtkDataSetAlgorithm for an example). If the algorithm does not handle the request, then by default the executive will look at the output port information to see what type the output port has been set to (from FillOutputPortInformation). If this is a concrete type then the executive will instantiate an instance of that type and use it; if it is not concrete, the executive cannot create the output and execution fails with an error.<br />
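The executive's fallback (instantiating the output by its declared type name) can be sketched with a toy name-to-factory registry. Everything here is invented for illustration; the real mechanism lives in vtkDemandDrivenPipeline::CheckDataObject and VTK's object factories:<br />

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Toy stand-ins for vtkDataObject and one concrete subclass.
struct ToyDataObject {
  virtual ~ToyDataObject() = default;
  virtual std::string ClassName() const { return "ToyDataObject"; }
};
struct ToyPolyData : ToyDataObject {
  std::string ClassName() const override { return "vtkPolyData"; }
};

// Instantiate an output by the type name declared on the output port.
// Unknown (e.g. abstract) names yield nullptr: the executive cannot help.
std::unique_ptr<ToyDataObject> NewDataObjectByName(const std::string& name) {
  static const std::map<std::string,
                        std::function<std::unique_ptr<ToyDataObject>()>>
      registry = {
          {"vtkPolyData",
           [] { return std::unique_ptr<ToyDataObject>(new ToyPolyData); }},
      };
  auto it = registry.find(name);
  if (it == registry.end()) return nullptr;  // not a registered concrete type
  return it->second();
}
```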
<br />
===Request Information===<br />
At this point the pipeline is instantiated, connected, and the data objects have been<br />
instantiated to store the data. The next request an algorithm will typically see is<br />
REQUEST_INFORMATION. This request asks the Algorithm to provide as much<br />
information as it can about what the output data will look like once the algorithm has<br />
generated it. Typically an algorithm will look at the information provided about its inputs<br />
and try to specify what it can about its outputs. In many image processing filters quite a<br />
bit can be specified such as the WHOLE_EXTENT, SCALAR_TYPE,<br />
SCALAR_NUMBER_OF_COMPONENTS, ORIGIN, SPACING, etc. The rule here is to<br />
''provide or compute as much information as you can without actually executing'' (or<br />
reading in the entire data file) ''and without taking up significant CPU time''. For example<br />
an image reader should read the header information from the file to get what information<br />
it can out of it, but it should not read in the entire image so that it can compute the scalar<br />
range of the data. When providing information about an output, an algorithm is not<br />
limited to the current information keys (such as WHOLE_EXTENT) that are provided by<br />
VTK. Part of the new pipeline design is that it can be easily extended. You can define<br />
your own keys and then in the REQUEST_INFORMATION request you can add those<br />
keys and their values to the output information objects. You can also specify that you<br />
want those keys to be passed down (or up) the pipeline by adding them to the<br />
KEYS_TO_COPY. That way you could have a specialized reader that populates the<br />
information with special keys and then have a writer or mapper downstream that uses<br />
those keys. <br />
<br />
By default executives will first copy the input’s information to the output’s<br />
information. You only need to handle the cases where it is different.<br />
<br />
===Request Update Extent===<br />
The next request is typically REQUEST_UPDATE_EXTENT. To fulfill this request an<br />
algorithm should take the update extents in its output’s information and then set the<br />
correct update extents in its input’s information. As with REQUEST_INFORMATION<br />
there is a default behavior by the executive and you only need to handle cases where it<br />
changes (see vtkImageGradient for an example).<br />
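For instance, a filter that needs a one-voxel neighborhood (as vtkImageGradient does) must enlarge the requested extent before passing it upstream. A standalone sketch of that arithmetic, using VTK's {xmin, xmax, ymin, ymax, zmin, zmax} extent convention but none of its classes (the function name is invented for illustration):<br />

```cpp
#include <algorithm>

// Grow the consumer's requested output extent by one ghost layer on each
// axis (a gradient needs its neighbors) and clamp the result to the
// input's whole extent so we never ask for data that does not exist.
void ComputeInputUpdateExtent(const int outputUpdateExtent[6],
                              const int inputWholeExtent[6],
                              int inputUpdateExtent[6]) {
  for (int axis = 0; axis < 3; ++axis) {
    inputUpdateExtent[2 * axis] =
        std::max(outputUpdateExtent[2 * axis] - 1, inputWholeExtent[2 * axis]);
    inputUpdateExtent[2 * axis + 1] =
        std::min(outputUpdateExtent[2 * axis + 1] + 1,
                 inputWholeExtent[2 * axis + 1]);
  }
}
```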
===Request Data===<br />
Finally REQUEST_DATA will be called and the algorithm should fill in the output ports<br />
data objects.<br />
<br />
==Executives==<br />
<br />
The class hierarchy for the mainstream VTK executives is as follows.<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=BT<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
vtkCompositeDataPipeline -> vtkStreamingDemandDrivenPipeline -> vtkDemandDrivenPipeline -> vtkExecutive;<br />
} <br />
<br />
</graphviz><br />
<br />
Below, we provide some information about these executives that is not necessarily found in the VTK User's Guide. If you are not familiar with VTK's pipeline execution model, you should read the Managing Pipeline Execution chapter in the book.<br />
<br />
=== vtkExecutive ===<br />
This class is the superclass for all executives. It is fairly abstract and provides little functionality. The important points are:<br />
* An executive has an algorithm<br />
* An executive manages (i.e. has) the input and output information objects of the algorithm. This means that the pipeline graph is stored by a set of executives, not algorithms<br />
<br />
The most important function in vtkExecutive is ProcessRequest(). This method does two things:<br />
# Forward the request upstream (if the FORWARD_DIRECTION is vtkExecutive::RequestUpstream) or downstream (if the FORWARD_DIRECTION is vtkExecutive::RequestDownstream; note that this is not implemented).<br />
# Pass the request to the algorithm by calling CallAlgorithm(), which calls ProcessRequest() on the algorithm. This can happen before and/or after the request is forwarded, depending on whether ALGORITHM_BEFORE_FORWARD and/or ALGORITHM_AFTER_FORWARD is set.<br />
<br />
CallAlgorithm() calls CopyDefaultInformation() before passing the request to the algorithm. The goal of this function is to copy certain information (such as update requests or meta-information) from output to input (when the algorithm is invoked before forwarding - for example, in REQUEST_UPDATE_EXTENT) or from input to output (when the algorithm is invoked after forwarding - for example in REQUEST_INFORMATION).<br />
<br />
'''Note on centralized executives:''' This implementation does not allow us to use centralized executives (one executive that manages more than one algorithm) because the executive has an algorithm AND the executive stores the pipeline graph. However, it is possible to create a meta-executive (an executive to rule them all) that is centralized. This executive would have to manage the flow of information itself. This can be done by subclassing the executive class to disable forwarding. The centralized executive can then delegate the handling of each pass to the distributed executives but manage forwarding itself. <br />
<br />
=== vtkDemandDrivenPipeline ===<br />
<br />
This executive implements a demand-driven (pull) pipeline. It recognizes 3 passes in ProcessRequest():<br />
<br />
# REQUEST_DATA_OBJECT: This is where the algorithm is supposed to create its output data objects. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteDataObject() (a virtual member function). This first calls CallAlgorithm() and then CheckDataObject(). If CallAlgorithm() does not create output data objects, CheckDataObject() tries to create them based on the vtkDataObject::DATA_TYPE_NAME defined in the output port information.<br />
# REQUEST_INFORMATION: This is where the algorithm is supposed to provide meta-data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteInformation() (a virtual member function). Note: For backwards compatibility purposes, ExecuteInformation() calls CopyInformationToPipeline() on the output data object. In the old pipeline, meta-information was provided by setting it on the output data object; this call copies such meta-information from the data objects to the output information.<br />
# REQUEST_DATA: This is where the algorithm is supposed to provide (heavy) data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteData() (a virtual member function). ExecuteData() calls ExecuteDataStart(), CallAlgorithm() and ExecuteDataEnd(). ExecuteDataStart() takes care of initializing the outputs whereas ExecuteDataEnd() performs finalization such as marking the outputs as generated by calling DataHasBeenGenerated().<br />
<br />
Note that all of these passes make sure to skip execution if the required information was already generated and if nothing upstream changed.<br />
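The up-to-date check can be sketched with a toy modified-time comparison. The names below (ToySource, ToyFilter) are invented; VTK's real implementation uses vtkTimeStamp plus several request-specific checks, but the principle is the same: re-execute only when something upstream was modified after the output was last generated.<br />

```cpp
struct ToySource {
  unsigned long MTime = 1;           // bumped on every Modified()
  void Modified() { ++MTime; }
};

struct ToyFilter {
  ToySource* Input = nullptr;
  unsigned long DataGeneratedTime = 0;
  int ExecuteCount = 0;              // counts real executions, for demonstration

  void Update() {
    if (Input && Input->MTime > DataGeneratedTime) {
      ++ExecuteCount;                // REQUEST_DATA work would run here
      DataGeneratedTime = Input->MTime;
    }                                // otherwise: the cached output is reused
  }
};
```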
<br />
=== vtkStreamingDemandDrivenPipeline ===<br />
<br />
This executive adds support for [[VTK/Streaming | streaming]] to vtkDemandDrivenPipeline. Streaming is usually performed by processing a subset of a dataset in each invocation of the pipeline and accumulating the results. Currently, vtkStreamingDemandDrivenPipeline supports 3 ways of subsetting the data:<br />
<br />
# '''Extents''': Extents are only applicable to structured datasets. An extent is a logical subset of a structured dataset defined by providing min and max IJK values (VOI).<br />
# '''Pieces''': Pieces are only applicable to unstructured datasets. A piece is a subset of an unstructured dataset. What a "piece" means is determined by the producer of that dataset.<br />
# '''Time steps''': The pipeline can request a particular time step at each invocation for time series data.<br />
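When structured data sits upstream of a piece-oriented consumer, piece requests must be mapped onto extents. A rough stand-in for the splitting an extent translator performs (the real vtkExtentTranslator is more general, splitting in 3D; this invented function splits along one axis only) looks like:<br />

```cpp
// Map a (piece, numberOfPieces) streaming request onto a structured
// sub-extent along one axis. Extents are {min, max}, inclusive, as in VTK.
// Earlier pieces absorb the remainder, one extra point each, so the pieces
// tile the whole extent exactly without overlap.
void PieceToExtent(int piece, int numPieces, const int whole[2], int out[2]) {
  int size = whole[1] - whole[0] + 1;
  int base = size / numPieces;
  int extra = size % numPieces;
  int begin = whole[0] + piece * base + (piece < extra ? piece : extra);
  int count = base + (piece < extra ? 1 : 0);
  out[0] = begin;
  out[1] = begin + count - 1;
}
```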
<br />
vtkStreamingDemandDrivenPipeline adds 2 new pipeline passes to support streaming:<br />
<br />
# REQUEST_UPDATE_EXTENT: This pass is where the consumer asks for a particular subset from its input. It is an ALGORITHM_BEFORE_FORWARD pass. This is where algorithms receive extent/piece and time step requests from their consumer and copy or modify those requests upstream. In this pass, if an algorithm produces unstructured data but consumes structured data, the executive uses an extent translator to automatically convert the piece request to an extent request.<br />
# REQUEST_UPDATE_EXTENT_INFORMATION: This optional pass is where meta-information specific to the particular subset is requested. It is an ALGORITHM_AFTER_FORWARD pass. This pass is commonly used for dynamic streaming, where the consumer fetches meta-information such as bounds or scalar range for a particular piece before deciding whether to update it.<br />
<br />
=== vtkCompositeDataPipeline ===<br />
<br />
This executive adds support for iterating over multiple blocks and/or time steps. This is best described with an example. Consider the following pipeline:<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> Contour -> Mapper;<br />
} <br />
</graphviz><br />
<br />
Here, the Ensight reader always produces a multi-block dataset whereas the contour filter can only handle vtkDataSet and subclasses. As a result, this pipeline would produce a run-time error if the executive is vtkStreamingDemandDrivenPipeline. vtkCompositeDataPipeline deals with this issue by looping over the leaf nodes of the multi-block dataset and performing a full pipeline invocation of the contour filter for each block. Similarly, when the pipeline is something like the following<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> "Particle tracer" -> Mapper;<br />
} <br />
</graphviz><br />
<br />
the executive knows to invoke the reader for 2 time steps (because the particle tracer always needs 2 time steps for interpolation) and gather the results in a vtkTemporalDataSet.<br />
<br />
vtkCompositeDataPipeline does all of this by overriding a significant portion of the execution mechanism when it needs to iterate over blocks and/or time steps.<br />
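The looping strategy can be sketched with a toy stand-in: plain containers instead of vtkMultiBlockDataSet, and a callback instead of a real sub-pipeline invocation. The names and types here are invented for illustration, not the real executive:<br />

```cpp
#include <functional>
#include <vector>

// A "leaf block" stands in for a vtkDataSet, and a "multi-block dataset"
// is just a flat list of leaves in this sketch.
using Block = std::vector<double>;
using MultiBlock = std::vector<Block>;

// The composite executive's core idea: run the simple-dataset filter once
// per leaf block, collecting results into an output of the same shape.
MultiBlock ExecuteOverLeaves(const MultiBlock& input,
                             const std::function<Block(const Block&)>& filter) {
  MultiBlock output;
  output.reserve(input.size());
  for (const Block& leaf : input)   // one full pipeline invocation per leaf
    output.push_back(filter(leaf));
  return output;
}
```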
<br />
==Converting an Existing Filter to the New Pipeline==<br />
The best approach at this point is to find a VTK filter similar to your filter and then copy<br />
that. An alternate approach is to follow the instructions below. The new pipeline<br />
implementation does include a backwards compatibility layer for old filters. Specifically<br />
vtkProcessObject, vtkSource, and their related subclasses are still present and working.<br />
Most filters should work with the new pipeline without any changes. The most common<br />
problems with the backwards compatibility layer involve filters that manipulate the<br />
pipeline. If your filter overrides UpdateData or UpdateInformation you will probably<br />
have to make some changes. If your filter uses an internal pipeline then you might need<br />
to make some changes; otherwise you should be OK.<br />
<br />
Now ideally you would convert your filter to the new pipeline. There is a script that you<br />
can run that will help you to convert your filter. The script doesn’t do everything but it<br />
will help get you going in the right direction. You can run the script on an existing class<br />
as follows:<br />
 cd VTK/MYClasses<br />
 cmake -D CLASS=vtkMyClass -P ../Utilities/Upgrading/NewPipeConvert.cmake<br />
One of the effects this script might have is to change the superclass of your class. There<br />
are some convenience superclasses to make writing algorithms a little easier. In the old<br />
pipeline there were classes such as vtkImageToImageFilter. Some of the classes designed<br />
for the new pipeline include:<br />
*'''vtkPolyDataAlgorithm''': for algorithms that produce vtkPolyData<br />
*'''vtkImageAlgorithm''': for algorithms that take and/or produce vtkImageData<br />
*'''vtkThreadedImageAlgorithm''': a subclass of vtkImageAlgorithm that implements multithreading<br />
These classes have some defaults that can be easily changed in your subclass. The first<br />
default is that the subclass will take one input and produce one output. This is typically<br />
specified in the constructor using SetNumberOfInputPorts(1) and<br />
SetNumberOfOutputPorts(1). If your subclass doesn’t take an input then in its constructor<br />
just call SetNumberOfInputPorts(0). Another assumption that is made is that all the input<br />
ports and output ports take vtkPolyData (for vtkPolyDataAlgorithm; vtkImageData for<br />
vtkImageAlgorithm, etc.). Again your subclass can override this by providing its own<br />
implementation of FillInputPortInformation or FillOutputPortInformation.<br />
These superclasses typically provide an implementation of ProcessRequest that handles<br />
the REQUEST_INFORMATION and REQUEST_DATA requests by invoking virtual<br />
functions called RequestInformation and RequestData. They also typically provide<br />
default implementations of RequestData that call the older-style ExecuteData functions to<br />
make converting your old filters easier.</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/New_Pipeline&diff=40760VTK/Tutorials/New Pipeline2011-06-14T19:42:47Z<p>Pratikm: /* Typical Pipeline Execution */</p>
<hr />
<div><small>The introductory section has been blatantly copied from [http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 here]</small><br />
<br />
This section will introduce the classes and concepts used in the new VTK<br />
pipeline. You should be familiar with the very basic design of the old<br />
pipeline before delving into this. The new pipeline was designed to reduce complexity<br />
while at the same time provide more flexibility. In the old pipeline the pipeline<br />
functionality and mechanics were contained in the data object and filters. In the new<br />
pipeline this functionality is contained in a new class (and its subclasses) called<br />
'''vtkExecutive'''. <br />
<br />
==Introduction==<br />
There are four key classes that make up the new pipeline. They are:<br />
<br />
*'''vtkInformation''': <br />
provides the flexibility to grow. Most of the methods and meta<br />
information storage make use of this class. vtkInformation is a map-based data<br />
structure that supports heterogeneous key-value operations with compile time<br />
type checking. There is also a vtkInformationVector class for storing vectors of<br />
information objects. When passing information up or down the pipeline (or from<br />
the executive to the algorithm) this is the class to use.<br />
*'''vtkDataObject''':<br />
in the past this class both stored data and handled some of the<br />
pipeline logic. In the new pipeline this class is only supposed to store data. In<br />
practice there are some pipeline methods in the class for backwards compatibility<br />
so that all of VTK doesn’t break, but the goal is that vtkDataObject should only<br />
be about storing data. vtkDataObject has an instance of vtkInformation that can be<br />
used to store key-value pairs in. For example the current extent of the data object<br />
is stored in there but the whole extent is not, because that is a pipeline attribute<br />
containing information about a specific pipeline topology .<br />
*'''vtkAlgorithm''':<br />
an algorithm is the new superclass for all filters/sources/sinks in<br />
VTK. It is basically the replacement for vtkSource. Like vtkDataObject,<br />
vtkAlgorithm should know nothing about the pipeline and should only be an<br />
algorithm/function/filter. Call it with the correct arguments and it will produce<br />
results. It also has a vtkInformation instance that describes the properties of the<br />
algorithm and it has information objects that describe its input and output port<br />
characteristics. The main method of an algorithm is '''ProcessRequest'''.<br />
*'''vtkExecutive''':<br />
contains the logic of how to connect and execute a pipeline. This<br />
class is the superclass of all executives. Executives are distributed (as opposed to<br />
centralized) and each filter/algorithm has its own executive that communicates<br />
with other executives. vtkExecutive has a subclass called<br />
vtkDemandDrivenPipeline which in turn has a subclass called<br />
vtkStreamingDemandDrivenPipeline. vtkStreamingDemandDrivenPipeline<br />
provides most of the functionality that was found in the old VTK pipeline and is<br />
the default executive for all algorithms if you do not specify one.<br />
<br />
<br />
Let us first look at the vtkAlgorithm class. It may seem odd that a class with no notion of<br />
a pipeline has one key method called ProcessRequest. At its simplest, an algorithm has a<br />
basic function to take input data and produce output data. This is a down-stream ( Requests for information or data <br />
flowing from input to output are called downstream requests. Requests for information or data flowing from output to <br />
input are called upstream requests.) request<br />
(specifically REQUEST_DATA) that all algorithms should implement. But algorithms<br />
can do more than just produce data; they also have characteristics or metadata that they<br />
can provide. For example, an algorithm can provide information about what type of<br />
output it will produce when you execute it. An imaging algorithm might only be capable<br />
of producing double results. The algorithm can specify this by responding to another<br />
down-stream request called REQUEST_INFORMATION. Consider the following code<br />
fragment:<br />
<source lang="cpp"><br />
int vtkMyAlgorithm::ProcessRequest(<br />
vtkInformation *request,<br />
vtkInformationVector **inputVector,<br />
vtkInformationVector *outputVector)<br />
{<br />
// generate the data<br />
if(request->Has(vtkDemandDrivenPipeline::REQUEST_INFORMATION()))<br />
{<br />
// specify that the output (only one for this filter) will be double<br />
vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
outInfo->Set(vtkDataObject::SCALAR_TYPE(),VTK_DOUBLE);<br />
return 1;<br />
}<br />
return this->Superclass::ProcessRequest(request, inputVector,outputVector);<br />
}<br />
</source><br />
This method takes three information objects as input. The first is the request which<br />
specifies what you are asking the algorithm to do. Typically this is just one key such as<br />
REQUEST_INFORMATION. The next two arguments are information vectors one for<br />
the inputs to this algorithm and one for the outputs of this algorithm. In the above<br />
example no input information was used. The output information vector was used to get<br />
the information object associated with the first output of this algorithm. Into that<br />
information was placed a key-value pair specifying that it would produce results of type<br />
double. Any requests that the algorithm doesn’t handle should be forwarded to the<br />
superclass.<br />
<br />
The pipeline topology in the new pipeline is a little different from the old one. In the new<br />
pipeline you connect the output port of one algorithm to the input port of another<br />
algorithm. For example,<br />
<source lang ="cpp"><br />
alg1->SetInputConnection( inPort, alg2->GetOutputPort( outPort ))<br />
</source><br />
Using this terminology a port is like a pin on an integrated circuit. An algorithm has some<br />
input ports and some output ports. A connection is a “connection” between two ports. So<br />
to connect two algorithms you make a connection between the output port of one<br />
algorithm and the input port of another algorithm. Any input port in the new pipeline can<br />
be specified as 'repeatable' and or 'optional':<br />
#'''Repeatable''' means that more than one connection can be made to this input port (such as for append filters). <br />
#'''Optional''' means that the input is not required for execution. <br />
<br />
While ports can be repeatable there is still a<br />
need for multiple ports. Your algorithm should have multiple ports when the concept,<br />
data type, or semantics of a port are different. So AppendFilter only needs one repeatable<br />
input port because it treats all of its inputs the same. Glyph in contrast should have two<br />
ports, one for the input points and one for the glyph model because these are two distinct<br />
concepts. Of course the old style of SetInput, GetOutput connections will work with<br />
existing algorithms as well. <br />
<br />
So in the new pipeline outputs are referred to by port number<br />
while inputs are referred to by both their port number and connection number (because a<br />
single input port can have more than one connection)<br />
<br />
==Typical Pipeline Execution==<br />
Let us take a quick look at the typical execution of a pipeline in VTK. <br />
* First the use instantiates an Algorithm such as vtkImageGradient, when the algorithm is instantiated it will automatically create a default executive and connect the algorithm and executive together. The algorithm will also call '''FillInputPortInformation''' and '''FillOutputPortInformation''' on each input and output port respectively. These two methods should setup the static characteristics of the input and output ports such as data type requirements and whether the port is optional or repeatable. For example, by default all subclasses of vtkImageAlgorithm are assumed to take an vtkImageData as input:<br />
<br />
<br />
<source lang="cpp"><br />
int vtkImageAlgorithm::FillInputPortInformation(int vtkNotUsed(port), vtkInformation*info)<br />
{<br />
info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkImageData");<br />
return 1;<br />
}<br />
</source><br />
<br />
<br />
This can be overridden by subclasses as required. <br />
* Once the algorithm and executive are instantiated they will be connected to other algorithm executive pairs. If the new SetInputConnection signature is used (and it should be used) then this just stores the connectivity information. The old VTK pipeline used the data objects to store connectivity and thus required that the data objects be instantiated prior to calling GetOutput. <br />
* Once the entire pipeline is instantiated and connected it will be executed (typically as the result of a Render() call) Typically the first request an algorithm will see<br />
is upon execution is REQUEST_DATA_OBJECT (see vtkExecutive, vtkDataObject and their subclasses for all the possible keys and requests.) This request asks the algorithm to<br />
create an instance of vtkDataObject (or appropriate subclass) for all of its output ports. If you handle this request you can set the output ports output data to be whatever type you want (see vtkDataSetAlgorithm for an example), if the algorithm does not handle the request then by default the executive will look at the output port information to see what type the output port has been set to (from FillOutputPortInformation) If this is a concrete type then it will instantiate an instance of that type and use it. If it isn’t concrete then (I can’t remember if it just fails with an error or has another fallback position)<br />
<br />
===Request Information===<br />
At this point the pipeline is instantiated, connected, and the data objects have been<br />
instantiated to store the data. The next request an algorithm will typically see is<br />
REQUEST_INFORMATION. This request asks the Algorithm to provide as much<br />
information as it can about what the output data will look like once the algorithm has<br />
generated it. Typically an algorithm will look at the information provided about its inputs<br />
and try to specify what it can about its outputs. In many image processing filters quite a<br />
bit can be specified such as the WHOLE_EXTENT, SCALAR_TYPE,<br />
SCALAR_NUMBER_OF_COMPONENTS, ORIGIN, SPACING, etc. The rule here is to<br />
''provide or compute as much information as you can without actually executing (or<br />
reading in the entire data file) and without taking up significant CPU time''. For example<br />
an image reader should read the header information from the file to get what information<br />
it can out of it, but it should not read in the entire image so that it can compute the scalar<br />
range of the data. When providing information about an output, an algorithm is not<br />
limited to the current information keys (such as WHOLE_EXTENT) that are provided by<br />
VTK. Part of the new pipeline design is that it can be easily extended. You can define<br />
your own keys and then in the REQUEST_INFORMATION request you can add those<br />
keys and their values to the output information objects. You can also specify that you<br />
want those keys to be passed down (or up) the pipeline by adding them to the<br />
KEYS_TO_COPY. That way you could have a specialized reader that populates the<br />
information with special keys and then have a writer or mapper downstream that uses<br />
those keys. <br />
<br />
By default executives will first copy the input’s information to the output’s<br />
information. You only need to handle the cases where it is different.<br />
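The key-copying mechanism can be sketched with a small stand-alone example. This is plain C++ with invented names (InfoMap, CopyKeysDownstream); it is not the real VTK API, which uses vtkInformation objects and vtkInformationKey subclasses, but the propagation logic is analogous:<br />

```cpp
#include <map>
#include <set>
#include <string>

// Toy stand-in for vtkInformation: a string-keyed map of values.
using InfoMap = std::map<std::string, std::string>;

// Keys listed in keysToCopy are propagated downstream by the "executive",
// mimicking what registering a key in KEYS_TO_COPY requests in VTK.
void CopyKeysDownstream(const InfoMap& input, InfoMap& output,
                        const std::set<std::string>& keysToCopy)
{
  for (const auto& key : keysToCopy)
  {
    auto it = input.find(key);
    if (it != input.end())
    {
      output[it->first] = it->second;  // propagate the key/value pair
    }
  }
}
```

A reader could set a custom key such as MY_READER_UNITS (an invented key name) in its output information, and a downstream writer would find it after the executive performs this copy.<br />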
===Request Update Extent===<br />
The next request is typically REQUEST_UPDATE_EXTENT. To fulfill this request an<br />
algorithm should take the update extents in its output’s information and then set the<br />
correct update extents in its input’s information. As with REQUEST_INFORMATION<br />
there is a default behavior by the executive and you only need to handle cases where it<br />
changes. (see vtkImageGradient for an example)<br />
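As a sketch of what such a handler computes, consider a gradient-like filter that needs one ghost layer of voxels around whatever region it is asked to produce. This is a stand-alone illustration (the name InputUpdateExtent and the use of std::array are inventions for this sketch, not vtkImageGradient's actual code):<br />

```cpp
#include <algorithm>
#include <array>

// Extents follow VTK's convention: {xmin, xmax, ymin, ymax, zmin, zmax}.
using Extent = std::array<int, 6>;

// A gradient-like filter needs one ghost voxel around the requested
// output region, clamped to the data that actually exists (the whole extent).
Extent InputUpdateExtent(const Extent& outputUpdate, const Extent& wholeExtent)
{
  Extent in = outputUpdate;
  for (int axis = 0; axis < 3; ++axis)
  {
    in[2 * axis]     = std::max(outputUpdate[2 * axis] - 1, wholeExtent[2 * axis]);
    in[2 * axis + 1] = std::min(outputUpdate[2 * axis + 1] + 1, wholeExtent[2 * axis + 1]);
  }
  return in;
}
```

In the real pipeline this computed extent would be stored as the UPDATE_EXTENT key in the input information during REQUEST_UPDATE_EXTENT.<br />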
===Request Data===<br />
Finally REQUEST_DATA will be called and the algorithm should fill in the output ports<br />
data objects.<br />
<br />
==Executives==<br />
<br />
The class hierarchy for the mainstream VTK executives is as follows.<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=BT<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
vtkCompositeDataPipeline -> vtkStreamingDemandDrivenPipeline -> vtkDemandDrivenPipeline -> vtkExecutive;<br />
} <br />
<br />
</graphviz><br />
<br />
Below, we provide some information about these executives that is not necessarily found in the VTK User's Guide. If you are not familiar with VTK's pipeline execution model, you should read the Managing Pipeline Execution chapter in the book.<br />
<br />
=== vtkExecutive ===<br />
This class is the superclass for all executives. It is fairly abstract and provides little functionality. The important points are:<br />
* An executive has an algorithm.<br />
* An executive manages (i.e. has) the input and output information objects of the algorithm. This means that the pipeline graph is stored by the set of executives, not by the algorithms.<br />
<br />
The most important function in vtkExecutive is ProcessRequest(). This method does two things:<br />
# Forward the request upstream (if the FORWARD_DIRECTION is vtkExecutive::RequestUpstream) or downstream (if the FORWARD_DIRECTION is vtkExecutive::RequestDownstream; note that the downstream direction is not implemented).<br />
# Pass the request to the algorithm by calling CallAlgorithm(), which calls ProcessRequest() on the algorithm. This can happen before and/or after the request is forwarded, depending on whether ALGORITHM_BEFORE_FORWARD and/or ALGORITHM_AFTER_FORWARD is set.<br />
<br />
CallAlgorithm() calls CopyDefaultInformation() before passing the request to the algorithm. The goal of this function is to copy certain information (such as update requests or meta-information) from output to input (when the algorithm is invoked before forwarding - for example, in REQUEST_UPDATE_EXTENT) or from input to output (when the algorithm is invoked after forwarding - for example in REQUEST_INFORMATION).<br />
<br />
'''Note on centralized executives:''' This implementation does not allow us to use centralized executives (one executive that manages more than one algorithm) because the executive has an algorithm AND the executive stores the pipeline graph. However, it is possible to create a meta-executive (an executive to rule them all) that is centralized. This executive would have to manage the flow of information itself. This can be done by subclassing the executive class to disable forwarding. The centralized executive can then delegate the handling of each pass to the distributed executives but manage forwarding itself. <br />
<br />
=== vtkDemandDrivenPipeline ===<br />
<br />
This executive implements a demand-driven (pull) pipeline. It recognizes 3 passes in ProcessRequest():<br />
<br />
# REQUEST_DATA_OBJECT: This is where the algorithm is supposed to create its output data objects. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteDataObject() (a virtual member function). This first calls CallAlgorithm() and then CheckDataObject(). If CallAlgorithm() does not create output data objects, CheckDataObject() tries to create them based on the vtkDataObject::DATA_TYPE_NAME defined in the output port information.<br />
# REQUEST_INFORMATION: This is where the algorithm is supposed to provide meta-data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteInformation() (a virtual member function). Note: for backwards compatibility, ExecuteInformation() calls CopyInformationToPipeline() on the output data object. In the old pipeline, meta-information was provided by setting it on the output data object; this call copies such meta-information from the data objects to the output information.<br />
# REQUEST_DATA: This is where the algorithm is supposed to provide (heavy) data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteData() (a virtual member function). ExecuteData() calls ExecuteDataStart(), CallAlgorithm() and ExecuteDataEnd(). ExecuteDataStart() takes care of initializing the outputs whereas ExecuteDataEnd() performs finalization such as marking the outputs as generated by calling DataHasBeenGenerated().<br />
<br />
Note that all of these passes make sure to skip execution if the required information was already generated and if nothing upstream changed.<br />
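That skip logic can be modelled in a few lines. The sketch below is a toy (ToyFilter and its fields are invented names; real VTK compares vtkMTimeType time stamps through the executives), but it shows the comparison that decides whether REQUEST_DATA actually runs again:<br />

```cpp
// Toy model of vtkDemandDrivenPipeline's "skip execution if up to date" rule.
struct ToyFilter
{
  unsigned long MTime = 1;        // last time the filter was modified
  unsigned long ExecuteTime = 0;  // last time it produced its data
  ToyFilter* Input = nullptr;     // upstream filter, if any
  int Executions = 0;             // how many times "REQUEST_DATA" really ran

  void Update(unsigned long& now)
  {
    if (Input) { Input->Update(now); }  // forward the request upstream first
    unsigned long upstream = Input ? Input->ExecuteTime : 0;
    // Re-execute only if this filter, or its input's data, is newer than
    // the last execution; otherwise the cached output is reused.
    if (ExecuteTime < MTime || ExecuteTime < upstream)
    {
      ++Executions;
      ExecuteTime = ++now;
    }
  }
};
```

Calling Update() twice in a row executes the chain only once; touching an upstream filter's modification time makes the whole chain re-execute on the next Update().<br />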
<br />
=== vtkStreamingDemandDrivenPipeline ===<br />
<br />
This executive adds support for [[VTK/Streaming | streaming]] to vtkDemandDrivenPipeline. Streaming is usually performed by processing a subset of a dataset in each invocation of the pipeline and accumulating the results. Currently, vtkStreamingDemandDrivenPipeline supports 3 ways of subsetting the data:<br />
<br />
# '''Extents''': Extents are only applicable to structured datasets. An extent is a logical subset of a structured dataset defined by providing min and max IJK values (VOI).<br />
# '''Pieces''': Pieces are only applicable to unstructured datasets. A piece is a subset of an unstructured dataset. What a "piece" means is determined by the producer of that dataset.<br />
# '''Time steps''': The pipeline can request a particular time step at each invocation for time series data.<br />
<br />
vtkStreamingDemandDrivenPipeline adds 2 new pipeline passes to support streaming:<br />
<br />
# REQUEST_UPDATE_EXTENT: This pass is where the consumer asks for a particular subset from its input. It is an ALGORITHM_BEFORE_FORWARD pass. This is where algorithms receive extent/piece and time step requests from their consumers and copy or modify these requests upstream. In this pass, if an algorithm produces unstructured data but consumes structured data, the executive uses an extent translator to automatically convert the piece request into an extent request.<br />
# REQUEST_UPDATE_EXTENT_INFORMATION: This optional pass is where meta-information specific to the particular subset is requested. It is an ALGORITHM_AFTER_FORWARD pass. This pass is commonly used for dynamic streaming, where the consumer fetches meta-information such as bounds or scalar range for a particular piece before deciding whether to update it.<br />
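The piece-to-extent conversion can be illustrated with a toy translator that slabs the whole extent along one axis. This is an invented simplification (PieceToExtent is not VTK's vtkExtentTranslator, which splits more cleverly), but it shows the kind of mapping the executive performs:<br />

```cpp
#include <array>

// Extents follow VTK's convention: {xmin, xmax, ymin, ymax, zmin, zmax}.
using Extent = std::array<int, 6>;

// Toy extent translator: convert "piece p of n" into a structured
// sub-extent by slabbing the whole extent along the x axis.
Extent PieceToExtent(int piece, int numPieces, const Extent& whole)
{
  Extent e = whole;
  int size = whole[1] - whole[0] + 1;
  e[0] = whole[0] + (size * piece) / numPieces;            // slab start
  e[1] = whole[0] + (size * (piece + 1)) / numPieces - 1;  // slab end
  return e;
}
```

The slabs produced for pieces 0..n-1 are disjoint and together cover the whole extent, which is the property a translator must guarantee.<br />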
<br />
=== vtkCompositeDataPipeline ===<br />
<br />
This executive adds support for iterating over multiple blocks and/or time steps. This is best described using an example pipeline such as the following:<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> Contour -> Mapper;<br />
} <br />
</graphviz><br />
<br />
Here, the Ensight reader always produces a multi-block dataset whereas the contour filter can only handle vtkDataSet and subclasses. As a result, this pipeline would produce a run-time error if the executive is vtkStreamingDemandDrivenPipeline. vtkCompositeDataPipeline deals with this issue by looping over the leaf nodes of the multi-block dataset and performing a full pipeline invocation of the contour filter for each block. Similarly, when the pipeline is something like the following<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> "Particle tracer" -> Mapper;<br />
} <br />
</graphviz><br />
<br />
the executive knows to invoke the reader for 2 time steps (because the particle tracer always needs 2 time steps for interpolation) and to gather the results in a vtkTemporalDataSet.<br />
<br />
vtkCompositeDataPipeline does all of this by overriding a significant portion of the execution mechanism when it needs to iterate over blocks and/or time steps.<br />
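The block-iteration idea reduces to a simple loop. In this invented sketch a multi-block dataset is just a vector of leaf datasets and the "executive" reruns a per-dataset filter once per leaf, collecting the results into a new composite (real vtkCompositeDataPipeline does much more bookkeeping than this):<br />

```cpp
#include <functional>
#include <vector>

// Toy stand-ins: a leaf dataset is a vector of values, and a
// "multi-block" dataset is a vector of leaves.
using Leaf = std::vector<double>;
using MultiBlock = std::vector<Leaf>;

// Re-run a simple-dataset filter once per leaf block, gathering the
// per-block outputs into a composite result.
MultiBlock ExecutePerBlock(const MultiBlock& input,
                           const std::function<Leaf(const Leaf&)>& filter)
{
  MultiBlock output;
  output.reserve(input.size());
  for (const Leaf& block : input)
  {
    output.push_back(filter(block));  // one full invocation per leaf
  }
  return output;
}
```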
<br />
==Converting an Existing Filter to the New Pipeline==<br />
The best approach at this point is to find a VTK filter similar to your filter and then copy<br />
that. An alternate approach is to follow the instructions below. The new pipeline<br />
implementation does include a backwards compatibility layer for old filters. Specifically<br />
vtkProcessObject, vtkSource, and their related subclasses are still present and working.<br />
Most filters should work with the new pipeline without any changes. The most common<br />
problems with the backwards compatibility layer involve filters that manipulate the<br />
pipeline. If your filter overrides UpdateData or UpdateInformation you will probably<br />
have to make some changes. If your filter uses an internal pipeline then you might need<br />
to make some changes, otherwise you should be OK.<br />
Now ideally you would convert your filter to the new pipeline. There is a script that you<br />
can run that will help you to convert your filter. The script doesn’t do everything but it<br />
will help get you going in the right direction. You can run the script on an existing class<br />
as follows:<br />
 cd VTK/MyClasses<br />
 cmake -D CLASS=vtkMyClass -P ../Utilities/Upgrading/NewPipeConvert.cmake<br />
One of the effects this script might have is to change the superclass of your class. There<br />
are some convenience superclasses to make writing algorithms a little easier. In the old<br />
pipeline there were classes such as vtkImageToImageFilter. Some of the classes designed<br />
for the new pipeline include:<br />
* vtkPolyDataAlgorithm: for algorithms that produce vtkPolyData<br />
* vtkImageAlgorithm: for algorithms that take and/or produce vtkImageData<br />
* vtkThreadedImageAlgorithm: a subclass of vtkImageAlgorithm that implements multithreading<br />
These classes have some defaults that can be easily changed in your subclass. The first<br />
default is that the subclass will take one input and produce one output. This is typically<br />
specified in the constructor using SetNumberOfInputPorts(1) and<br />
SetNumberOfOutputPorts(1). If your subclass doesn’t take an input then in its constructor<br />
just call SetNumberOfInputPorts(0). Another assumption that is made is that all the input<br />
ports and output ports take the superclass's default data type (vtkPolyData for vtkPolyDataAlgorithm, vtkImageData for<br />
vtkImageAlgorithm, etc.). Again your subclass can override this by providing its own<br />
implementation of FillInputPortInformation or FillOutputPortInformation.<br />
These superclasses typically provide an implementation of ProcessRequest that handles<br />
REQUEST_INFORMATION and REQUEST_DATA requests by invoking virtual<br />
functions called RequestData and RequestInformation. They also typically provide<br />
default implementations of RequestData that call the older style ExecuteData functions to<br />
make converting your old filters easier.</div>
Pratikm
https://public.kitware.com/Wiki/index.php?title=VTK/Tutorials&diff=40759
VTK/Tutorials (2011-06-14T19:37:37Z)
<p>Pratikm: /* Tutorials */</p>
<hr />
<div>==System Configuration/General Information==<br />
* [[VTK/Git | Obtaining VTK using Git]]<br />
* [[VTK/Tutorials/CMakeListsFile | Typical CMakeLists.txt file]]<br />
* [[VTK/Tutorials/CMakeListsFileForQt4 | Typical CMakeLists.txt file for Qt4]]<br />
* [[VTK/Tutorials/LinuxEnvironmentSetup | Linux Environment Setup]]<br />
* [[VTK/Tutorials/WindowsEnvironmentSetup | Microsoft Windows Environment Setup]]<br />
* [[VTK/Tutorials/PythonEnvironmentSetup | Python environment setup]]<br />
* [[VTK/Tutorials/JavaEnvironmentSetup | Java environment setup]]<br />
* [[VTK/Tutorials/SQLSetup | SQL setup]]<br />
<br />
==Basics==<br />
* [[VTK/Tutorials/SmartPointers | Smart pointers]]<br />
* [[VTK/Tutorials/VtkIdType | vtkIdType]]<br />
* [[VTK/Tutorials/3DDataTypes | 3D Data Types]] - A brief outline of the data types that VTK offers for 3D data storage.<br />
* [[VTK/Tutorials/MemberVariables | Non-SmartPointer Template Member Variable]]<br />
* [[VTK/Tutorials/Extents | Extents]] - A powerful indexing method.<br />
* [[VTK/Tutorials/GeometryTopology | Geometry & Topology]] - Demonstrates 0, 1, and 2D topology on a triangle geometry.<br />
* [[VTK/Tutorials/VTK Terminology | Terminology]] - What is a mapper? What is an actor? What is a filter? What is a source?<br />
* [[VTK/Tutorials/DataStorage | Field data, cell data, and point data]] - What are these? When should I use them?<br />
<br />
==Tutorials==<br />
* [[VTK/Tutorials/New_Pipeline | The New VTK Pipeline]] - The New Pipeline model and VTK Executives<br />
* [[VTK/Tutorials/Callbacks | Callbacks]] - Handling events produced by VTK<br />
* [[VTK/Tutorials/InteractorStyleSubclass | vtkInteractorStyle subclass]] - Handling user events in a render window.<br />
* [[VTK/Tutorials/Widgets | Widgets]]<br />
* [[VTK/Tutorials/SavingVideos | Saving videos]]<br />
* [[VTK/Writing_VTK_files_using_python | Writing VTK files using python]]<br />
* [[VTK/mesh quality | Geometric mesh quality]]<br />
* [[VTK_XML_Formats | VTK XML Format Details]]<br />
* [[VTK/Information Keys | VTK Information Keys and their significance]]<br />
<br />
* [[VTK/Streaming | Streaming data in VTK]]<br />
*[http://www.imtek.uni-freiburg.de/simulation/mathematica/IMSweb/ IMTEK Mathematica Supplement (IMS)], the Open Source IMTEK Mathematica Supplement (IMS) interfaces VTK and generates ParaView batch scripts<br />
=== Wrapping ===<br />
* [[VTK/Java Wrapping|Java]]<br />
** [[VTK/Java Code Samples|Java code samples]]<br />
* [[VTK/Python Wrapping FAQ|Python]]<br />
** [[VTK/Python Wrapper Enhancement|Python wrapper enhancements]]<br />
<br />
===Tutorials for Developers===<br />
* [[VTK/Tutorials/RevisionMacros | New and Revision Macros]]<br />
<br />
===External Tutorials===<br />
Pratik was nice enough to catalog several external tutorials (from courses, slides, etc around the world) here: [[VTK/Tutorials/External_Tutorials| External Tutorials]]</div>
Pratikm
https://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/Executives&diff=40758
VTK/Tutorials/Executives (2011-06-14T19:35:58Z)
<p>Pratikm: Redirected page to VTK/Tutorials/New Pipeline</p>
<hr />
<div>#REDIRECT [[VTK/Tutorials/New Pipeline]]</div>
Pratikm
https://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/New_Pipeline&diff=40757
VTK/Tutorials/New Pipeline (2011-06-14T19:35:38Z)
<p>Pratikm: Created page with "<small>The introductory section has been blatantly copied from [http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 here]</..."</p>
<hr />
<div><small>The introductory section has been blatantly copied from [http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 here]</small><br />
<br />
This section will introduce the classes and concepts used in the new VTK<br />
pipeline. You should be familiar with the very basic design of the old<br />
pipeline before delving into this. The new pipeline was designed to reduce complexity<br />
while at the same time provide more flexibility. In the old pipeline the pipeline<br />
functionality and mechanics were contained in the data object and filters. In the new<br />
pipeline this functionality is contained in a new class (and its subclasses) called<br />
'''vtkExecutive'''. <br />
<br />
==Introduction==<br />
There are four key classes that make up the new pipeline. They are:<br />
<br />
*'''vtkInformation''': <br />
this class is what provides the flexibility for the pipeline to grow. Most of the methods and meta<br />
information storage make use of this class. vtkInformation is a map-based data<br />
structure that supports heterogeneous key-value operations with compile time<br />
type checking. There is also a vtkInformationVector class for storing vectors of<br />
information objects. When passing information up or down the pipeline (or from<br />
the executive to the algorithm) this is the class to use.<br />
*'''vtkDataObject''':<br />
in the past this class both stored data and handled some of the<br />
pipeline logic. In the new pipeline this class is only supposed to store data. In<br />
practice there are some pipeline methods in the class for backwards compatibility<br />
so that all of VTK doesn’t break, but the goal is that vtkDataObject should only<br />
be about storing data. vtkDataObject has an instance of vtkInformation that can be<br />
used to store key-value pairs in. For example the current extent of the data object<br />
is stored in there but the whole extent is not, because that is a pipeline attribute<br />
containing information about a specific pipeline topology.<br />
*'''vtkAlgorithm''':<br />
an algorithm is the new superclass for all filters/sources/sinks in<br />
VTK. It is basically the replacement for vtkSource. Like vtkDataObject,<br />
vtkAlgorithm should know nothing about the pipeline and should only be an<br />
algorithm/function/filter. Call it with the correct arguments and it will produce<br />
results. It also has a vtkInformation instance that describes the properties of the<br />
algorithm and it has information objects that describe its input and output port<br />
characteristics. The main method of an algorithm is '''ProcessRequest'''.<br />
*'''vtkExecutive''':<br />
contains the logic of how to connect and execute a pipeline. This<br />
class is the superclass of all executives. Executives are distributed (as opposed to<br />
centralized) and each filter/algorithm has its own executive that communicates<br />
with other executives. vtkExecutive has a subclass called<br />
vtkDemandDrivenPipeline which in turn has a subclass called<br />
vtkStreamingDemandDrivenPipeline. vtkStreamingDemandDrivenPipeline<br />
provides most of the functionality that was found in the old VTK pipeline and is<br />
the default executive for all algorithms if you do not specify one.<br />
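The flavor of vtkInformation's heterogeneous, type-checked storage can be sketched in modern C++ (this sketch requires C++17 for std::any). The Key and Info classes below are inventions for illustration; real VTK uses vtkInformationKey subclasses rather than std::any, but the idea of a key that carries its value type is the same:<br />

```cpp
#include <any>
#include <map>
#include <string>

// A key knows the type of value it stores, so lookups are type-checked
// at compile time (the wrong Get<T> simply does not match the key).
template <typename T>
struct Key { std::string name; };

class Info
{
  std::map<std::string, std::any> data;
public:
  template <typename T>
  void Set(const Key<T>& key, T value) { data[key.name] = value; }

  template <typename T>
  T Get(const Key<T>& key) const { return std::any_cast<T>(data.at(key.name)); }

  template <typename T>
  bool Has(const Key<T>& key) const { return data.count(key.name) != 0; }
};
```

With keys declared once (for example a Key<int> for a scalar type and a Key<double> for a spacing value), values of different types coexist in one information object without losing type safety at the call site.<br />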
<br />
<br />
Let us first look at the vtkAlgorithm class. It may seem odd that a class with no notion of<br />
a pipeline has one key method called ProcessRequest. At its simplest, an algorithm has a<br />
basic function to take input data and produce output data. This is a down-stream request<br />
(specifically REQUEST_DATA) that all algorithms should implement. (Requests for information or data<br />
flowing from input to output are called downstream requests; requests for information or data flowing from output to<br />
input are called upstream requests.) But algorithms<br />
can do more than just produce data; they also have characteristics or metadata that they<br />
can provide. For example, an algorithm can provide information about what type of<br />
output it will produce when you execute it. An imaging algorithm might only be capable<br />
of producing double results. The algorithm can specify this by responding to another<br />
down-stream request called REQUEST_INFORMATION. Consider the following code<br />
fragment:<br />
<source lang="cpp">
int vtkMyAlgorithm::ProcessRequest(
  vtkInformation *request,
  vtkInformationVector **inputVector,
  vtkInformationVector *outputVector)
{
  // provide meta-information about the output
  if (request->Has(vtkDemandDrivenPipeline::REQUEST_INFORMATION()))
  {
    // specify that the output (only one for this filter) will be double
    vtkInformation* outInfo = outputVector->GetInformationObject(0);
    outInfo->Set(vtkDataObject::SCALAR_TYPE(), VTK_DOUBLE);
    return 1;
  }
  return this->Superclass::ProcessRequest(request, inputVector, outputVector);
}
</source>
This method takes three information objects as input. The first is the request which<br />
specifies what you are asking the algorithm to do. Typically this is just one key such as<br />
REQUEST_INFORMATION. The next two arguments are information vectors, one for<br />
the inputs to this algorithm and one for the outputs of this algorithm. In the above<br />
example no input information was used. The output information vector was used to get<br />
the information object associated with the first output of this algorithm. Into that<br />
information was placed a key-value pair specifying that it would produce results of type<br />
double. Any requests that the algorithm doesn’t handle should be forwarded to the<br />
superclass.<br />
<br />
The pipeline topology in the new pipeline is a little different from the old one. In the new<br />
pipeline you connect the output port of one algorithm to the input port of another<br />
algorithm. For example,<br />
<source lang="cpp">
alg1->SetInputConnection(inPort, alg2->GetOutputPort(outPort));
</source>
Using this terminology a port is like a pin on an integrated circuit. An algorithm has some<br />
input ports and some output ports. A connection is a “connection” between two ports. So<br />
to connect two algorithms you make a connection between the output port of one<br />
algorithm and the input port of another algorithm. Any input port in the new pipeline can<br />
be specified as 'repeatable' and/or 'optional':<br />
#'''Repeatable''' means that more than one connection can be made to this input port (such as for append filters). <br />
#'''Optional''' means that the input is not required for execution. <br />
<br />
While ports can be repeatable there is still a<br />
need for multiple ports. Your algorithm should have multiple ports when the concept,<br />
data type, or semantics of a port are different. So AppendFilter only needs one repeatable<br />
input port because it treats all of its inputs the same. Glyph in contrast should have two<br />
ports, one for the input points and one for the glyph model because these are two distinct<br />
concepts. Of course the old style of SetInput, GetOutput connections will work with<br />
existing algorithms as well. <br />
<br />
So in the new pipeline outputs are referred to by port number<br />
while inputs are referred to by both their port number and connection number (because a<br />
single input port can have more than one connection).<br />
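The port/connection bookkeeping can be sketched with a toy structure (ToyAlgorithm and OutputPortRef are invented names; real VTK stores connections in the executives, not in a public vector). Each input port holds a list of connections: SetInputConnection replaces them, while AddInputConnection appends another one on a repeatable port:<br />

```cpp
#include <vector>

// Identifies one output port of one producer algorithm.
struct OutputPortRef { int algorithmId; int port; };

struct ToyAlgorithm
{
  // inputs[port] is the list of connections made to that input port.
  std::vector<std::vector<OutputPortRef>> inputs;

  explicit ToyAlgorithm(int numInputPorts) : inputs(numInputPorts) {}

  void SetInputConnection(int port, OutputPortRef producer)
  {
    inputs[port].clear();              // replace any existing connections
    inputs[port].push_back(producer);
  }

  void AddInputConnection(int port, OutputPortRef producer)
  {
    inputs[port].push_back(producer);  // repeatable ports accept several
  }
};
```

An append-like filter with one repeatable port would use AddInputConnection for each of its inputs; a glyph-like filter would instead have two distinct ports, each usually holding a single connection.<br />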
<br />
==Typical Pipeline Execution==<br />
Let us take a quick look at the typical execution of a pipeline in VTK. <br />
* First the user instantiates an algorithm such as vtkImageGradient. When the algorithm is instantiated it<br />
will automatically create a default executive and connect the algorithm and executive<br />
together. The algorithm will also call '''FillInputPortInformation''' and<br />
'''FillOutputPortInformation''' on each input and output port respectively. These two methods<br />
should set up the static characteristics of the input and output ports such as data type<br />
requirements and whether the port is optional or repeatable. For example, by default all<br />
subclasses of vtkImageAlgorithm are assumed to take a vtkImageData as input:<br />
<source lang="cpp">
int vtkImageAlgorithm::FillInputPortInformation(int vtkNotUsed(port), vtkInformation* info)
{
  info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkImageData");
  return 1;
}
</source>
This can be overridden by subclasses as required. <br />
* Once the algorithm and executive are instantiated they will be connected to other algorithm executive pairs. If the new<br />
SetInputConnection signature is used (and it should be used) then this just stores the<br />
connectivity information. The old VTK pipeline used the data objects to store<br />
connectivity and thus required that the data objects be instantiated prior to calling<br />
GetOutput. <br />
* Once the entire pipeline is instantiated and connected, it will be executed<br />
(typically as the result of a Render() call). The first request an algorithm will typically see<br />
upon execution is REQUEST_DATA_OBJECT (see vtkExecutive, vtkDataObject and<br />
their subclasses for all the possible keys and requests). This request asks the algorithm to<br />
create an instance of vtkDataObject (or an appropriate subclass) for each of its output ports. If<br />
you handle this request you can set each output port’s output data to be whatever type you<br />
want (see vtkDataSetAlgorithm for an example). If the algorithm does not handle the<br />
request, then by default the executive will look at the output port information to see what<br />
type the output port has been set to (from FillOutputPortInformation). If that is a concrete<br />
type then the executive will instantiate an instance of that type and use it; if it is not a<br />
concrete, instantiable type, the executive cannot create the output and reports an error.<br />
<br />
===Request Information===<br />
At this point the pipeline is instantiated, connected, and the data objects have been<br />
instantiated to store the data. The next request an algorithm will typically see is<br />
REQUEST_INFORMATION. This request asks the Algorithm to provide as much<br />
information as it can about what the output data will look like once the algorithm has<br />
generated it. Typically an algorithm will look at the information provided about its inputs<br />
and try to specify what it can about its outputs. In many image processing filters quite a<br />
bit can be specified such as the WHOLE_EXTENT, SCALAR_TYPE,<br />
SCALAR_NUMBER_OF_COMPONENTS, ORIGIN, SPACING, etc. The rule here is to<br />
''provide or compute as much information as you can without actually executing (or<br />
reading in the entire data file) and without taking up significant CPU time''. For example<br />
an image reader should read the header information from the file to get what information<br />
it can out of it, but it should not read in the entire image so that it can compute the scalar<br />
range of the data. When providing information about an output, an algorithm is not<br />
limited to the current information keys (such as WHOLE_EXTENT) that are provided by<br />
VTK. Part of the new pipeline design is that it can be easily extended. You can define<br />
your own keys and then in the REQUEST_INFORMATION request you can add those<br />
keys and their values to the output information objects. You can also specify that you<br />
want those keys to be passed down (or up) the pipeline by adding them to the<br />
KEYS_TO_COPY. That way you could have a specialized reader that populates the<br />
information with special keys and then have a writer or mapper downstream that uses<br />
those keys. <br />
<br />
By default executives will first copy the input’s information to the output’s<br />
information. You only need to handle the cases where it is different.<br />
===Request Update Extent===<br />
The next request is typically REQUEST_UPDATE_EXTENT. To fulfill this request an<br />
algorithm should take the update extents in its output’s information and then set the<br />
correct update extents in its input’s information. As with REQUEST_INFORMATION<br />
there is a default behavior by the executive and you only need to handle cases where it<br />
changes. (see vtkImageGradient for an example)<br />
===Request Data===<br />
Finally REQUEST_DATA will be called and the algorithm should fill in the output ports<br />
data objects.<br />
<br />
==Executives==<br />
<br />
The class hierarchy for the mainstream VTK executives is as follows.<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=BT<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
vtkCompositeDataPipeline -> vtkStreamingDemandDrivenPipeline -> vtkDemandDrivenPipeline -> vtkExecutive;<br />
} <br />
<br />
</graphviz><br />
<br />
Below, we provide some information about these executives that is not necessarily found in the VTK User's Guide. If you are not familiar with VTK's pipeline execution model, you should read the Managing Pipeline Execution chapter in the book.<br />
<br />
=== vtkExecutive ===<br />
This class is the superclass for all executives. It is fairly abstract and provides little functionality. The important points are:<br />
* An executive has an algorithm.<br />
* An executive manages (i.e. has) the input and output information objects of the algorithm. This means that the pipeline graph is stored by the set of executives, not by the algorithms.<br />
<br />
The most important function in vtkExecutive is ProcessRequest(). This method does two things:<br />
# Forward the request upstream (if the FORWARD_DIRECTION is vtkExecutive::RequestUpstream) or downstream (if the FORWARD_DIRECTION is vtkExecutive::RequestDownstream; note that the downstream direction is not implemented).<br />
# Pass the request to the algorithm by calling CallAlgorithm(), which calls ProcessRequest() on the algorithm. This can happen before and/or after the request is forwarded, depending on whether ALGORITHM_BEFORE_FORWARD and/or ALGORITHM_AFTER_FORWARD is set.<br />
<br />
CallAlgorithm() calls CopyDefaultInformation() before passing the request to the algorithm. The goal of this function is to copy certain information (such as update requests or meta-information) from output to input (when the algorithm is invoked before forwarding - for example, in REQUEST_UPDATE_EXTENT) or from input to output (when the algorithm is invoked after forwarding - for example in REQUEST_INFORMATION).<br />
<br />
'''Note on centralized executives:''' This implementation does not allow us to use centralized executives (one executive that manages more than one algorithm) because the executive has an algorithm AND the executive stores the pipeline graph. However, it is possible to create a meta-executive (an executive to rule them all) that is centralized. This executive would have to manage the flow of information itself. This can be done by subclassing the executive class to disable forwarding. The centralized executive can then delegate the handling of each pass to the distributed executives but manage forwarding itself. <br />
<br />
=== vtkDemandDrivenPipeline ===<br />
<br />
This executive implements a demand-driven (pull) pipeline. It recognizes 3 passes in ProcessRequest():<br />
<br />
# REQUEST_DATA_OBJECT: This is where the algorithm is supposed to create its output data objects. It is a ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteDataObject() (a virtual member function). This first calls CallAlgorithm() and then CheckDataObject(). If CallAlgorithm() does not create output data objects, CheckDataObject() tries to create them based on a vtkDataObject::DATA_TYPE_NAME defined in the output port information.<br />
# REQUEST_INFORMATION: This is where the algorithm is supposed to provide meta-data. It is a ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteInformation() (a virtual member function). Note: For backwards compatibility purposes, ExecuteInformation() calls CopyInformationToPipeline() on the output data object. In the old pipeline, the meta-information was provided by setting it on the output data object. This copies such meta-information from the data objects to the output information.<br />
# REQUEST_DATA: This is where the algorithm is supposed to provide (heavy) data. It is a ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteData() (a virtual member function). ExecuteData() calls ExecuteDataStart(), CallAlgorithm() and ExecuteDataEnd(). ExecuteDataStart() takes case of initializing the outputs whereas ExecuteDataEnd() performs finalization such as marking the outputs as generated by calling DataHasBeenGenerated().<br />
<br />
Note that all of these passes make sure to skip execution if the required information was already generated and if nothing upstream changed.<br />
<br />
=== vtkStreamingDemandDrivenPipeline ===<br />
<br />
This executive adds support for [[VTK/Streaming | streaming]] to vtkDemandDrivenPipeline. Streaming is usually performed by processing a subset of a dataset in each invocation of the pipeline and accumulating the results. Currently, vtkStreamingDemandDrivenPipeline supports 3 ways of subsetting the data:<br />
<br />
# '''Extents''': Extents are only applicable to structured datasets. An extent is a logical subset of a structured dataset defined by providing min and max IJK values (VOI).<br />
# '''Pieces''': Pieces are only applicable to unstructured datasets. A piece is a subset of an unstructured dataset. What a "piece" means is determined by the producer of such dataset.<br />
# '''Time steps''': The pipeline can request a particular time step at each invocation for time series data.<br />
<br />
vtkStreamingDemandDrivenPipeline adds 2 new pipeline passes to support streaming:<br />
<br />
# REQUEST_UPDATE_EXTENT: This pass is where the consumer asks for a particular subset from its input. It is a ALGORITHM_BEFORE_FORWARD pass. This is where algorithms receive extent/piece and time step request from their consumer and copy or modify this request upstream. In this pass, if the algorithms produces unstructured data but consumes structured data, the executive uses an extent translator to automatically convert the piece request to an extent request.<br />
# REQUEST_UPDATE_EXTENT_INFORMATION: This optional pass is where meta-information specific to the particular subset is requested. It is a ALGORITHM_AFTER_FORWARD pass. This pass is commonly used for dynamic streaming where the consumer fetches meta-information such as bounds or scalar range for a particular piece before deciding whether to update it.<br />
<br />
=== vtkCompositeDataPipeline ===<br />
<br />
This executive adds support for iterating over multiple blocks and/or time steps. This is best described using an example pipeline. For example,<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> Contour -> Mapper;<br />
} <br />
</graphviz><br />
<br />
Here, the Ensight reader always produces a multi-block dataset whereas the contour filter can only handle vtkDataSet and subclasses. As a result, this pipeline would produce a run-time error if the executive is vtkStreamingDemandDrivenPipeline. vtkCompositeDataPipeline deals with this issue by looping over the leaf nodes of the multi-block dataset and performing a full pipeline invocation of the contour filter for each block. Similarly, when the pipeline is something like the following<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> "Particle tracer" -> Mapper;<br />
} <br />
</graphviz><br />
<br />
the executive knows to invoke the reader for 2 time steps (because the particle tracer always need 2 time steps for interpolation) and gather the result in a vtkTemporalDataSet.<br />
<br />
vtkCompositeDataPipeline does all of this by overriding a significant portion of the execution mechanism when it needs to iterate over blocks and/or time steps.<br />
<br />
==Converting an Existing Filter to the New Pipeline==<br />
The best approach at this point is to find a VTK filter similar to your filter and then copy<br />
that. An alternate approach is to follow the instructions below. The new pipeline<br />
implementation does include a backwards compatibility layer for old filters. Specifically<br />
vtkProcessObject, vtkSource, and their related subclasses are still present and working.<br />
Most filters should work with the new pipeline without any changes. The most common<br />
problems with the backwards compatibility layer involve filters that manipulate the<br />
pipeline. If your filter overrides UpdateData or UpdateInformation you will probably<br />
have to make some changes. If your filter uses an internal pipeline then you might need<br />
to make some changes, otherwise you should be OK.<br />
Now ideally you would convert your filter to the new pipeline. There is a script that you<br />
can run that will help you to convert your filter. The script doesn’t do everything but it<br />
will help get you going in the right direction. You can run the script on an existing class<br />
as follows:<br />
cd VTK/MYClasses<br />
cmake –D CLASS=vtkMyClass –P ../Utilities/Upgrading/NewPipeConvert.cmake<br />
One of the effects this script might have is to change the superclass of your class. There<br />
are some convenience superclasses to make writing algorithms a little easier. In the old<br />
pipeline there were classes such as vtkImageToImageFilter. Some of the classes designed<br />
for the new pipeline include:<br />
vtkPolyDataAlgorithm: for algorithms that produce vtkPolyData<br />
vtkImageAlgorithm: for algorithms that take and or produce vtkImageData<br />
vtkThreadedImageAlgorithm: a subclass of vtkImageAlgorithm that implements<br />
multithreading<br />
These classes have some defaults that can be easily changed in your subclass. The first<br />
default is that the subclass will take on input and produce one output. This is typically<br />
specified in the constructor using SetNumberOfInputPorts(1) and<br />
SetNumberOfOutputPorts(1). If your subclass doesn’t take an input then in its constructor<br />
just call SetNumberOfInputPorts(0). Another assumption that is made is that all the input<br />
port and output ports take vtkPolyData (for vtkPolyDataAlgorithm, vtkImageData for<br />
vtkImageAlgorithm etc). Again your subclass can override this by providing its own<br />
implementation of FillInputPortInformation or FillOutputPortInformation.<br />
These superclasses typically provide an implementation of ProcessRequest that handles<br />
REQUEST_INFORMATION and REQUEST_DATA request by invoking virtual<br />
functions called RequestData and RequestInformation. They also typically provide<br />
default implementations of RequestData that call the older style ExecuteData functions to<br />
make converting your old filters easier.</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/Executives&diff=40756VTK/Tutorials/Executives2011-06-14T19:33:44Z<p>Pratikm: </p>
<hr />
<div><small>The introductory section has been blatantly copied from [http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 here]</small><br />
<br />
This section will introduce the classes and concepts used in the new VTK<br />
pipeline. You should be familiar with the very basic design of the old<br />
pipeline before delving into this. The new pipeline was designed to reduce complexity<br />
while at the same time providing more flexibility. In the old pipeline the pipeline<br />
functionality and mechanics were contained in the data object and filters. In the new<br />
pipeline this functionality is contained in a new class (and its subclasses) called<br />
'''vtkExecutive'''. <br />
<br />
==Introduction==<br />
There are four key classes that make up the new pipeline. They are:<br />
<br />
*'''vtkInformation''': <br />
provides the flexibility for the pipeline to grow and change. Most of the methods<br />
and meta-information storage make use of this class. vtkInformation is a map-based data<br />
structure that supports heterogeneous key-value operations with compile time<br />
type checking. There is also a vtkInformationVector class for storing vectors of<br />
information objects. When passing information up or down the pipeline (or from<br />
the executive to the algorithm) this is the class to use.<br />
*'''vtkDataObject''':<br />
in the past this class both stored data and handled some of the<br />
pipeline logic. In the new pipeline this class is only supposed to store data. In<br />
practice there are some pipeline methods in the class for backwards compatibility<br />
so that all of VTK doesn’t break, but the goal is that vtkDataObject should only<br />
be about storing data. vtkDataObject has an instance of vtkInformation that can be<br />
used to store key-value pairs. For example, the current extent of the data object<br />
is stored there, but the whole extent is not, because the whole extent is an attribute<br />
of a specific pipeline topology.<br />
*'''vtkAlgorithm''':<br />
an algorithm is the new superclass for all filters/sources/sinks in<br />
VTK. It is basically the replacement for vtkSource. Like vtkDataObject,<br />
vtkAlgorithm should know nothing about the pipeline and should only be an<br />
algorithm/function/filter. Call it with the correct arguments and it will produce<br />
results. It also has a vtkInformation instance that describes the properties of the<br />
algorithm and it has information objects that describe its input and output port<br />
characteristics. The main method of an algorithm is '''ProcessRequest'''.<br />
*'''vtkExecutive''':<br />
contains the logic of how to connect and execute a pipeline. This<br />
class is the superclass of all executives. Executives are distributed (as opposed to<br />
centralized) and each filter/algorithm has its own executive that communicates<br />
with other executives. vtkExecutive has a subclass called<br />
vtkDemandDrivenPipeline which in turn has a subclass called<br />
vtkStreamingDemandDrivenPipeline. vtkStreamingDemandDrivenPipeline<br />
provides most of the functionality that was found in the old VTK pipeline and is<br />
the default executive for all algorithms if you do not specify one.<br />
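The type-checked, heterogeneous map behavior described above can be sketched in plain C++. This is a minimal model for illustration only; the names KeyBase, Key, and Information are hypothetical, not the real VTK API (vtkInformation uses vtkInformationKey subclasses such as vtkInformationIntegerKey):<br />

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// A minimal model of vtkInformation's heterogeneous, type-checked map.
// Each key is a distinct object whose C++ type fixes the value type, so
// mismatched Get/Set calls fail at compile time. Hypothetical names.
struct KeyBase { virtual ~KeyBase() = default; };

template <typename T>
struct Key : KeyBase {};

class Information {
 public:
  template <typename T>
  void Set(const Key<T>& key, const T& value) {
    Entries[&key] = std::make_shared<Holder<T>>(value);
  }
  template <typename T>
  T Get(const Key<T>& key) const {
    auto it = Entries.find(&key);
    assert(it != Entries.end() && "key not present");
    return static_cast<const Holder<T>&>(*it->second).Value;
  }
  template <typename T>
  bool Has(const Key<T>& key) const { return Entries.count(&key) != 0; }

 private:
  struct HolderBase { virtual ~HolderBase() = default; };
  template <typename T>
  struct Holder : HolderBase {
    explicit Holder(const T& v) : Value(v) {}
    T Value;
  };
  // Keyed by key identity (address), like VTK's global key objects.
  std::map<const KeyBase*, std::shared_ptr<HolderBase>> Entries;
};

// Analogous to keys such as SCALAR_TYPE() or DATA_TYPE_NAME(), defined once.
const Key<int> SCALAR_TYPE{};
const Key<std::string> DATA_TYPE_NAME{};
```

The real keys work the same way: the key's class determines the value type stored against it, which is what gives vtkInformation its compile-time checking.<br />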
<br />
<br />
Let us first look at the vtkAlgorithm class. It may seem odd that a class with no notion of<br />
a pipeline has one key method called ProcessRequest. At its simplest, an algorithm has a<br />
basic function to take input data and produce output data. This is a downstream request<br />
(specifically REQUEST_DATA) that all algorithms should implement. (Requests for information or data<br />
flowing from input to output are called downstream requests; requests flowing from output to<br />
input are called upstream requests.) But algorithms<br />
can do more than just produce data; they also have characteristics or metadata that they<br />
can provide. For example, an algorithm can provide information about what type of<br />
output it will produce when you execute it. An imaging algorithm might only be capable<br />
of producing double results. The algorithm can specify this by responding to another<br />
down-stream request called REQUEST_INFORMATION. Consider the following code<br />
fragment:<br />
<source lang="cpp"><br />
int vtkMyAlgorithm::ProcessRequest(vtkInformation* request,<br />
                                   vtkInformationVector** inputVector,<br />
                                   vtkInformationVector* outputVector)<br />
{<br />
  if (request->Has(vtkDemandDrivenPipeline::REQUEST_INFORMATION()))<br />
    {<br />
    // specify that the output (only one for this filter) will be double<br />
    vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
    outInfo->Set(vtkDataObject::SCALAR_TYPE(), VTK_DOUBLE);<br />
    return 1;<br />
    }<br />
  // forward everything else to the superclass<br />
  return this->Superclass::ProcessRequest(request, inputVector, outputVector);<br />
}<br />
</source><br />
This method takes three information objects as input. The first is the request which<br />
specifies what you are asking the algorithm to do. Typically this is just one key such as<br />
REQUEST_INFORMATION. The next two arguments are information vectors one for<br />
the inputs to this algorithm and one for the outputs of this algorithm. In the above<br />
example no input information was used. The output information vector was used to get<br />
the information object associated with the first output of this algorithm. Into that<br />
information was placed a key-value pair specifying that it would produce results of type<br />
double. Any requests that the algorithm doesn’t handle should be forwarded to the<br />
superclass.<br />
<br />
The pipeline topology in the new pipeline is a little different from the old one. In the new<br />
pipeline you connect the output port of one algorithm to the input port of another<br />
algorithm. For example,<br />
<source lang ="cpp"><br />
alg1->SetInputConnection( inPort, alg2->GetOutputPort( outPort ));<br />
</source><br />
Using this terminology a port is like a pin on an integrated circuit. An algorithm has some<br />
input ports and some output ports. A connection is a “connection” between two ports. So<br />
to connect two algorithms you make a connection between the output port of one<br />
algorithm and the input port of another algorithm. Any input port in the new pipeline can<br />
be specified as 'repeatable' and/or 'optional':<br />
#'''Repeatable''' means that more than one connection can be made to this input port (such as for append filters). <br />
#'''Optional''' means that the input is not required for execution. <br />
<br />
While ports can be repeatable there is still a<br />
need for multiple ports. Your algorithm should have multiple ports when the concept,<br />
data type, or semantics of a port are different. So AppendFilter only needs one repeatable<br />
input port because it treats all of its inputs the same. Glyph in contrast should have two<br />
ports, one for the input points and one for the glyph model because these are two distinct<br />
concepts. Of course the old style of SetInput, GetOutput connections will work with<br />
existing algorithms as well. <br />
<br />
So in the new pipeline outputs are referred to by port number<br />
while inputs are referred to by both their port number and connection number (because a<br />
single input port can have more than one connection).<br />
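The (port, connection) addressing above can be sketched with a toy model. The names here are hypothetical stand-ins for the real vtkAlgorithm API; the point is only that an input port holds a list of connections, and a repeatable port (like an append filter's input) simply accepts more than one:<br />

```cpp
#include <cassert>
#include <vector>

// Identifies "output port PortIndex of algorithm Producer".
struct OutputPort {
  int Producer;   // id of the producing algorithm
  int PortIndex;  // which output port of that producer
};

// A toy algorithm: inputs are addressed by (port, connection),
// outputs by port alone. Hypothetical names, not the VTK classes.
class Algorithm {
 public:
  explicit Algorithm(int numInputPorts) : InputPorts(numInputPorts) {}

  // SetInputConnection replaces the connections on a port;
  // AddInputConnection appends (meaningful for repeatable ports).
  void SetInputConnection(int port, OutputPort upstream) {
    InputPorts[port].assign(1, upstream);
  }
  void AddInputConnection(int port, OutputPort upstream) {
    InputPorts[port].push_back(upstream);
  }
  int GetNumberOfInputConnections(int port) const {
    return static_cast<int>(InputPorts[port].size());
  }
  OutputPort GetInputConnection(int port, int connection) const {
    return InputPorts[port][connection];
  }

 private:
  std::vector<std::vector<OutputPort>> InputPorts;
};
```

This mirrors the append/glyph distinction in the text: an append-style filter uses one repeatable port with many connections, while a glyph-style filter uses two semantically distinct ports.<br />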
<br />
==Typical Pipeline Execution==<br />
Let us take a quick look at the typical execution of a pipeline in VTK. <br />
* First the user instantiates an algorithm such as vtkImageGradient. When the algorithm is instantiated, it<br />
will automatically create a default executive and connect the algorithm and executive<br />
together. The algorithm will also call '''FillInputPortInformation''' and<br />
'''FillOutputPortInformation''' on each input and output port respectively. These two methods<br />
should setup the static characteristics of the input and output ports such as data type<br />
requirements and whether the port is optional or repeatable. For example, by default all<br />
subclasses of vtkImageAlgorithm are assumed to take a vtkImageData as input:<br />
<source lang="cpp"><br />
int vtkImageAlgorithm::FillInputPortInformation(int vtkNotUsed(port),<br />
                                                vtkInformation* info)<br />
{<br />
info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkImageData");<br />
return 1;<br />
}<br />
</source><br />
This can be overridden by subclasses as required. <br />
* Once the algorithm and executive are instantiated they will be connected to other algorithm-executive pairs. If the new<br />
SetInputConnection signature is used (and it should be used) then this just stores the<br />
connectivity information. The old VTK pipeline used the data objects to store<br />
connectivity and thus required that the data objects be instantiated prior to calling<br />
GetOutput. <br />
* Once the entire pipeline is instantiated and connected it will be executed<br />
(typically as the result of a Render() call). Typically the first request an algorithm will see<br />
upon execution is REQUEST_DATA_OBJECT (see vtkExecutive, vtkDataObject and<br />
their subclasses for all the possible keys and requests). This request asks the algorithm to<br />
create an instance of vtkDataObject (or an appropriate subclass) for each of its output ports. If<br />
you handle this request you can set an output port’s output data to be whatever type you<br />
want (see vtkDataSetAlgorithm for an example). If the algorithm does not handle the<br />
request then by default the executive will look at the output port information to see what<br />
type the output port has been set to (from FillOutputPortInformation). If this is a concrete<br />
type then it will instantiate an instance of that type and use it. If the type is not concrete,<br />
the executive cannot create the output data object and the pipeline reports an error.<br />
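Assuming the executive does fall back to the type name declared on the output port, the REQUEST_DATA_OBJECT fallback can be sketched as follows. The names Registry and CheckDataObject's signature are illustrative only; the real logic lives in vtkDemandDrivenPipeline:<br />

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Sketch of the executive's REQUEST_DATA_OBJECT fallback: if the
// algorithm did not create its output, look up the type name declared by
// FillOutputPortInformation and instantiate it when it names a concrete
// type. Hypothetical types and registry, not the real VTK object factory.
struct DataObject { virtual ~DataObject() = default; };
struct PolyData : DataObject {};
struct ImageData : DataObject {};

using Factory = std::function<std::unique_ptr<DataObject>()>;

static std::map<std::string, Factory>& Registry() {
  static std::map<std::string, Factory> r = {
    {"vtkPolyData", [] { return std::unique_ptr<DataObject>(new PolyData); }},
    {"vtkImageData", [] { return std::unique_ptr<DataObject>(new ImageData); }},
  };
  return r;  // abstract types (e.g. "vtkDataSet") are deliberately absent
}

// Returns the output created from the declared type name, or nullptr if
// the name is abstract/unknown (the case where the executive errors out).
std::unique_ptr<DataObject> CheckDataObject(const std::string& typeName) {
  auto it = Registry().find(typeName);
  if (it == Registry().end()) return nullptr;  // not concrete: error case
  return it->second();
}
```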
<br />
===Request Information===<br />
At this point the pipeline is instantiated, connected, and the data objects have been<br />
instantiated to store the data. The next request an algorithm will typically see is<br />
REQUEST_INFORMATION. This request asks the Algorithm to provide as much<br />
information as it can about what the output data will look like once the algorithm has<br />
generated it. Typically an algorithm will look at the information provided about its inputs<br />
and try to specify what it can about its outputs. In many image processing filters quite a<br />
bit can be specified such as the WHOLE_EXTENT, SCALAR_TYPE,<br />
SCALAR_NUMBER_OF_COMPONENTS, ORIGIN, SPACING, etc. The rule here is to<br />
''provide or compute as much information as you can without actually executing (or<br />
reading in the entire data file) and without taking up significant CPU time''. For example<br />
an image reader should read the header information from the file to get what information<br />
it can out of it, but it should not read in the entire image so that it can compute the scalar<br />
range of the data. When providing information about an output, an algorithm is not<br />
limited to the current information keys (such as WHOLE_EXTENT) that are provided by<br />
VTK. Part of the new pipeline design is that it can be easily extended. You can define<br />
your own keys and then in the REQUEST_INFORMATION request you can add those<br />
keys and their values to the output information objects. You can also specify that you<br />
want those keys to be passed down (or up) the pipeline by adding them to the<br />
KEYS_TO_COPY. That way you could have a specialized reader that populates the<br />
information with special keys and then have a writer or mapper downstream that uses<br />
those keys. <br />
<br />
By default executives will first copy the input’s information to the output’s<br />
information. You only need to handle the cases where it is different.<br />
===Request Update Extent===<br />
The next request is typically REQUEST_UPDATE_EXTENT. To fulfill this request an<br />
algorithm should take the update extents in its output’s information and then set the<br />
correct update extents in its input’s information. As with REQUEST_INFORMATION<br />
there is a default behavior by the executive and you only need to handle cases where it<br />
changes. (see vtkImageGradient for an example)<br />
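As an illustration, a gradient-like filter needs one layer of neighboring input samples around the extent requested on its output. A sketch of that computation (not the actual vtkImageGradient code) might look like this, using VTK's {imin, imax, jmin, jmax, kmin, kmax} extent convention:<br />

```cpp
#include <algorithm>
#include <array>

// Sketch of a REQUEST_UPDATE_EXTENT computation: grow the output's
// requested extent by `ghostLayers` samples on each side, clamped to the
// input's whole extent. Function name and layout are illustrative only.
using Extent = std::array<int, 6>;

Extent ComputeInputUpdateExtent(const Extent& outputRequest,
                                const Extent& wholeExtent,
                                int ghostLayers = 1) {
  Extent in = outputRequest;
  for (int axis = 0; axis < 3; ++axis) {
    in[2 * axis] = std::max(outputRequest[2 * axis] - ghostLayers,
                            wholeExtent[2 * axis]);
    in[2 * axis + 1] = std::min(outputRequest[2 * axis + 1] + ghostLayers,
                                wholeExtent[2 * axis + 1]);
  }
  return in;
}
```

The clamping matters: at the boundary of the whole extent there is no neighbor to request, so the filter must fall back to one-sided differences there.<br />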
===Request Data===<br />
Finally REQUEST_DATA will be called and the algorithm should fill in the output ports<br />
data objects.<br />
<br />
==Executives==<br />
<br />
The class hierarchy for the mainstream VTK executives is as follows.<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=BT<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
vtkCompositeDataPipeline -> vtkStreamingDemandDrivenPipeline -> vtkDemandDrivenPipeline -> vtkExecutive;<br />
} <br />
<br />
</graphviz><br />
<br />
Below, we provide some information about these executives that is not necessarily found in the VTK User's Guide. If you are not familiar with VTK's pipeline execution model, you should read the Managing Pipeline Execution chapter in the book.<br />
<br />
=== vtkExecutive ===<br />
This class is the superclass for all executives. It is fairly abstract and provides little functionality itself. The important points are:<br />
* An executive has an algorithm<br />
* An executive manages (i.e. has) the input and output information objects of the algorithm. This means that the pipeline graph is stored by a set of executives, not algorithms.<br />
<br />
The most important function in vtkExecutive is ProcessRequest(). This method does two things:<br />
# Forward the request upstream (if the FORWARD_DIRECTION is vtkExecutive::RequestUpstream) or downstream (if the FORWARD_DIRECTION is vtkExecutive::RequestDownstream; note that downstream forwarding is not implemented).<br />
# Pass the request to the algorithm by calling CallAlgorithm() which calls ProcessRequest() on the algorithm. This can happen before and/or after the request is forwarded depending on whether ALGORITHM_BEFORE_FORWARD and/or ALGORITHM_AFTER_FORWARD is set.<br />
<br />
CallAlgorithm() calls CopyDefaultInformation() before passing the request to the algorithm. The goal of this function is to copy certain information (such as update requests or meta-information) from output to input (when the algorithm is invoked before forwarding - for example, in REQUEST_UPDATE_EXTENT) or from input to output (when the algorithm is invoked after forwarding - for example in REQUEST_INFORMATION).<br />
<br />
'''Note on centralized executives:''' This implementation does not allow us to use centralized executives (one executive that manages more than one algorithm) because the executive has an algorithm AND the executive stores the pipeline graph. However, it is possible to create a meta-executive (an executive to rule them all) that is centralized. This executive would have to manage the flow of information itself. This can be done by subclassing the executive class to disable forwarding. The centralized executive can then delegate the handling of each pass to the distributed executives but manage forwarding itself. <br />
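The before/after-forward ordering can be modeled with a toy two-stage pipeline. The classes below are hypothetical, not the VTK API; they only demonstrate how the flags control whether an algorithm runs before its upstream neighbors (consumer-first) or after them (producer-first):<br />

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// A toy request: a name plus the two forwarding flags described above.
struct Request {
  std::string Name;
  bool AlgorithmBeforeForward;
  bool AlgorithmAfterForward;
};

// A toy executive: it knows its algorithm's name and its upstream
// neighbor, and records the order in which algorithms are invoked.
struct Executive {
  std::string AlgorithmName;
  Executive* Upstream;

  explicit Executive(std::string name, Executive* upstream = nullptr)
      : AlgorithmName(std::move(name)), Upstream(upstream) {}

  // Forward upstream; run the algorithm before and/or after forwarding,
  // as selected by the request's flags.
  void ProcessRequest(const Request& req, std::vector<std::string>& trace) {
    if (req.AlgorithmBeforeForward)
      trace.push_back(AlgorithmName + ":" + req.Name);
    if (Upstream)
      Upstream->ProcessRequest(req, trace);
    if (req.AlgorithmAfterForward)
      trace.push_back(AlgorithmName + ":" + req.Name);
  }
};
```

With an after-forward pass (like REQUEST_INFORMATION) the producer executes first, so meta-information flows downstream; with a before-forward pass (like REQUEST_UPDATE_EXTENT) the consumer executes first, so requests flow upstream.<br />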
<br />
=== vtkDemandDrivenPipeline ===<br />
<br />
This executive implements a demand-driven (pull) pipeline. It recognizes 3 passes in ProcessRequest():<br />
<br />
# REQUEST_DATA_OBJECT: This is where the algorithm is supposed to create its output data objects. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteDataObject() (a virtual member function). This first calls CallAlgorithm() and then CheckDataObject(). If CallAlgorithm() does not create output data objects, CheckDataObject() tries to create them based on the vtkDataObject::DATA_TYPE_NAME defined in the output port information.<br />
# REQUEST_INFORMATION: This is where the algorithm is supposed to provide meta-data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteInformation() (a virtual member function). Note: For backwards compatibility purposes, ExecuteInformation() calls CopyInformationToPipeline() on the output data object. In the old pipeline, the meta-information was provided by setting it on the output data object. This copies such meta-information from the data objects to the output information.<br />
# REQUEST_DATA: This is where the algorithm is supposed to provide (heavy) data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteData() (a virtual member function). ExecuteData() calls ExecuteDataStart(), CallAlgorithm() and ExecuteDataEnd(). ExecuteDataStart() takes care of initializing the outputs whereas ExecuteDataEnd() performs finalization such as marking the outputs as generated by calling DataHasBeenGenerated().<br />
<br />
Note that all of these passes make sure to skip execution if the required information was already generated and if nothing upstream changed.<br />
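That skip behavior can be modeled with modified-time bookkeeping: a stage re-executes only when its own parameters or its input's data changed after the last time its output was generated. This is a sketch of the idea, not the actual vtkDemandDrivenPipeline checks (which use vtkMTimeType timestamps):<br />

```cpp
#include <cassert>

// A sketch of demand-driven (pull) re-execution: Update() pulls upstream
// first, then re-executes this stage only if something is newer than its
// generated data. Hypothetical names; `clock` stands in for VTK's
// global modified-time counter.
struct Stage {
  long ModifiedTime = 1;  // bumped when this stage's parameters change
  long DataTime = 0;      // time at which output data was last generated
  int Executions = 0;     // how many times REQUEST_DATA actually ran
  Stage* Input = nullptr;

  void Update(long& clock) {
    if (Input) Input->Update(clock);           // pull upstream first
    long inputTime = Input ? Input->DataTime : 0;
    if (DataTime < ModifiedTime || DataTime < inputTime) {
      Executions++;                            // the expensive pass runs
      DataTime = ++clock;
    }
  }
  void Modified(long& clock) { ModifiedTime = ++clock; }
};
```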
<br />
=== vtkStreamingDemandDrivenPipeline ===<br />
<br />
This executive adds support for [[VTK/Streaming | streaming]] to vtkDemandDrivenPipeline. Streaming is usually performed by processing a subset of a dataset in each invocation of the pipeline and accumulating the results. Currently, vtkStreamingDemandDrivenPipeline supports 3 ways of subsetting the data:<br />
<br />
# '''Extents''': Extents are only applicable to structured datasets. An extent is a logical subset of a structured dataset defined by providing min and max IJK values (VOI).<br />
# '''Pieces''': Pieces are only applicable to unstructured datasets. A piece is a subset of an unstructured dataset. What a "piece" means is determined by the producer of the dataset.<br />
# '''Time steps''': The pipeline can request a particular time step at each invocation for time series data.<br />
<br />
vtkStreamingDemandDrivenPipeline adds 2 new pipeline passes to support streaming:<br />
<br />
# REQUEST_UPDATE_EXTENT: This pass is where the consumer asks for a particular subset from its input. It is an ALGORITHM_BEFORE_FORWARD pass. This is where algorithms receive extent/piece and time step requests from their consumer and copy or modify this request upstream. In this pass, if the algorithm produces unstructured data but consumes structured data, the executive uses an extent translator to automatically convert the piece request to an extent request.<br />
# REQUEST_UPDATE_EXTENT_INFORMATION: This optional pass is where meta-information specific to the particular subset is requested. It is an ALGORITHM_AFTER_FORWARD pass. This pass is commonly used for dynamic streaming where the consumer fetches meta-information such as bounds or scalar range for a particular piece before deciding whether to update it.<br />
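What the extent translator does can be sketched as converting "piece M of N" into a structured sub-extent by slicing the whole extent along one axis. This is a simplified, single-axis version of the idea (the real vtkExtentTranslator also splits along other axes and handles ghost levels):<br />

```cpp
#include <array>

// Sketch of piece-to-extent translation: slice the whole extent along
// the K axis into numPieces contiguous, non-overlapping slabs.
// Extents are {imin, imax, jmin, jmax, kmin, kmax}, as in VTK.
using Extent = std::array<int, 6>;

Extent PieceToExtent(int piece, int numPieces, const Extent& whole) {
  Extent result = whole;
  int size = whole[5] - whole[4] + 1;  // number of samples along K
  result[4] = whole[4] + (size * piece) / numPieces;
  result[5] = whole[4] + (size * (piece + 1)) / numPieces - 1;
  return result;
}
```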
<br />
=== vtkCompositeDataPipeline ===<br />
<br />
This executive adds support for iterating over multiple blocks and/or time steps. This is best described using an example pipeline. For example,<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> Contour -> Mapper;<br />
} <br />
</graphviz><br />
<br />
Here, the Ensight reader always produces a multi-block dataset whereas the contour filter can only handle vtkDataSet and subclasses. As a result, this pipeline would produce a run-time error if the executive is vtkStreamingDemandDrivenPipeline. vtkCompositeDataPipeline deals with this issue by looping over the leaf nodes of the multi-block dataset and performing a full pipeline invocation of the contour filter for each block. Similarly, when the pipeline is something like the following<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> "Particle tracer" -> Mapper;<br />
} <br />
</graphviz><br />
<br />
the executive knows to invoke the reader for 2 time steps (because the particle tracer always needs 2 time steps for interpolation) and to gather the results in a vtkTemporalDataSet.<br />
<br />
vtkCompositeDataPipeline does all of this by overriding a significant portion of the execution mechanism when it needs to iterate over blocks and/or time steps.<br />
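The block iteration can be sketched as follows, with hypothetical types standing in for vtkMultiBlockDataSet and vtkDataSet: for an algorithm that only understands simple datasets, run it once per leaf block and collect the results into a composite output of the same shape:<br />

```cpp
#include <functional>
#include <memory>
#include <vector>

// A toy composite dataset: a leaf holds a value (standing in for a
// vtkDataSet); an interior node holds children. Hypothetical types.
struct Node {
  int Value = 0;
  std::vector<std::unique_ptr<Node>> Children;  // non-empty => composite
  bool IsLeaf() const { return Children.empty(); }
};

// Run `leafAlgorithm` (one full pipeline pass per leaf, in the spirit of
// vtkCompositeDataPipeline) over every leaf, preserving the tree shape.
std::unique_ptr<Node> ProcessComposite(
    const Node& input, const std::function<int(int)>& leafAlgorithm) {
  std::unique_ptr<Node> out(new Node);
  if (input.IsLeaf()) {
    out->Value = leafAlgorithm(input.Value);
  } else {
    for (const auto& child : input.Children)
      out->Children.push_back(ProcessComposite(*child, leafAlgorithm));
  }
  return out;
}
```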
<br />
==Converting an Existing Filter to the New Pipeline==<br />
The best approach at this point is to find a VTK filter similar to your filter and then copy<br />
that. An alternate approach is to follow the instructions below. The new pipeline<br />
implementation does include a backwards compatibility layer for old filters. Specifically<br />
vtkProcessObject, vtkSource, and their related subclasses are still present and working.<br />
Most filters should work with the new pipeline without any changes. The most common<br />
problems with the backwards compatibility layer involve filters that manipulate the<br />
pipeline. If your filter overrides UpdateData or UpdateInformation you will probably<br />
have to make some changes. If your filter uses an internal pipeline then you might need<br />
to make some changes; otherwise you should be OK.<br />
<br />
Now ideally you would convert your filter to the new pipeline. There is a script that you<br />
can run that will help you to convert your filter. The script doesn’t do everything but it<br />
will help get you going in the right direction. You can run the script on an existing class<br />
as follows:<br />
 cd VTK/MyClasses<br />
 cmake -D CLASS=vtkMyClass -P ../Utilities/Upgrading/NewPipeConvert.cmake<br />
One of the effects this script might have is to change the superclass of your class. There<br />
are some convenience superclasses to make writing algorithms a little easier. In the old<br />
pipeline there were classes such as vtkImageToImageFilter. Some of the classes designed<br />
for the new pipeline include:<br />
*vtkPolyDataAlgorithm: for algorithms that produce vtkPolyData<br />
*vtkImageAlgorithm: for algorithms that take and/or produce vtkImageData<br />
*vtkThreadedImageAlgorithm: a subclass of vtkImageAlgorithm that implements multithreading<br />
These classes have some defaults that can be easily changed in your subclass. The first<br />
default is that the subclass will take one input and produce one output. This is typically<br />
specified in the constructor using SetNumberOfInputPorts(1) and<br />
SetNumberOfOutputPorts(1). If your subclass doesn’t take an input then in its constructor<br />
just call SetNumberOfInputPorts(0). Another assumption that is made is that all the input<br />
and output ports take vtkPolyData (for vtkPolyDataAlgorithm; vtkImageData for<br />
vtkImageAlgorithm, etc.). Again your subclass can override this by providing its own<br />
implementation of FillInputPortInformation or FillOutputPortInformation.<br />
These superclasses typically provide an implementation of ProcessRequest that handles<br />
REQUEST_INFORMATION and REQUEST_DATA requests by invoking virtual<br />
functions called RequestInformation and RequestData. They also typically provide<br />
default implementations of RequestData that call the older style ExecuteData functions to<br />
make converting your old filters easier.</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/Executives&diff=40755VTK/Tutorials/Executives2011-06-14T19:30:25Z<p>Pratikm: /* Typical Pipeline Execution */</p>
<hr />
<div><small>The introductory section has been blatantly copied from [http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 here]</small><br />
<br />
This section will introduce the classes and concepts used in the new VTK<br />
pipeline. You should be familiar with the very basic design of the old<br />
pipeline before delving into this. The new pipeline was designed to reduce complexity<br />
while at the same time provide more flexibility. In the old pipeline the pipeline<br />
functionality and mechanics were contained in the data object and filters. In the new<br />
pipeline this functionality is contained in a new class (and its subclasses) called<br />
'''vtkExecutive'''. <br />
<br />
==Introduction==<br />
There are four key classes that make up the new pipeline. They are:<br />
<br />
*'''vtkInformation''': <br />
provides the flexibility to grow. Most of the methods and meta<br />
information storage make use of this class. vtkInformation is a map-based data<br />
structure that supports heterogeneous key-value operations with compile time<br />
type checking. There is also a vtkInformationVector class for storing vectors of<br />
information objects. When passing information up or down the pipeline (or from<br />
the executive to the algorithm) this is the class to use.<br />
*'''vtkDataObject''':<br />
in the past this class both stored data and handled some of the<br />
pipeline logic. In the new pipeline this class is only supposed to store data. In<br />
practice there are some pipeline methods in the class for backwards compatibility<br />
so that all of VTK doesn’t break, but the goal is that vtkDataObject should only<br />
be about storing data. vtkDataObject has an instance of vtkInformation that can be<br />
used to store key-value pairs in. For example the current extent of the data object<br />
is stored in there but the whole extent is not, because that is a pipeline attribute<br />
containing information about a specific pipeline topology.<br />
*'''vtkAlgorithm''':<br />
an algorithm is the new superclass for all filters/sources/sinks in<br />
VTK. It is basically the replacement for vtkSource. Like vtkDataObject,<br />
vtkAlgorithm should know nothing about the pipeline and should only be an<br />
algorithm/function/filter. Call it with the correct arguments and it will produce<br />
results. It also has a vtkInformation instance that describes the properties of the<br />
algorithm and it has information objects that describe its input and output port<br />
characteristics. The main method of an algorithm is '''ProcessRequest'''.<br />
*'''vtkExecutive''':<br />
contains the logic of how to connect and execute a pipeline. This<br />
class is the superclass of all executives. Executives are distributed (as opposed to<br />
centralized) and each filter/algorithm has its own executive that communicates<br />
with other executives. vtkExecutive has a subclass called<br />
vtkDemandDrivenPipeline which in turn has a subclass called<br />
vtkStreamingDemandDrivenPipeline. vtkStreamingDemandDrivenPipeline<br />
provides most of the functionality that was found in the old VTK pipeline and is<br />
the default executive for all algorithms if you do not specify one.<br />
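<br />
The idea behind vtkInformation’s type-checked, heterogeneous storage can be sketched with a small self-contained analogy. This is purely illustrative and not VTK’s actual implementation; the real class supports many more key and value types:<br />
<source lang="cpp"><br />
#include <cassert><br />
#include <map><br />
#include <string><br />
<br />
// Toy analogue of vtkInformation: each key knows the type of its value,<br />
// so mismatched Set/Get calls fail to compile.<br />
template <typename T> struct Key { const char* Name; };<br />
<br />
class Information<br />
{<br />
public:<br />
  void Set(const Key<int>& key, int v) { this->Ints[key.Name] = v; }<br />
  int Get(const Key<int>& key) const { return this->Ints.at(key.Name); }<br />
  void Set(const Key<std::string>& key, const std::string& v) { this->Strings[key.Name] = v; }<br />
  std::string Get(const Key<std::string>& key) const { return this->Strings.at(key.Name); }<br />
private:<br />
  std::map<std::string, int> Ints;<br />
  std::map<std::string, std::string> Strings;<br />
};<br />
<br />
int main()<br />
{<br />
  const Key<int> SCALAR_TYPE = { "SCALAR_TYPE" };<br />
  const Key<std::string> DATA_TYPE_NAME = { "DATA_TYPE_NAME" };<br />
  Information info;<br />
  info.Set(SCALAR_TYPE, 11); // 11 happens to be VTK_DOUBLE<br />
  info.Set(DATA_TYPE_NAME, "vtkImageData");<br />
  assert(info.Get(SCALAR_TYPE) == 11);<br />
  assert(info.Get(DATA_TYPE_NAME) == "vtkImageData");<br />
  return 0;<br />
}<br />
</source><br />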
<br />
<br />
Let us first look at the vtkAlgorithm class. It may seem odd that a class with no notion of<br />
a pipeline has one key method called ProcessRequest. At its simplest, an algorithm has a<br />
basic function: to take input data and produce output data. This is a downstream request<br />
(specifically REQUEST_DATA) that all algorithms should implement. (Requests for<br />
information or data flowing from input to output are called ''downstream'' requests;<br />
requests flowing from output to input are called ''upstream'' requests.) But algorithms<br />
can do more than just produce data; they also have characteristics or metadata that they<br />
can provide. For example, an algorithm can provide information about what type of<br />
output it will produce when you execute it. An imaging algorithm might only be capable<br />
of producing double results. The algorithm can specify this by responding to another<br />
downstream request called REQUEST_INFORMATION. Consider the following code<br />
fragment:<br />
<source lang="cpp"><br />
int vtkMyAlgorithm::ProcessRequest(<br />
vtkInformation *request,<br />
vtkInformationVector **inputVector,<br />
vtkInformationVector *outputVector)<br />
{<br />
// generate the data<br />
if(request->Has(vtkDemandDrivenPipeline::REQUEST_INFORMATION()))<br />
{<br />
// specify that the output (only one for this filter) will be double<br />
vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
outInfo->Set(vtkDataObject::SCALAR_TYPE(),VTK_DOUBLE);<br />
return 1;<br />
}<br />
return this->Superclass::ProcessRequest(request, inputVector,outputVector);<br />
}<br />
</source><br />
This method takes three information objects as input. The first is the request which<br />
specifies what you are asking the algorithm to do. Typically this is just one key such as<br />
REQUEST_INFORMATION. The next two arguments are information vectors, one for<br />
the inputs to this algorithm and one for the outputs of this algorithm. In the above<br />
example no input information was used. The output information vector was used to get<br />
the information object associated with the first output of this algorithm. Into that<br />
information was placed a key-value pair specifying that it would produce results of type<br />
double. Any requests that the algorithm doesn’t handle should be forwarded to the<br />
superclass.<br />
<br />
The pipeline topology in the new pipeline is a little different from the old one. In the new<br />
pipeline you connect the output port of one algorithm to the input port of another<br />
algorithm. For example,<br />
<source lang ="cpp"><br />
alg1->SetInputConnection( inPort, alg2->GetOutputPort( outPort ));<br />
</source><br />
Using this terminology a port is like a pin on an integrated circuit. An algorithm has some<br />
input ports and some output ports. A connection is a “connection” between two ports. So<br />
to connect two algorithms you make a connection between the output port of one<br />
algorithm and the input port of another algorithm. Any input port in the new pipeline can<br />
be specified as 'repeatable' and/or 'optional':<br />
#'''Repeatable''' means that more than one connection can be made to this input port (such as for append filters). <br />
#'''Optional''' means that the input is not required for execution. <br />
<br />
While ports can be repeatable there is still a<br />
need for multiple ports. Your algorithm should have multiple ports when the concept,<br />
data type, or semantics of a port are different. So AppendFilter only needs one repeatable<br />
input port because it treats all of its inputs the same. Glyph in contrast should have two<br />
ports, one for the input points and one for the glyph model because these are two distinct<br />
concepts. Of course, the old-style SetInput/GetOutput connections will work with<br />
existing algorithms as well. <br />
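<br />
A filter with two conceptually different inputs might describe its ports like this. The sketch below uses the INPUT_IS_REPEATABLE and INPUT_IS_OPTIONAL keys that vtkAlgorithm provides for this purpose; vtkMyGlyphFilter is a hypothetical name, not a real VTK class:<br />
<source lang="cpp"><br />
int vtkMyGlyphFilter::FillInputPortInformation(int port, vtkInformation* info)<br />
{<br />
  if (port == 0) // the points to be glyphed<br />
    {<br />
    info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkDataSet");<br />
    return 1;<br />
    }<br />
  if (port == 1) // the glyph model(s); several connections allowed, none required<br />
    {<br />
    info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkPolyData");<br />
    info->Set(vtkAlgorithm::INPUT_IS_REPEATABLE(), 1);<br />
    info->Set(vtkAlgorithm::INPUT_IS_OPTIONAL(), 1);<br />
    return 1;<br />
    }<br />
  return 0;<br />
}<br />
</source><br />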
<br />
So in the new pipeline outputs are referred to by port number<br />
while inputs are referred to by both their port number and connection number (because a<br />
single input port can have more than one connection).<br />
<br />
==Typical Pipeline Execution==<br />
Let us take a quick look at the typical execution of a pipeline in VTK. <br />
* First the user instantiates an algorithm such as vtkImageGradient. When the algorithm is instantiated, it<br />
will automatically create a default executive and connect the algorithm and executive<br />
together. The algorithm will also call '''FillInputPortInformation''' and<br />
'''FillOutputPortInformation''' on each input and output port respectively. These two methods<br />
should set up the static characteristics of the input and output ports, such as data type<br />
requirements and whether the port is optional or repeatable. For example, by default all<br />
subclasses of vtkImageAlgorithm are assumed to take a vtkImageData as input:<br />
<source lang="cpp"><br />
int vtkImageAlgorithm::FillInputPortInformation(int vtkNotUsed(port), vtkInformation* info)<br />
{<br />
info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkImageData");<br />
return 1;<br />
}<br />
</source><br />
This can be overridden by subclasses as required. <br />
* Once the algorithm and executive are instantiated they will be connected to other algorithm executive pairs. If the new<br />
SetInputConnection signature is used (and it should be used) then this just stores the<br />
connectivity information. The old VTK pipeline used the data objects to store<br />
connectivity and thus required that the data objects be instantiated prior to calling<br />
GetOutput. <br />
* Once the entire pipeline is instantiated and connected it will be executed<br />
(typically as the result of a Render() call). Typically the first request an algorithm will see<br />
upon execution is REQUEST_DATA_OBJECT (see vtkExecutive, vtkDataObject and<br />
their subclasses for all the possible keys and requests). This request asks the algorithm to<br />
create an instance of vtkDataObject (or an appropriate subclass) for each of its output ports. If<br />
you handle this request you can set the output port’s output data to be whatever type you<br />
want (see vtkDataSetAlgorithm for an example). If the algorithm does not handle the<br />
request, then by default the executive will look at the output port information to see what<br />
type the output port has been set to (from FillOutputPortInformation). If this is a concrete<br />
type then the executive will instantiate an instance of that type and use it. If it isn’t concrete then (I<br />
can’t remember if it just fails with an error or has another fallback position.)<br />
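<br />
A sketch of how an algorithm might handle this request itself, via the RequestDataObject virtual function that the convenience superclasses invoke for this pass (vtkMyAlgorithm is a hypothetical name):<br />
<source lang="cpp"><br />
int vtkMyAlgorithm::RequestDataObject(<br />
  vtkInformation* vtkNotUsed(request),<br />
  vtkInformationVector** vtkNotUsed(inputVector),<br />
  vtkInformationVector* outputVector)<br />
{<br />
  // Create the output data object only if it is missing or of the wrong type.<br />
  vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
  vtkDataObject* output = outInfo->Get(vtkDataObject::DATA_OBJECT());<br />
  if (!output || !output->IsA("vtkPolyData"))<br />
    {<br />
    vtkPolyData* newOutput = vtkPolyData::New();<br />
    outInfo->Set(vtkDataObject::DATA_OBJECT(), newOutput);<br />
    newOutput->Delete();<br />
    }<br />
  return 1;<br />
}<br />
</source><br />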
<br />
===Request Information===<br />
At this point the pipeline is instantiated, connected, and the data objects have been<br />
instantiated to store the data. The next request an algorithm will typically see is<br />
REQUEST_INFORMATION. This request asks the Algorithm to provide as much<br />
information as it can about what the output data will look like once the algorithm has<br />
generated it. Typically an algorithm will look at the information provided about its inputs<br />
and try to specify what it can about its outputs. In many image processing filters quite a<br />
bit can be specified such as the WHOLE_EXTENT, SCALAR_TYPE,<br />
SCALAR_NUMBER_OF_COMPONENTS, ORIGIN, SPACING, etc. The rule here is to<br />
''provide or compute as much information as you can without actually executing (or<br />
reading in the entire data file) and without taking up significant CPU time''. For example<br />
an image reader should read the header information from the file to get what information<br />
it can out of it, but it should not read in the entire image so that it can compute the scalar<br />
range of the data. When providing information about an output, an algorithm is not<br />
limited to the current information keys (such as WHOLE_EXTENT) that are provided by<br />
VTK. Part of the new pipeline design is that it can be easily extended. You can define<br />
your own keys and then in the REQUEST_INFORMATION request you can add those<br />
keys and their values to the output information objects. You can also specify that you<br />
want those keys to be passed down (or up) the pipeline by adding them to the<br />
KEYS_TO_COPY. That way you could have a specialized reader that populates the<br />
information with special keys and then have a writer or mapper downstream that uses<br />
those keys. <br />
<br />
By default executives will first copy the input’s information to the output’s<br />
information. You only need to handle the cases where it is different.<br />
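<br />
As an illustration, a hypothetical image source (vtkMyImageSource is not a real VTK class) might answer REQUEST_INFORMATION along these lines, using its RequestInformation virtual function:<br />
<source lang="cpp"><br />
int vtkMyImageSource::RequestInformation(<br />
  vtkInformation* vtkNotUsed(request),<br />
  vtkInformationVector** vtkNotUsed(inputVector),<br />
  vtkInformationVector* outputVector)<br />
{<br />
  vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
  // A 100x100 2D image of double scalars with unit spacing.<br />
  int wholeExtent[6] = { 0, 99, 0, 99, 0, 0 };<br />
  outInfo->Set(vtkStreamingDemandDrivenPipeline::WHOLE_EXTENT(), wholeExtent, 6);<br />
  double spacing[3] = { 1.0, 1.0, 1.0 };<br />
  outInfo->Set(vtkDataObject::SPACING(), spacing, 3);<br />
  vtkDataObject::SetPointDataActiveScalarInfo(outInfo, VTK_DOUBLE, 1);<br />
  return 1;<br />
}<br />
</source><br />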
===Request Update Extent===<br />
The next request is typically REQUEST_UPDATE_EXTENT. To fulfill this request an<br />
algorithm should take the update extents in its output’s information and then set the<br />
correct update extents in its input’s information. As with REQUEST_INFORMATION<br />
there is a default behavior provided by the executive, and you only need to handle cases where it<br />
changes. (See vtkImageGradient for an example.)<br />
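<br />
A gradient-style filter, for instance, needs one extra layer of input samples around the requested output region. The following sketch captures the idea; vtkMyGradientFilter is a hypothetical name, and vtkImageGradient’s actual implementation differs in its details:<br />
<source lang="cpp"><br />
int vtkMyGradientFilter::RequestUpdateExtent(<br />
  vtkInformation* vtkNotUsed(request),<br />
  vtkInformationVector** inputVector,<br />
  vtkInformationVector* outputVector)<br />
{<br />
  vtkInformation* inInfo = inputVector[0]->GetInformationObject(0);<br />
  vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
  int extent[6], wholeExtent[6];<br />
  outInfo->Get(vtkStreamingDemandDrivenPipeline::UPDATE_EXTENT(), extent);<br />
  inInfo->Get(vtkStreamingDemandDrivenPipeline::WHOLE_EXTENT(), wholeExtent);<br />
  // Central differences need one extra sample on each side,<br />
  // clamped to the data that actually exists.<br />
  for (int i = 0; i < 3; i++)<br />
    {<br />
    extent[2*i] = (extent[2*i] - 1 < wholeExtent[2*i]) ? wholeExtent[2*i] : extent[2*i] - 1;<br />
    extent[2*i+1] = (extent[2*i+1] + 1 > wholeExtent[2*i+1]) ? wholeExtent[2*i+1] : extent[2*i+1] + 1;<br />
    }<br />
  inInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_EXTENT(), extent, 6);<br />
  return 1;<br />
}<br />
</source><br />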
===Request Data===<br />
Finally REQUEST_DATA will be called and the algorithm should fill in the output ports<br />
data objects.<br />
<br />
==Executives==<br />
<br />
The class hierarchy for the mainstream VTK executives is as follows.<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=BT<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
vtkCompositeDataPipeline -> vtkStreamingDemandDrivenPipeline -> vtkDemandDrivenPipeline -> vtkExecutive;<br />
} <br />
<br />
</graphviz><br />
<br />
Below, we provide some information about these executives that is not necessarily found in the VTK User's Guide. If you are not familiar with VTK's pipeline execution model, you should read the Managing Pipeline Execution chapter in the book.<br />
<br />
=== vtkExecutive ===<br />
This class is the superclass for all executives. It is fairly abstract and provides little functionality itself. The important points are:<br />
* An executive has an algorithm.<br />
* An executive manages (i.e. has) the input and output information objects of the algorithm. This means that the pipeline graph is stored by a set of executives, not algorithms.<br />
<br />
The most important function in vtkExecutive is ProcessRequest(). This method does two things:<br />
# Forward the request upstream (if the FORWARD_DIRECTION is vtkExecutive::RequestUpstream) or downstream (if the FORWARD_DIRECTION is vtkExecutive::RequestDownstream; note that the downstream case is not implemented).<br />
# Pass the request to the algorithm by calling CallAlgorithm(), which calls ProcessRequest() on the algorithm. This can happen before and/or after the request is forwarded, depending on whether ALGORITHM_BEFORE_FORWARD and/or ALGORITHM_AFTER_FORWARD is set.<br />
<br />
CallAlgorithm() calls CopyDefaultInformation() before passing the request to the algorithm. The goal of this function is to copy certain information (such as update requests or meta-information) from output to input (when the algorithm is invoked before forwarding - for example, in REQUEST_UPDATE_EXTENT) or from input to output (when the algorithm is invoked after forwarding - for example in REQUEST_INFORMATION).<br />
<br />
'''Note on centralized executives:''' This implementation does not allow us to use centralized executives (one executive that manages more than one algorithm) because the executive has an algorithm AND the executive stores the pipeline graph. However, it is possible to create a meta-executive (an executive to rule them all) that is centralized. This executive would have to manage the flow of information itself. This can be done by subclassing the executive class to disable forwarding. The centralized executive can then delegate the handling of each pass to the distributed executives but manage forwarding itself. <br />
<br />
=== vtkDemandDrivenPipeline ===<br />
<br />
This executive implements a demand-driven (pull) pipeline. It recognizes 3 passes in ProcessRequest():<br />
<br />
# REQUEST_DATA_OBJECT: This is where the algorithm is supposed to create its output data objects. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteDataObject() (a virtual member function). This first calls CallAlgorithm() and then CheckDataObject(). If CallAlgorithm() does not create output data objects, CheckDataObject() tries to create them based on the vtkDataObject::DATA_TYPE_NAME defined in the output port information.<br />
# REQUEST_INFORMATION: This is where the algorithm is supposed to provide meta-data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteInformation() (a virtual member function). Note: For backwards compatibility purposes, ExecuteInformation() calls CopyInformationToPipeline() on the output data object. In the old pipeline, the meta-information was provided by setting it on the output data object. This copies such meta-information from the data objects to the output information.<br />
# REQUEST_DATA: This is where the algorithm is supposed to provide (heavy) data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteData() (a virtual member function). ExecuteData() calls ExecuteDataStart(), CallAlgorithm() and ExecuteDataEnd(). ExecuteDataStart() takes care of initializing the outputs, whereas ExecuteDataEnd() performs finalization such as marking the outputs as generated by calling DataHasBeenGenerated().<br />
<br />
Note that all of these passes make sure to skip execution if the required information was already generated and if nothing upstream changed.<br />
<br />
=== vtkStreamingDemandDrivenPipeline ===<br />
<br />
This executive adds support for [[VTK/Streaming | streaming]] to vtkDemandDrivenPipeline. Streaming is usually performed by processing a subset of a dataset in each invocation of the pipeline and accumulating the results. Currently, vtkStreamingDemandDrivenPipeline supports 3 ways of subsetting the data:<br />
<br />
# '''Extents''': Extents are only applicable to structured datasets. An extent is a logical subset of a structured dataset defined by providing min and max IJK values (VOI).<br />
# '''Pieces''': Pieces are only applicable to unstructured datasets. A piece is a subset of an unstructured dataset. What a "piece" means is determined by the producer of the dataset.<br />
# '''Time steps''': The pipeline can request a particular time step at each invocation for time series data.<br />
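<br />
For example, piece-based streaming can be driven from application code roughly as follows. This is a sketch: <tt>filter</tt> stands for any algorithm, and it assumes the piece/number-of-pieces overload of SetUpdateExtent on the executive:<br />
<source lang="cpp"><br />
// Ask the executive of some filter to produce piece 1 of 4 (no ghost levels),<br />
// then update only that piece.<br />
vtkStreamingDemandDrivenPipeline* sddp =<br />
  vtkStreamingDemandDrivenPipeline::SafeDownCast(filter->GetExecutive());<br />
if (sddp)<br />
  {<br />
  sddp->SetUpdateExtent(0 /*port*/, 1 /*piece*/, 4 /*number of pieces*/, 0 /*ghost levels*/);<br />
  sddp->Update();<br />
  }<br />
</source><br />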
<br />
vtkStreamingDemandDrivenPipeline adds 2 new pipeline passes to support streaming:<br />
<br />
# REQUEST_UPDATE_EXTENT: This pass is where the consumer asks for a particular subset from its input. It is a ALGORITHM_BEFORE_FORWARD pass. This is where algorithms receive extent/piece and time step request from their consumer and copy or modify this request upstream. In this pass, if the algorithms produces unstructured data but consumes structured data, the executive uses an extent translator to automatically convert the piece request to an extent request.<br />
# REQUEST_UPDATE_EXTENT_INFORMATION: This optional pass is where meta-information specific to the particular subset is requested. It is a ALGORITHM_AFTER_FORWARD pass. This pass is commonly used for dynamic streaming where the consumer fetches meta-information such as bounds or scalar range for a particular piece before deciding whether to update it.<br />
<br />
=== vtkCompositeDataPipeline ===<br />
<br />
This executive adds support for iterating over multiple blocks and/or time steps. This is best described using an example pipeline. For example,<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> Contour -> Mapper;<br />
} <br />
</graphviz><br />
<br />
Here, the Ensight reader always produces a multi-block dataset whereas the contour filter can only handle vtkDataSet and subclasses. As a result, this pipeline would produce a run-time error if the executive is vtkStreamingDemandDrivenPipeline. vtkCompositeDataPipeline deals with this issue by looping over the leaf nodes of the multi-block dataset and performing a full pipeline invocation of the contour filter for each block. Similarly, when the pipeline is something like the following<br />
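<br />
For a pipeline like this to behave as described, vtkCompositeDataPipeline must actually be the executive in use. One way to arrange that, assuming it is done before the algorithms are created, is to install it as the default executive prototype:<br />
<source lang="cpp"><br />
// Make algorithms created from here on use vtkCompositeDataPipeline,<br />
// so composite inputs are iterated over transparently.<br />
vtkCompositeDataPipeline* prototype = vtkCompositeDataPipeline::New();<br />
vtkAlgorithm::SetDefaultExecutivePrototype(prototype);<br />
prototype->Delete();<br />
</source><br />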
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> "Particle tracer" -> Mapper;<br />
} <br />
</graphviz><br />
<br />
the executive knows to invoke the reader for 2 time steps (because the particle tracer always needs 2 time steps for interpolation) and gather the result in a vtkTemporalDataSet.<br />
<br />
vtkCompositeDataPipeline does all of this by overriding a significant portion of the execution mechanism when it needs to iterate over blocks and/or time steps.</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/Executives&diff=40754VTK/Tutorials/Executives2011-06-14T19:25:41Z<p>Pratikm: /* Introduction */</p>
<hr />
<div><small>The introductory section has been blatantly copied from [http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 here]</small><br />
<br />
This section will introduce the classes and concepts used in the new VTK<br />
pipeline. You should be familiar with the very basic design of the old<br />
pipeline before delving into this. The new pipeline was designed to reduce complexity<br />
while at the same time provide more flexibility. In the old pipeline the pipeline<br />
functionality and mechanics were contained in the data object and filters. In the new<br />
pipeline this functionality is contained in a new class (and its subclasses) called<br />
'''vtkExecutive'''. <br />
<br />
==Introduction==<br />
There are four key classes that make up the new pipeline. They are:<br />
<br />
*'''vtkInformation''': <br />
provides the flexibility to grow. Most of the methods and meta<br />
information storage make use of this class. vtkInformation is a map-based data<br />
structure that supports heterogeneous key-value operations with compile time<br />
type checking. There is also a vtkInformationVector class for storing vectors of<br />
information objects. When passing information up or down the pipeline (or from<br />
the executive to the algorithm) this is the class to use.<br />
*'''vtkDataObject''':<br />
in the past this class both stored data and handled some of the<br />
pipeline logic. In the new pipeline this class is only supposed to store data. In<br />
practice there are some pipeline methods in the class for backwards compatibility<br />
so that all of VTK doesn’t break, but the goal is that vtkDataObject should only<br />
be about storing data. vtkDataObject has an instance of vtkInformation that can be<br />
used to store key-value pairs in. For example the current extent of the data object<br />
is stored in there but the whole extent is not, because that is a pipeline attribute<br />
containing information about a specific pipeline topology .<br />
*'''vtkAlgorithm''':<br />
an algorithm is the new superclass for all filters/sources/sinks in<br />
VTK. It is basically the replacement for vtkSource. Like vtkDataObject,<br />
vtkAlgorithm should know nothing about the pipeline and should only be an<br />
algorithm/function/filter. Call it with the correct arguments and it will produce<br />
results. It also has a vtkInformation instance that describes the properties of the<br />
algorithm and it has information objects that describe its input and output port<br />
characteristics. The main method of an algorithm is '''ProcessRequest'''.<br />
*'''vtkExecutive''':<br />
contains the logic of how to connect and execute a pipeline. This<br />
class is the superclass of all executives. Executives are distributed (as opposed to<br />
centralized) and each filter/algorithm has its own executive that communicates<br />
with other executives. vtkExecutive has a subclass called<br />
vtkDemandDrivenPipeline which in turn has a subclass called<br />
vtkStreamingDemandDrivenPipeline. vtkStreamingDemandDrivenPipeline<br />
provides most of the functionality that was found in the old VTK pipeline and is<br />
the default executive for all algorithms if you do not specify one.<br />
<br />
<br />
Let us first look at the vtkAlgorithm class. It may seem odd that a class with no notion of<br />
a pipeline has one key method called ProcessRequest. At its simplest, an algorithm has a<br />
basic function to take input data and produce output data. This is a down-stream ( Requests for information or data <br />
flowing from input to output are called downstream requests. Requests for information or data flowing from output to <br />
input are called upstream requests.) request<br />
(specifically REQUEST_DATA) that all algorithms should implement. But algorithms<br />
can do more than just produce data; they also have characteristics or metadata that they<br />
can provide. For example, an algorithm can provide information about what type of<br />
output it will produce when you execute it. An imaging algorithm might only be capable<br />
of producing double results. The algorithm can specify this by responding to another<br />
down-stream request called REQUEST_INFORMATION. Consider the following code<br />
fragment:<br />
<source lang="cpp"><br />
int vtkMyAlgorithm::ProcessRequest(<br />
vtkInformation *request,<br />
vtkInformationVector **inputVector,<br />
vtkInformationVector *outputVector)<br />
{<br />
// generate the data<br />
if(request->Has(vtkDemandDrivenPipeline::REQUEST_INFORMATION()))<br />
{<br />
// specify that the output (only one for this filter) will be double<br />
vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
outInfo->Set(vtkDataObject::SCALAR_TYPE(),VTK_DOUBLE);<br />
return 1;<br />
}<br />
return this->Superclass::ProcessRequest(request, inputVector,outputVector);<br />
}<br />
</source><br />
This method takes three information objects as input. The first is the request which<br />
specifies what you are asking the algorithm to do. Typically this is just one key such as<br />
REQUEST_INFORMATION. The next two arguments are information vectors one for<br />
the inputs to this algorithm and one for the outputs of this algorithm. In the above<br />
example no input information was used. The output information vector was used to get<br />
the information object associated with the first output of this algorithm. Into that<br />
information was placed a key-value pair specifying that it would produce results of type<br />
double. Any requests that the algorithm doesn’t handle should be forwarded to the<br />
superclass.<br />
<br />
The pipeline topology in the new pipeline is a little different from the old one. In the new<br />
pipeline you connect the output port of one algorithm to the input port of another<br />
algorithm. For example,<br />
<source lang ="cpp"><br />
alg1->SetInputConnection( inPort, alg2->GetOutputPort( outPort ))<br />
</source><br />
Using this terminology a port is like a pin on an integrated circuit. An algorithm has some<br />
input ports and some output ports. A connection is a “connection” between two ports. So<br />
to connect two algorithms you make a connection between the output port of one<br />
algorithm and the input port of another algorithm. Any input port in the new pipeline can<br />
be specified as 'repeatable' and or 'optional':<br />
#'''Repeatable''' means that more than one connection can be made to this input port (such as for append filters). <br />
#'''Optional''' means that the input is not required for execution. <br />
<br />
While ports can be repeatable there is still a<br />
need for multiple ports. Your algorithm should have multiple ports when the concept,<br />
data type, or semantics of a port are different. So AppendFilter only needs one repeatable<br />
input port because it treats all of its inputs the same. Glyph in contrast should have two<br />
ports, one for the input points and one for the glyph model because these are two distinct<br />
concepts. Of course the old style of SetInput, GetOutput connections will work with<br />
existing algorithms as well. <br />
<br />
So in the new pipeline outputs are referred to by port number<br />
while inputs are referred to by both their port number and connection number (because a<br />
single input port can have more than one connection)<br />
<br />
==Typical Pipeline Execution==<br />
Let us take a quick look at the typical execution of a pipeline in VTK. First the use<br />
instantiates an Algorithm such as vtkImageGradient, when the algorithm is instantiated it<br />
will automatically create a default executive and connect the algorithm and executive<br />
together. The algorithm will also call '''FillInputPortInformation''' and<br />
'''FillOutputPortInformation''' on each input and output port respectively. These two methods<br />
should setup the static characteristics of the input and output ports such as data type<br />
requirements and whether the port is optional or repeatable. For example, by default all<br />
subclasses of vtkImageAlgorithm are assumed to take an vtkImageData as input:<br />
<source lang="cpp"><br />
int vtkImageAlgorithm::FillInputPortInformation(int vtkNotUsed(port), vtkInformation*<br />
info)<br />
{<br />
info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkImageData");<br />
return 1;<br />
}<br />
</source><br />
This can be overridden by subclasses as required. Once the algorithm and executive are<br />
instantiated they will be connected to other algorithm executive pairs. If the new<br />
SetInputConnection signature is used (and it should be used) then this just stores the<br />
connectivity information. The old VTK pipeline used the data objects to store<br />
connectivity and thus required that the data objects be instantiated prior to calling<br />
GetOutput. Once the entire pipeline is instantiated and connected it will be executed<br />
(typically as the result of a Render() call) Typically the first request an algorithm will see<br />
is upon execution is REQUEST_DATA_OBJECT (see vtkExecutive, vtkDataObject and<br />
their subclasses for all the possible keys and requests.) This request asks the algorithm to<br />
create an instance of vtkDataObject (or appropriate subclass) for all of its output ports. If<br />
you handle this request you can set the output ports output data to be whatever type you<br />
want (see vtkDataSetAlgorithm for an example), if the algorithm does not handle the<br />
request then by default the executive will look at the output port information to see what<br />
type the output port has been set to (from FillOutputPortInformation) If this is a concrete<br />
type then it will instantiate an instance of that type and use it. If it isn’t concrete then (I<br />
can’t remember if it just fails with an error or has another fallback position)<br />
<br />
===Request Information===<br />
This point the pipeline is instantiated, connected, and the data objects have been<br />
instantiated to store the data. The next request an algorithm will typically see is<br />
REQUEST_INFORMATION. This request asks the Algorithm to provide as much<br />
information as it can about what the output data will look like once the algorithm has<br />
generated it. Typically an algorithm will look at the information provided about its inputs<br />
and try to specify what it can about its outputs. In many image processing filters quite a<br />
bit can be specified such as the WHOLE_EXTENT, SCALAR_TYPE,<br />
SCALAR_NUMBER_OF_COMPONENTS, ORIGIN, SPACING, etc. The rule here is to<br />
provide or compute as much information as you can without actually executing (or<br />
reading in the entire data file) and without taking up significant CPU time. For example<br />
an image reader should read the header information from the file to get what information<br />
it can out of it, but it should not read in the entire image so that it can compute the scalar<br />
range of the data. When providing information about an output, an algorithm is not<br />
limited to the current information keys (such as WHOLE_EXTENT) that are provided by<br />
VTK. Part of the new pipeline design is that it can be easily extended. You can define<br />
your own keys and then in the REQUEST_INFORMATION request you can add those<br />
keys and their values to the output information objects. You can also specify that you<br />
want those keys to be passed down (or up) the pipeline by adding them to the<br />
KEYS_TO_COPY. That way you could have a specialized reader that populates the<br />
information with special keys and then have a writer or mapper downstream that uses<br />
those keys. By default executives will first copy the input’s information to the output’s<br />
information. You only need to handle the cases where it is different.<br />
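For illustration, here is a hedged sketch of how an image source might answer REQUEST_INFORMATION (the 256x256 extent, origin, and spacing values are invented for the example; the keys and calls are the standard VTK ones):<br />
<source lang="cpp"><br />
// Inside ProcessRequest() of a hypothetical image source:<br />
if(request->Has(vtkDemandDrivenPipeline::REQUEST_INFORMATION()))<br />
  {<br />
  vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
  // meta-data only: no pixels are generated here<br />
  int wholeExtent[6] = { 0, 255, 0, 255, 0, 0 };<br />
  outInfo->Set(vtkStreamingDemandDrivenPipeline::WHOLE_EXTENT(), wholeExtent, 6);<br />
  outInfo->Set(vtkDataObject::ORIGIN(), 0.0, 0.0, 0.0);<br />
  outInfo->Set(vtkDataObject::SPACING(), 1.0, 1.0, 1.0);<br />
  vtkDataObject::SetPointDataActiveScalarInfo(outInfo, VTK_DOUBLE, 1);<br />
  return 1;<br />
  }<br />
</source><br />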
===Request Update Extent===<br />
The next request is typically REQUEST_UPDATE_EXTENT. To fulfill this request an<br />
algorithm should take the update extents from its output's information and then set the<br />
correct update extents in its input's information. As with REQUEST_INFORMATION<br />
there is a default behavior provided by the executive, and you only need to handle cases<br />
where it changes (see vtkImageGradient for an example).<br />
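A hedged sketch in the style of vtkImageGradient, which needs one extra layer of input samples around the requested output extent (the keys are the standard VTK ones; the clipping logic is written out here for illustration):<br />
<source lang="cpp"><br />
// Inside ProcessRequest(): grow the input update extent by one sample in<br />
// each direction (central differences need neighbors), clipped against<br />
// the input whole extent.<br />
if(request->Has(vtkStreamingDemandDrivenPipeline::REQUEST_UPDATE_EXTENT()))<br />
  {<br />
  vtkInformation* inInfo = inputVector[0]->GetInformationObject(0);<br />
  vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
  int ue[6], we[6];<br />
  outInfo->Get(vtkStreamingDemandDrivenPipeline::UPDATE_EXTENT(), ue);<br />
  inInfo->Get(vtkStreamingDemandDrivenPipeline::WHOLE_EXTENT(), we);<br />
  for(int i = 0; i < 3; ++i)<br />
    {<br />
    if(ue[2*i] - 1 >= we[2*i])     { ue[2*i]   -= 1; }<br />
    if(ue[2*i+1] + 1 <= we[2*i+1]) { ue[2*i+1] += 1; }<br />
    }<br />
  inInfo->Set(vtkStreamingDemandDrivenPipeline::UPDATE_EXTENT(), ue, 6);<br />
  return 1;<br />
  }<br />
</source><br />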
===Request Data===<br />
Finally REQUEST_DATA will be called, and the algorithm should fill in the output ports'<br />
data objects.<br />
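A minimal hedged sketch of a REQUEST_DATA handler that simply passes its input through (a real filter would transform the data at this point):<br />
<source lang="cpp"><br />
// Inside ProcessRequest(): a pass-through REQUEST_DATA handler.<br />
if(request->Has(vtkDemandDrivenPipeline::REQUEST_DATA()))<br />
  {<br />
  vtkInformation* inInfo = inputVector[0]->GetInformationObject(0);<br />
  vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
  vtkDataObject* input = inInfo->Get(vtkDataObject::DATA_OBJECT());<br />
  vtkDataObject* output = outInfo->Get(vtkDataObject::DATA_OBJECT());<br />
  output->ShallowCopy(input);  // a real filter would compute new data here<br />
  return 1;<br />
  }<br />
</source><br />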
<br />
==Executives==<br />
<br />
The class hierarchy for the mainstream VTK executives is as follows.<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=BT<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
vtkCompositeDataPipeline -> vtkStreamingDemandDrivenPipeline -> vtkDemandDrivenPipeline -> vtkExecutive;<br />
} <br />
<br />
</graphviz><br />
<br />
Below, we provide some information about these executives that is not necessarily found in the VTK User's Guide. If you are not familiar with VTK's pipeline execution model, you should read the Managing Pipeline Execution chapter in the book.<br />
<br />
=== vtkExecutive ===<br />
This class is the superclass for all executives. It is pretty abstract and provides little functionality. Important ones:<br />
* An executive has an algorithm<br />
* An executive manages (i.e. has) the input and output information objects of the algorithm. This means that the pipeline graph is stored by a set of executives, not algorithms.<br />
<br />
The most important function in vtkExecutive is ProcessRequest(). This method does two things:<br />
# Forward the request upstream (if the FORWARD_DIRECTION is vtkExecutive::RequestUpstream) or downstream (if the FORWARD_DIRECTION is vtkExecutive::RequestDownstream; note that the downstream case is not implemented).<br />
# Pass the request to the algorithm by calling CallAlgorithm(), which calls ProcessRequest() on the algorithm. This can happen before and/or after the request is forwarded, depending on whether ALGORITHM_BEFORE_FORWARD and/or ALGORITHM_AFTER_FORWARD is set.<br />
<br />
CallAlgorithm() calls CopyDefaultInformation() before passing the request to the algorithm. The goal of this function is to copy certain information (such as update requests or meta-information) from output to input (when the algorithm is invoked before forwarding - for example, in REQUEST_UPDATE_EXTENT) or from input to output (when the algorithm is invoked after forwarding - for example in REQUEST_INFORMATION).<br />
<br />
'''Note on centralized executives:''' This implementation does not allow us to use centralized executives (one executive that manages more than one algorithm) because the executive has an algorithm AND the executive stores the pipeline graph. However, it is possible to create a meta-executive (an executive to rule them all) that is centralized. This executive would have to manage the flow of information itself. This can be done by subclassing the executive class to disable forwarding. The centralized executive can then delegate the handling of each pass to the distributed executives but manage forwarding itself.<br />
<br />
=== vtkDemandDrivenPipeline ===<br />
<br />
This executive implements a demand-driven (pull) pipeline. It recognizes 3 passes in ProcessRequest():<br />
<br />
# REQUEST_DATA_OBJECT: This is where the algorithm is supposed to create its output data objects. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteDataObject() (a virtual member function). This first calls CallAlgorithm() and then CheckDataObject(). If CallAlgorithm() does not create output data objects, CheckDataObject() tries to create them based on the vtkDataObject::DATA_TYPE_NAME defined in the output port information.<br />
# REQUEST_INFORMATION: This is where the algorithm is supposed to provide meta-data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteInformation() (a virtual member function). Note: for backwards compatibility purposes, ExecuteInformation() calls CopyInformationToPipeline() on the output data object. In the old pipeline, meta-information was provided by setting it on the output data object; this copies such meta-information from the data objects to the output information.<br />
# REQUEST_DATA: This is where the algorithm is supposed to provide (heavy) data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteData() (a virtual member function). ExecuteData() calls ExecuteDataStart(), CallAlgorithm() and ExecuteDataEnd(). ExecuteDataStart() takes care of initializing the outputs whereas ExecuteDataEnd() performs finalization such as marking the outputs as generated by calling DataHasBeenGenerated().<br />
<br />
Note that all of these passes make sure to skip execution if the required information was already generated and if nothing upstream changed.<br />
<br />
=== vtkStreamingDemandDrivenPipeline ===<br />
<br />
This executive adds support for [[VTK/Streaming | streaming]] to vtkDemandDrivenPipeline. Streaming is usually performed by processing a subset of a dataset in each invocation of the pipeline and accumulating the results. Currently, vtkStreamingDemandDrivenPipeline supports 3 ways of subsetting the data:<br />
<br />
# '''Extents''': Extents are only applicable to structured datasets. An extent is a logical subset of a structured dataset defined by providing min and max IJK values (VOI).<br />
# '''Pieces''': Pieces are only applicable to unstructured datasets. A piece is a subset of an unstructured dataset. What a "piece" means is determined by the producer of the dataset.<br />
# '''Time steps''': The pipeline can request a particular time step at each invocation for time series data.<br />
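For example, pieces can be pulled one at a time by setting the update extent on the executive between invocations. A hedged sketch (the variable <tt>filter</tt> stands for any algorithm whose output we want to stream; the calls are the standard vtkStreamingDemandDrivenPipeline API):<br />
<source lang="cpp"><br />
// Pull an unstructured dataset in 4 pieces, one pipeline invocation each.<br />
vtkStreamingDemandDrivenPipeline* sddp =<br />
  vtkStreamingDemandDrivenPipeline::SafeDownCast(filter->GetExecutive());<br />
const int numPieces = 4;<br />
for(int piece = 0; piece < numPieces; ++piece)<br />
  {<br />
  sddp->SetUpdateExtent(0 /*output port*/, piece, numPieces, 0 /*ghost levels*/);<br />
  sddp->Update();<br />
  // ... accumulate filter->GetOutputDataObject(0) here ...<br />
  }<br />
</source><br />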
<br />
vtkStreamingDemandDrivenPipeline adds 2 new pipeline passes to support streaming:<br />
<br />
# REQUEST_UPDATE_EXTENT: This pass is where the consumer asks for a particular subset from its input. It is an ALGORITHM_BEFORE_FORWARD pass. This is where algorithms receive extent/piece and time step requests from their consumers and copy or modify these requests upstream. In this pass, if an algorithm produces unstructured data but consumes structured data, the executive uses an extent translator to automatically convert the piece request to an extent request.<br />
# REQUEST_UPDATE_EXTENT_INFORMATION: This optional pass is where meta-information specific to a particular subset is requested. It is an ALGORITHM_AFTER_FORWARD pass. This pass is commonly used for dynamic streaming, where the consumer fetches meta-information such as bounds or scalar range for a particular piece before deciding whether to update it.<br />
<br />
=== vtkCompositeDataPipeline ===<br />
<br />
This executive adds support for iterating over multiple blocks and/or time steps. This is best described using an example pipeline, such as the following:<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> Contour -> Mapper;<br />
} <br />
</graphviz><br />
<br />
Here, the Ensight reader always produces a multi-block dataset whereas the contour filter can only handle vtkDataSet and subclasses. As a result, this pipeline would produce a run-time error if the executive is vtkStreamingDemandDrivenPipeline. vtkCompositeDataPipeline deals with this issue by looping over the leaf nodes of the multi-block dataset and performing a full pipeline invocation of the contour filter for each block. Similarly, when the pipeline is something like the following<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> "Particle tracer" -> Mapper;<br />
} <br />
</graphviz><br />
<br />
the executive knows to invoke the reader for 2 time steps (because the particle tracer always needs 2 time steps for interpolation) and to gather the results in a vtkTemporalDataSet.<br />
<br />
vtkCompositeDataPipeline does all of this by overriding a significant portion of the execution mechanism when it needs to iterate over blocks and/or time steps.</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/External_Tutorials&diff=40731VTK/Tutorials/External Tutorials2011-06-13T19:42:03Z<p>Pratikm: /* VTK Pipeline */</p>
<hr />
<div>This page was based on Sebastien Barre's [http://www.barre.nom.fr/vtk/links-examples.html VTK Links: Examples] page. However, it is kept more up to date, and new resources will be regularly added. Note that to understand VTK internals, you might still have to buy the VTK User's Guide (although be warned that it does not cover everything). If you find a nice tutorial/webpage that explains VTK, please add it here! <br />
<br />
== Examples, Presentations, Seminars, Talks, Tutorials ==<br />
# There is a tutorial in the VTK distribution (in /Examples/Tutorial).<br />
#[http://www.rug.nl/cit/hpcv/visualisation/VTK/index.html Visualization examples for The VISUALIZATION TOOLKIT], used in a VTK workshop given at the University of Groningen (RuG): Worth a look, lots of examples.<br />
#[http://www.mcs.anl.gov/~disz/cs-341/colorvis/colorvis.PPT Introduction to Visualization with VTK]:Powerpoint presentation, no code, very nice illustrations (T. L. Disz, Univ. of Chicago)<br />
#[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.138.514&rep=rep1&type=pdf Visualizing with VTK : A Tutorial]: A tutorial given by the writers of the VTK Book (W. Schroeder, L. Avila, W. Hoffman)<br />
#[http://www.cs.uic.edu/~jbell/CS526/Tutorial/Tutorial.html Visualization Toolkit Tutorial ]: Overview of VTK, no code, mostly pictures of projects. <br />
#[http://www.bu.edu/tech/research/training/tutorials/vtk/ Using VTK to Visualize Scientific Data (online tutorial)]: Nice Introduction with serious examples in Tcl (BU)<br />
#[http://www.ncsa.illinois.edu/~semeraro/PPT/VTK_TUTORIAL/v3_document.htm VTK Tutorial: How to Create Visualization Applications with VTK]: Explains basic VTK objects, code snippets are present (Dave Semeraro)<br />
#[http://www.osc.edu/supercomputing/training/vtk/vtk_0505.pdf Scientific Visualization with VTK,The Visualization Toolkit]: A Hands-on introduction; uses examples that ship with VTK to illustrate VTK concepts(Ohio Supercomputer Center, OSC)<br />
#[http://www.bioimagesuite.org/vtkbook/index.html Introduction to Programming for Image Analysis with VTK]: A book on medical analysis using VTK, free PDF available at the site (Xenophon Papadmetris)<br />
#[http://www.csc.kth.se/utbildning/kth/kurser/DD2257/visual09/VTK-090325-GT.pdf Introduction to VTK]: Nice exposition of basic VTK internals, but very brief (Gustav Taxen)<br />
#[http://vizisaw.dubmun.com/index.html Vizi-SAW VTK Tutorial]<br />
<br />
== VTK Pipeline ==<br />
#[http://www.ci-ra.org/Documents/Seminaires/05-2009/ParaView.Introduction.pdf ParaView Visualization Course: The VTK Visualization Pipeline and ParaView]: Introduction to ParaView, but includes a nice discussion of VTK pipeline (Jean M. Favre)<br />
#[http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 Upgrading to and Understanding the new VTK Pipeline]: Explains the concepts of vtkInformation etc.<br />
#[http://www.vtk.org/Wiki/images/8/80/Pipeline.pdf Proposal for the New VTK Pipeline]: This new pipeline is currently in use; the document uses the term 'vtkProcessor'; just replace this by 'vtkAlgorithm' throughout<br />
<br />
== Computer Graphics Courses that teach VTK ==<br />
# [http://web.cs.wpi.edu/~matt/courses/cs563 CS563:Advanced Topics in Computer Graphics Home Page]<br />
# [http://www.cs.rpi.edu/~cutler/classes/visualization/F10/ RPI CSCI-4972 Introduction to Visualization Fall 2010] - slides available [https://docs.google.com/leaf?id=0B8yIfGqnlfSoYmMzNGNmMjQtZWUzYS00MGIwLWI2ODItMzBiNzkwNTY5MTY1&hl=en_US here]<br />
# [http://www.inf.ed.ac.uk/teaching/courses/vis/ Visualization Module Home Page] <br />
<br />
<br />
<!--<br />
No links could be found; if they are found, then please:<br />
1) include the links<br />
2) make them visible by shifting the comment tags<br />
<br />
== Advanced Computer Graphics and Data Visualization - Term Projects ==<br />
(T. D. Citriniti, Rensselaer Polytechnic Institute, NY)<br />
" This is a collection of Term Project completed by the students in "Advanced Computer Graphics and Data Visualization" 35-6961 for the Fall 1995 semester." And these of Spring 97. Very interesting projects, but NO code (should ask for it ?).<br />
Problem: No link found<br />
== Jumping Jack Icon/Glyph - Scientific Visualization Project ==<br />
(M. Srdanovic)<br />
"The project is to combine 6 MRI images using some form of glyph. I decided to use a glyph that resembles a person doing "jumping jacks". Each limb of the icon will be oriented based on the intensity of pixels from each of the 6 images.". Part of the Scientific Visualization (91.541) course (Dr. G.G. Grinstein) at UMass Lowell Computer Science Department.<br />
Problem: No link found<br />
== CS 5630 - Scientific Visualization ==<br />
(C. Johnson , University of Utah)<br />
formerly known as CS 523 : various solutions (+ code) : Class assignments - see 1 to 4 (T. Robbins), and also Y.-K. Yang assignments.<br />
Problem: No link found<br />
==(J. Kaandorp, R. Belleman & Z. Zhao, Univ. of Amsterdam)==<br />
Note : "If you plan to use any of the material provided by our invited speakers for your own course, please contact me so that I can check for an OK with the authors.".<br />
Problem: No link found<br />
--><br />
<br />
<br />
{{VTK/Template/Footer}}</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/Executives&diff=40662VTK/Tutorials/Executives2011-06-13T01:29:56Z<p>Pratikm: /* Introduction */</p>
<hr />
<div><small>The introductory section has been blatantly copied from [http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 here]</small><br />
<br />
This section will introduce the classes and concepts used in the new VTK<br />
pipeline. You should be familiar with the very basic design of the old<br />
pipeline before delving into this. The new pipeline was designed to reduce complexity<br />
while at the same time provide more flexibility. In the old pipeline the pipeline<br />
functionality and mechanics were contained in the data object and filters. In the new<br />
pipeline this functionality is contained in a new class (and its subclasses) called<br />
'''vtkExecutive'''. <br />
<br />
==Introduction==<br />
There are four key classes that make up the new pipeline. They are:<br />
<br />
*'''vtkInformation''': <br />
provides the flexibility for the pipeline to grow. Most of the methods and meta<br />
information storage make use of this class. vtkInformation is a map-based data<br />
structure that supports heterogeneous key-value operations with compile time<br />
type checking. There is also a vtkInformationVector class for storing vectors of<br />
information objects. When passing information up or down the pipeline (or from<br />
the executive to the algorithm) this is the class to use.<br />
*'''vtkDataObject''':<br />
in the past this class both stored data and handled some of the<br />
pipeline logic. In the new pipeline this class is only supposed to store data. In<br />
practice there are some pipeline methods in the class for backwards compatibility<br />
so that all of VTK doesn’t break, but the goal is that vtkDataObject should only<br />
be about storing data. vtkDataObject has an instance of vtkInformation that can be<br />
used to store key-value pairs in. For example the current extent of the data object<br />
is stored in there but the whole extent is not, because that is a pipeline attribute<br />
containing information about a specific pipeline topology.<br />
*'''vtkAlgorithm''':<br />
an algorithm is the new superclass for all filters/sources/sinks in<br />
VTK. It is basically the replacement for vtkSource. Like vtkDataObject,<br />
vtkAlgorithm should know nothing about the pipeline and should only be an<br />
algorithm/function/filter. Call it with the correct arguments and it will produce<br />
results. It also has a vtkInformation instance that describes the properties of the<br />
algorithm and it has information objects that describe its input and output port<br />
characteristics. The main method of an algorithm is ProcessRequest.<br />
*'''vtkExecutive''':<br />
contains the logic of how to connect and execute a pipeline. This<br />
class is the superclass of all executives. Executives are distributed (as opposed to<br />
centralized) and each filter/algorithm has its own executive that communicates<br />
with other executives. vtkExecutive has a subclass called<br />
vtkDemandDrivenPipeline which in turn has a subclass called<br />
vtkStreamingDemandDrivenPipeline. vtkStreamingDemandDrivenPipeline<br />
provides most of the functionality that was found in the old VTK pipeline and is<br />
the default executive for all algorithms if you do not specify one.<br />
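To make vtkInformation concrete, here is a small hedged sketch of storing and retrieving typed key-value pairs (the keys shown are standard vtkDataObject and vtkStreamingDemandDrivenPipeline keys; the values are invented for illustration):<br />
<source lang="cpp"><br />
// vtkInformation as a typed key-value map: keys are static objects, and<br />
// Set()/Get() are overloaded on the key type, giving compile-time checking.<br />
vtkInformation* info = vtkInformation::New();<br />
info->Set(vtkDataObject::SPACING(), 1.0, 1.0, 0.5);        // double-vector key<br />
int extent[6] = { 0, 63, 0, 63, 0, 0 };<br />
info->Set(vtkStreamingDemandDrivenPipeline::WHOLE_EXTENT(), extent, 6);<br />
double* spacing = info->Get(vtkDataObject::SPACING());     // typed retrieval<br />
info->Delete();<br />
</source><br />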
<br />
Let us first look at the vtkAlgorithm class. It may seem odd that a class with no notion of<br />
a pipeline has one key method called ProcessRequest. At its simplest, an algorithm has a<br />
basic function to take input data and produce output data. This is a downstream request<br />
(specifically REQUEST_DATA) that all algorithms should implement. (Requests for<br />
information or data flowing from input to output are called downstream requests; requests<br />
flowing from output to input are called upstream requests.) But algorithms<br />
can do more than just produce data; they also have characteristics or metadata that they<br />
can provide. For example, an algorithm can provide information about what type of<br />
output it will produce when you execute it. An imaging algorithm might only be capable<br />
of producing double results. The algorithm can specify this by responding to another<br />
down-stream request called REQUEST_INFORMATION. Consider the following code<br />
fragment:<br />
<source lang="cpp"><br />
int vtkMyAlgorithm::ProcessRequest(<br />
vtkInformation *request,<br />
vtkInformationVector **inputVector,<br />
vtkInformationVector *outputVector)<br />
{<br />
// provide meta-information about the output<br />
if(request->Has(vtkDemandDrivenPipeline::REQUEST_INFORMATION()))<br />
{<br />
// specify that the output (only one for this filter) will be double<br />
vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
outInfo->Set(vtkDataObject::SCALAR_TYPE(),VTK_DOUBLE);<br />
return 1;<br />
}<br />
return this->Superclass::ProcessRequest(request, inputVector,outputVector);<br />
}<br />
</source><br />
This method takes three information objects as input. The first is the request, which<br />
specifies what you are asking the algorithm to do. Typically this is just one key such as<br />
REQUEST_INFORMATION. The next two arguments are information vectors, one for<br />
the inputs to this algorithm and one for its outputs. In the above example no input<br />
information was used. The output information vector was used to get the information<br />
object associated with the first output of this algorithm, and into that information object<br />
was placed a key-value pair specifying that the output will contain values of type<br />
double. Any requests that the algorithm doesn’t handle should be forwarded to the<br />
superclass.<br />
<br />
The pipeline topology in the new pipeline is a little different from the old one. In the new<br />
pipeline you connect the output port of one algorithm to the input port of another<br />
algorithm. For example,<br />
<source lang ="cpp"><br />
alg1->SetInputConnection( inPort, alg2->GetOutputPort( outPort ));<br />
</source><br />
Using this terminology a port is like a pin on an integrated circuit. An algorithm has some<br />
input ports and some output ports. A connection is a “connection” between two ports. So<br />
to connect two algorithms you make a connection between the output port of one<br />
algorithm and the input port of the other. Any input port in the new pipeline can<br />
be marked as 'repeatable' and/or 'optional':<br />
#'''Repeatable''' means that more than one connection can be made to this input port (such as for append filters). <br />
#'''Optional''' means that the input is not required for execution. <br />
<br />
While ports can be repeatable, there is still a<br />
need for multiple ports. Your algorithm should have multiple ports when the concept,<br />
data type, or semantics of its inputs differ. So AppendFilter only needs one repeatable<br />
input port because it treats all of its inputs the same; Glyph, in contrast, should have two<br />
ports, one for the input points and one for the glyph model, because these are two distinct<br />
concepts. Of course the old style of SetInput/GetOutput connections will work with<br />
existing algorithms as well. So in the new pipeline outputs are referred to by port number,<br />
while inputs are referred to by both their port number and connection number (because a<br />
single input port can have more than one connection).<br />
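<br />
The port-and-connection bookkeeping described above can be sketched with a small stand-in class.<br />
This is a toy model, not the real vtkAlgorithm API; the names ToyPort, ToyAlgorithm,<br />
AddConnection and ReadyToExecute are invented for illustration:<br />

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// Toy stand-in for an algorithm's input ports (not the real VTK API).
// Each input port may be flagged repeatable (accepts many connections)
// and/or optional (may be left unconnected).
struct ToyPort {
  bool repeatable = false;
  bool optional = false;
  std::vector<int> connections;  // ids of connected upstream output ports
};

class ToyAlgorithm {
 public:
  explicit ToyAlgorithm(int numInputPorts) : inputs_(numInputPorts) {}

  void SetRepeatable(int port, bool r) { inputs_.at(port).repeatable = r; }
  void SetOptional(int port, bool o) { inputs_.at(port).optional = o; }

  // Connect an upstream output (identified here by a plain id) to an
  // input port. A non-repeatable port accepts at most one connection.
  void AddConnection(int port, int upstreamOutputId) {
    ToyPort& p = inputs_.at(port);
    if (!p.repeatable && !p.connections.empty())
      throw std::runtime_error("port is not repeatable");
    p.connections.push_back(upstreamOutputId);
  }

  // Inputs are addressed by a (port, connection) pair, as in the text.
  int GetInput(int port, int connection) const {
    return inputs_.at(port).connections.at(connection);
  }

  // Every non-optional port needs at least one connection to execute.
  bool ReadyToExecute() const {
    for (const ToyPort& p : inputs_)
      if (!p.optional && p.connections.empty()) return false;
    return true;
  }

 private:
  std::vector<ToyPort> inputs_;
};
```

An append-style algorithm would declare one repeatable port, while a glyph-style algorithm would declare two separate ports because its inputs play different roles.<br />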
<br />
==Typical Pipeline Execution==<br />
Let us take a quick look at the typical execution of a pipeline in VTK. First the user<br />
instantiates an algorithm such as vtkImageGradient. When the algorithm is instantiated it<br />
will automatically create a default executive and connect the algorithm and executive<br />
together. The algorithm will also call '''FillInputPortInformation''' and<br />
'''FillOutputPortInformation''' on each input and output port respectively. These two methods<br />
should set up the static characteristics of the input and output ports such as data type<br />
requirements and whether the port is optional or repeatable. For example, by default all<br />
subclasses of vtkImageAlgorithm are assumed to take a vtkImageData as input:<br />
<source lang="cpp"><br />
int vtkImageAlgorithm::FillInputPortInformation(int vtkNotUsed(port), vtkInformation* info)<br />
{<br />
  info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkImageData");<br />
  return 1;<br />
}<br />
</source><br />
This can be overridden by subclasses as required. Once the algorithm and executive are<br />
instantiated they will be connected to other algorithm-executive pairs. If the new<br />
SetInputConnection signature is used (and it should be used) then this just stores the<br />
connectivity information. The old VTK pipeline used the data objects to store<br />
connectivity and thus required that the data objects be instantiated prior to calling<br />
GetOutput. Once the entire pipeline is instantiated and connected it will be executed<br />
(typically as the result of a Render() call). Typically the first request an algorithm will see<br />
upon execution is REQUEST_DATA_OBJECT (see vtkExecutive, vtkDataObject and<br />
their subclasses for all the possible keys and requests). This request asks the algorithm to<br />
create an instance of vtkDataObject (or an appropriate subclass) for each of its output ports. If<br />
you handle this request you can set each output port’s output data to be whatever type you<br />
want (see vtkDataSetAlgorithm for an example). If the algorithm does not handle the<br />
request then by default the executive will look at the output port information to see what<br />
type the output port has been set to (from FillOutputPortInformation). If this is a concrete<br />
type then the executive will instantiate an instance of that type and use it; if it is not<br />
concrete, the executive cannot create the output itself and will report an error.<br />
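<br />
The fallback just described, where the executive instantiates whatever concrete type the output<br />
port declares, can be sketched as a name-to-factory lookup. This is a simplified model;<br />
ToyDataObject, Registry and MakeOutput are invented names, and an abstract or unknown type<br />
name simply yields no object here, standing in for the executive's failure case:<br />

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>

// Minimal stand-ins for data objects (not the real VTK classes).
struct ToyDataObject { virtual ~ToyDataObject() = default; };
struct ToyImageData : ToyDataObject {};
struct ToyPolyData : ToyDataObject {};

// A registry mapping concrete type names to factories, standing in for
// the executive's ability to instantiate the type named in the output
// port information.
using Factory = std::function<std::unique_ptr<ToyDataObject>()>;

std::map<std::string, Factory>& Registry() {
  static std::map<std::string, Factory> r = {
      {"ToyImageData", [] { return std::make_unique<ToyImageData>(); }},
      {"ToyPolyData", [] { return std::make_unique<ToyPolyData>(); }},
  };
  return r;
}

// If the algorithm did not create its output, fall back to the declared
// port type; an abstract/unknown name cannot be instantiated and fails.
std::unique_ptr<ToyDataObject> MakeOutput(const std::string& typeName) {
  auto it = Registry().find(typeName);
  if (it == Registry().end()) return nullptr;  // not concrete: error case
  return it->second();
}
```
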
<br />
===Request Information===<br />
This point the pipeline is instantiated, connected, and the data objects have been<br />
instantiated to store the data. The next request an algorithm will typically see is<br />
REQUEST_INFORMATION. This request asks the Algorithm to provide as much<br />
information as it can about what the output data will look like once the algorithm has<br />
generated it. Typically an algorithm will look at the information provided about its inputs<br />
and try to specify what it can about its outputs. In many image processing filters quite a<br />
bit can be specified such as the WHOLE_EXTENT, SCALAR_TYPE,<br />
SCALAR_NUMBER_OF_COMPONENTS, ORIGIN, SPACING, etc. The rule here is to<br />
provide or compute as much information as you can without actually executing (or<br />
reading in the entire data file) and without taking up significant CPU time. For example<br />
an image reader should read the header information from the file to get what information<br />
it can out of it, but it should not read in the entire image so that it can compute the scalar<br />
range of the data. When providing information about an output, an algorithm is not<br />
limited to the current information keys (such as WHOLE_EXTENT) that are provided by<br />
VTK. Part of the new pipeline design is that it can be easily extended. You can define<br />
your own keys and then in the REQUEST_INFORMATION request you can add those<br />
keys and their values to the output information objects. You can also specify that you<br />
want those keys to be passed down (or up) the pipeline by adding them to the<br />
KEYS_TO_COPY. That way you could have a specialized reader that populates the<br />
information with special keys and then have a writer or mapper downstream that uses<br />
those keys. By default executives will first copy the input’s information to the output’s<br />
information. You only need to handle the cases where it is different.<br />
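<br />
The copy-then-override convention can be sketched with a plain string map standing in for<br />
vtkInformation (ToyInformation and the function name are illustrative only):<br />

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy stand-in for vtkInformation: a plain key-value map.
using ToyInformation = std::map<std::string, std::string>;

// The executive first copies the input information to the output; the
// algorithm then only overrides the keys that actually change.
ToyInformation RequestInformation(const ToyInformation& inputInfo) {
  ToyInformation outInfo = inputInfo;  // default: pass everything through
  outInfo["SCALAR_TYPE"] = "double";   // this filter always emits doubles
  return outInfo;
}
```

The default copy carries keys such as WHOLE_EXTENT through unchanged; the algorithm only touches the one key that differs.<br />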
===Request Update Extent===<br />
The next request is typically REQUEST_UPDATE_EXTENT. To fulfill this request an<br />
algorithm should take the update extents in its output’s information and then set the<br />
correct update extents in its input’s information. As with REQUEST_INFORMATION<br />
there is a default behavior provided by the executive and you only need to handle cases<br />
where it changes (see vtkImageGradient for an example).<br />
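<br />
For a concrete picture, a gradient-like image filter needs one extra layer of input samples<br />
around each requested output region. Translating the output update extent into an input update<br />
extent (clamped against the whole extent) might look like the following sketch; the function<br />
name and extent layout are assumptions for illustration, not VTK's actual signatures:<br />

```cpp
#include <algorithm>
#include <array>
#include <cassert>

using Extent = std::array<int, 6>;  // xmin,xmax, ymin,ymax, zmin,zmax

// Grow the requested output extent by one sample on every side so a
// central-difference gradient can be evaluated, then clamp against the
// whole extent reported during the information pass.
Extent InputUpdateExtent(const Extent& outputRequest, const Extent& wholeExtent) {
  Extent in = outputRequest;
  for (int axis = 0; axis < 3; ++axis) {
    in[2 * axis]     = std::max(wholeExtent[2 * axis],     outputRequest[2 * axis] - 1);
    in[2 * axis + 1] = std::min(wholeExtent[2 * axis + 1], outputRequest[2 * axis + 1] + 1);
  }
  return in;
}
```
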
===Request Data===<br />
Finally REQUEST_DATA will be called and the algorithm should fill in the output ports<br />
data objects.<br />
<br />
==Executives==<br />
<br />
The class hierarchy for the mainstream VTK executives is as follows.<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=BT<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
vtkCompositeDataPipeline -> vtkStreamingDemandDrivenPipeline -> vtkDemandDrivenPipeline -> vtkExecutive;<br />
} <br />
<br />
</graphviz><br />
<br />
Below, we provide some information about these executives that is not necessarily found in the VTK User's Guide. If you are not familiar with VTK's pipeline execution model, you should first read the Managing Pipeline Execution chapter in that book.<br />
<br />
=== vtkExecutive ===<br />
This class is the superclass for all executives. It is fairly abstract and provides little functionality on its own. The important points are:<br />
* An executive has an algorithm.<br />
* An executive manages (i.e. owns) the input and output information objects of the algorithm. This means that the pipeline graph is stored by the set of executives, not the algorithms.<br />
<br />
The most important function in vtkExecutive is ProcessRequest(). This method does two things:<br />
# Forward the request upstream (if the FORWARD_DIRECTION is vtkExecutive::RequestUpstream) or downstream (if the FORWARD_DIRECTION is vtkExecutive::RequestDownstream; note that the downstream case is not implemented).<br />
# Pass the request to the algorithm by calling CallAlgorithm(), which calls ProcessRequest() on the algorithm. This can happen before and/or after the request is forwarded, depending on whether ALGORITHM_BEFORE_FORWARD and/or ALGORITHM_AFTER_FORWARD is set.<br />
<br />
CallAlgorithm() calls CopyDefaultInformation() before passing the request to the algorithm. The goal of this function is to copy certain information (such as update requests or meta-information) from output to input (when the algorithm is invoked before forwarding - for example, in REQUEST_UPDATE_EXTENT) or from input to output (when the algorithm is invoked after forwarding - for example in REQUEST_INFORMATION).<br />
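<br />
The before/after-forward ordering can be sketched as a toy trace; the flag names mirror the keys<br />
above, but nothing else is taken from the real implementation:<br />

```cpp
#include <cassert>
#include <string>
#include <vector>

// Record the order of events for a single ProcessRequest() invocation.
// A request may ask for the algorithm to run before the forward, after
// it, or both (toy model of the dispatch described in the text).
std::vector<std::string> ProcessRequestTrace(bool algorithmBeforeForward,
                                             bool algorithmAfterForward) {
  std::vector<std::string> trace;
  if (algorithmBeforeForward) {
    trace.push_back("CopyDefaultInformation");  // e.g. output -> input
    trace.push_back("CallAlgorithm");
  }
  trace.push_back("ForwardUpstream");
  if (algorithmAfterForward) {
    trace.push_back("CopyDefaultInformation");  // e.g. input -> output
    trace.push_back("CallAlgorithm");
  }
  return trace;
}
```
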
<br />
'''Note on centralized executives:''' This implementation does not allow us to use centralized executives (one executive that manages more than one algorithm) because the executive has an algorithm AND the executive stores the pipeline graph. However, it is possible to create a meta-executive (an executive to rule them all) that is centralized. This executive would have to manage the flow of information itself. This can be done by subclassing the executive class to disable forwarding. The centralized executive can then delegate the handling of each pass to the distributed executives but manage forwarding itself.<br />
<br />
=== vtkDemandDrivenPipeline ===<br />
<br />
This executive implements a demand-driven (pull) pipeline. It recognizes 3 passes in ProcessRequest():<br />
<br />
# REQUEST_DATA_OBJECT: This is where the algorithm is supposed to create its output data objects. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteDataObject() (a virtual member function). This first calls CallAlgorithm() and then CheckDataObject(). If CallAlgorithm() does not create the output data objects, CheckDataObject() tries to create them based on the vtkDataObject::DATA_TYPE_NAME defined in the output port information.<br />
# REQUEST_INFORMATION: This is where the algorithm is supposed to provide meta-data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteInformation() (a virtual member function). Note: For backwards compatibility, ExecuteInformation() calls CopyInformationToPipeline() on the output data object. In the old pipeline, meta-information was provided by setting it on the output data object; this call copies such meta-information from the data objects to the output information.<br />
# REQUEST_DATA: This is where the algorithm is supposed to produce the (heavy) data. It is an ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteData() (a virtual member function). ExecuteData() calls ExecuteDataStart(), CallAlgorithm() and ExecuteDataEnd(). ExecuteDataStart() takes care of initializing the outputs whereas ExecuteDataEnd() performs finalization such as marking the outputs as generated by calling DataHasBeenGenerated().<br />
<br />
Note that all of these passes make sure to skip execution if the required information was already generated and if nothing upstream changed.<br />
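<br />
That skip logic boils down to a modification-time comparison, sketched below (toy signature;<br />
the real check also accounts for changed update requests, which this sketch omits):<br />

```cpp
#include <cassert>

// Toy version of the up-to-date check: a pass re-executes only if the
// pipeline (the algorithm or anything upstream) was modified after the
// data was last generated, or if no data exists yet.
bool NeedToExecute(unsigned long pipelineMTime, unsigned long dataTime,
                   bool dataExists) {
  if (!dataExists) return true;     // nothing generated yet
  return pipelineMTime > dataTime;  // something changed upstream
}
```
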
<br />
=== vtkStreamingDemandDrivenPipeline ===<br />
<br />
This executive adds support for [[VTK/Streaming | streaming]] to vtkDemandDrivenPipeline. Streaming is usually performed by processing a subset of a dataset in each invocation of the pipeline and accumulating the results. Currently, vtkStreamingDemandDrivenPipeline supports 3 ways of subsetting the data:<br />
<br />
# '''Extents''': Extents are only applicable to structured datasets. An extent is a logical subset of a structured dataset defined by providing min and max IJK values (VOI).<br />
# '''Pieces''': Pieces are only applicable to unstructured datasets. A piece is a subset of an unstructured dataset; what a "piece" means is determined by the producer of the dataset.<br />
# '''Time steps''': The pipeline can request a particular time step at each invocation for time series data.<br />
<br />
vtkStreamingDemandDrivenPipeline adds 2 new pipeline passes to support streaming:<br />
<br />
# REQUEST_UPDATE_EXTENT: This pass is where the consumer asks for a particular subset from its input. It is an ALGORITHM_BEFORE_FORWARD pass. This is where algorithms receive extent/piece and time step requests from their consumer and copy or modify these requests upstream. In this pass, if an algorithm produces unstructured data but consumes structured data, the executive uses an extent translator to automatically convert the piece request into an extent request.<br />
# REQUEST_UPDATE_EXTENT_INFORMATION: This optional pass is where meta-information specific to the requested subset is gathered. It is an ALGORITHM_AFTER_FORWARD pass. This pass is commonly used for dynamic streaming, where the consumer fetches meta-information such as bounds or scalar range for a particular piece before deciding whether to update it.<br />
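<br />
The piece-to-extent conversion performed by the extent translator can be sketched as an even<br />
split of the whole extent along one axis, with the remainder distributed to the leading pieces.<br />
This is a toy version of what a translator does; the function name and the X-axis-only split are<br />
simplifications:<br />

```cpp
#include <algorithm>
#include <array>
#include <cassert>

using Extent = std::array<int, 6>;  // xmin,xmax, ymin,ymax, zmin,zmax

// Convert "piece p of n" into a structured sub-extent by slicing the
// whole extent along the X axis, giving any remainder samples to the
// leading pieces so every sample is covered exactly once.
Extent PieceToExtent(int piece, int numPieces, const Extent& whole) {
  const int size = whole[1] - whole[0] + 1;  // samples along X
  const int base = size / numPieces;
  const int extra = size % numPieces;
  const int begin = whole[0] + piece * base + std::min(piece, extra);
  const int count = base + (piece < extra ? 1 : 0);
  Extent e = whole;  // other axes keep the full range
  e[0] = begin;
  e[1] = begin + count - 1;
  return e;
}
```
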
<br />
=== vtkCompositeDataPipeline ===<br />
<br />
This executive adds support for iterating over multiple blocks and/or time steps. This is best described with an example. Consider the following pipeline:<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> Contour -> Mapper;<br />
} <br />
</graphviz><br />
<br />
Here, the Ensight reader always produces a multi-block dataset whereas the contour filter can only handle vtkDataSet and subclasses. As a result, this pipeline would produce a run-time error if the executive is vtkStreamingDemandDrivenPipeline. vtkCompositeDataPipeline deals with this issue by looping over the leaf nodes of the multi-block dataset and performing a full pipeline invocation of the contour filter for each block. Similarly, when the pipeline is something like the following<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> "Particle tracer" -> Mapper;<br />
} <br />
</graphviz><br />
<br />
the executive knows to invoke the reader for 2 time steps (because the particle tracer always needs 2 time steps for interpolation) and to gather the results in a vtkTemporalDataSet.<br />
<br />
vtkCompositeDataPipeline does all of this by overriding a significant portion of the execution mechanism when it needs to iterate over blocks and/or time steps.</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Executives&diff=40656VTK/Executives2011-06-12T21:33:44Z<p>Pratikm: Redirected page to VTK/Tutorials/Executives</p>
<hr />
<div>#REDIRECT [[VTK/Tutorials/Executives]]</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials/Executives&diff=40655VTK/Tutorials/Executives2011-06-12T21:33:26Z<p>Pratikm: Created page with "<small>The introductory section has been blatantly copied from [http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 here]</..."</p>
<hr />
<div><small>The introductory section has been blatantly copied from [http://www.cmake.org/cgi-bin/viewcvs.cgi/*checkout*/Utilities/Upgrading/TheNewVTKPipeline.pdf?revision=1.5 here]</small><br />
<br />
This section introduces the classes and concepts used in the new VTK<br />
pipeline. You should be familiar with the basic design of the old<br />
pipeline before delving into this. The new pipeline was designed to reduce complexity<br />
while at the same time providing more flexibility. In the old pipeline the pipeline<br />
functionality and mechanics were contained in the data objects and filters. In the new<br />
pipeline this functionality is contained in a new class (and its subclasses) called<br />
'''vtkExecutive'''. <br />
<br />
==Introduction==<br />
There are four key classes that make up the new pipeline. They are:<br />
<br />
*'''vtkInformation''': <br />
provides the flexibility for the framework to grow. Most of the methods and meta-information<br />
storage make use of this class. vtkInformation is a map-based data<br />
structure that supports heterogeneous key-value operations with compile-time<br />
type checking. There is also a vtkInformationVector class for storing vectors of<br />
information objects. When passing information up or down the pipeline (or from<br />
the executive to the algorithm) this is the class to use.<br />
*'''vtkDataObject''':<br />
in the past this class both stored data and handled some of the<br />
pipeline logic. In the new pipeline this class is only supposed to store data. In<br />
practice there are some pipeline methods in the class for backwards compatibility<br />
so that all of VTK doesn’t break, but the goal is that vtkDataObject should only<br />
be about storing data. vtkDataObject has an instance of vtkInformation that can be<br />
used to store key-value pairs in. For example the current extent of the data object<br />
is stored in there but the whole extent is not, because the whole extent is a pipeline<br />
attribute containing information about a specific pipeline topology.<br />
*'''vtkAlgorithm''':<br />
an algorithm is the new superclass for all filters/sources/sinks in<br />
VTK. It is basically the replacement for vtkSource. Like vtkDataObject,<br />
vtkAlgorithm should know nothing about the pipeline and should only be an<br />
algorithm/function/filter. Call it with the correct arguments and it will produce<br />
results. It also has a vtkInformation instance that describes the properties of the<br />
algorithm and it has information objects that describe its input and output port<br />
characteristics. The main method of an algorithm is ProcessRequest.<br />
*'''vtkExecutive''':<br />
contains the logic of how to connect and execute a pipeline. This<br />
class is the superclass of all executives. Executives are distributed (as opposed to<br />
centralized) and each filter/algorithm has its own executive that communicates<br />
with other executives. vtkExecutive has a subclass called<br />
vtkDemandDrivenPipeline which in turn has a subclass called<br />
vtkStreamingDemandDrivenPipeline. vtkStreamingDemandDrivenPipeline<br />
provides most of the functionality that was found in the old VTK pipeline and is<br />
the default executive for all algorithms if you do not specify one.<br />
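<br />
The compile-time type checking mentioned for vtkInformation can be sketched with a templated<br />
key over type-erased storage. This is a toy; real vtkInformation uses vtkInformationKey<br />
subclasses rather than std::any, and the names below are invented:<br />

```cpp
#include <any>
#include <cassert>
#include <map>
#include <string>
#include <utility>

// A toy heterogeneous map: each key carries its value type, so Get/Set
// are checked at compile time while the storage stays type-erased.
template <typename T>
struct Key { const char* name; };

class ToyInformation {
 public:
  template <typename T>
  void Set(Key<T> key, T value) { data_[key.name] = std::move(value); }

  template <typename T>
  T Get(Key<T> key) const { return std::any_cast<T>(data_.at(key.name)); }

  template <typename T>
  bool Has(Key<T> key) const { return data_.count(key.name) != 0; }

 private:
  std::map<std::string, std::any> data_;
};
```

Asking for a key with the wrong value type fails to compile or to cast, which is the kind of safety the text attributes to vtkInformation.<br />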
<br />
Let us first look at the vtkAlgorithm class. It may seem odd that a class with no notion of<br />
a pipeline has one key method called ProcessRequest. At its simplest, an algorithm has a<br />
basic function: take input data and produce output data. This is a downstream (downstream is<br />
the direction from input to output) request<br />
(specifically REQUEST_DATA) that all algorithms should implement. But algorithms<br />
can do more than just produce data; they also have characteristics or metadata that they<br />
can provide. For example, an algorithm can provide information about what type of<br />
output it will produce when you execute it. An imaging algorithm might only be capable<br />
of producing double results. The algorithm can specify this by responding to another<br />
down-stream request called REQUEST_INFORMATION. Consider the following code<br />
fragment:<br />
<source lang="cpp"><br />
int vtkMyAlgorithm::ProcessRequest(<br />
vtkInformation *request,<br />
vtkInformationVector **inputVector,<br />
vtkInformationVector *outputVector)<br />
{<br />
// generate the data<br />
if(request->Has(vtkDemandDrivenPipeline::REQUEST_INFORMATION()))<br />
{<br />
// specify that the output (only one for this filter) will be double<br />
vtkInformation* outInfo = outputVector->GetInformationObject(0);<br />
outInfo->Set(vtkDataObject::SCALAR_TYPE(),VTK_DOUBLE);<br />
return 1;<br />
}<br />
return this->Superclass::ProcessRequest(request, inputVector,outputVector);<br />
}<br />
</source><br />
This method takes three information objects as input. The first is the request which<br />
specifies what you are asking the algorithm to do. Typically this is just one key such as<br />
REQUEST_INFORMATION. The next two arguments are information vectors one for<br />
the inputs to this algorithm and one for the outputs of this algorithm. In the above<br />
example no input information was used. The output information vector was used to get<br />
the information object associated with the first output of this algorithm. Into that<br />
information was placed a key-value pair specifying that it would produce results of type<br />
double. Any requests that the algorithm doesn’t handle should be forwarded to the<br />
superclass.<br />
<br />
The pipeline topology in the new pipeline is a little different from the old one. In the new<br />
pipeline you connect the output port of one algorithm to the input port of another<br />
algorithm. For example,<br />
<source lang ="cpp"><br />
alg1->SetInputConnection( inPort, alg2->GetOutputPort( outPort ))<br />
</source><br />
Using this terminology a port is like a pin on an integrated circuit. An algorithm has some<br />
input ports and some output ports. A connection is a “connection” between two ports. So<br />
to connect two algorithms you make a connection between the output port of one<br />
algorithm and the input port of another algorithm. Any input port in the new pipeline can<br />
be specified as 'repeatable' and or 'optional':<br />
#'''Repeatable''' means that more than one connection can be made to this input port (such as for append filters). <br />
#'''Optional''' means that the input is not required for execution. <br />
<br />
While ports can be repeatable there is still a<br />
need for multiple ports. Your algorithm should have multiple ports when the concept,<br />
data type, or semantics of a port are different. So AppendFilter only needs one repeatable<br />
input port because it treats all of its inputs the same. Glyph in contrast should have two<br />
ports, one for the input points and one for the glyph model because these are two distinct<br />
concepts. Of course the old style of SetInput, GetOutput connections will work with<br />
existing algorithms as well. So in the new pipeline outputs are referred to by port number<br />
while inputs are referred to by both their port number and connection number (because a<br />
single input port can have more than one connection)<br />
<br />
==Typical Pipeline Execution==<br />
Let us take a quick look at the typical execution of a pipeline in VTK. First the use<br />
instantiates an Algorithm such as vtkImageGradient, when the algorithm is instantiated it<br />
will automatically create a default executive and connect the algorithm and executive<br />
together. The algorithm will also call '''FillInputPortInformation''' and<br />
'''FillOutputPortInformation''' on each input and output port respectively. These two methods<br />
should setup the static characteristics of the input and output ports such as data type<br />
requirements and whether the port is optional or repeatable. For example, by default all<br />
subclasses of vtkImageAlgorithm are assumed to take an vtkImageData as input:<br />
<source lang="cpp"><br />
int vtkImageAlgorithm::FillInputPortInformation(int vtkNotUsed(port), vtkInformation*<br />
info)<br />
{<br />
info->Set(vtkAlgorithm::INPUT_REQUIRED_DATA_TYPE(), "vtkImageData");<br />
return 1;<br />
}<br />
</source><br />
This can be overridden by subclasses as required. Once the algorithm and executive are<br />
instantiated they will be connected to other algorithm executive pairs. If the new<br />
SetInputConnection signature is used (and it should be used) then this just stores the<br />
connectivity information. The old VTK pipeline used the data objects to store<br />
connectivity and thus required that the data objects be instantiated prior to calling<br />
GetOutput. Once the entire pipeline is instantiated and connected it will be executed<br />
(typically as the result of a Render() call) Typically the first request an algorithm will see<br />
is upon execution is REQUEST_DATA_OBJECT (see vtkExecutive, vtkDataObject and<br />
their subclasses for all the possible keys and requests.) This request asks the algorithm to<br />
create an instance of vtkDataObject (or appropriate subclass) for all of its output ports. If<br />
you handle this request you can set the output ports output data to be whatever type you<br />
want (see vtkDataSetAlgorithm for an example), if the algorithm does not handle the<br />
request then by default the executive will look at the output port information to see what<br />
type the output port has been set to (from FillOutputPortInformation) If this is a concrete<br />
type then it will instantiate an instance of that type and use it. If it isn’t concrete then (I<br />
can’t remember if it just fails with an error or has another fallback position)<br />
<br />
===Request Information===<br />
This point the pipeline is instantiated, connected, and the data objects have been<br />
instantiated to store the data. The next request an algorithm will typically see is<br />
REQUEST_INFORMATION. This request asks the Algorithm to provide as much<br />
information as it can about what the output data will look like once the algorithm has<br />
generated it. Typically an algorithm will look at the information provided about its inputs<br />
and try to specify what it can about its outputs. In many image processing filters quite a<br />
bit can be specified such as the WHOLE_EXTENT, SCALAR_TYPE,<br />
SCALAR_NUMBER_OF_COMPONENTS, ORIGIN, SPACING, etc. The rule here is to<br />
provide or compute as much information as you can without actually executing (or<br />
reading in the entire data file) and without taking up significant CPU time. For example<br />
an image reader should read the header information from the file to get what information<br />
it can out of it, but it should not read in the entire image so that it can compute the scalar<br />
range of the data. When providing information about an output, an algorithm is not<br />
limited to the current information keys (such as WHOLE_EXTENT) that are provided by<br />
VTK. Part of the new pipeline design is that it can be easily extended. You can define<br />
your own keys and then in the REQUEST_INFORMATION request you can add those<br />
keys and their values to the output information objects. You can also specify that you<br />
want those keys to be passed down (or up) the pipeline by adding them to the<br />
KEYS_TO_COPY. That way you could have a specialized reader that populates the<br />
information with special keys and then have a writer or mapper downstream that uses<br />
those keys. By default executives will first copy the input’s information to the output’s<br />
information. You only need to handle the cases where it is different.<br />
===Request Update Extent===<br />
The next request is typically REQUEST_UPDATE_EXTENT. To fulfill this request an<br />
algorithm should take the update extents in its output’s information and then set the<br />
correct update extents in its input’s information. As with REQUEST_INFROMATION<br />
there is a default behavior by the executive and you only need to handle cases where it<br />
changes. (see vtkImageGradient for an example)<br />
===Request Data===<br />
Finally REQUEST_DATA will be called and the algorithm should fill in the output ports<br />
data objects.<br />
<br />
==Executives==<br />
<br />
The class hierarchy for the mainstream VTK executives is as follows.<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=BT<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
vtkCompositeDataPipeline -> vtkStreamingDemandDrivenPipeline -> vtkDemandDrivenPipeline -> vtkExecutive;<br />
} <br />
<br />
</graphviz><br />
<br />
Below, we provide some information about these executives that is not necessarily found in VTK User's Guide. If you are not familiar with VTK's pipeline execution model, you should read the Managing Pipeline Execution chapter in the book.<br />
<br />
=== vtkExecutive ===<br />
This class is the superclass for all executives. It is pretty abstract and provides little functionality. Important ones:<br />
* An executive has an algorithm<br />
* An executive manages (i.e. has) the input and output information objects of the algorithm. This means that the pipeline graph is stored by a set of executives not algorithms<br />
<br />
The most important function is vtkExecutive is ProcessRequest(). This method does two things:<br />
# Forward the request upstream (if the FORWARD_DIRECTION is vtkExecutive::RequestUpstream) or downstream (if the FORWARD_DIRECTION is vtkExecutive::RequestDownstream (note: this is not implemented).<br />
# Pass the request to the algorithm by calling CallAlgorithm() which calls ProcessRequest() on the algorithm. This can happen before and/or after the request is forwarding depending on whether ALGORITHM_BEFORE_FORWARD and/or ALGORITHM_AFTER_FORWARD is set.<br />
<br />
CallAlgorithm() calls CopyDefaultInformation() before passing the request to the algorithm. The goal of this function is to copy certain information (such as update requests or meta-information) from output to input (when the algorithm is invoked before forwarding - for example, in REQUEST_UPDATE_EXTENT) or from input to output (when the algorithm is invoked after forwarding - for example in REQUEST_INFORMATION).<br />
<br />
'''Note on centralized executives:''' This implementation does not allow us to use centralized executives (one executive that manages more than one algorithm) because the executive has an algorithm AND the executive stores the pipeline graph. However, it is possible to create a meta-executive (an executive to rule them all) that is centralized. This executive would have to manage the flow of information itself. This can be done by subclassing the executive class to disable forwarding. The centralized executive can the delegate the handling of each pass to the distributed executives but manage forwarding itself. <br />
<br />
=== vtkDemandDrivenPipeline ===<br />
<br />
This executive implements a demand-driven (pull) pipeline. It recognizes 3 passes in ProcessRequest():<br />
<br />
# REQUEST_DATA_OBJECT: This is where the algorithm is supposed to create its output data objects. It is a ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteDataObject() (a virtual member function). This first calls CallAlgorithm() and then CheckDataObject(). If CallAlgorithm() does not create output data objects, CheckDataObject() tries to create them based on a vtkDataObject::DATA_TYPE_NAME defined in the output port information.<br />
# REQUEST_INFORMATION: This is where the algorithm is supposed to provide meta-data. It is a ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteInformation() (a virtual member function). Note: For backwards compatibility purposes, ExecuteInformation() calls CopyInformationToPipeline() on the output data object. In the old pipeline, the meta-information was provided by setting it on the output data object. This copies such meta-information from the data objects to the output information.<br />
# REQUEST_DATA: This is where the algorithm is supposed to provide (heavy) data. It is a ALGORITHM_AFTER_FORWARD pass. After forwarding the request upstream, the executive calls ExecuteData() (a virtual member function). ExecuteData() calls ExecuteDataStart(), CallAlgorithm() and ExecuteDataEnd(). ExecuteDataStart() takes case of initializing the outputs whereas ExecuteDataEnd() performs finalization such as marking the outputs as generated by calling DataHasBeenGenerated().<br />
<br />
Note that all of these passes make sure to skip execution if the required information was already generated and if nothing upstream changed.<br />
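The three passes and the skip-if-unchanged behavior can be modeled with a small toy pipeline. This is a hypothetical Python sketch, not the actual VTK API: each pass forwards upstream first, and REQUEST_DATA re-executes only when the algorithm or something upstream has changed.

```python
# Toy model of a demand-driven (pull) pipeline; names are illustrative,
# not the real VTK classes.
class Algorithm:
    def __init__(self, upstream=None):
        self.upstream = upstream
        self.mtime = 0          # bumped when parameters change
        self.exec_time = -1     # "time" of our last execution
        self.output = None

    def request_data_object(self):
        # Pass 1: forward upstream, then create the output container.
        if self.upstream:
            self.upstream.request_data_object()
        if self.output is None:
            self.output = {"info": {}, "data": None}

    def request_information(self):
        # Pass 2: forward upstream, then produce lightweight meta-data.
        if self.upstream:
            self.upstream.request_information()
        self.output["info"]["source"] = type(self).__name__

    def request_data(self):
        # Pass 3: forward upstream, then produce heavy data -- but skip
        # execution if nothing changed since last time.
        if self.upstream:
            self.upstream.request_data()
        up_time = self.upstream.exec_time if self.upstream else 0
        if self.exec_time >= max(self.mtime, up_time):
            return  # up to date; reuse the cached output
        self.execute()
        self.exec_time = max(self.mtime, up_time) + 1

    def execute(self):
        self.output["data"] = "raw"

class Doubler(Algorithm):
    def execute(self):
        self.output["data"] = self.upstream.output["data"] * 2

src = Algorithm()
flt = Doubler(src)
for do_pass in (flt.request_data_object, flt.request_information, flt.request_data):
    do_pass()
print(flt.output["data"])   # "rawraw"
```

A second call to request_data() returns immediately from the cache, mirroring how the real executives avoid redundant execution.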
<br />
=== vtkStreamingDemandDrivenPipeline ===<br />
<br />
This executive adds support for [[VTK/Streaming | streaming]] to vtkDemandDrivenPipeline. Streaming is usually performed by processing a subset of a dataset in each invocation of the pipeline and accumulating the results. Currently, vtkStreamingDemandDrivenPipeline supports 3 ways of subsetting the data:<br />
<br />
# '''Extents''': Extents are only applicable to structured datasets. An extent is a logical subset of a structured dataset defined by providing min and max IJK values (VOI).<br />
# '''Pieces''': Pieces are only applicable to unstructured datasets. A piece is a subset of an unstructured dataset; what a "piece" means is determined by the producer of the dataset.<br />
# '''Time steps''': The pipeline can request a particular time step at each invocation for time series data.<br />
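As a rough illustration, the three subsetting mechanisms correspond to different kinds of update requests. The key names below are simplified stand-ins, not the real VTK information keys, and are shown only to make concrete what each request carries:

```python
# Hypothetical, simplified update-request payloads for the three
# subsetting mechanisms (key names are illustrative only).
extent_request = {"UPDATE_EXTENT": (0, 4, 0, 4, 0, 4)}        # structured VOI
piece_request  = {"UPDATE_PIECE": 1, "UPDATE_NUMBER_OF_PIECES": 4}  # unstructured
time_request   = {"UPDATE_TIME_STEP": 0.5}                    # time series

# A streaming loop would issue one such request per pipeline invocation
# and accumulate the partial results.
for piece in range(piece_request["UPDATE_NUMBER_OF_PIECES"]):
    request = {"UPDATE_PIECE": piece, "UPDATE_NUMBER_OF_PIECES": 4}
    # ...update the pipeline with this request, accumulate the output...
```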
<br />
vtkStreamingDemandDrivenPipeline adds 2 new pipeline passes to support streaming:<br />
<br />
# REQUEST_UPDATE_EXTENT: This pass is where the consumer asks for a particular subset from its input. It is an ALGORITHM_BEFORE_FORWARD pass. This is where algorithms receive extent/piece and time-step requests from their consumers and copy or modify these requests upstream. In this pass, if an algorithm produces unstructured data but consumes structured data, the executive uses an extent translator to automatically convert the piece request to an extent request.<br />
# REQUEST_UPDATE_EXTENT_INFORMATION: This optional pass is where meta-information specific to the particular subset is requested. It is an ALGORITHM_AFTER_FORWARD pass. This pass is commonly used for dynamic streaming, where the consumer fetches meta-information such as bounds or scalar range for a particular piece before deciding whether to update it.<br />
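The piece-to-extent conversion mentioned above can be sketched as follows. This is a standalone toy translator (vtkExtentTranslator is the class VTK uses for this; the splitting strategy here, slicing along the K axis, is just one simple assumption):

```python
# Toy piece-to-extent translator: splits the K axis of a structured
# whole extent (imin, imax, jmin, jmax, kmin, kmax) into slabs.
# Illustrative only -- not the real vtkExtentTranslator algorithm.
def piece_to_extent(piece, num_pieces, whole_extent):
    kmin, kmax = whole_extent[4], whole_extent[5]
    size = kmax - kmin + 1
    base, extra = divmod(size, num_pieces)
    # the first `extra` pieces each get one additional K slice
    start = kmin + piece * base + min(piece, extra)
    length = base + (1 if piece < extra else 0)
    return whole_extent[:4] + (start, start + length - 1)

whole = (0, 9, 0, 9, 0, 9)           # a 10x10x10 structured grid
print(piece_to_extent(0, 3, whole))  # (0, 9, 0, 9, 0, 3)
print(piece_to_extent(2, 3, whole))  # (0, 9, 0, 9, 7, 9)
```

Together the translated extents tile the whole extent, so a piece request from an unstructured consumer can be satisfied by a structured producer.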
<br />
=== vtkCompositeDataPipeline ===<br />
<br />
This executive adds support for iterating over multiple blocks and/or time steps. This is best described with an example. Consider the following pipeline:<br />
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> Contour -> Mapper;<br />
} <br />
</graphviz><br />
<br />
Here, the Ensight reader always produces a multi-block dataset whereas the contour filter can only handle vtkDataSet and subclasses. As a result, this pipeline would produce a run-time error if the executive is vtkStreamingDemandDrivenPipeline. vtkCompositeDataPipeline deals with this issue by looping over the leaf nodes of the multi-block dataset and performing a full pipeline invocation of the contour filter for each block. Similarly, when the pipeline is something like the following<br />
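The looping behavior described above can be sketched with a toy model (hypothetical Python, not the real VTK API), where nested lists stand in for a multi-block dataset and a plain function stands in for a vtkDataSet-only filter:

```python
# Toy illustration of a composite-data executive looping a
# simple-dataset filter over the leaves of a multi-block tree.
# Names and structures are illustrative, not the real VTK API.
def run_on_composite(simple_filter, block):
    # The executive invokes the simple filter once per leaf dataset and
    # reassembles the results into the same tree shape.
    if isinstance(block, list):          # a list models a multi-block node
        return [run_on_composite(simple_filter, child) for child in block]
    return simple_filter(block)          # a leaf models a vtkDataSet

tree = [[1, 2], [3, [4, 5]]]             # nested multi-block "dataset"
contour = lambda ds: ds * 10             # stand-in for a vtkDataSet-only filter
print(run_on_composite(contour, tree))   # [[10, 20], [30, [40, 50]]]
```

The filter itself never sees the composite structure; the executive drives one full invocation per leaf, which is exactly why vtkDataSet-only filters can sit downstream of a multi-block reader.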
<br />
<graphviz><br />
digraph G {<br />
rankdir=LR<br />
fontsize = 12<br />
fontname = Helvetica<br />
node [ fontsize = 9 fontname = Helvetica shape = record height = 0.1 ]<br />
edge [ fontsize = 9 fontname = Helvetica ]<br />
<br />
"Ensight reader" -> "Particle tracer" -> Mapper;<br />
} <br />
</graphviz><br />
<br />
the executive knows to invoke the reader for 2 time steps (because the particle tracer always needs 2 time steps for interpolation) and gathers the results in a vtkTemporalDataSet.<br />
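The temporal looping can be sketched the same way (hypothetical Python; `read_time_step` and its output format are made up for illustration, not the Ensight reader's API):

```python
# Toy sketch of temporal looping: the executive invokes the upstream
# reader once per requested time step and collects the results, loosely
# analogous to gathering them into a vtkTemporalDataSet.
def read_time_step(t):
    # stand-in for one reader invocation at time t
    return {"time": t, "points": [t, t + 0.5]}

def update_time_steps(reader, steps):
    return [reader(t) for t in steps]

# A particle tracer needs two consecutive steps to interpolate between.
pair = update_time_steps(read_time_step, [0.0, 1.0])
print([d["time"] for d in pair])   # [0.0, 1.0]
```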
<br />
vtkCompositeDataPipeline does all of this by overriding a significant portion of the execution mechanism when it needs to iterate over blocks and/or time steps.</div>Pratikmhttps://public.kitware.com/Wiki/index.php?title=VTK/Tutorials&diff=40654VTK/Tutorials2011-06-12T21:31:29Z<p>Pratikm: /* Tutorials */</p>
<hr />
<div>==System Configuration/General Information==<br />
* [[VTK/Git | Obtaining VTK using Git]]<br />
* [[VTK/Tutorials/CMakeListsFile | Typical CMakeLists.txt file]]<br />
* [[VTK/Tutorials/CMakeListsFileForQt4 | Typical CMakeLists.txt file for Qt4]]<br />
* [[VTK/Tutorials/LinuxEnvironmentSetup | Linux Environment Setup]]<br />
* [[VTK/Tutorials/WindowsEnvironmentSetup | Microsoft Windows Environment Setup]]<br />
* [[VTK/Tutorials/PythonEnvironmentSetup | Python environment setup]]<br />
* [[VTK/Tutorials/JavaEnvironmentSetup | Java environment setup]]<br />
* [[VTK/Tutorials/SQLSetup | SQL setup]]<br />
<br />
==Basics==<br />
* [[VTK/Tutorials/SmartPointers | Smart pointers]]<br />
* [[VTK/Tutorials/VtkIdType | vtkIdType]]<br />
* [[VTK/Tutorials/3DDataTypes | 3D Data Types]] - A brief outline of the data types that VTK offers for 3D data storage.<br />
* [[VTK/Tutorials/MemberVariables | Non-SmartPointer Template Member Variable]]<br />
* [[VTK/Tutorials/Extents | Extents]] - A powerful indexing method.<br />
* [[VTK/Tutorials/GeometryTopology | Geometry & Topology]] - Demonstrates 0, 1, and 2D topology on a triangle geometry.<br />
* [[VTK/Tutorials/VTK Terminology | Terminology]] - What is a mapper? What is an actor? What is a filter? What is a source?<br />
* [[VTK/Tutorials/DataStorage | Field data, cell data, and point data]] - What are these? When should I use them?<br />
<br />
==Tutorials==<br />
* [[VTK/Tutorials/Callbacks | Callbacks]] - Handling events produced by VTK<br />
* [[VTK/Tutorials/InteractorStyleSubclass | vtkInteractorStyle subclass]] - Handling user events in a render window.<br />
* [[VTK/Tutorials/Widgets | Widgets]]<br />
* [[VTK/Tutorials/SavingVideos | Saving videos]]<br />
* [[VTK/Writing_VTK_files_using_python | Writing VTK files using python]]<br />
* [[VTK/mesh quality | Geometric mesh quality]]<br />
* [[VTK_XML_Formats | VTK XML Format Details]]<br />
* [[VTK/Information Keys | VTK Information Keys and their significance]]<br />
* [[VTK/Executives | VTK executives]] - The New Pipeline model and VTK Executives<br />
* [[VTK/Streaming | Streaming data in VTK]]<br />
* [http://www.imtek.uni-freiburg.de/simulation/mathematica/IMSweb/ IMTEK Mathematica Supplement (IMS)] - an open-source Mathematica supplement that interfaces VTK and generates ParaView batch scripts<br />
=== Wrapping ===<br />
* [[VTK/Java Wrapping|Java]]<br />
** [[VTK/Java Code Samples|Java code samples]]<br />
* [[VTK/Python Wrapping FAQ|Python]]<br />
** [[VTK/Python Wrapper Enhancement|Python wrapper enhancements]]<br />
<br />
===Tutorials for Developers===<br />
* [[VTK/Tutorials/RevisionMacros | New and Revision Macros]]<br />
<br />
===External Tutorials===<br />
Pratik was nice enough to catalog several external tutorials (from courses, slides, etc around the world) here: [[VTK/Tutorials/External_Tutorials| External Tutorials]]</div>Pratikm