ParaView/Users Guide/List of filters

[[ParaViewUsersGuide]]






==AMR Contour==

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Capping'''<br>''(Capping)''
|
If this property is on, the boundary of the data set is capped.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Isosurface'''<br>''(ContourValue)''
|
This property specifies the values at which to compute the isosurface.

| 1
|
The value must lie within the range of the selected data array.

|-
| '''Degenerate Cells'''<br>''(DegenerateCells)''
|
If this property is on, a transition mesh between levels is created.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Input'''<br>''(Input)''
|
|
|
The selected object must be the result of the following: sources (includes readers), filters.

The dataset must contain a cell array with 1 component.

The selected dataset must be one of the following types (or a subclass of one of them): vtkCompositeDataSet.

|-
| '''Merge Points'''<br>''(MergePoints)''
|
Use more memory to merge points on the boundaries of blocks.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Multiprocess Communication'''<br>''(MultiprocessCommunication)''
|
If this property is off, each process executes independently.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Contour By'''<br>''(SelectInputScalars)''
|
This property specifies the name of the cell scalar array from which the contour filter will compute isolines and/or isosurfaces.

|
|
An array of scalars is required.

|-
| '''Skip Ghost Copy'''<br>''(SkipGhostCopy)''
|
A simple test to see if ghost values are already set properly.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Triangulate'''<br>''(Triangulate)''
|
Use triangles instead of quads on capping surfaces.

| 1
|
Only the values 0 and 1 are accepted.

|}
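
The filter can also be driven from ParaView's Python scripting interface. The snippet below is a minimal sketch, not taken from this guide: it assumes the paraview.simple module, that the filter is exposed under the auto-generated name AMRContour(), that an AMR (vtkCompositeDataSet) source is already the active pipeline object, and that the cell array name 'helium' is a placeholder for one of your own arrays.

<source lang="python">
from paraview.simple import *

amr_source = GetActiveSource()            # assumes an AMR dataset is already loaded

contour = AMRContour(Input=amr_source)    # assumed auto-generated name for "AMR Contour"
contour.ContourBy = ['CELLS', 'helium']   # placeholder cell scalar array (SelectInputScalars); selection syntax assumed
contour.Isosurface = 0.25                 # value within the selected array's range (ContourValue)
contour.Capping = 1                       # cap the boundary of the data set

Show(contour)
Render()
</source>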




==AMR Dual Clip==

Clip with scalars. Tetrahedra.

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Degenerate Cells'''<br>''(DegenerateCells)''
|
If this property is on, a transition mesh between levels is created.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Input'''<br>''(Input)''
|
|
|
The selected object must be the result of the following: sources (includes readers), filters.

The dataset must contain a cell array with 1 component.

The selected dataset must be one of the following types (or a subclass of one of them): vtkCompositeDataSet.

|-
| '''Merge Points'''<br>''(MergePoints)''
|
Use more memory to merge points on the boundaries of blocks.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Multiprocess Communication'''<br>''(MultiprocessCommunication)''
|
If this property is off, each process executes independently.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Select Material Arrays'''<br>''(SelectMaterialArrays)''
|
This property specifies the cell arrays from which the clip filter will compute clipped cells.

|
|
An array of scalars is required.

|-
| '''Volume Fraction Value'''<br>''(VolumeFractionSurfaceValue)''
|
This property specifies the values at which to compute the isosurface.

| 0.1
|
The value must be greater than or equal to 0 and less than or equal to 1.

|}




==Annotate Time Filter==

Shows input data time as text annotation in the view.

The Annotate Time filter can be used to show the data time in a text annotation.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Format'''<br>''(Format)''
|
The value of this property is a format string used to display the input time. The format string is specified using printf style.

| Time: %f
|
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input dataset for which to display the time.

|
|
The selected object must be the result of the following: sources (includes readers), filters.

|-
| '''Scale'''<br>''(Scale)''
|
The factor by which the input time is scaled.

| 1
|
|-
| '''Shift'''<br>''(Shift)''
|
The amount of time the input is shifted (after scaling).

| 0
|
|}
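
For scripting, the following is a minimal sketch (not from this guide) assuming the paraview.simple module, the auto-generated AnnotateTimeFilter() function, and a time-aware reader or source as the active pipeline object.

<source lang="python">
from paraview.simple import *

source = GetActiveSource()            # any time-aware source or reader

annotate = AnnotateTimeFilter(Input=source)
annotate.Format = 'Time: %.3f s'      # printf-style format string
annotate.Scale = 1.0                  # multiply the data time by this factor
annotate.Shift = 0.0                  # then add this offset

Show(annotate)
Render()
</source>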




==Append Attributes==

Copies geometry from first input. Puts all of the arrays into the output.

The Append Attributes filter takes multiple input data sets with the same geometry and merges their point and cell attributes to produce a single output containing all the point and cell attributes of the inputs. Any inputs without the same number of points and cells as the first input are ignored. The input data sets must already be collected together, either as a result of a reader that loads multiple parts (e.g., EnSight reader) or because the Group Parts filter has been run to form a collection of data sets.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Append Attributes filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.

The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|}




==Append Datasets==

Takes an input of multiple datasets and output has only one unstructured grid.

The Append Datasets filter operates on multiple data sets of any type (polygonal, structured, etc.). It merges their geometry into a single data set. Only the point and cell attributes that all of the input data sets have in common will appear in the output. The input data sets must already be collected together, either as a result of a reader that loads multiple parts (e.g., EnSight reader) or because the Group Parts filter has been run to form a collection of data sets.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the datasets to be merged into a single dataset by the Append Datasets filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.

The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|}
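
As a rough scripting illustration (not from this guide), multi-input filters such as this one take a list of producers for their Input property; the sketch assumes paraview.simple and the auto-generated AppendDatasets() function, and the same pattern applies to Append Attributes and Append Geometry.

<source lang="python">
from paraview.simple import *

# Two arbitrary example sources; any datasets work.
sphere = Sphere()
cone = Cone()

# Multi-input filters accept a list of producers for Input.
merged = AppendDatasets(Input=[sphere, cone])

Show(merged)
Render()
</source>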




==Append Geometry==

Takes an input of multiple poly data parts and output has only one part.

The Append Geometry filter operates on multiple polygonal data sets. It merges their geometry into a single data set. Only the point and cell attributes that all of the input data sets have in common will appear in the output.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
Set the input to the Append Geometry filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.

The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.

|}




==Block Scalars==

The Level Scalars filter uses colors to show levels of a multiblock dataset.

The Level Scalars filter uses colors to show levels of a multiblock dataset.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Level Scalars filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.

The selected dataset must be one of the following types (or a subclass of one of them): vtkMultiBlockDataSet.

|}




==Calculator==

Compute new attribute arrays as function of existing arrays.

The Calculator filter computes a new data array or new point coordinates as a function of existing scalar or vector arrays. If point-centered arrays are used in the computation of a new data array, the resulting array will also be point-centered. Similarly, computations using cell-centered arrays will produce a new cell-centered array. If the function is computing point coordinates, the result of the function must be a three-component vector. The Calculator interface operates similarly to a scientific calculator. In creating the function to evaluate, the standard order of operations applies.<br>
Each of the calculator functions is described below. Unless otherwise noted, enclose the operand in parentheses using the ( and ) buttons.<br>
Clear: Erase the current function (displayed in the read-only text box above the calculator buttons).<br>
/: Divide one scalar by another. The operands for this function are not required to be enclosed in parentheses.<br>
*: Multiply two scalars, or multiply a vector by a scalar (scalar multiple). The operands for this function are not required to be enclosed in parentheses.<br>
-: Negate a scalar or vector (unary minus), or subtract one scalar or vector from another. The operands for this function are not required to be enclosed in parentheses.<br>
+: Add two scalars or two vectors. The operands for this function are not required to be enclosed in parentheses.<br>
sin: Compute the sine of a scalar.<br>
cos: Compute the cosine of a scalar.<br>
tan: Compute the tangent of a scalar.<br>
asin: Compute the arcsine of a scalar.<br>
acos: Compute the arccosine of a scalar.<br>
atan: Compute the arctangent of a scalar.<br>
sinh: Compute the hyperbolic sine of a scalar.<br>
cosh: Compute the hyperbolic cosine of a scalar.<br>
tanh: Compute the hyperbolic tangent of a scalar.<br>
min: Compute minimum of two scalars.<br>
max: Compute maximum of two scalars.<br>
x^y: Raise one scalar to the power of another scalar. The operands for this function are not required to be enclosed in parentheses.<br>
sqrt: Compute the square root of a scalar.<br>
e^x: Raise e to the power of a scalar.<br>
log: Compute the logarithm of a scalar (deprecated; same as log10).<br>
log10: Compute the logarithm of a scalar to the base 10.<br>
ln: Compute the logarithm of a scalar to the base 'e'.<br>
ceil: Compute the ceiling of a scalar.<br>
floor: Compute the floor of a scalar.<br>
abs: Compute the absolute value of a scalar.<br>
v1.v2: Compute the dot product of two vectors. The operands for this function are not required to be enclosed in parentheses.<br>
cross: Compute cross product of two vectors.<br>
mag: Compute the magnitude of a vector.<br>
norm: Normalize a vector.<br>
The operands are described below.<br>
The digits 0 - 9 and the decimal point are used to enter constant scalar values.<br>
iHat, jHat, and kHat are vector constants representing unit vectors in the X, Y, and Z directions, respectively.<br>
The scalars menu lists the names of the scalar arrays and the components of the vector arrays of either the point-centered or cell-centered data. The vectors menu lists the names of the point-centered or cell-centered vector arrays. The function will be computed for each point (or cell) using the scalar or vector value of the array at that point (or cell).<br>
The filter operates on any type of data set, but the input data set must have at least one scalar or vector array. The arrays can be either point-centered or cell-centered. The Calculator filter's output is of the same data set type as the input.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Attribute Mode'''<br>''(AttributeMode)''
|
This property determines whether the computation is to be performed on point-centered or cell-centered data.

| 0
|
The value must be one of the following: point_data (1), cell_data (2), field_data (5).

|-
| '''Coordinate Results'''<br>''(CoordinateResults)''
|
The value of this property determines whether the results of this computation should be used as point coordinates or as a new array.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Function'''<br>''(Function)''
|
This property contains the equation for computing the new array.

|
|
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input dataset to the Calculator filter. The scalar and vector variables may be chosen from this dataset's arrays.

|
|
The selected object must be the result of the following: sources (includes readers), filters.

The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|-
| '''Replace Invalid Results'''<br>''(ReplaceInvalidValues)''
|
This property determines whether invalid values in the computation will be replaced with a specific value. (See the ReplacementValue property.)

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Replacement Value'''<br>''(ReplacementValue)''
|
If invalid values in the computation are to be replaced with another value, this property contains that value.

| 0
|
|-
| '''Result Array Name'''<br>''(ResultArrayName)''
|
This property contains the name for the output array containing the result of this computation.

| Result
|
|}
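
The snippet below is a minimal scripting sketch (not from this guide) assuming paraview.simple; the Wavelet source is used only because it provides a point array named RTData, and the result array name is arbitrary.

<source lang="python">
from paraview.simple import *

wavelet = Wavelet()                    # example source with a point array named 'RTData'

calc = Calculator(Input=wavelet)
calc.Function = 'sin(RTData) + 2'      # expression built from the operators listed above
calc.ResultArrayName = 'MyResult'      # name of the new point-centered array
calc.ReplaceInvalidResults = 1         # replace NaN/Inf results...
calc.ReplacementValue = 0.0            # ...with this value

Show(calc)
Render()
</source>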




==Cell Centers==

Create a point (no geometry) at the center of each input cell.

The Cell Centers filter places a point at the center of each cell in the input data set. The center computed is the parametric center of the cell, not necessarily the geometric or bounding box center. The cell attributes of the input will be associated with these newly created points of the output. You have the option of creating a vertex cell per point in the output. This is useful because vertex cells are rendered, but points are not. The points themselves could be used for placing glyphs (using the Glyph filter). The Cell Centers filter takes any type of data set as input and produces a polygonal data set as output.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Cell Centers filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.

The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|-
| '''Vertex Cells'''<br>''(VertexCells)''
|
If set to 1, a vertex cell will be generated per point in the output. Otherwise only points will be generated.

| 0
|
Only the values 0 and 1 are accepted.

|}
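
A minimal scripting sketch (not from this guide), assuming paraview.simple and the auto-generated CellCenters() function:

<source lang="python">
from paraview.simple import *

source = Wavelet()                 # any dataset with cells

centers = CellCenters(Input=source)
centers.VertexCells = 1            # emit a vertex cell per point so the result is renderable

Show(centers)
Render()
</source>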




==Cell Data to Point Data==

Create point attributes by averaging cell attributes.

The Cell Data to Point Data filter averages the values of the cell attributes of the cells surrounding a point to compute point attributes. The Cell Data to Point Data filter operates on any type of data set, and the output data set is of the same type as the input.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Cell Data to Point Data filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.

The dataset must contain a cell array.

The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|-
| '''Pass Cell Data'''<br>''(PassCellData)''
|
If this property is set to 1, then the input cell data is passed through to the output; otherwise, only the generated point data will be available in the output.

| 0
|
Only the values 0 and 1 are accepted.

|}
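
A minimal scripting sketch (not from this guide), assuming paraview.simple and that the filter is exposed under the auto-generated name CellDatatoPointData():

<source lang="python">
from paraview.simple import *

source = GetActiveSource()                # any dataset carrying cell arrays

c2p = CellDatatoPointData(Input=source)   # assumed auto-generated name for this filter
c2p.PassCellData = 1                      # keep the original cell arrays alongside the new point arrays

Show(c2p)
Render()
</source>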




==Clean==

Merge coincident points if they do not meet a feature edge criterion.

The Clean filter takes polygonal data as input and generates polygonal data as output. This filter can merge duplicate points, remove unused points, and transform degenerate cells into their appropriate forms (e.g., a triangle is converted into a line if two of its points are merged).<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Absolute Tolerance'''<br>''(AbsoluteTolerance)''
|
If merging nearby points (see PointMerging property) and using absolute tolerance (see ToleranceIsAbsolute property), this property specifies the tolerance for performing merging in the spatial units of the input data set.

| 1
|
The value must be greater than or equal to 0.

|-
| '''Convert Lines To Points'''<br>''(ConvertLinesToPoints)''
|
If this property is set to 1, degenerate lines (a "line" whose endpoints are at the same spatial location) will be converted to points.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Convert Polys To Lines'''<br>''(ConvertPolysToLines)''
|
If this property is set to 1, degenerate polygons (a "polygon" with only two distinct point coordinates) will be converted to lines.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Convert Strips To Polys'''<br>''(ConvertStripsToPolys)''
|
If this property is set to 1, degenerate triangle strips (a triangle "strip" containing only one triangle) will be converted to triangles.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Input'''<br>''(Input)''
|
Set the input to the Clean filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.

The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.

|-
| '''Piece Invariant'''<br>''(PieceInvariant)''
|
If this property is set to 1, the whole data set will be processed at once so that cleaning the data set always produces the same results. If it is set to 0, the data set can be processed one piece at a time, so it is not necessary for the entire data set to fit into memory; however the results are not guaranteed to be the same as they would be if the Piece Invariant option was on. Setting this option to 0 may produce seams in the output dataset when ParaView is run in parallel.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Point Merging'''<br>''(PointMerging)''
|
If this property is set to 1, then points will be merged if they are within the specified Tolerance or AbsoluteTolerance (see the Tolerance and AbsoluteTolerance properties), depending on the value of the ToleranceIsAbsolute property. If this property is set to 0, points will not be merged.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Tolerance'''<br>''(Tolerance)''
|
If merging nearby points (see PointMerging property) and not using absolute tolerance (see ToleranceIsAbsolute property), this property specifies the tolerance for performing merging as a fraction of the length of the diagonal of the bounding box of the input data set.

| 0
|
The value must be greater than or equal to 0 and less than or equal to 1.

|-
| '''Tolerance Is Absolute'''<br>''(ToleranceIsAbsolute)''
|
This property determines whether to use absolute or relative (a percentage of the bounding box) tolerance when performing point merging.

| 0
|
Only the values 0 and 1 are accepted.

|}
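
A minimal scripting sketch (not from this guide), assuming paraview.simple and the auto-generated Clean() function; the tolerance value is arbitrary:

<source lang="python">
from paraview.simple import *

surface = Sphere()                # any vtkPolyData-producing source

clean = Clean(Input=surface)
clean.PointMerging = 1            # merge nearby points...
clean.ToleranceIsAbsolute = 0     # ...using a relative tolerance
clean.Tolerance = 0.0001          # fraction of the bounding-box diagonal

Show(clean)
Render()
</source>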




==Clean to Grid==

This filter merges points and converts the data set to unstructured grid.

The Clean to Grid filter merges points that are exactly coincident. It also converts the data set to an unstructured grid. You may wish to do this if you want to apply a filter to your data set that is available for unstructured grids but not for the initial type of your data set (e.g., applying warp vector to volumetric data). The Clean to Grid filter operates on any type of data set.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Clean to Grid filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.

The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|}
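
A minimal scripting sketch (not from this guide), assuming paraview.simple and that the filter is exposed under the auto-generated name CleantoGrid():

<source lang="python">
from paraview.simple import *

source = GetActiveSource()          # any dataset type

grid = CleantoGrid(Input=source)    # assumed auto-generated name for "Clean to Grid"
                                    # output is an unstructured grid with coincident points merged
Show(grid)
Render()
</source>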




==Clip==

Clip with an implicit plane. Clipping does not reduce the dimensionality of the data set. The output data type of this filter is always an unstructured grid.

The Clip filter cuts away a portion of the input data set using an implicit plane. This filter operates on all types of data sets, and it returns unstructured grid data on output.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Clip Type'''<br>''(ClipFunction)''
|
This property specifies the parameters of the clip function (an implicit plane) used to clip the dataset.

|
|
The value must be set to one of the following: Plane, Box, Sphere, Scalar.

|-
| '''Input'''<br>''(Input)''
|
This property specifies the dataset on which the Clip filter will operate.

|
|
The selected object must be the result of the following: sources (includes readers), filters.

The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|-
| '''Inside Out'''<br>''(InsideOut)''
|
If this property is set to 0, the clip filter will return that portion of the dataset that lies within the clip function. If set to 1, the portions of the dataset that lie outside the clip function will be returned instead.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Scalars'''<br>''(SelectInputScalars)''
|
If clipping with scalars, this property specifies the name of the scalar array on which to perform the clip operation.

|
|
An array of scalars is required.

Valid array names will be chosen from point and cell data.

|-
| '''Use Value As Offset'''<br>''(UseValueAsOffset)''
|
If UseValueAsOffset is true, Value is used as an offset parameter to the implicit function. Otherwise, Value is used only when clipping using a scalar array.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Value'''<br>''(Value)''
|
If clipping with scalars, this property sets the scalar value about which to clip the dataset based on the scalar array chosen. (See SelectInputScalars.) If clipping with a clip function, this property specifies an offset from the clip function to use in the clipping operation. Neither functionality is currently available in ParaView's user interface.

| 0
|
The value must lie within the range of the selected data array.

|}
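
A minimal scripting sketch (not from this guide), assuming paraview.simple; the plane origin and normal are arbitrary example values:

<source lang="python">
from paraview.simple import *

source = Wavelet()                         # example image-data source

clip = Clip(Input=source, ClipType='Plane')
clip.ClipType.Origin = [0.0, 0.0, 0.0]     # a point on the implicit plane
clip.ClipType.Normal = [1.0, 0.0, 0.0]     # plane normal
clip.InsideOut = 0                         # set to 1 to keep the other side of the plane

Show(clip)
Render()
</source>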




==Clip Closed Surface==

Clip a polygonal dataset with a plane to produce closed surfaces

This clip filter cuts away a portion of the input polygonal dataset using a plane to generate a new polygonal dataset.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Base Color'''<br>''(BaseColor)''
|
Specify the color for the faces from the input.

| 0.1 0.1 1
|
The value must be greater than or equal to (0, 0, 0) and less than or equal to (1, 1, 1).

|-
| '''Clip Color'''<br>''(ClipColor)''
|
Specify the color for the capping faces (generated on the clipping interface).

| 1 0.11 0.1
|
The value must be greater than or equal to (0, 0, 0) and less than or equal to (1, 1, 1).

|-
| '''Clipping Plane'''<br>''(ClippingPlane)''
|
This property specifies the parameters of the clipping plane used to clip the polygonal data.

|
|
The value must be set to one of the following: Plane.

|-
| '''Generate Cell Origins'''<br>''(GenerateColorScalars)''
|
Generate (cell) data for coloring purposes such that the newly generated cells (including capping faces and clipping outlines) can be distinguished from the input cells.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Generate Faces'''<br>''(GenerateFaces)''
|
Generate polygonal faces in the output.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Generate Outline'''<br>''(GenerateOutline)''
|
Generate clipping outlines in the output wherever an input face is cut by the clipping plane.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Input'''<br>''(Input)''
|
This property specifies the dataset on which the Clip filter will operate.

|
|
The selected object must be the result of the following: sources (includes readers), filters.

The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.

|-
| '''Inside Out'''<br>''(InsideOut)''
|
If this flag is turned off, the clipper will return the portion of the data that lies within the clipping plane. Otherwise, the clipper will return the portion of the data that lies outside the clipping plane.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Clipping Tolerance'''<br>''(Tolerance)''
|
Specify the tolerance for creating new points. A small value may produce degenerate triangles.

| 1e-06
|
|}
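
A minimal scripting sketch (not from this guide), assuming paraview.simple, the auto-generated ClipClosedSurface() function, and that the ClippingPlane sub-proxy exposes Origin and Normal like an ordinary Plane:

<source lang="python">
from paraview.simple import *

surface = Sphere(ThetaResolution=32, PhiResolution=32)   # a closed polygonal surface

ccs = ClipClosedSurface(Input=surface)      # assumed auto-generated name
ccs.ClippingPlane.Origin = [0.0, 0.0, 0.0]  # plane used to cut the surface
ccs.ClippingPlane.Normal = [0.0, 0.0, 1.0]
ccs.GenerateFaces = 1                       # produce the clipped surface itself
ccs.GenerateOutline = 0                     # no clip-outline polylines

Show(ccs)
Render()
</source>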




==Compute Derivatives==

This filter computes derivatives of scalars and vectors.

CellDerivatives is a filter that computes derivatives of scalars and vectors at the center of cells. You can choose to generate different output including the scalar gradient (a vector), computed tensor vorticity (a vector), gradient of input vectors (a tensor), and strain matrix of the input vectors (a tensor); or you may choose to pass data through to the output.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.

The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|-
| '''Output Tensor Type'''<br>''(OutputTensorType)''
|
This property controls how the filter works to generate tensor cell data. You can choose to compute the gradient of the input vectors, or compute the strain tensor of the vector gradient tensor. By default, the filter will take the gradient of the vector data to construct a tensor.

| 1
|
The value must be one of the following: Nothing (0), Vector Gradient (1), Strain (2).

|-
| '''Output Vector Type'''<br>''(OutputVectorType)''
|
This property controls how the filter works to generate vector cell data. You can choose to compute the gradient of the input scalars, or extract the vorticity of the computed vector gradient tensor. By default, the filter will take the gradient of the input scalar data.

| 1
|
The value must be one of the following: Nothing (0), Scalar Gradient (1), Vorticity (2).

|-
| '''Scalars'''<br>''(SelectInputScalars)''
|
This property indicates the name of the scalar array to differentiate.

|
|
An array of scalars is required.

|-
| '''Vectors'''<br>''(SelectInputVectors)''
|
This property indicates the name of the vector array to differentiate.

| 1
|
An array of vectors is required.

|}
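
A minimal scripting sketch (not from this guide), assuming paraview.simple, the auto-generated ComputeDerivatives() function, and the common ['POINTS', 'ArrayName'] selection syntax for array properties:

<source lang="python">
from paraview.simple import *

wavelet = Wavelet()                          # has a point scalar array named 'RTData'

deriv = ComputeDerivatives(Input=wavelet)
deriv.Scalars = ['POINTS', 'RTData']         # scalar array to differentiate (selection syntax assumed)
deriv.OutputVectorType = 'Scalar Gradient'   # produce the gradient of the scalars as cell vectors
deriv.OutputTensorType = 'Nothing'           # skip the tensor output

Show(deriv)
Render()
</source>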




==Connectivity==

Mark connected components with integer point attribute array.

The Connectivity filter assigns a region id to connected components of the input data set. (The region id is assigned as a point scalar value.) This filter takes any data set type as input and produces unstructured grid output.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Color Regions'''<br>''(ColorRegions)''
|
Controls the coloring of the connected regions.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Extraction Mode'''<br>''(ExtractionMode)''
|
Controls the extraction of connected surfaces.

| 5
|
The value must be one of the following: Extract Point Seeded Regions (1), Extract Cell Seeded Regions (2), Extract Specified Regions (3), Extract Largest Region (4), Extract All Regions (5), Extract Closest Point Region (6).

|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Connectivity filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.

The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|}
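
A minimal scripting sketch (not from this guide), assuming paraview.simple and the auto-generated Connectivity() function; enumeration properties are set by the mode names listed above:

<source lang="python">
from paraview.simple import *

source = GetActiveSource()                      # any dataset

conn = Connectivity(Input=source)
conn.ExtractionMode = 'Extract Largest Region'  # one of the extraction modes listed in the table
conn.ColorRegions = 1                           # assign region ids as point scalars for coloring

Show(conn)
Render()
</source>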




==Contingency Statistics==
==Contingency Statistics==




Compute a statistical model of a dataset and/or assess the dataset with a statistical model.
Compute a statistical model of a dataset and/or assess the dataset with a statistical model.


This filter either computes a statistical model of a dataset or takes such a model as its second input.  Then, the model (however it is obtained) may optionally be used to assess the input dataset.<br>
This filter either computes a statistical model of a dataset or takes such a model as its second input.  Then, the model (however it is obtained) may optionally be used to assess the input dataset.<br>
This filter computes contingency tables between pairs of attributes.  This result is a tabular bivariate probability distribution which serves as a Bayesian-style prior model.  Data is assessed by computing <br>
This filter computes contingency tables between pairs of attributes.  This result is a tabular bivariate probability distribution which serves as a Bayesian-style prior model.  Data is assessed by computing <br>
*  the probability of observing both variables simultaneously;<br>
*  the probability of observing both variables simultaneously;<br>
*  the probability of each variable conditioned on the other (the two values need not be identical); and<br>
*  the probability of each variable conditioned on the other (the two values need not be identical); and<br>
*  the pointwise mutual information (PMI).
*  the pointwise mutual information (PMI).
<br>
<br>
Finally, the summary statistics include the information entropy of the observations.<br>
Finally, the summary statistics include the information entropy of the observations.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Attribute Mode'''<br>''(AttributeMode)''
| '''Attribute Mode'''<br>''(AttributeMode)''
|
|
Specify which type of field data the arrays will be drawn from.
Specify which type of field data the arrays will be drawn from.


| 0
| 0
|
|
Valid array names will be chosen from point and cell data.
Valid array names will be chosen from point and cell data.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
The input to the filter.  Arrays from this dataset will be used for computing statistics and/or assessed by a statistical model.
The input to the filter.  Arrays from this dataset will be used for computing statistics and/or assessed by a statistical model.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point or cell array.
The dataset must contain a point or cell array.




The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkStructuredGrid, vtkPolyData, vtkUnstructuredGrid, vtkTable, vtkGraph.
The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkStructuredGrid, vtkPolyData, vtkUnstructuredGrid, vtkTable, vtkGraph.


|-
|-
| '''Model Input'''<br>''(ModelInput)''
| '''Model Input'''<br>''(ModelInput)''
|
|
A previously-calculated model with which to assess a separate dataset. This input is optional.
A previously-calculated model with which to assess a separate dataset. This input is optional.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkTable, vtkMultiBlockDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkTable, vtkMultiBlockDataSet.


|-
|-
| '''Variables of Interest'''<br>''(SelectArrays)''
| '''Variables of Interest'''<br>''(SelectArrays)''
|
|
Choose arrays whose entries will be used to form observations for statistical analysis.
Choose arrays whose entries will be used to form observations for statistical analysis.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Task'''<br>''(Task)''
| '''Task'''<br>''(Task)''
|
|
Specify the task to be performed: modeling and/or assessment.
Specify the task to be performed: modeling and/or assessment.
#  "Statistics of all the data," creates an output table (or tables) summarizing the '''entire''' input dataset;
#  "Statistics of all the data," creates an output table (or tables) summarizing the '''entire''' input dataset;
#  "Model a subset of the data," creates an output table (or tables) summarizing a '''randomly-chosen subset''' of the input dataset;
#  "Model a subset of the data," creates an output table (or tables) summarizing a '''randomly-chosen subset''' of the input dataset;
#  "Assess the data with a model," adds attributes to the first input dataset using a model provided on the second input port; and
#  "Assess the data with a model," adds attributes to the first input dataset using a model provided on the second input port; and
#  "Model and assess the same data," is really just operations 2 and 3 above applied to the same input dataset.  The model is first trained using a fraction of the input data and then the entire dataset is assessed using that model.
#  "Model and assess the same data," is really just operations 2 and 3 above applied to the same input dataset.  The model is first trained using a fraction of the input data and then the entire dataset is assessed using that model.


When the task includes creating a model (i.e., tasks 2 and 4), you may adjust the fraction of the input dataset used for training.  You should avoid using a large fraction of the input data for training, as you will then not be able to detect overfitting.  The ''Training fraction'' setting will be ignored for tasks 1 and 3.
When the task includes creating a model (i.e., tasks 2 and 4), you may adjust the fraction of the input dataset used for training.  You should avoid using a large fraction of the input data for training, as you will then not be able to detect overfitting.  The ''Training fraction'' setting will be ignored for tasks 1 and 3.


| 3
| 3
|
|
The value must be one of the following: Statistics of all the data (0), Model a subset of the data (1), Assess the data with a model (2), Model and assess the same data (3).
The value must be one of the following: Statistics of all the data (0), Model a subset of the data (1), Assess the data with a model (2), Model and assess the same data (3).


|-
|-
| '''Training Fraction'''<br>''(TrainingFraction)''
| '''Training Fraction'''<br>''(TrainingFraction)''
|
|
Specify the fraction of values from the input dataset to be used for model fitting. The exact set of values is chosen at random from the dataset.
Specify the fraction of values from the input dataset to be used for model fitting. The exact set of values is chosen at random from the dataset.


| 0.1
| 0.1
|
|
The value must be greater than or equal to 0 and less than or equal to 1.
The value must be greater than or equal to 0 and less than or equal to 1.


|}
|}
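
For example, a minimal paraview.simple (Python) sketch; the Wavelet source and the Calculator expression are stand-ins, and the property attribute names are assumed to follow the labels in the table above (they may differ between ParaView versions):

<source lang="python">
from paraview.simple import *

# Stand-in input: the Wavelet source only provides 'RTData', so a
# Calculator adds a second point array to form a pair of variables.
wavelet = Wavelet()
calc = Calculator(Input=wavelet)
calc.ResultArrayName = 'RTData2'
calc.Function = 'RTData*RTData'

stats = ContingencyStatistics(Input=calc)
stats.VariablesofInterest = ['RTData', 'RTData2']   # attribute name derived from the label above
stats.Task = 'Model and assess the same data'       # enumeration value 3 (the default)
stats.TrainingFraction = 0.1

UpdatePipeline(proxy=stats)   # model and assessment outputs are now available
</source>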




==Contour==
==Contour==




Generate isolines or isosurfaces using point scalars.
Generate isolines or isosurfaces using point scalars.


The Contour filter computes isolines or isosurfaces using a selected point-centered scalar array. The Contour filter operates on any type of data set, but the input is required to have at least one point-centered scalar (single-component) array. The output of this filter is polygonal.<br>
The Contour filter computes isolines or isosurfaces using a selected point-centered scalar array. The Contour filter operates on any type of data set, but the input is required to have at least one point-centered scalar (single-component) array. The output of this filter is polygonal.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Compute Gradients'''<br>''(ComputeGradients)''
| '''Compute Gradients'''<br>''(ComputeGradients)''
|
|
If this property is set to 1, a scalar array containing a gradient value at each point in the isosurface or isoline will be created by this filter; otherwise an array of gradients will not be computed. This operation is fairly expensive both in terms of computation time and memory required, so if the output dataset produced by the contour filter will be processed by filters that modify the dataset's topology or geometry, it may be wise to set the value of this property to 0. Note that if ComputeNormals is set to 1, then gradients will have to be calculated, but they will only be stored in the output dataset if ComputeGradients is also set to 1.
If this property is set to 1, a scalar array containing a gradient value at each point in the isosurface or isoline will be created by this filter; otherwise an array of gradients will not be computed. This operation is fairly expensive both in terms of computation time and memory required, so if the output dataset produced by the contour filter will be processed by filters that modify the dataset's topology or geometry, it may be wise to set the value of this property to 0. Note that if ComputeNormals is set to 1, then gradients will have to be calculated, but they will only be stored in the output dataset if ComputeGradients is also set to 1.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Compute Normals'''<br>''(ComputeNormals)''
| '''Compute Normals'''<br>''(ComputeNormals)''
|
|
If this property is set to 1, a scalar array containing a normal value at each point in the isosurface or isoline will be created by the contour filter; otherwise an array of normals will not be computed. This operation is fairly expensive both in terms of computation time and memory required, so if the output dataset produced by the contour filter will be processed by filters that modify the dataset's topology or geometry, it may be wise to set the value of this property to 0.
If this property is set to 1, a scalar array containing a normal value at each point in the isosurface or isoline will be created by the contour filter; otherwise an array of normals will not be computed. This operation is fairly expensive both in terms of computation time and memory required, so if the output dataset produced by the contour filter will be processed by filters that modify the dataset's topology or geometry, it may be wise to set the value of this property to 0.
Select whether to compute normals.
Select whether to compute normals.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Compute Scalars'''<br>''(ComputeScalars)''
| '''Compute Scalars'''<br>''(ComputeScalars)''
|
|
If this property is set to 1, an array of scalars (containing the contour value) will be added to the output dataset. If set to 0, the output will not contain this array.
If this property is set to 1, an array of scalars (containing the contour value) will be added to the output dataset. If set to 0, the output will not contain this array.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Isosurfaces'''<br>''(ContourValues)''
| '''Isosurfaces'''<br>''(ContourValues)''
|
|
This property specifies the values at which to compute isosurfaces/isolines and also the number of such values.
This property specifies the values at which to compute isosurfaces/isolines and also the number of such values.


| �
| �
|
|
The value must lie within the range of the selected data array.
The value must lie within the range of the selected data array.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input dataset to be used by the contour filter.
This property specifies the input dataset to be used by the contour filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point or cell array with 1 components.
The dataset must contain a point or cell array with 1 components.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Point Merge Method'''<br>''(Locator)''
| '''Point Merge Method'''<br>''(Locator)''
|
|
This property specifies an incremental point locator for merging duplicate / coincident points.
This property specifies an incremental point locator for merging duplicate / coincident points.


| �
| �
|
|
The selected object must be the result of the following: incremental_point_locators.
The selected object must be the result of the following: incremental_point_locators.




The value must be set to one of the following: MergePoints, IncrementalOctreeMergePoints, NonMergingPointLocator.
The value must be set to one of the following: MergePoints, IncrementalOctreeMergePoints, NonMergingPointLocator.


|-
|-
| '''Contour By'''<br>''(SelectInputScalars)''
| '''Contour By'''<br>''(SelectInputScalars)''
|
|
This property specifies the name of the scalar array from which the contour filter will compute isolines and/or isosurfaces.
This property specifies the name of the scalar array from which the contour filter will compute isolines and/or isosurfaces.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.




Valid array names will be chosen from point and cell data.
Valid array names will be chosen from point and cell data.


|}
|}
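
For example, a minimal paraview.simple (Python) sketch; the Wavelet source, its 'RTData' array, and the contour values are placeholder choices:

<source lang="python">
from paraview.simple import *

# Stand-in input: the Wavelet source provides the point scalar 'RTData'
# (values roughly 37-276); substitute your own reader and array.
wavelet = Wavelet()

contour = Contour(Input=wavelet)
contour.ContourBy = ['POINTS', 'RTData']
contour.Isosurfaces = [100.0, 150.0, 200.0]
contour.ComputeNormals = 1
contour.ComputeScalars = 0
contour.ComputeGradients = 0

Show(contour)
Render()
</source>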




==Cosmology FOF Halo Finder==
==Cosmology FOF Halo Finder==




Sorry, no help is currently available.
Sorry, no help is currently available.






{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''bb (linking length/distance)'''<br>''(BB)''
| '''bb (linking length/distance)'''<br>''(BB)''
|
|
Linking length, measured in units of the interparticle spacing (dimensionless).  Used to link particles into halos for the friends-of-friends (FOF) algorithm.
Linking length, measured in units of the interparticle spacing (dimensionless).  Used to link particles into halos for the friends-of-friends (FOF) algorithm.


| 0.2
| 0.2
|
|
The value must be greater than or equal to 0.
The value must be greater than or equal to 0.


|-
|-
| '''Compute the most bound particle for halos'''<br>''(ComputeMostBoundParticle)''
| '''Compute the most bound particle for halos'''<br>''(ComputeMostBoundParticle)''
|
|
If checked, the most bound particle will be calculated.  This can be very slow.
If checked, the most bound particle will be calculated.  This can be very slow.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Compute the most connected particle for halos'''<br>''(ComputeMostConnectedParticle)''
| '''Compute the most connected particle for halos'''<br>''(ComputeMostConnectedParticle)''
|
|
If checked, the most connected particle will be calculated.  This can be very slow.
If checked, the most connected particle will be calculated.  This can be very slow.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Copy halo catalog information to original particles'''<br>''(CopyHaloDataToParticles)''
| '''Copy halo catalog information to original particles'''<br>''(CopyHaloDataToParticles)''
|
|
If checked, the halo catalog information will be copied to the original particles as well.
If checked, the halo catalog information will be copied to the original particles as well.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Halo position for 3D visualization'''<br>''(HaloPositionType)''
| '''Halo position for 3D visualization'''<br>''(HaloPositionType)''
|
|
This sets the position for the halo catalog particles (second output) in 3D space for visualization.  Input particle positions (first output) will be unaltered by this.  MBP and MCP for particle positions can potentially take a very long time to calculate.
This sets the position for the halo catalog particles (second output) in 3D space for visualization.  Input particle positions (first output) will be unaltered by this.  MBP and MCP for particle positions can potentially take a very long time to calculate.


| 0
| 0
|
|
The value must be one of the following: Average (0), Center of Mass (1), Most Bound Particle (2), Most Connected Particle (3).
The value must be one of the following: Average (0), Center of Mass (1), Most Bound Particle (2), Most Connected Particle (3).


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
| �
| �
| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkUnstructuredGrid.
The selected dataset must be one of the following types (or a subclass of one of them): vtkUnstructuredGrid.


|-
|-
| '''np (number of seeded particles in one dimension, i.e., total particles = np^3)'''<br>''(NP)''
| '''np (number of seeded particles in one dimension, i.e., total particles = np^3)'''<br>''(NP)''
|
|
Number of seeded particles in one dimension.  Therefore, the total number of simulation particles is np^3 (cubed).
Number of seeded particles in one dimension.  Therefore, the total number of simulation particles is np^3 (cubed).


| 256
| 256
|
|
The value must be greater than or equal to 0.
The value must be greater than or equal to 0.


|-
|-
| '''overlap (shared point/ghost cell gap distance)'''<br>''(Overlap)''
| '''overlap (shared point/ghost cell gap distance)'''<br>''(Overlap)''
|
|
The space in rL units to extend processor particle ownership for ghost particles/cells.  Needed for correct halo calculation when halos cross processor boundaries in parallel computation.
The space in rL units to extend processor particle ownership for ghost particles/cells.  Needed for correct halo calculation when halos cross processor boundaries in parallel computation.


| 5
| 5
|
|
The value must be greater than or equal to 0.
The value must be greater than or equal to 0.


|-
|-
| '''pmin (minimum particle threshold for a halo)'''<br>''(PMin)''
| '''pmin (minimum particle threshold for a halo)'''<br>''(PMin)''
|
|
Minimum number of particles (threshold) needed before a group is called a halo.
Minimum number of particles (threshold) needed before a group is called a halo.


| 10
| 10
|
|
The value must be greater than or equal to 1.
The value must be greater than or equal to 1.


|-
|-
| '''rL (physical box side length)'''<br>''(RL)''
| '''rL (physical box side length)'''<br>''(RL)''
|
|
The box side length used to wrap particles around if they exceed rL (or are less than 0) in any dimension; only positive positions are allowed in the input, otherwise they are wrapped around.
The box side length used to wrap particles around if they exceed rL (or are less than 0) in any dimension; only positive positions are allowed in the input, otherwise they are wrapped around.


| 90.1408
| 90.1408
|
|
The value must be greater than or equal to 0.
The value must be greater than or equal to 0.


|}
|}




==Curvature==
==Curvature==




This filter will compute the Gaussian or mean curvature of the mesh at each point.
This filter will compute the Gaussian or mean curvature of the mesh at each point.


The Curvature filter computes the curvature at each point in a polygonal data set. This filter supports both Gaussian and mean curvatures.<br><br><br>
The Curvature filter computes the curvature at each point in a polygonal data set. This filter supports both Gaussian and mean curvatures.<br><br><br>
The type can be selected from the Curvature type menu button.<br>
The type can be selected from the Curvature type menu button.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Curvature Type'''<br>''(CurvatureType)''
| '''Curvature Type'''<br>''(CurvatureType)''
|
|
This property specifies which type of curvature to compute.
This property specifies which type of curvature to compute.


| 0
| 0
|
|
The value must be one of the following: Gaussian (0), Mean (1).
The value must be one of the following: Gaussian (0), Mean (1).


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Curvature filter.
This property specifies the input to the Curvature filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


|-
|-
| '''Invert Mean Curvature'''<br>''(InvertMeanCurvature)''
| '''Invert Mean Curvature'''<br>''(InvertMeanCurvature)''
|
|
If this property is set to 1, the mean curvature calculation will be inverted. This is useful for meshes with inward-pointing normals.
If this property is set to 1, the mean curvature calculation will be inverted. This is useful for meshes with inward-pointing normals.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|}
|}
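
For example, a minimal paraview.simple (Python) sketch; the tessellated Sphere source is only a stand-in polygonal input:

<source lang="python">
from paraview.simple import *

# Stand-in input: any vtkPolyData works; a finely tessellated sphere is
# used here.
sphere = Sphere(ThetaResolution=64, PhiResolution=64)

curvature = Curvature(Input=sphere)
curvature.CurvatureType = 'Mean'      # or 'Gaussian' (the default)
curvature.InvertMeanCurvature = 0

# The output gains a point scalar array for the chosen curvature type
# (e.g. 'Mean_Curvature'; the exact name is an assumption here).
Show(curvature)
Render()
</source>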




==D3==
==D3==




Repartition a data set into load-balanced spatially convex regions.  Create ghost cells if requested.
Repartition a data set into load-balanced spatially convex regions.  Create ghost cells if requested.


The D3 filter is available when ParaView is run in parallel. It operates on any type of data set to evenly divide it across the processors into spatially contiguous regions. The output of this filter is of type unstructured grid.<br>
The D3 filter is available when ParaView is run in parallel. It operates on any type of data set to evenly divide it across the processors into spatially contiguous regions. The output of this filter is of type unstructured grid.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Boundary Mode'''<br>''(BoundaryMode)''
| '''Boundary Mode'''<br>''(BoundaryMode)''
|
|
This property determines how cells that lie on processor boundaries are handled. The "Assign cells uniquely" option assigns each boundary cell to exactly one process, which is useful for isosurfacing. Selecting "Duplicate cells" causes the cells on the boundaries to be copied to each process that shares that boundary. The "Divide cells" option breaks cells across process boundary lines so that pieces of the cell lie in different processes. This option is useful for volume rendering.
This property determines how cells that lie on processor boundaries are handled. The "Assign cells uniquely" option assigns each boundary cell to exactly one process, which is useful for isosurfacing. Selecting "Duplicate cells" causes the cells on the boundaries to be copied to each process that shares that boundary. The "Divide cells" option breaks cells across process boundary lines so that pieces of the cell lie in different processes. This option is useful for volume rendering.


| 0
| 0
|
|
The value must be one of the following: Assign cells uniquely (0), Duplicate cells (1), Divide cells (2).
The value must be one of the following: Assign cells uniquely (0), Duplicate cells (1), Divide cells (2).


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the D3 filter.
This property specifies the input to the D3 filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Minimal Memory'''<br>''(UseMinimalMemory)''
| '''Minimal Memory'''<br>''(UseMinimalMemory)''
|
|
If this property is set to 1, the D3 filter's communication routines are required to use less memory than they would without this restriction.
If this property is set to 1, the D3 filter's communication routines are required to use less memory than they would without this restriction.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|}
|}
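
For example, a minimal paraview.simple (Python) sketch; the Wavelet source is a stand-in input, and the filter is only meaningful when ParaView is connected to a parallel server:

<source lang="python">
from paraview.simple import *

# Stand-in input; D3 only has an effect when ParaView is connected to a
# pvserver running with several MPI processes.
wavelet = Wavelet()

d3 = D3(Input=wavelet)
d3.BoundaryMode = 'Assign cells uniquely'   # enumeration value 0 (the default)

Show(d3)
Render()
</source>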




==Decimate==
==Decimate==




Simplify a polygonal model using an adaptive edge collapse algorithm.  This filter works with triangles only.
Simplify a polygonal model using an adaptive edge collapse algorithm.  This filter works with triangles only.


The Decimate filter reduces the number of triangles in a polygonal data set. Because this filter only operates on triangles, first run the Triangulate filter on a dataset that contains polygons other than triangles.<br>
The Decimate filter reduces the number of triangles in a polygonal data set. Because this filter only operates on triangles, first run the Triangulate filter on a dataset that contains polygons other than triangles.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Boundary Vertex Deletion'''<br>''(BoundaryVertexDeletion)''
| '''Boundary Vertex Deletion'''<br>''(BoundaryVertexDeletion)''
|
|
If this property is set to 1, then vertices on the boundary of the dataset can be removed. Setting the value of this property to 0 preserves the boundary of the dataset, but it may cause the filter not to reach its reduction target.
If this property is set to 1, then vertices on the boundary of the dataset can be removed. Setting the value of this property to 0 preserves the boundary of the dataset, but it may cause the filter not to reach its reduction target.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Feature Angle'''<br>''(FeatureAngle)''
| '''Feature Angle'''<br>''(FeatureAngle)''
|
|
The value of this property is used in determining where the data set may be split. If the angle between two adjacent triangles is greater than or equal to the FeatureAngle value, then their boundary is considered a feature edge where the dataset can be split.
The value of this property is used in determining where the data set may be split. If the angle between two adjacent triangles is greater than or equal to the FeatureAngle value, then their boundary is considered a feature edge where the dataset can be split.


| 15
| 15
|
|
The value must be greater than or equal to 0 and less than or equal to 180.
The value must be greater than or equal to 0 and less than or equal to 180.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Decimate filter.
This property specifies the input to the Decimate filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


|-
|-
| '''Preserve Topology'''<br>''(PreserveTopology)''
| '''Preserve Topology'''<br>''(PreserveTopology)''
|
|
If this property is set to 1, decimation will not split the dataset or produce holes, but it may keep the filter from reaching the reduction target. If it is set to 0, better reduction can occur (reaching the reduction target), but holes in the model may be produced.
If this property is set to 1, decimation will not split the dataset or produce holes, but it may keep the filter from reaching the reduction target. If it is set to 0, better reduction can occur (reaching the reduction target), but holes in the model may be produced.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Target Reduction'''<br>''(TargetReduction)''
| '''Target Reduction'''<br>''(TargetReduction)''
|
|
This property specifies the desired reduction in the total number of polygons in the output dataset. For example, if the TargetReduction value is 0.9, the Decimate filter will attempt to produce an output dataset that is 10% the size of the input.
This property specifies the desired reduction in the total number of polygons in the output dataset. For example, if the TargetReduction value is 0.9, the Decimate filter will attempt to produce an output dataset that is 10% the size of the input.


| 0.9
| 0.9
|
|
The value must be greater than or equal to 0 and less than or equal to 1.
The value must be greater than or equal to 0 and less than or equal to 1.


|}
|}
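
For example, a minimal paraview.simple (Python) sketch; the tessellated Sphere source is a stand-in triangle mesh:

<source lang="python">
from paraview.simple import *

# Stand-in input: a finely tessellated sphere, which is already all
# triangles; run Triangulate first on other polygonal data.
sphere = Sphere(ThetaResolution=128, PhiResolution=128)

decimate = Decimate(Input=sphere)
decimate.TargetReduction = 0.9        # aim for roughly 10% of the input triangles
decimate.PreserveTopology = 0
decimate.BoundaryVertexDeletion = 1
decimate.FeatureAngle = 15.0

Show(decimate)
Render()
</source>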




==Delaunay 2D==
==Delaunay 2D==




Create 2D Delaunay triangulation of input points. It expects a vtkPointSet as input and produces vtkPolyData as output. The points are expected to be in a mostly planar distribution.
Create 2D Delaunay triangulation of input points. It expects a vtkPointSet as input and produces vtkPolyData as output. The points are expected to be in a mostly planar distribution.


Delaunay2D is a filter that constructs a 2D Delaunay triangulation from a list of input points. These points may be represented by any dataset of type vtkPointSet and subclasses. The output of the filter is a polygonal dataset containing a triangle mesh.<br><br><br>
Delaunay2D is a filter that constructs a 2D Delaunay triangulation from a list of input points. These points may be represented by any dataset of type vtkPointSet and subclasses. The output of the filter is a polygonal dataset containing a triangle mesh.<br><br><br>
The 2D Delaunay triangulation is defined as the triangulation that satisfies the Delaunay criterion for n-dimensional simplexes (in this case n=2 and the simplexes are triangles). This criterion states that a circumsphere of each simplex in a triangulation contains only the n+1 defining points of the simplex. In two dimensions, this translates into an optimal triangulation. That is, the maximum interior angle of any triangle is less than or equal to that of any possible triangulation.<br><br><br>
The 2D Delaunay triangulation is defined as the triangulation that satisfies the Delaunay criterion for n-dimensional simplexes (in this case n=2 and the simplexes are triangles). This criterion states that a circumsphere of each simplex in a triangulation contains only the n+1 defining points of the simplex. In two dimensions, this translates into an optimal triangulation. That is, the maximum interior angle of any triangle is less than or equal to that of any possible triangulation.<br><br><br>
Delaunay triangulations are used to build topological structures from unorganized (or unstructured) points. The input to this filter is a list of points specified in 3D, even though the triangulation is 2D. Thus the triangulation is constructed in the x-y plane, and the z coordinate is ignored (although carried through to the output). You can use the ProjectionPlaneMode option to compute the best-fitting plane to the set of points, project the points onto that plane, and then perform the triangulation using their projected positions.<br><br><br>
Delaunay triangulations are used to build topological structures from unorganized (or unstructured) points. The input to this filter is a list of points specified in 3D, even though the triangulation is 2D. Thus the triangulation is constructed in the x-y plane, and the z coordinate is ignored (although carried through to the output). You can use the ProjectionPlaneMode option to compute the best-fitting plane to the set of points, project the points onto that plane, and then perform the triangulation using their projected positions.<br><br><br>
The Delaunay triangulation can be numerically sensitive in some cases. To prevent problems, try to avoid injecting points that will result in triangles with bad aspect ratios (1000:1 or greater). In practice this means inserting points that are "widely dispersed", and enables smooth transition of triangle sizes throughout the mesh. (You may even want to add extra points to create a better point distribution.) If numerical problems are present, you will see a warning message to this effect at the end of the triangulation process.<br><br><br>
The Delaunay triangulation can be numerically sensitive in some cases. To prevent problems, try to avoid injecting points that will result in triangles with bad aspect ratios (1000:1 or greater). In practice this means inserting points that are "widely dispersed", and enables smooth transition of triangle sizes throughout the mesh. (You may even want to add extra points to create a better point distribution.) If numerical problems are present, you will see a warning message to this effect at the end of the triangulation process.<br><br><br>
Warning:<br>
Warning:<br>
Points arranged on a regular lattice (termed degenerate cases) can be triangulated in more than one way (at least according to the Delaunay criterion). The choice of triangulation (as implemented by this algorithm) depends on the order of the input points. The first three points will form a triangle; other degenerate points will not break this triangle.<br><br><br>
Points arranged on a regular lattice (termed degenerate cases) can be triangulated in more than one way (at least according to the Delaunay criterion). The choice of triangulation (as implemented by this algorithm) depends on the order of the input points. The first three points will form a triangle; other degenerate points will not break this triangle.<br><br><br>
Points that are coincident (or nearly so) may be discarded by the algorithm. This is because the Delaunay triangulation requires unique input points. The output of the Delaunay triangulation is supposedly a convex hull. In certain cases this implementation may not generate the convex hull.<br>
Points that are coincident (or nearly so) may be discarded by the algorithm. This is because the Delaunay triangulation requires unique input points. The output of the Delaunay triangulation is supposedly a convex hull. In certain cases this implementation may not generate the convex hull.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Alpha'''<br>''(Alpha)''
| '''Alpha'''<br>''(Alpha)''
|
|
The value of this property controls the output of this filter. For a non-zero alpha value, only edges or triangles contained within a sphere centered at mesh vertices will be output. Otherwise, only triangles will be output.
The value of this property controls the output of this filter. For a non-zero alpha value, only edges or triangles contained within a sphere centered at mesh vertices will be output. Otherwise, only triangles will be output.


| 0
| 0
|
|
The value must be greater than or equal to 0.
The value must be greater than or equal to 0.


|-
|-
| '''Bounding Triangulation'''<br>''(BoundingTriangulation)''
| '''Bounding Triangulation'''<br>''(BoundingTriangulation)''
|
|
If this property is set to 1, bounding triangulation points (and associated triangles) are included in the output. These are introduced as an initial triangulation to begin the triangulation process. This feature is nice for debugging output.
If this property is set to 1, bounding triangulation points (and associated triangles) are included in the output. These are introduced as an initial triangulation to begin the triangulation process. This feature is nice for debugging output.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input dataset to the Delaunay 2D filter.
This property specifies the input dataset to the Delaunay 2D filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.


|-
|-
| '''Offset'''<br>''(Offset)''
| '''Offset'''<br>''(Offset)''
|
|
This property is a multiplier to control the size of the initial, bounding Delaunay triangulation.
This property is a multiplier to control the size of the initial, bounding Delaunay triangulation.


| 1
| 1
|
|
The value must be greater than or equal to 0.75.
The value must be greater than or equal to 0.75.


|-
|-
| '''Projection Plane Mode'''<br>''(ProjectionPlaneMode)''
| '''Projection Plane Mode'''<br>''(ProjectionPlaneMode)''
|
|
This property determines type of projection plane to use in performing the triangulation.
This property determines type of projection plane to use in performing the triangulation.


| 0
| 0
|
|
The value must be one of the following: XY Plane (0), Best-Fitting Plane (2).
The value must be one of the following: XY Plane (0), Best-Fitting Plane (2).


|-
|-
| '''Tolerance'''<br>''(Tolerance)''
| '''Tolerance'''<br>''(Tolerance)''
|
|
This property specifies a tolerance to control discarding of closely spaced points. This tolerance is specified as a fraction of the diagonal length of the bounding box of the points.
This property specifies a tolerance to control discarding of closely spaced points. This tolerance is specified as a fraction of the diagonal length of the bounding box of the points.


| 1e-05
| 1e-05
|
|
The value must be greater than or equal to 0 and less than or equal to 1.
The value must be greater than or equal to 0 and less than or equal to 1.


|}
|}
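
For example, a minimal paraview.simple (Python) sketch; the Point Source is a stand-in for real point data:

<source lang="python">
from paraview.simple import *

# Stand-in input: random points from a Point Source; the filter projects
# them according to the Projection Plane Mode before triangulating.
points = PointSource(NumberOfPoints=200, Radius=1.0)

delaunay = Delaunay2D(Input=points)
delaunay.ProjectionPlaneMode = 'Best-Fitting Plane'   # enumeration value 2
delaunay.Alpha = 0.0          # 0 = output triangles only
delaunay.Tolerance = 1e-05

Show(delaunay)
Render()
</source>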




==Delaunay 3D==
==Delaunay 3D==




Create a 3D Delaunay triangulation of input points.  It expects a vtkPointSet as input and produces vtkUnstructuredGrid as output.
Create a 3D Delaunay triangulation of input points.  It expects a vtkPointSet as input and produces vtkUnstructuredGrid as output.


Delaunay3D is a filter that constructs a 3D Delaunay triangulation<br>
Delaunay3D is a filter that constructs a 3D Delaunay triangulation<br>
from a list of input points. These points may be represented by any<br>
from a list of input points. These points may be represented by any<br>
dataset of type vtkPointSet and subclasses. The output of the filter<br>
dataset of type vtkPointSet and subclasses. The output of the filter<br>
is an unstructured grid dataset. Usually the output is a tetrahedral<br>
is an unstructured grid dataset. Usually the output is a tetrahedral<br>
mesh, but if a non-zero alpha distance value is specified (called<br>
mesh, but if a non-zero alpha distance value is specified (called<br>
the "alpha" value), then only tetrahedra, triangles, edges, and<br>
the "alpha" value), then only tetrahedra, triangles, edges, and<br>
vertices lying within the alpha radius are output. In other words,<br>
vertices lying within the alpha radius are output. In other words,<br>
non-zero alpha values may result in arbitrary combinations of<br>
non-zero alpha values may result in arbitrary combinations of<br>
tetrahedra, triangles, lines, and vertices. (The notion of alpha<br>
tetrahedra, triangles, lines, and vertices. (The notion of alpha<br>
value is derived from Edelsbrunner's work on "alpha shapes".)<br><br><br>
value is derived from Edelsbrunner's work on "alpha shapes".)<br><br><br>
The 3D Delaunay triangulation is defined as the triangulation that<br>
The 3D Delaunay triangulation is defined as the triangulation that<br>
satisfies the Delaunay criterion for n-dimensional simplexes (in<br>
satisfies the Delaunay criterion for n-dimensional simplexes (in<br>
this case n=3 and the simplexes are tetrahedra). This criterion<br>
this case n=3 and the simplexes are tetrahedra). This criterion<br>
states that a circumsphere of each simplex in a triangulation<br>
states that a circumsphere of each simplex in a triangulation<br>
contains only the n+1 defining points of the simplex. (See text for<br>
contains only the n+1 defining points of the simplex. (See text for<br>
more information.) While in two dimensions this translates into an<br>
more information.) While in two dimensions this translates into an<br>
"optimal" triangulation, this is not true in 3D, since a measurement<br>
"optimal" triangulation, this is not true in 3D, since a measurement<br>
for optimality in 3D is not agreed on.<br><br><br>
for optimality in 3D is not agreed on.<br><br><br>
Delaunay triangulations are used to build topological structures<br>
Delaunay triangulations are used to build topological structures<br>
from unorganized (or unstructured) points. The input to this filter<br>
from unorganized (or unstructured) points. The input to this filter<br>
is a list of points specified in 3D. (If you wish to create 2D<br>
is a list of points specified in 3D. (If you wish to create 2D<br>
triangulations see Delaunay2D.) The output is an unstructured<br>
triangulations see Delaunay2D.) The output is an unstructured<br>
grid.<br><br><br>
grid.<br><br><br>
The Delaunay triangulation can be numerically sensitive. To prevent<br>
The Delaunay triangulation can be numerically sensitive. To prevent<br>
problems, try to avoid injecting points that will result in<br>
problems, try to avoid injecting points that will result in<br>
triangles with bad aspect ratios (1000:1 or greater). In practice<br>
triangles with bad aspect ratios (1000:1 or greater). In practice<br>
this means inserting points that are "widely dispersed", and enables<br>
this means inserting points that are "widely dispersed", and enables<br>
smooth transition of triangle sizes throughout the mesh. (You may<br>
smooth transition of triangle sizes throughout the mesh. (You may<br>
even want to add extra points to create a better point<br>
even want to add extra points to create a better point<br>
distribution.) If numerical problems are present, you will see a<br>
distribution.) If numerical problems are present, you will see a<br>
warning message to this effect at the end of the triangulation<br>
warning message to this effect at the end of the triangulation<br>
process.<br><br><br>
process.<br><br><br>
Warning:<br>
Warning:<br>
Points arranged on a regular lattice (termed degenerate cases) can<br>
Points arranged on a regular lattice (termed degenerate cases) can<br>
be triangulated in more than one way (at least according to the<br>
be triangulated in more than one way (at least according to the<br>
Delaunay criterion). The choice of triangulation (as implemented by<br>
Delaunay criterion). The choice of triangulation (as implemented by<br>
this algorithm) depends on the order of the input points. The first<br>
this algorithm) depends on the order of the input points. The first<br>
four points will form a tetrahedron; other degenerate points<br>
four points will form a tetrahedron; other degenerate points<br>
(relative to this initial tetrahedron) will not break it.<br><br><br>
(relative to this initial tetrahedron) will not break it.<br><br><br>
Points that are coincident (or nearly so) may be discarded by the<br>
Points that are coincident (or nearly so) may be discarded by the<br>
algorithm. This is because the Delaunay triangulation requires<br>
algorithm. This is because the Delaunay triangulation requires<br>
unique input points. You can control the definition of coincidence<br>
unique input points. You can control the definition of coincidence<br>
with the "Tolerance" instance variable.<br><br><br>
with the "Tolerance" instance variable.<br><br><br>
The output of the Delaunay triangulation is supposedly a convex<br>
The output of the Delaunay triangulation is supposedly a convex<br>
hull. In certain cases this implementation may not generate the<br>
hull. In certain cases this implementation may not generate the<br>
convex hull. This behavior can be controlled by the Offset instance<br>
convex hull. This behavior can be controlled by the Offset instance<br>
variable. Offset is a multiplier used to control the size of the<br>
variable. Offset is a multiplier used to control the size of the<br>
initial triangulation. The larger the offset value, the more likely<br>
initial triangulation. The larger the offset value, the more likely<br>
you will generate a convex hull; and the more likely you are to see<br>
you will generate a convex hull; and the more likely you are to see<br>
numerical problems.<br><br><br>
numerical problems.<br><br><br>
The implementation of this algorithm varies from the 2D Delaunay<br>
The implementation of this algorithm varies from the 2D Delaunay<br>
algorithm (i.e., Delaunay2D) in an important way. When points are<br>
algorithm (i.e., Delaunay2D) in an important way. When points are<br>
injected into the triangulation, the search for the enclosing<br>
injected into the triangulation, the search for the enclosing<br>
tetrahedron is quite different. In the 3D case, the closest<br>
tetrahedron is quite different. In the 3D case, the closest<br>
previously inserted point is found, and then the connected<br>
previously inserted point is found, and then the connected<br>
tetrahedra are searched to find the containing one. (In 2D, a "walk"<br>
tetrahedra are searched to find the containing one. (In 2D, a "walk"<br>
towards the enclosing triangle is performed.) If the triangulation<br>
towards the enclosing triangle is performed.) If the triangulation<br>
is Delaunay, then an enclosing tetrahedron will be found. However,<br>
is Delaunay, then an enclosing tetrahedron will be found. However,<br>
in degenerate cases an enclosing tetrahedron may not be found and<br>
in degenerate cases an enclosing tetrahedron may not be found and<br>
the point will be rejected.<br>
the point will be rejected.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Alpha'''<br>''(Alpha)''
| '''Alpha'''<br>''(Alpha)''
|
|
This property specifies the alpha (or distance) value to control
This property specifies the alpha (or distance) value to control
the output of this filter.  For a non-zero alpha value, only
the output of this filter.  For a non-zero alpha value, only
edges, faces, or tetra contained within the circumsphere (of
edges, faces, or tetra contained within the circumsphere (of
radius alpha) will be output.  Otherwise, only tetrahedra will be
radius alpha) will be output.  Otherwise, only tetrahedra will be
output.
output.


| 0
| 0
|
|
The value must be greater than or equal to 0.
The value must be greater than or equal to 0.


|-
|-
| '''Bounding Triangulation'''<br>''(BoundingTriangulation)''
| '''Bounding Triangulation'''<br>''(BoundingTriangulation)''
|
|
This boolean controls whether bounding triangulation points (and
This boolean controls whether bounding triangulation points (and
associated triangles) are included in the output. (These are
associated triangles) are included in the output. (These are
introduced as an initial triangulation to begin the triangulation
introduced as an initial triangulation to begin the triangulation
process. This feature is nice for debugging output.)
process. This feature is nice for debugging output.)


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input dataset to the Delaunay 3D filter.
This property specifies the input dataset to the Delaunay 3D filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.


|-
|-
| '''Offset'''<br>''(Offset)''
| '''Offset'''<br>''(Offset)''
|
|
This property specifies a multiplier to control the size of the
This property specifies a multiplier to control the size of the
initial, bounding Delaunay triangulation.
initial, bounding Delaunay triangulation.


| 2.5
| 2.5
|
|
The value must be greater than or equal to 2.5.
The value must be greater than or equal to 2.5.


|-
|-
| '''Tolerance'''<br>''(Tolerance)''
| '''Tolerance'''<br>''(Tolerance)''
|
|
This property specifies a tolerance to control discarding of
This property specifies a tolerance to control discarding of
closely spaced points. This tolerance is specified as a fraction
closely spaced points. This tolerance is specified as a fraction
of the diagonal length of the bounding box of the points.
of the diagonal length of the bounding box of the points.


| 0.001
| 0.001
|
|
The value must be greater than or equal to 0 and less than or equal to 1.
The value must be greater than or equal to 0 and less than or equal to 1.


|}
|}
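
For example, a minimal paraview.simple (Python) sketch; the Point Source is a stand-in for real point data:

<source lang="python">
from paraview.simple import *

# Stand-in input: random points from a Point Source; substitute your
# own vtkPointSet.
points = PointSource(NumberOfPoints=500, Radius=1.0)

delaunay = Delaunay3D(Input=points)
delaunay.Alpha = 0.0          # 0 = output tetrahedra only
delaunay.Tolerance = 0.001
delaunay.Offset = 2.5

Show(delaunay)
Render()
</source>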




==Descriptive Statistics==
==Descriptive Statistics==




Compute a statistical model of a dataset and/or assess the dataset with a statistical model.
Compute a statistical model of a dataset and/or assess the dataset with a statistical model.


This filter either computes a statistical model of a dataset or takes such a model as its second input.  Then, the model (however it is obtained) may optionally be used to assess the input dataset.
This filter either computes a statistical model of a dataset or takes such a model as its second input.  Then, the model (however it is obtained) may optionally be used to assess the input dataset.
<br>
<br>
This filter computes the min, max, mean, raw moments M2 through M4, standard deviation, skewness, and kurtosis for each array you select.
This filter computes the min, max, mean, raw moments M2 through M4, standard deviation, skewness, and kurtosis for each array you select.


<br>
<br>
The model is simply a univariate Gaussian distribution with the mean and standard deviation provided. Data is assessed using this model by detrending the data (i.e., subtracting the mean) and then dividing by the standard deviation. Thus the assessment is an array whose entries are the number of standard deviations from the mean that each input point lies.<br>
The model is simply a univariate Gaussian distribution with the mean and standard deviation provided. Data is assessed using this model by detrending the data (i.e., subtracting the mean) and then dividing by the standard deviation. Thus the assessment is an array whose entries are the number of standard deviations from the mean that each input point lies.<br>




{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Attribute Mode'''<br>''(AttributeMode)''
| '''Attribute Mode'''<br>''(AttributeMode)''
|
|
Specify which type of field data the arrays will be drawn from.
Specify which type of field data the arrays will be drawn from.


| 0
| 0
|
|
Valid array names will be chosen from point and cell data.
Valid array names will be chosen from point and cell data.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
The input to the filter.  Arrays from this dataset will be used for computing statistics and/or assessed by a statistical model.
The input to the filter.  Arrays from this dataset will be used for computing statistics and/or assessed by a statistical model.


|
|
The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkStructuredGrid, vtkPolyData, vtkUnstructuredGrid, vtkTable, vtkGraph.

|-
| '''Model Input'''<br>''(ModelInput)''
|
A previously-calculated model with which to assess a separate dataset. This input is optional.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkTable, vtkMultiBlockDataSet.

|-
| '''Variables of Interest'''<br>''(SelectArrays)''
|
Choose arrays whose entries will be used to form observations for statistical analysis.

|
|
An array of scalars is required.

|-
| '''Deviations should be'''<br>''(SignedDeviations)''
|
Should the assessed values be signed deviations or unsigned?

| 0
|
The value must be one of the following: Unsigned (0), Signed (1).

|-
| '''Task'''<br>''(Task)''
|
Specify the task to be performed: modeling and/or assessment.
# "Statistics of all the data" creates an output table (or tables) summarizing the '''entire''' input dataset;
# "Model a subset of the data" creates an output table (or tables) summarizing a '''randomly-chosen subset''' of the input dataset;
# "Assess the data with a model" adds attributes to the first input dataset using a model provided on the second input port; and
# "Model and assess the same data" is simply operations 2 and 3 above applied to the same input dataset: the model is first trained using a fraction of the input data, and then the entire dataset is assessed using that model.

When the task includes creating a model (i.e., tasks 2 and 4), you may adjust the fraction of the input dataset used for training. You should avoid using a large fraction of the input data for training, as you will then not be able to detect overfitting. The ''Training fraction'' setting will be ignored for tasks 1 and 3.

| 3
|
The value must be one of the following: Statistics of all the data (0), Model a subset of the data (1), Assess the data with a model (2), Model and assess the same data (3).

|-
| '''Training Fraction'''<br>''(TrainingFraction)''
|
Specify the fraction of values from the input dataset to be used for model fitting. The exact set of values is chosen at random from the dataset.

| 0.1
|
The value must be greater than or equal to 0 and less than or equal to 1.

|}
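
For scripting, the modeling and assessment tasks above can also be set from pvpython. The sketch below is only a rough illustration: <tt>DescriptiveStatistics</tt> is used as a stand-in for whichever statistics filter this table documents, the Wavelet source is a placeholder input, and exact Python property names can differ between ParaView versions.

<source lang="python">
from paraview.simple import *

src = Wavelet()                       # placeholder input; any source or reader works

# Stand-in filter name: substitute the statistics filter you are actually using.
stats = DescriptiveStatistics(Input=src)

# Task values follow the table above: 0 = statistics of all the data,
# 1 = model a subset, 2 = assess with a model, 3 = model and assess the same data.
stats.Task = 3
stats.TrainingFraction = 0.1          # only used when a model is trained (tasks 2 and 4)

stats.UpdatePipeline()
</source>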




==Elevation==


Create point attribute array by projecting points onto an elevation vector.

The Elevation filter generates point scalar values for an input dataset along a specified direction vector.<br><br><br>
The Input menu allows the user to select the data set to which this filter will be applied. Use the Scalar range entry boxes to specify the minimum and maximum scalar value to be generated. The Low Point and High Point define a line onto which each point of the data set is projected. The minimum scalar value is associated with the Low Point, and the maximum scalar value is associated with the High Point. The scalar value for each point in the data set is determined by the location along the line to which that point projects.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''High Point'''<br>''(HighPoint)''
|
This property defines the other end of the direction vector (large scalar values).

| 0 0 1
|
The coordinate must lie within the bounding box of the dataset. It will default to the maximum in each dimension.

|-
| '''Input'''<br>''(Input)''
|
This property specifies the input dataset to the Elevation filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|-
| '''Low Point'''<br>''(LowPoint)''
|
This property defines one end of the direction vector (small scalar values).

| 0 0 0
|
The coordinate must lie within the bounding box of the dataset. It will default to the minimum in each dimension.

|-
| '''Scalar Range'''<br>''(ScalarRange)''
|
This property determines the range into which scalars will be mapped.

| 0 1
|
|}
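
For example, the following pvpython sketch applies the filter along the z direction (the Wavelet source and the coordinate values are placeholder assumptions):

<source lang="python">
from paraview.simple import *

src = Wavelet()                       # placeholder input; any dataset works

elev = Elevation(Input=src)
elev.LowPoint  = [0, 0, -10]          # projects to the minimum of the scalar range
elev.HighPoint = [0, 0,  10]          # projects to the maximum of the scalar range
elev.ScalarRange = [0.0, 1.0]         # range of the generated point scalars

Show(elev)
Render()
</source>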




==Extract AMR Blocks==


This filter extracts a list of datasets from hierarchical datasets.

This filter extracts a list of datasets from hierarchical datasets.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Extract Datasets filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkHierarchicalBoxDataSet.

|-
| '''Selected Data Sets'''<br>''(SelectedDataSets)''
|
This property provides a list of datasets to extract.

|
|
|}




==Extract Block==


This filter extracts a range of blocks from a multiblock dataset.

This filter extracts a range of groups from a multiblock dataset.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Block Indices'''<br>''(BlockIndices)''
|
This property lists the ids of the blocks to extract from the input multiblock dataset.

|
|
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Extract Group filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkMultiBlockDataSet.

|-
| '''Maintain Structure'''<br>''(MaintainStructure)''
|
This is used only when PruneOutput is ON. By default, when pruning the output (i.e., removing empty blocks), if a node has only one non-null child block, then that node is removed. To preserve these parent nodes, set this flag to true.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Prune Output'''<br>''(PruneOutput)''
|
When set, the output multiblock dataset will be pruned to remove empty nodes. On by default.

| 1
|
Only the values 0 and 1 are accepted.

|}
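
A minimal pvpython sketch using these properties (the block indices are placeholder values, and an already-loaded multiblock source is assumed):

<source lang="python">
from paraview.simple import *

reader = GetActiveSource()            # assumes a multiblock dataset is already loaded

eb = ExtractBlock(Input=reader)
eb.BlockIndices = [1, 3]              # composite indices of the blocks to keep (example values)
eb.PruneOutput = 1                    # remove empty nodes from the output tree
eb.MaintainStructure = 0              # allow single-child parent nodes to be collapsed

Show(eb)
Render()
</source>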




==Extract CTH Parts==


Create a surface from a CTH volume fraction.

Extract CTH Parts is a specialized filter for visualizing the data from a CTH simulation. It first converts the selected cell-centered arrays to point-centered ones. It then contours each array at a value of 0.5. The user has the option of clipping the resulting surface(s) with a plane. This filter only operates on unstructured data. It produces polygonal output.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Double Volume Arrays'''<br>''(AddDoubleVolumeArrayName)''
|
This property specifies the name(s) of the volume fraction array(s) for generating parts.

|
|
An array of scalars is required.

|-
| '''Float Volume Arrays'''<br>''(AddFloatVolumeArrayName)''
|
This property specifies the name(s) of the volume fraction array(s) for generating parts.

|
|
An array of scalars is required.

|-
| '''Unsigned Character Volume Arrays'''<br>''(AddUnsignedCharVolumeArrayName)''
|
This property specifies the name(s) of the volume fraction array(s) for generating parts.

|
|
An array of scalars is required.

|-
| '''Clip Type'''<br>''(ClipPlane)''
|
This property specifies whether to clip the dataset, and if so, it also specifies the parameters of the plane with which to clip.

|
|
The value must be set to one of the following: None, Plane, Box, Sphere.

|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Extract CTH Parts filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a cell array with 1 component.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|-
| '''Volume Fraction Value'''<br>''(VolumeFractionSurfaceValue)''
|
The value of this property is the volume fraction value for the surface.

| 0.1
|
The value must be greater than or equal to 0 and less than or equal to 1.

|}
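
A rough pvpython sketch, assuming a CTH dataset is already loaded; the array name is hypothetical, and the Python property names (which mirror the labels above) may differ slightly between ParaView versions:

<source lang="python">
from paraview.simple import *

cth = GetActiveSource()               # assumes a CTH dataset with volume-fraction cell arrays

parts = ExtractCTHParts(Input=cth)
parts.DoubleVolumeArrays = ['VOFM']   # hypothetical volume-fraction array name
parts.VolumeFractionValue = 0.5       # contour value for the material surface

Show(parts)
Render()
</source>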




==Extract Cells By Region==


This filter extracts cells that are inside/outside a region or at a region boundary.

This filter extracts from its input dataset all cells that are either completely inside or outside of a specified region (implicit function). On output, the filter generates an unstructured grid.<br>
To use this filter you must specify a region (implicit function). You must also specify whether to extract cells lying inside or outside of the region. An option exists to extract cells that are neither inside nor outside (i.e., boundary).<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Extract intersected'''<br>''(Extract intersected)''
|
This parameter controls whether to extract cells that are on the boundary of the region.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Extract only intersected'''<br>''(Extract only intersected)''
|
This parameter controls whether to extract only cells that are on the boundary of the region. If this parameter is set, the Extraction Side parameter is ignored. If Extract Intersected is off, this parameter has no effect.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Extraction Side'''<br>''(ExtractInside)''
|
This parameter controls whether to extract cells that are inside or outside the region.

| 1
|
The value must be one of the following: outside (0), inside (1).

|-
| '''Intersect With'''<br>''(ImplicitFunction)''
|
This property sets the region used to extract cells.

|
|
The value must be set to one of the following: Plane, Box, Sphere.

|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Extract Cells By Region filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|}
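
A minimal pvpython sketch, assuming a plane as the implicit region (the source and the plane parameters are placeholders; Python property names mirror the labels above):

<source lang="python">
from paraview.simple import *

src = Wavelet()                        # placeholder input

ecr = ExtractCellsByRegion(Input=src)
ecr.IntersectWith = 'Plane'            # region type: Plane, Box, or Sphere
ecr.IntersectWith.Origin = [0, 0, 0]   # plane parameters (placeholder values)
ecr.IntersectWith.Normal = [1, 0, 0]
ecr.ExtractionSide = 'inside'          # or 'outside'

Show(ecr)
Render()
</source>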




==Extract Edges==


Extract edges of 2D and 3D cells as lines.

The Extract Edges filter produces a wireframe version of the input dataset by extracting all the edges of the dataset's cells as lines. This filter operates on any type of data set and produces polygonal output.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Extract Edges filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|}
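
Since the filter has no parameters beyond its input, a pvpython sketch is a single call (the Sphere source is a placeholder):

<source lang="python">
from paraview.simple import *

src = Sphere()                        # placeholder input; any dataset works

edges = ExtractEdges(Input=src)       # wireframe (line cells) version of the input

Show(edges)
Render()
</source>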




==Extract Level==


This filter extracts a range of levels from a hierarchical dataset.

This filter extracts a range of levels from a hierarchical dataset.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Extract Level filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkHierarchicalBoxDataSet.

|-
| '''Levels'''<br>''(Levels)''
|
This property lists the levels to extract from the input hierarchical dataset.

|
|
|}
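
A minimal pvpython sketch (the level numbers are placeholders; an already-loaded AMR dataset is assumed):

<source lang="python">
from paraview.simple import *

amr = GetActiveSource()               # assumes a vtkHierarchicalBoxDataSet source is loaded

lvl = ExtractLevel(Input=amr)
lvl.Levels = [0, 1]                   # refinement levels to keep (example values)

Show(lvl)
Render()
</source>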




==Extract Selection==


Extract different types of selections.

This filter extracts a set of cells/points given a selection.<br>
The selection can be obtained from a rubber-band selection<br>
(either cell, visible or in a frustum) or threshold selection<br>
and passed to the filter or specified by providing an ID list.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input from which the selection is extracted.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet, vtkTable.

|-
| '''Preserve Topology'''<br>''(PreserveTopology)''
|
If this property is set to 1, the output preserves the topology of its input and adds an insidedness array to mark which cells are inside or out. If 0, then the output is an unstructured grid which contains only the subset of cells that are inside.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Selection'''<br>''(Selection)''
|
The input that provides the selection object.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkSelection.

|-
| '''Show Bounds'''<br>''(ShowBounds)''
|
For frustum selection, if this property is set to 1, the output is the outline of the frustum instead of the contents of the input that lie within the frustum.

| 0
|
Only the values 0 and 1 are accepted.

|}
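
In pvpython the selection is itself a pipeline object passed to the filter's second input. The sketch below builds an ID-based point selection; the source and the point ids are placeholders:

<source lang="python">
from paraview.simple import *

src = Sphere()                        # placeholder input

# IDs are given as flat (process id, point id) pairs.
sel = IDSelectionSource(FieldType='POINT', IDs=[0, 2, 0, 5, 0, 7])

ext = ExtractSelection(Input=src, Selection=sel)
ext.PreserveTopology = 0              # 0: output only the selected subset

Show(ext)
Render()
</source>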




==Extract Subset==


Extract a subgrid from a structured grid with the option of setting subsample strides.

The Extract Grid filter returns a subgrid of a structured input data set (uniform rectilinear, curvilinear, or nonuniform rectilinear). The output data set type of this filter is the same as the input type.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Include Boundary'''<br>''(IncludeBoundary)''
|
If the value of this property is 1, then if the sample rate in any dimension is greater than 1, the boundary indices of the input dataset will be passed to the output even if the boundary extent is not an even multiple of the sample rate in a given dimension.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Extract Grid filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkRectilinearGrid, vtkStructuredPoints, vtkStructuredGrid.

|-
| '''Sample Rate I'''<br>''(SampleRateI)''
|
This property indicates the sampling rate in the I dimension. A value greater than 1 results in subsampling; every nth index will be included in the output.

| 1
|
The value must be greater than or equal to 1.

|-
| '''Sample Rate J'''<br>''(SampleRateJ)''
|
This property indicates the sampling rate in the J dimension. A value greater than 1 results in subsampling; every nth index will be included in the output.

| 1
|
The value must be greater than or equal to 1.

|-
| '''Sample Rate K'''<br>''(SampleRateK)''
|
This property indicates the sampling rate in the K dimension. A value greater than 1 results in subsampling; every nth index will be included in the output.

| 1
|
The value must be greater than or equal to 1.

|-
| '''V OI'''<br>''(VOI)''
|
This property specifies the minimum and maximum point indices along each of the I, J, and K axes; these values indicate the volume of interest (VOI). The output will have the (I,J,K) extent specified here.

| 0 0 0 0 0 0
|
The values must lie within the extent of the input dataset.

|}
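
For example, the following pvpython sketch keeps one octant of the default Wavelet source and subsamples it (the VOI and strides are placeholder values):

<source lang="python">
from paraview.simple import *

src = Wavelet()                       # structured data with extent -10..10 in each direction

sub = ExtractSubset(Input=src)
sub.VOI = [-10, 0, -10, 0, -10, 0]    # min/max point indices along I, J, and K
sub.SampleRateI = 2                   # keep every 2nd index along I
sub.SampleRateJ = 2
sub.SampleRateK = 1
sub.IncludeBoundary = 1               # keep boundary indices even when the stride skips them

Show(sub)
Render()
</source>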




==Extract Surface==


Extract a 2D boundary surface using neighbor relations to eliminate internal faces.

The Extract Surface filter extracts the polygons forming the outer surface of the input dataset. This filter operates on any type of data and produces polygonal data as output.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Extract Surface filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|-
| '''Nonlinear Subdivision Level'''<br>''(NonlinearSubdivisionLevel)''
|
If the input is an unstructured grid with nonlinear faces, this parameter determines how many times the face is subdivided into linear faces. If 0, the output is the equivalent of its linear counterpart (and the midpoints determining the nonlinear interpolation are discarded). If 1, the nonlinear face is triangulated based on the midpoints. If greater than 1, the triangulated pieces are recursively subdivided to reach the desired subdivision. Setting the value to greater than 1 may cause some point data to not be passed even if no quadratic faces exist. This option has no effect if the input is not an unstructured grid.

| 1
|
The value must be greater than or equal to 0 and less than or equal to 4.

|-
| '''Piece Invariant'''<br>''(PieceInvariant)''
|
If the value of this property is set to 1, internal surfaces along process boundaries will be removed. NOTE: Enabling this option might cause multiple executions of the data source because more information is needed to remove internal surfaces.

| 1
|
Only the values 0 and 1 are accepted.

|}
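
A minimal pvpython sketch (an already-loaded dataset is assumed):

<source lang="python">
from paraview.simple import *

src = GetActiveSource()               # assumes a dataset, e.g. an unstructured grid, is loaded

surf = ExtractSurface(Input=src)
surf.PieceInvariant = 1               # remove internal faces along process boundaries
surf.NonlinearSubdivisionLevel = 1    # triangulate nonlinear faces once, if present

Show(surf)
Render()
</source>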




==FFT Of Selection Over Time==


Extracts selection over time and plots the FFT

Extracts the data of a selection (e.g. points or cells) over time,<br>
takes the FFT of them, and plots them.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
The input from which the selection is extracted.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet, vtkTable, vtkCompositeDataSet.

|-
| '''Selection'''<br>''(Selection)''
|
The input that provides the selection object.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkSelection.

|}




==Feature Edges==


This filter will extract edges along sharp edges of surfaces or boundaries of surfaces.

The Feature Edges filter extracts various subsets of edges from the input data set. This filter operates on polygonal data and produces polygonal output.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Boundary Edges'''<br>''(BoundaryEdges)''
|
If the value of this property is set to 1, boundary edges will be extracted. Boundary edges are defined as line cells or edges that are used by only one polygon.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Coloring'''<br>''(Coloring)''
|
If the value of this property is set to 1, then the extracted edges are assigned a scalar value based on the type of the edge.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Feature Angle'''<br>''(FeatureAngle)''
|
The value of this property is used to define a feature edge. If the surface normal between two adjacent triangles is at least as large as this Feature Angle, a feature edge exists. (See the FeatureEdges property.)

| 30
|
The value must be greater than or equal to 0 and less than or equal to 180.

|-
| '''Feature Edges'''<br>''(FeatureEdges)''
|
If the value of this property is set to 1, feature edges will be extracted. Feature edges are defined as edges that are used by two polygons whose dihedral angle is greater than the feature angle. (See the FeatureAngle property.)
Toggle whether to extract feature edges.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Feature Edges filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.

|-
| '''Manifold Edges'''<br>''(ManifoldEdges)''
|
If the value of this property is set to 1, manifold edges will be extracted. Manifold edges are defined as edges that are used by exactly two polygons.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Non-Manifold Edges'''<br>''(NonManifoldEdges)''
|
If the value of this property is set to 1, non-manifold edges will be extracted. Non-manifold edges are defined as edges that are used by three or more polygons.

| 1
|
Only the values 0 and 1 are accepted.

|}
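
A minimal pvpython sketch exercising the main toggles (the Sphere source is a placeholder; Python property names mirror the labels above):

<source lang="python">
from paraview.simple import *

src = Sphere()                        # placeholder polygonal input

fe = FeatureEdges(Input=src)
fe.FeatureAngle = 30.0                # dihedral-angle threshold for a feature edge
fe.BoundaryEdges = 1
fe.FeatureEdges = 1
fe.ManifoldEdges = 0
fe.Coloring = 1                       # tag each extracted edge with its type

Show(fe)
Render()
</source>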




==Generate Ids==


Generate scalars from point and cell ids.

This filter generates scalars using cell and point ids. That is, the point attribute data scalars are generated from the point ids, and the cell attribute data scalars or field data are generated from the cell ids.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Array Name'''<br>''(ArrayName)''
|
The name of the array that will contain ids.

| Ids
|
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Generate Ids filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|}
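
A minimal pvpython sketch (the Sphere source is a placeholder):

<source lang="python">
from paraview.simple import *

src = Sphere()                        # placeholder input

ids = GenerateIds(Input=src)
ids.ArrayName = 'Ids'                 # name used for the generated point and cell id arrays

Show(ids)
Render()
</source>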




==Generate Quadrature Points==


Create a point set with data at quadrature points.

Create a point set with data at quadrature points.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
|
|
The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkUnstructuredGrid.

|-
| '''Select Source Array'''<br>''(SelectSourceArray)''
|
Specifies the offset array from which we generate quadrature points.

|
|
An array of scalars is required.

|}




==Generate Quadrature Scheme Dictionary==


Generate quadrature scheme dictionaries in data sets that do not have them.

Generate quadrature scheme dictionaries in data sets that do not have them.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkUnstructuredGrid.

|}




==Generate Surface Normals==


This filter will produce surface normals used for smooth shading. Splitting is used to avoid smoothing across feature edges.

This filter generates surface normals at the points of the input polygonal dataset to provide smooth shading of the dataset. The resulting dataset is also polygonal. The filter works by calculating a normal vector for each polygon in the dataset and then averaging the normals at the shared points.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Compute Cell Normals'''<br>''(ComputeCellNormals)''
|
This filter computes the normals at the points in the data set. In the process of doing this it computes polygon normals too. If you want these normals to be passed to the output of this filter, set the value of this property to 1.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Consistency'''<br>''(Consistency)''
|
The value of this property controls whether consistent polygon ordering is enforced. Generally the normals for a data set should either all point inward or all point outward. If the value of this property is 1, then this filter will reorder the points of cells whose normal vectors are oriented in the opposite direction from the rest of those in the data set.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Feature Angle'''<br>''(FeatureAngle)''
|
The value of this property defines a feature edge. If the surface normal between two adjacent triangles is at least as large as this Feature Angle, a feature edge exists. If Splitting is on, points are duplicated along these feature edges. (See the Splitting property.)

| 30
|
The value must be greater than or equal to 0 and less than or equal to 180.

|-
| '''Flip Normals'''<br>''(FlipNormals)''
|
If the value of this property is 1, this filter will reverse the normal direction (and reorder the points accordingly) for all polygons in the data set; this changes front-facing polygons to back-facing ones, and vice versa. You might want to do this if your viewing position will be inside the data set instead of outside of it.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Normals Generation filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.

|-
| '''Non-Manifold Traversal'''<br>''(NonManifoldTraversal)''
|
Turn on/off traversal across non-manifold edges. Not traversing non-manifold edges will prevent problems where the consistency of polygonal ordering is corrupted due to topological loops.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Piece Invariant'''<br>''(PieceInvariant)''
|
Turn this option on to produce the same results regardless of the number of processors used (i.e., avoid seams along processor boundaries). Turn this off if you do want to process ghost levels and do not mind seams.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Splitting'''<br>''(Splitting)''
|
This property controls the splitting of sharp edges. If sharp edges are split (property value = 1), then points are duplicated along these edges, and separate normals are computed for both sets of points to give crisp (rendered) surface definition.

| 1
|
Only the values 0 and 1 are accepted.

|}
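
A minimal pvpython sketch (an already-loaded polygonal surface is assumed; Python property names mirror the labels above):

<source lang="python">
from paraview.simple import *

src = GetActiveSource()               # assumes a vtkPolyData surface is loaded

norms = GenerateSurfaceNormals(Input=src)
norms.FeatureAngle = 30.0             # edges sharper than this are treated as features
norms.Splitting = 1                   # duplicate points along feature edges for crisp shading
norms.ComputeCellNormals = 0
norms.FlipNormals = 0

Show(norms)
Render()
</source>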




==Glyph==
==Glyph==




This filter generates an arrow, cone, cube, cylinder, line, sphere, or 2D glyph at each point of the input data set.  The glyphs can be oriented and scaled by point attributes of the input dataset.
This filter generates an arrow, cone, cube, cylinder, line, sphere, or 2D glyph at each point of the input data set.  The glyphs can be oriented and scaled by point attributes of the input dataset.


The Glyph filter generates a glyph (i.e., an arrow, cone, cube, cylinder, line, sphere, or 2D glyph) at each point in the input dataset. The glyphs can be oriented and scaled by the input point-centered scalars and vectors. The Glyph filter operates on any type of data set. Its output is polygonal. This filter is available on the Toolbar.<br>
The Glyph filter generates a glyph (i.e., an arrow, cone, cube, cylinder, line, sphere, or 2D glyph) at each point in the input dataset. The glyphs can be oriented and scaled by the input point-centered scalars and vectors. The Glyph filter operates on any type of data set. Its output is polygonal. This filter is available on the Toolbar.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Glyph Transform'''<br>''(GlyphTransform)''
| '''Glyph Transform'''<br>''(GlyphTransform)''
|
|
The values in this property allow you to specify the transform
The values in this property allow you to specify the transform
(translation, rotation, and scaling) to apply to the glyph source.
(translation, rotation, and scaling) to apply to the glyph source.


| �
| �
|
|
The value must be set to one of the following: Transform2.
The value must be set to one of the following: Transform2.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Glyph filter. This is the dataset to which the glyphs will be applied.
This property specifies the input to the Glyph filter. This is the dataset to which the glyphs will be applied.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Maximum Number of Points'''<br>''(MaximumNumberOfPoints)''
| '''Maximum Number of Points'''<br>''(MaximumNumberOfPoints)''
|
|
The value of this property specifies the maximum number of glyphs that should appear in the output dataset if the value of the UseMaskPoints property is 1. (See the UseMaskPoints property.)
The value of this property specifies the maximum number of glyphs that should appear in the output dataset if the value of the UseMaskPoints property is 1. (See the UseMaskPoints property.)


| 5000
| 5000
|
|
The value must be greater than or equal to 0.
The value must be greater than or equal to 0.


|-
|-
| '''Random Mode'''<br>''(RandomMode)''
| '''Random Mode'''<br>''(RandomMode)''
|
|
If the value of this property is 1, then the points to glyph are chosen randomly. Otherwise the point ids chosen are evenly spaced.
If the value of this property is 1, then the points to glyph are chosen randomly. Otherwise the point ids chosen are evenly spaced.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Scalars'''<br>''(SelectInputScalars)''
| '''Scalars'''<br>''(SelectInputScalars)''
|
|
This property indicates the name of the scalar array on which to operate. The indicated array may be used for scaling the glyphs. (See the SetScaleMode property.)
This property indicates the name of the scalar array on which to operate. The indicated array may be used for scaling the glyphs. (See the SetScaleMode property.)


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Vectors'''<br>''(SelectInputVectors)''
| '''Vectors'''<br>''(SelectInputVectors)''
|
|
This property indicates the name of the vector array on which to operate. The indicated array may be used for scaling and/or orienting the glyphs. (See the SetScaleMode and SetOrient properties.)
This property indicates the name of the vector array on which to operate. The indicated array may be used for scaling and/or orienting the glyphs. (See the SetScaleMode and SetOrient properties.)


| 1
| 1
|
|
An array of vectors is required.
An array of vectors is required.


|-
|-
| '''Orient'''<br>''(SetOrient)''
| '''Orient'''<br>''(SetOrient)''
|
|
If this property is set to 1, the glyphs will be oriented based on the selected vector array.
If this property is set to 1, the glyphs will be oriented based on the selected vector array.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Set Scale Factor'''<br>''(SetScaleFactor)''
| '''Set Scale Factor'''<br>''(SetScaleFactor)''
|
|
The value of this property will be used as a multiplier for scaling the glyphs before adding them to the output.
The value of this property will be used as a multiplier for scaling the glyphs before adding them to the output.


| 1
| 1
|
|
The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.1.
The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.1.




The value must lie within the range of the selected data array.
The value must lie within the range of the selected data array.




The value must lie within the range of the selected data array.
The value must lie within the range of the selected data array.


|-
|-
| '''Scale Mode'''<br>''(SetScaleMode)''
| '''Scale Mode'''<br>''(SetScaleMode)''
|
|
The value of this property specifies how/if the glyphs should be scaled based on the point-centered scalars/vectors in the input dataset.
The value of this property specifies how/if the glyphs should be scaled based on the point-centered scalars/vectors in the input dataset.


| 1
| 1
|
|
The value must be one of the following: scalar (0), vector (1), vector_components (2), off (3).
The value must be one of the following: scalar (0), vector (1), vector_components (2), off (3).


|-
|-
| '''Glyph Type'''<br>''(Source)''
| '''Glyph Type'''<br>''(Source)''
|
|
This property determines which type of glyph will be placed at the points in the input dataset.
This property determines which type of glyph will be placed at the points in the input dataset.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), glyph_sources.
The selected object must be the result of the following: sources (includes readers), glyph_sources.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.




The value must be set to one of the following: ArrowSource, ConeSource, CubeSource, CylinderSource, LineSource, SphereSource, GlyphSource2D.
The value must be set to one of the following: ArrowSource, ConeSource, CubeSource, CylinderSource, LineSource, SphereSource, GlyphSource2D.


|-
|-
| '''Mask Points'''<br>''(UseMaskPoints)''
| '''Mask Points'''<br>''(UseMaskPoints)''
|
|
If the value of this property is set to 1, limit the maximum number of glyphs to the value indicated by MaximumNumberOfPoints. (See the MaximumNumberOfPoints property.)
If the value of this property is set to 1, limit the maximum number of glyphs to the value indicated by MaximumNumberOfPoints. (See the MaximumNumberOfPoints property.)


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|}
|}
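
The sketch below shows one way this filter might be driven from ParaView's Python interface (paraview.simple). The property names are taken from the labels in the table above with spaces removed; they are assumptions and may need adjusting for a particular ParaView version.

<source lang="python">
# Hypothetical sketch: glyph the points of a sphere with arrows using
# paraview.simple. Property names follow the table above (assumptions).
from paraview.simple import *

sphere = Sphere()                       # dataset whose points are glyphed
glyph = Glyph(Input=sphere, GlyphType='Arrow')
glyph.Orient = 1                        # orient arrows along the point vectors (normals)
glyph.SetScaleFactor = 0.2              # multiplier applied to every glyph
glyph.MaskPoints = 1                    # cap the number of glyphs ...
glyph.MaximumNumberofPoints = 500       # ... at this many input points
Show(glyph)
Render()
</source>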




==Glyph With Custom Source==
==Glyph With Custom Source==




This filter generates a glyph at each point of the input data set.  The glyphs can be oriented and scaled by point attributes of the input dataset.
This filter generates a glyph at each point of the input data set.  The glyphs can be oriented and scaled by point attributes of the input dataset.


The Glyph filter generates a glyph at each point in the input dataset. The glyphs can be oriented and scaled by the input point-centered scalars and vectors. The Glyph filter operates on any type of data set. Its output is polygonal. This filter is available on the Toolbar.<br>
The Glyph filter generates a glyph at each point in the input dataset. The glyphs can be oriented and scaled by the input point-centered scalars and vectors. The Glyph filter operates on any type of data set. Its output is polygonal. This filter is available on the Toolbar.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Glyph filter. This is the dataset to which the glyphs will be applied.
This property specifies the input to the Glyph filter. This is the dataset to which the glyphs will be applied.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Maximum Number of Points'''<br>''(MaximumNumberOfPoints)''
| '''Maximum Number of Points'''<br>''(MaximumNumberOfPoints)''
|
|
The value of this property specifies the maximum number of glyphs that should appear in the output dataset if the value of the UseMaskPoints property is 1. (See the UseMaskPoints property.)
The value of this property specifies the maximum number of glyphs that should appear in the output dataset if the value of the UseMaskPoints property is 1. (See the UseMaskPoints property.)


| 5000
| 5000
|
|
The value must be greater than or equal to 0.
The value must be greater than or equal to 0.


|-
|-
| '''Random Mode'''<br>''(RandomMode)''
| '''Random Mode'''<br>''(RandomMode)''
|
|
If the value of this property is 1, then the points to glyph are chosen randomly. Otherwise the point ids chosen are evenly spaced.
If the value of this property is 1, then the points to glyph are chosen randomly. Otherwise the point ids chosen are evenly spaced.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Scalars'''<br>''(SelectInputScalars)''
| '''Scalars'''<br>''(SelectInputScalars)''
|
|
This property indicates the name of the scalar array on which to operate. The indicated array may be used for scaling the glyphs. (See the SetScaleMode property.)
This property indicates the name of the scalar array on which to operate. The indicated array may be used for scaling the glyphs. (See the SetScaleMode property.)


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Vectors'''<br>''(SelectInputVectors)''
| '''Vectors'''<br>''(SelectInputVectors)''
|
|
This property indicates the name of the vector array on which to operate. The indicated array may be used for scaling and/or orienting the glyphs. (See the SetScaleMode and SetOrient properties.)
This property indicates the name of the vector array on which to operate. The indicated array may be used for scaling and/or orienting the glyphs. (See the SetScaleMode and SetOrient properties.)


| 1
| 1
|
|
An array of vectors is required.
An array of vectors is required.


|-
|-
| '''Orient'''<br>''(SetOrient)''
| '''Orient'''<br>''(SetOrient)''
|
|
If this property is set to 1, the glyphs will be oriented based on the selected vector array.
If this property is set to 1, the glyphs will be oriented based on the selected vector array.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Set Scale Factor'''<br>''(SetScaleFactor)''
| '''Set Scale Factor'''<br>''(SetScaleFactor)''
|
|
The value of this property will be used as a multiplier for scaling the glyphs before adding them to the output.
The value of this property will be used as a multiplier for scaling the glyphs before adding them to the output.


| 1
| 1
|
|
The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.1.
The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.1.




The value must lie within the range of the selected data array.
The value must lie within the range of the selected data array.




The value must lie within the range of the selected data array.
The value must lie within the range of the selected data array.


|-
|-
| '''Scale Mode'''<br>''(SetScaleMode)''
| '''Scale Mode'''<br>''(SetScaleMode)''
|
|
The value of this property specifies how/if the glyphs should be scaled based on the point-centered scalars/vectors in the input dataset.
The value of this property specifies how/if the glyphs should be scaled based on the point-centered scalars/vectors in the input dataset.


| 1
| 1
|
|
The value must be one of the following: scalar (0), vector (1), vector_components (2), off (3).
The value must be one of the following: scalar (0), vector (1), vector_components (2), off (3).


|-
|-
| '''Glyph Type'''<br>''(Source)''
| '''Glyph Type'''<br>''(Source)''
|
|
This property determines which type of glyph will be placed at the points in the input dataset.
This property determines which type of glyph will be placed at the points in the input dataset.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), glyph_sources.
The selected object must be the result of the following: sources (includes readers), glyph_sources.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


|-
|-
| '''Mask Points'''<br>''(UseMaskPoints)''
| '''Mask Points'''<br>''(UseMaskPoints)''
|
|
If the value of this property is set to 1, limit the maximum number of glyphs to the value indicated by MaximumNumberOfPoints. (See the MaximumNumberOfPoints property.)
If the value of this property is set to 1, limit the maximum number of glyphs to the value indicated by MaximumNumberOfPoints. (See the MaximumNumberOfPoints property.)


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|}
|}
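
As a rough illustration, the custom-source variant can be scripted the same way, feeding any polygonal source as the glyph geometry. The Python names GlyphWithCustomSource and Source below are assumptions derived from the filter and property labels above.

<source lang="python">
# Hypothetical sketch: use a coarse cone as the glyph shape (paraview.simple).
from paraview.simple import *

points = Sphere()                       # provides the glyph locations
cone = Cone(Resolution=6)               # custom glyph geometry (any vtkPolyData source)
glyph = GlyphWithCustomSource(Input=points, Source=cone)
glyph.SetScaleFactor = 0.2              # shrink the cones before placing them
Show(glyph)
Render()
</source>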




==Gradient==
==Gradient==




This filter computes gradient vectors for an image/volume.
This filter computes gradient vectors for an image/volume.


The Gradient filter computes the gradient vector at each point in an image or volume. This filter uses central differences to compute the gradients. The Gradient filter operates on uniform rectilinear (image) data and produces image data output.<br>
The Gradient filter computes the gradient vector at each point in an image or volume. This filter uses central differences to compute the gradients. The Gradient filter operates on uniform rectilinear (image) data and produces image data output.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Dimensionality'''<br>''(Dimensionality)''
| '''Dimensionality'''<br>''(Dimensionality)''
|
|
This property indicates whether to compute the gradient in two dimensions or in three. If the gradient is being computed in two dimensions, the X and Y dimensions are used.
This property indicates whether to compute the gradient in two dimensions or in three. If the gradient is being computed in two dimensions, the X and Y dimensions are used.


| 3
| 3
|
|
The value must be one of the following: Two (2), Three (3).
The value must be one of the following: Two (2), Three (3).


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Gradient filter.
This property specifies the input to the Gradient filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point array with 1 components.
The dataset must contain a point array with 1 components.




The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData.


|-
|-
| '''Select Input Scalars'''<br>''(SelectInputScalars)''
| '''Select Input Scalars'''<br>''(SelectInputScalars)''
|
|
This property lists the name of the array from which to compute the gradient.
This property lists the name of the array from which to compute the gradient.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|}
|}
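
A minimal scripting sketch, assuming paraview.simple and the property labels above (the array-selection syntax may vary by version):

<source lang="python">
# Compute the gradient of the RTData point scalars on a Wavelet image source.
from paraview.simple import *

wavelet = Wavelet()                             # uniform rectilinear data with "RTData"
grad = Gradient(Input=wavelet)
grad.SelectInputScalars = ['POINTS', 'RTData']  # array to differentiate (assumed syntax)
grad.Dimensionality = 3                         # central differences in X, Y and Z
Show(grad)
Render()
</source>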




==Gradient Of Unstructured DataSet==
==Gradient Of Unstructured DataSet==




Estimate the gradient for each point or cell in any type of dataset.
Estimate the gradient for each point or cell in any type of dataset.


The Gradient (Unstructured) filter estimates the gradient vector at each point or cell. It operates on any type of vtkDataSet, and the output is the same type as the input. If the dataset is a vtkImageData, use the Gradient filter instead; it will be more efficient for this type of dataset.<br>
The Gradient (Unstructured) filter estimates the gradient vector at each point or cell. It operates on any type of vtkDataSet, and the output is the same type as the input. If the dataset is a vtkImageData, use the Gradient filter instead; it will be more efficient for this type of dataset.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Compute Vorticity'''<br>''(ComputeVorticity)''
| '''Compute Vorticity'''<br>''(ComputeVorticity)''
|
|
When this flag is on, the gradient filter will compute the
When this flag is on, the gradient filter will compute the
vorticity/curl of a 3 component array.
vorticity/curl of a 3 component array.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Faster Approximation'''<br>''(FasterApproximation)''
| '''Faster Approximation'''<br>''(FasterApproximation)''
|
|
When this flag is on, the gradient filter will provide a less
When this flag is on, the gradient filter will provide a less
accurate (but close) algorithm that performs fewer derivative
accurate (but close) algorithm that performs fewer derivative
calculations (and is therefore faster).  The error contains some
calculations (and is therefore faster).  The error contains some
smoothing of the output data and some possible errors on the
smoothing of the output data and some possible errors on the
boundary.  This parameter has no effect when performing the
boundary.  This parameter has no effect when performing the
gradient of cell data.
gradient of cell data.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Gradient (Unstructured) filter.
This property specifies the input to the Gradient (Unstructured) filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point or cell array.
The dataset must contain a point or cell array.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.


|-
|-
| '''Result Array Name'''<br>''(ResultArrayName)''
| '''Result Array Name'''<br>''(ResultArrayName)''
|
|
This property provides a name for the output array containing the gradient vectors.
This property provides a name for the output array containing the gradient vectors.


| Gradients
| Gradients
| �
| �
|-
|-
| '''Scalar Array'''<br>''(SelectInputScalars)''
| '''Scalar Array'''<br>''(SelectInputScalars)''
|
|
This property lists the name of the scalar array from which to compute the gradient.
This property lists the name of the scalar array from which to compute the gradient.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.




Valid array names will be chosen from point and cell data.
Valid array names will be chosen from point and cell data.


|}
|}
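
A comparable sketch for the unstructured variant, again assuming paraview.simple with property names taken from the labels above:

<source lang="python">
# Estimate gradients on an unstructured grid built by tetrahedralizing a Wavelet.
from paraview.simple import *

tets = Tetrahedralize(Input=Wavelet())       # any vtkPointSet is acceptable input
grad = GradientOfUnstructuredDataSet(Input=tets)
grad.ScalarArray = ['POINTS', 'RTData']      # array whose gradient is estimated (assumed syntax)
grad.ResultArrayName = 'Gradients'           # name of the output vector array
grad.ComputeVorticity = 0                    # only meaningful for 3-component arrays
Show(grad)
Render()
</source>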




==Grid Connectivity==
==Grid Connectivity==




Mass properties of connected fragments for unstructured grids.
Mass properties of connected fragments for unstructured grids.


This filter works on multiblock unstructured grid inputs and also works in<br>
This filter works on multiblock unstructured grid inputs and also works in<br>
parallel.  It ignores any cells with a cell data Status value of 0.<br>
parallel.  It ignores any cells with a cell data Status value of 0.<br>
It performs connectivity on distinct fragments separately.  It then integrates<br>
It performs connectivity on distinct fragments separately.  It then integrates<br>
attributes of the fragments.<br>
attributes of the fragments.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
| �
| �
| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkUnstructuredGrid, vtkCompositeDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkUnstructuredGrid, vtkCompositeDataSet.


|}
|}




==Group Datasets==
==Group Datasets==




Group data sets.
Group data sets.


Groups multiple datasets to create a multiblock dataset<br>
Groups multiple datasets to create a multiblock dataset<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property indicates the inputs to the Group Datasets filter.
This property indicates the inputs to the Group Datasets filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.


|}
|}
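
For example, grouping two sources from Python (paraview.simple) is simply a matter of passing a list of inputs:

<source lang="python">
# Group a sphere and a cone into one multiblock dataset.
from paraview.simple import *

sphere = Sphere()
cone = Cone()
group = GroupDatasets(Input=[sphere, cone])   # block 0 = sphere, block 1 = cone
Show(group)
Render()
</source>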




==Histogram==
==Histogram==




Extract a histogram from field data.
Extract a histogram from field data.




{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Bin Count'''<br>''(BinCount)''
| '''Bin Count'''<br>''(BinCount)''
|
|
The value of this property specifies the number of bins for the histogram.
The value of this property specifies the number of bins for the histogram.


| 10
| 10
|
|
The value must be greater than or equal to 1 and less than or equal to 256.
The value must be greater than or equal to 1 and less than or equal to 256.


|-
|-
| '''Calculate Averages'''<br>''(CalculateAverages)''
| '''Calculate Averages'''<br>''(CalculateAverages)''
|
|
This option controls whether the algorithm calculates averages
This option controls whether the algorithm calculates averages
of variables other than the primary variable that fall into each
of variables other than the primary variable that fall into each
bin.
bin.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Component'''<br>''(Component)''
| '''Component'''<br>''(Component)''
|
|
The value of this property specifies the array component from which the histogram should be computed.
The value of this property specifies the array component from which the histogram should be computed.


| 0
| 0
| �
| �
|-
|-
| '''Custom Bin Ranges'''<br>''(CustomBinRanges)''
| '''Custom Bin Ranges'''<br>''(CustomBinRanges)''
|
|
Set custom bin ranges to use. These are used only when
Set custom bin ranges to use. These are used only when
UseCustomBinRanges is set to true.
UseCustomBinRanges is set to true.


| 0 100
| 0 100
|
|
The value must lie within the range of the selected data array.
The value must lie within the range of the selected data array.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Histogram filter.
This property specifies the input to the Histogram filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point or cell array.
The dataset must contain a point or cell array.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Select Input Array'''<br>''(SelectInputArray)''
| '''Select Input Array'''<br>''(SelectInputArray)''
|
|
This property indicates the name of the array from which to compute the histogram.
This property indicates the name of the array from which to compute the histogram.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.




Valid array names will be chosen from point and cell data.
Valid array names will be chosen from point and cell data.


|-
|-
| '''Use Custom Bin Ranges'''<br>''(UseCustomBinRanges)''
| '''Use Custom Bin Ranges'''<br>''(UseCustomBinRanges)''
|
|
When set to true, CustomBinRanges will be used instead of the
When set to true, CustomBinRanges will be used instead of the
full range of the selected array. By default, this is set to false.
full range of the selected array. By default, this is set to false.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|}
|}
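
A small scripting sketch, assuming paraview.simple; the property names follow the table above and the array-selection syntax is an assumption:

<source lang="python">
# Build a 20-bin histogram of the RTData array and fetch the resulting table.
from paraview.simple import *
from paraview import servermanager

wavelet = Wavelet()
hist = Histogram(Input=wavelet)
hist.SelectInputArray = ['POINTS', 'RTData']  # array to histogram
hist.BinCount = 20                            # between 1 and 256
hist.UseCustomBinRanges = 0                   # use the array's full range

table = servermanager.Fetch(hist)             # the output is a table (one row per bin)
print(table.GetNumberOfRows())                # expected to print 20
</source>

In the GUI the same output is normally displayed in a bar chart view.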




==Integrate Variables==
==Integrate Variables==




This filter integrates cell and point attributes.
This filter integrates cell and point attributes.


The Integrate Attributes filter integrates point and cell data over lines and surfaces.  It also computes length of lines, area of surface, or volume.<br>
The Integrate Attributes filter integrates point and cell data over lines and surfaces.  It also computes length of lines, area of surface, or volume.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Integrate Attributes filter.
This property specifies the input to the Integrate Attributes filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|}
|}
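
A short sketch of using this filter from Python (paraview.simple) and reading back the integrated result; the "Area" cell array name is an assumption about the output:

<source lang="python">
# Integrate attributes over a sphere's surface and print its approximate area.
from paraview.simple import *
from paraview import servermanager

sphere = Sphere(ThetaResolution=64, PhiResolution=64)
integrated = IntegrateVariables(Input=sphere)
result = servermanager.Fetch(integrated)      # single-cell dataset holding the integrals
area = result.GetCellData().GetArray('Area')  # assumed output array name
if area is not None:
    print('surface area ~', area.GetValue(0))
</source>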




==Interpolate to Quadrature Points==
==Interpolate to Quadrature Points==




Create scalar/vector data arrays interpolated to quadrature points.
Create scalar/vector data arrays interpolated to quadrature points.


"Create scalar/vector data arrays interpolated to quadrature points."<br>
"Create scalar/vector data arrays interpolated to quadrature points."<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
| �
| �
| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkUnstructuredGrid.
The selected dataset must be one of the following types (or a subclass of one of them): vtkUnstructuredGrid.


|-
|-
| '''Select Source Array'''<br>''(SelectSourceArray)''
| '''Select Source Array'''<br>''(SelectSourceArray)''
|
|
Specifies the offset array from which we interpolate values to quadrature points.
Specifies the offset array from which we interpolate values to quadrature points.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|}
|}




==Intersect Fragments==
==Intersect Fragments==




The Intersect Fragments filter performs geometric intersections on sets of fragments.
The Intersect Fragments filter performs geometric intersections on sets of fragments.


The Intersect Fragments filter performs geometric intersections on sets of<br>
The Intersect Fragments filter performs geometric intersections on sets of<br>
fragments. The filter takes two inputs, the first containing fragment<br>
fragments. The filter takes two inputs, the first containing fragment<br>
geometry and the second containing fragment centers. The filter has two<br>
geometry and the second containing fragment centers. The filter has two<br>
outputs. The first is geometry that results from the intersection. The<br>
outputs. The first is geometry that results from the intersection. The<br>
second is a set of points that is an approximation of the center of where<br>
second is a set of points that is an approximation of the center of where<br>
each fragment has been intersected.<br>
each fragment has been intersected.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Slice Type'''<br>''(CutFunction)''
| '''Slice Type'''<br>''(CutFunction)''
|
|
This property sets the type of intersecting geometry, and
This property sets the type of intersecting geometry, and
associated parameters.
associated parameters.


| �
| �
|
|
The value must be set to one of the following: Plane, Box, Sphere.
The value must be set to one of the following: Plane, Box, Sphere.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This input must contain fragment geometry.
This input must contain fragment geometry.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkMultiBlockDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkMultiBlockDataSet.


|-
|-
| '''Source'''<br>''(Source)''
| '''Source'''<br>''(Source)''
|
|
This input must contain fragment centers.
This input must contain fragment centers.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkMultiBlockDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkMultiBlockDataSet.


|}
|}




==Iso Volume==
==Iso Volume==




This filter extracts cells by clipping cells that have point scalars not in the specified range.
This filter extracts cells by clipping cells that have point scalars not in the specified range.


This filter clips away cells using lower and upper thresholds.<br>
This filter clips away cells using lower and upper thresholds.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Threshold filter.
This property specifies the input to the Threshold filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point or cell array with 1 components.
The dataset must contain a point or cell array with 1 components.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Input Scalars'''<br>''(SelectInputScalars)''
| '''Input Scalars'''<br>''(SelectInputScalars)''
|
|
The value of this property contains the name of the scalar array from which to perform thresholding.
The value of this property contains the name of the scalar array from which to perform thresholding.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.




Valid array names will be chosen from point and cell data.
Valid array names will be chosen from point and cell data.


|-
|-
| '''Threshold Range'''<br>''(ThresholdBetween)''
| '''Threshold Range'''<br>''(ThresholdBetween)''
|
|
The values of this property specify the upper and lower bounds of the thresholding operation.
The values of this property specify the upper and lower bounds of the thresholding operation.


| 0 0
| 0 0
|
|
The value must lie within the range of the selected data array.
The value must lie within the range of the selected data array.


|}
|}
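
A minimal sketch with paraview.simple, assuming the property names shown in the table above:

<source lang="python">
# Keep only the cells whose RTData values fall inside a chosen range.
from paraview.simple import *

wavelet = Wavelet()                          # RTData spans roughly 37-277
iso = IsoVolume(Input=wavelet)
iso.InputScalars = ['POINTS', 'RTData']      # array used for thresholding (assumed syntax)
iso.ThresholdRange = [150.0, 250.0]          # lower and upper bounds
Show(iso)
Render()
</source>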




==K Means==
==K Means==




Compute a statistical model of a dataset and/or assess the dataset with a statistical model.
Compute a statistical model of a dataset and/or assess the dataset with a statistical model.


This filter either computes a statistical model of a dataset or takes such a model as its second input.  Then, the model (however it is obtained) may optionally be used to assess the input dataset.
This filter either computes a statistical model of a dataset or takes such a model as its second input.  Then, the model (however it is obtained) may optionally be used to assess the input dataset.
<br>
<br>
This filter iteratively computes the center of k clusters in a space whose coordinates are specified by the arrays you select. The clusters are chosen as local minima of the sum of square Euclidean distances from each point to its nearest cluster center. The model is then a set of cluster centers. Data is assessed by assigning a cluster center and distance to the cluster to each point in the input data set.<br>
This filter iteratively computes the center of k clusters in a space whose coordinates are specified by the arrays you select. The clusters are chosen as local minima of the sum of square Euclidean distances from each point to its nearest cluster center. The model is then a set of cluster centers. Data is assessed by assigning a cluster center and distance to the cluster to each point in the input data set.<br>




{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Attribute Mode'''<br>''(AttributeMode)''
| '''Attribute Mode'''<br>''(AttributeMode)''
|
|
Specify which type of field data the arrays will be drawn from.
Specify which type of field data the arrays will be drawn from.


| 0
| 0
|
|
Valid array names will be chosen from point and cell data.
Valid array names will be chosen from point and cell data.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
The input to the filter.  Arrays from this dataset will be used for computing statistics and/or assessed by a statistical model.
The input to the filter.  Arrays from this dataset will be used for computing statistics and/or assessed by a statistical model.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point or cell array.
The dataset must contain a point or cell array.




The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkStructuredGrid, vtkPolyData, vtkUnstructuredGrid, vtkTable, vtkGraph.
The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkStructuredGrid, vtkPolyData, vtkUnstructuredGrid, vtkTable, vtkGraph.


|-
|-
| '''k'''<br>''(K)''
| '''k'''<br>''(K)''
|
|
Specify the number of clusters.
Specify the number of clusters.


| 5
| 5
|
|
The value must be greater than or equal to 1.
The value must be greater than or equal to 1.


|-
|-
| '''Max Iterations'''<br>''(MaxNumIterations)''
| '''Max Iterations'''<br>''(MaxNumIterations)''
|
|
Specify the maximum number of iterations in which cluster centers are moved before the algorithm terminates.
Specify the maximum number of iterations in which cluster centers are moved before the algorithm terminates.


| 50
| 50
|
|
The value must be greater than or equal to 1.
The value must be greater than or equal to 1.


|-
|-
| '''Model Input'''<br>''(ModelInput)''
| '''Model Input'''<br>''(ModelInput)''
|
|
A previously-calculated model with which to assess a separate dataset. This input is optional.
A previously-calculated model with which to assess a separate dataset. This input is optional.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkTable, vtkMultiBlockDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkTable, vtkMultiBlockDataSet.


|-
|-
| '''Variables of Interest'''<br>''(SelectArrays)''
| '''Variables of Interest'''<br>''(SelectArrays)''
|
|
Choose arrays whose entries will be used to form observations for statistical analysis.
Choose arrays whose entries will be used to form observations for statistical analysis.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Task'''<br>''(Task)''
| '''Task'''<br>''(Task)''
|
|
Specify the task to be performed: modeling and/or assessment.
Specify the task to be performed: modeling and/or assessment.
#  "Statistics of all the data," creates an output table (or tables) summarizing the '''entire''' input dataset;
#  "Statistics of all the data," creates an output table (or tables) summarizing the '''entire''' input dataset;
#  "Model a subset of the data," creates an output table (or tables) summarizing a '''randomly-chosen subset''' of the input dataset;
#  "Model a subset of the data," creates an output table (or tables) summarizing a '''randomly-chosen subset''' of the input dataset;
#  "Assess the data with a model," adds attributes to the first input dataset using a model provided on the second input port; and
#  "Assess the data with a model," adds attributes to the first input dataset using a model provided on the second input port; and
#  "Model and assess the same data," is really just operations 2 and 3 above applied to the same input dataset.  The model is first trained using a fraction of the input data and then the entire dataset is assessed using that model.
#  "Model and assess the same data," is really just operations 2 and 3 above applied to the same input dataset.  The model is first trained using a fraction of the input data and then the entire dataset is assessed using that model.


When the task includes creating a model (i.e., tasks 2 and 4), you may adjust the fraction of the input dataset used for training.  You should avoid using a large fraction of the input data for training as you will then not be able to detect overfitting.  The ''Training fraction'' setting will be ignored for tasks 1 and 3.
When the task includes creating a model (i.e., tasks 2 and 4), you may adjust the fraction of the input dataset used for training.  You should avoid using a large fraction of the input data for training as you will then not be able to detect overfitting.  The ''Training fraction'' setting will be ignored for tasks 1 and 3.


| 3
| 3
|
|
The value must be one of the following: Statistics of all the data (0), Model a subset of the data (1), Assess the data with a model (2), Model and assess the same data (3).
The value must be one of the following: Statistics of all the data (0), Model a subset of the data (1), Assess the data with a model (2), Model and assess the same data (3).


|-
|-
| '''Tolerance'''<br>''(Tolerance)''
| '''Tolerance'''<br>''(Tolerance)''
|
|
Specify the relative tolerance that will cause early termination.
Specify the relative tolerance that will cause early termination.


| 0.01
| 0.01
|
|
The value must be greater than or equal to 0 and less than or equal to 1.
The value must be greater than or equal to 0 and less than or equal to 1.


|-
|-
| '''Training Fraction'''<br>''(TrainingFraction)''
| '''Training Fraction'''<br>''(TrainingFraction)''
|
|
Specify the fraction of values from the input dataset to be used for model fitting. The exact set of values is chosen at random from the dataset.
Specify the fraction of values from the input dataset to be used for model fitting. The exact set of values is chosen at random from the dataset.


| 0.1
| 0.1
|
|
The value must be greater than or equal to 0 and less than or equal to 1.
The value must be greater than or equal to 0 and less than or equal to 1.


|}
|}
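
A rough scripting sketch, assuming the filter is exposed in paraview.simple as KMeans with property names matching the labels above (all of these names are assumptions):

<source lang="python">
# Cluster the RTData values of a Wavelet into three groups and assess the data.
from paraview.simple import *

wavelet = Wavelet()
km = KMeans(Input=wavelet)
km.AttributeMode = 0            # draw arrays from point data
km.SelectArrays = ['RTData']    # variables of interest
km.K = 3                        # number of clusters
km.Task = 3                     # model and assess the same data
km.TrainingFraction = 0.1       # fraction of points used to fit the model
# The model and assessment tables can be inspected in the spreadsheet view.
</source>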




==Level Scalars==
==Level Scalars==




The Level Scalars filter uses colors to show levels of a hierarchical dataset.
The Level Scalars filter uses colors to show levels of a hierarchical dataset.


The Level Scalars filter uses colors to show levels of a hierarchical dataset.<br>
The Level Scalars filter uses colors to show levels of a hierarchical dataset.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Level Scalars filter.
This property specifies the input to the Level Scalars filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkHierarchicalBoxDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkHierarchicalBoxDataSet.


|}
|}




==Linear Extrusion==
==Linear Extrusion==




This filter creates a swept surface defined by translating the input along a vector.
This filter creates a swept surface defined by translating the input along a vector.


The Linear Extrusion filter creates a swept surface by translating the input dataset along a specified vector. This filter is intended to operate on 2D polygonal data. This filter operates on polygonal data and produces polygonal data output.<br>
The Linear Extrusion filter creates a swept surface by translating the input dataset along a specified vector. This filter is intended to operate on 2D polygonal data. This filter operates on polygonal data and produces polygonal data output.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Capping'''<br>''(Capping)''
| '''Capping'''<br>''(Capping)''
|
|
The value of this property indicates whether to cap the ends of the swept surface. Capping works by placing a copy of the input dataset on either end of the swept surface, so it behaves properly if the input is a 2D surface composed of filled polygons. If the input dataset is a closed solid (e.g., a sphere), then if capping is on (i.e., this property is set to 1), two copies of the data set will be displayed on output (the second translated from the first one along the specified vector). If instead capping is off (i.e., this property is set to 0), then an input closed solid will produce no output.
The value of this property indicates whether to cap the ends of the swept surface. Capping works by placing a copy of the input dataset on either end of the swept surface, so it behaves properly if the input is a 2D surface composed of filled polygons. If the input dataset is a closed solid (e.g., a sphere), then if capping is on (i.e., this property is set to 1), two copies of the data set will be displayed on output (the second translated from the first one along the specified vector). If instead capping is off (i.e., this property is set to 0), then an input closed solid will produce no output.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Linear Extrusion filter.
This property specifies the input to the Linear Extrusion filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


|-
|-
| '''Piece Invariant'''<br>''(PieceInvariant)''
| '''Piece Invariant'''<br>''(PieceInvariant)''
|
|
The value of this property determines whether the output will be the same regardless of the number of processors used to compute the result. The difference is whether there are internal polygonal faces on the processor boundaries. A value of 1 will keep the results the same; a value of 0 will allow internal faces on processor boundaries.
The value of this property determines whether the output will be the same regardless of the number of processors used to compute the result. The difference is whether there are internal polygonal faces on the processor boundaries. A value of 1 will keep the results the same; a value of 0 will allow internal faces on processor boundaries.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Scale Factor'''<br>''(ScaleFactor)''
| '''Scale Factor'''<br>''(ScaleFactor)''
|
|
The value of this property determines the distance along the vector the dataset will be translated. (A scale factor of 0.5 will move the dataset half the length of the vector, and a scale factor of 2 will move it twice the vector's length.)
The value of this property determines the distance along the vector the dataset will be translated. (A scale factor of 0.5 will move the dataset half the length of the vector, and a scale factor of 2 will move it twice the vector's length.)


| 1
| 1
| �
| �
|-
|-
| '''Vector'''<br>''(Vector)''
| '''Vector'''<br>''(Vector)''
|
|
The value of this property indicates the X, Y, and Z components of the vector along which to sweep the input dataset.
The value of this property indicates the X, Y, and Z components of the vector along which to sweep the input dataset.


| 0 0 1
| 0 0 1
| �
| �
|}
|}
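
For instance, sweeping a plane along +Z can be scripted as follows (paraview.simple; property names follow the table above):

<source lang="python">
# Extrude a 2D plane into a capped, box-like swept surface.
from paraview.simple import *

plane = Plane()                       # 2D polygonal input
extrude = LinearExtrusion(Input=plane)
extrude.Vector = [0.0, 0.0, 1.0]      # sweep direction
extrude.ScaleFactor = 0.5             # translate half the vector's length
extrude.Capping = 1                   # close both ends of the swept surface
Show(extrude)
Render()
</source>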




==Loop Subdivision==
==Loop Subdivision==




This filter iteratively divides each triangle into four triangles.  New points are placed so the output surface is smooth.
This filter iteratively divides each triangle into four triangles.  New points are placed so the output surface is smooth.


The Loop Subdivision filter increases the granularity of a polygonal mesh. It works by dividing each triangle in the input into four new triangles. It is named for Charles Loop, the person who devised this subdivision scheme. This filter only operates on triangles, so a data set that contains other types of polygons should be passed through the Triangulate filter before applying this filter to it. This filter only operates on polygonal data (specifically triangle meshes), and it produces polygonal output.<br>
The Loop Subdivision filter increases the granularity of a polygonal mesh. It works by dividing each triangle in the input into four new triangles. It is named for Charles Loop, the person who devised this subdivision scheme. This filter only operates on triangles, so a data set that contains other types of polygons should be passed through the Triangulate filter before applying this filter to it. This filter only operates on polygonal data (specifically triangle meshes), and it produces polygonal output.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Loop Subdivision filter.
This property specifies the input to the Loop Subdivision filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


|-
|-
| '''Number of Subdivisions'''<br>''(NumberOfSubdivisions)''
| '''Number of Subdivisions'''<br>''(NumberOfSubdivisions)''
|
|
Set the number of subdivision iterations to perform. Each subdivision divides single triangles into four new triangles.
Set the number of subdivision iterations to perform. Each subdivision divides single triangles into four new triangles.


| 1
| 1
|
|
The value must be greater than or equal to 1 and less than or equal to 4.
The value must be greater than or equal to 1 and less than or equal to 4.


|}
|}
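
As a quick example, two passes over a coarse sphere multiply its triangle count by sixteen (paraview.simple; the property name is an assumption based on the label above):

<source lang="python">
# Refine a coarse triangle mesh with two rounds of Loop subdivision.
from paraview.simple import *

sphere = Sphere(ThetaResolution=8, PhiResolution=8)   # coarse, all-triangle mesh
subdiv = LoopSubdivision(Input=sphere)
subdiv.NumberofSubdivisions = 2      # each pass splits every triangle into four
Show(subdiv)
Render()
</source>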




==Mask Points==
==Mask Points==




Reduce the number of points.  This filter is often used before glyphing. Generating vertices is an option.
Reduce the number of points.  This filter is often used before glyphing. Generating vertices is an option.


The Mask Points filter reduces the number of points in the dataset. It operates on any type of dataset, but produces only points / vertices as output. This filter is often used before the Glyph filter, but the basic point-masking functionality is also available on the Properties page for the Glyph filter.<br>
The Mask Points filter reduces the number of points in the dataset. It operates on any type of dataset, but produces only points / vertices as output. This filter is often used before the Glyph filter, but the basic point-masking functionality is also available on the Properties page for the Glyph filter.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Generate Vertices'''<br>''(GenerateVertices)''
| '''Generate Vertices'''<br>''(GenerateVertices)''
|
|
This property specifies whether to generate vertex cells as the topology of the output. If set to 1, the geometry (vertices) will be displayed in the rendering window; otherwise no geometry will be displayed.
This property specifies whether to generate vertex cells as the topology of the output. If set to 1, the geometry (vertices) will be displayed in the rendering window; otherwise no geometry will be displayed.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Mask Points filter.
This property specifies the input to the Mask Points filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Maximum Number of Points'''<br>''(MaximumNumberOfPoints)''
| '''Maximum Number of Points'''<br>''(MaximumNumberOfPoints)''
|
|
The value of this property indicates the maximum number of points in the output dataset.
The value of this property indicates the maximum number of points in the output dataset.


| 5000
| 5000
|
|
The value must be greater than or equal to 0.
The value must be greater than or equal to 0.


|-
|-
| '''Offset'''<br>''(Offset)''
| '''Offset'''<br>''(Offset)''
|
|
The value of this property indicates the point in the input dataset from which to start masking.
The value of this property indicates the point in the input dataset from which to start masking.


| 0
| 0
|
|
The value must be greater than or equal to 0.
The value must be greater than or equal to 0.


|-
|-
| '''On Ratio'''<br>''(OnRatio)''
| '''On Ratio'''<br>''(OnRatio)''
|
|
The value of this property specifies the ratio of points to retain in the output. (For example, if the on ratio is 3, then the output will contain 1/3 as many points -- up to the value of the MaximumNumberOfPoints property -- as the input.)
The value of this property specifies the ratio of points to retain in the output. (For example, if the on ratio is 3, then the output will contain 1/3 as many points -- up to the value of the MaximumNumberOfPoints property -- as the input.)


| 2
| 2
|
|
The value must be greater than or equal to 1.
The value must be greater than or equal to 1.


|-
|-
| '''Random'''<br>''(RandomMode)''
| '''Random'''<br>''(RandomMode)''
|
|
If the value of this property is set to 0, then the points in the output will be randomly selected from the input; otherwise this filter will subsample regularly. Selecting points at random is helpful to avoid striping when masking the points of a structured dataset.
If the value of this property is set to 0, then the points in the output will be randomly selected from the input; otherwise this filter will subsample regularly. Selecting points at random is helpful to avoid striping when masking the points of a structured dataset.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Single Vertex Per Cell'''<br>''(SingleVertexPerCell)''
| '''Single Vertex Per Cell'''<br>''(SingleVertexPerCell)''
|
|
Tell the filter to generate only one vertex per cell instead of multiple vertices in one cell.
Tell the filter to generate only one vertex per cell instead of multiple vertices in one cell.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|}
|}
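
A minimal sketch of the typical pre-glyphing use, assuming paraview.simple and the property labels above:

<source lang="python">
# Keep roughly one point in ten, up to a hard cap, and emit renderable vertices.
from paraview.simple import *

wavelet = Wavelet()
masked = MaskPoints(Input=wavelet)
masked.OnRatio = 10                   # keep every 10th point
masked.MaximumNumberofPoints = 2000   # hard cap on the output point count
masked.GenerateVertices = 1           # emit vertex cells so the points render
Show(masked)
Render()
</source>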




==Material Interface Filter==
==Material Interface Filter==




The Material Interface filter finds volumes in the input data containing material above a certain material fraction.
The Material Interface filter finds volumes in the input data containing material above a certain material fraction.


The Material Interface filter finds voxels inside of which a material<br>
The Material Interface filter finds voxels inside of which a material<br>
fraction (or normalized amount of material) is higher than a given<br>
fraction (or normalized amount of material) is higher than a given<br>
threshold. As these voxels are identified, surfaces enclosing adjacent<br>
threshold. As these voxels are identified, surfaces enclosing adjacent<br>
voxels above the threshold are generated. The resulting volume and its<br>
voxels above the threshold are generated. The resulting volume and its<br>
surface are what we call a fragment. The filter has the ability to<br>
surface are what we call a fragment. The filter has the ability to<br>
compute various volumetric attributes such as fragment volume, mass,<br>
compute various volumetric attributes such as fragment volume, mass,<br>
center of mass as well as volume and mass weighted averages for any of<br>
center of mass as well as volume and mass weighted averages for any of<br>
the fields present. Any field selected for such computation will also<br>
the fields present. Any field selected for such computation will also<br>
be copied into the fragment surface's point data for visualization. The<br>
be copied into the fragment surface's point data for visualization. The<br>
filter also has the ability to generate Oriented Bounding Boxes (OBB) for<br>
filter also has the ability to generate Oriented Bounding Boxes (OBB) for<br>
each fragment.<br><br><br>
each fragment.<br><br><br>
The data generated by the filter is organized in three outputs. The<br>
The data generated by the filter is organized in three outputs. The<br>
"geometry" output, containing the fragment surfaces. The "statistics"<br>
"geometry" output, containing the fragment surfaces. The "statistics"<br>
output, containing a point set of the centers of mass. The "obb<br>
output, containing a point set of the centers of mass. The "obb<br>
representaion" output, containing OBB representations (poly data). All<br>
representaion" output, containing OBB representations (poly data). All<br>
computed attributes are copied into the statistics and geometry output.<br>
computed attributes are copied into the statistics and geometry output.<br>
The obb representation output is used for validation and debugging<br>
The obb representation output is used for validation and debugging<br>
purposes and is turned off by default.<br><br><br>
purposes and is turned off by default.<br><br><br>
To measure the size of craters, the filter can invert a volume fraction<br>
To measure the size of craters, the filter can invert a volume fraction<br>
and clip the volume fraction with a sphere and/or a plane.<br>
and clip the volume fraction with a sphere and/or a plane.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Clip Center'''<br>''(ClipCenter)''
| '''Clip Center'''<br>''(ClipCenter)''
|
|
This property specifies center of the clipping plane or sphere.
This property specifies center of the clipping plane or sphere.


| 0 0 0
| 0 0 0
| �
| �
|-
|-
| '''Clip Plane Vector'''<br>''(ClipPlaneVector)''
| '''Clip Plane Vector'''<br>''(ClipPlaneVector)''
|
|
This property specifies the normal of the clipping plane.
This property specifies the normal of the clipping plane.


| 0 0 1
| 0 0 1
| �
| �
|-
|-
| '''Clip Radius'''<br>''(ClipRadius)''
| '''Clip Radius'''<br>''(ClipRadius)''
|
|
This property specifies the radius of the clipping sphere.
This property specifies the radius of the clipping sphere.


| 1
| 1
|
|
The value must be greater than or equal to 0.
The value must be greater than or equal to 0.


|-
|-
| '''Clip With Plane'''<br>''(ClipWithPlane)''
| '''Clip With Plane'''<br>''(ClipWithPlane)''
|
|
This option masks all material on one side of a plane.  It is useful for
This option masks all material on one side of a plane.  It is useful for
finding the properties of a crater.
finding the properties of a crater.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Clip With Sphere'''<br>''(ClipWithSphere)''
| '''Clip With Sphere'''<br>''(ClipWithSphere)''
|
|
This option masks all material outside of a sphere.
This option masks all material outside of a sphere.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Compute OBB'''<br>''(ComputeOBB)''
| '''Compute OBB'''<br>''(ComputeOBB)''
|
|
Compute Oriented Bounding Boxes (OBBs). When active, the result of
Compute Oriented Bounding Boxes (OBBs). When active, the result of
this computation is copied into the statistics output. In the case
this computation is copied into the statistics output. In the case
that the filter is built in its validation mode, the OBBs are
that the filter is built in its validation mode, the OBBs are
rendered.
rendered.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
Input to the filter can be a hierarchical box data set containing image
Input to the filter can be a hierarchical box data set containing image
data or a multi-block of rectilinear grids.
data or a multi-block of rectilinear grids.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a cell array.
The dataset must contain a cell array.




The selected dataset must be one of the following types (or a subclass of one of them): vtkHierarchicalBoxDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkHierarchicalBoxDataSet.


|-
|-
| '''Invert Volume Fraction'''<br>''(InvertVolumeFraction)''
| '''Invert Volume Fraction'''<br>''(InvertVolumeFraction)''
|
|
Inverting the volume fraction generates the negative of the material.
Inverting the volume fraction generates the negative of the material.
It is useful for analyzing craters.
It is useful for analyzing craters.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Material Fraction Threshold'''<br>''(MaterialFractionThreshold)''
| '''Material Fraction Threshold'''<br>''(MaterialFractionThreshold)''
|
|
Material fraction is defined as the normalized amount of material per
Material fraction is defined as the normalized amount of material per
voxel. Any voxel in the input data set with a material fraction greater
voxel. Any voxel in the input data set with a material fraction greater
than this value is included in the output data set.
than this value is included in the output data set.


| 0.5
| 0.5
|
|
The value must be greater than or equal to 0.08 and less than or equal to 1.
The value must be greater than or equal to 0.08 and less than or equal to 1.


|-
|-
| '''Output Base Name'''<br>''(OutputBaseName)''
| '''Output Base Name'''<br>''(OutputBaseName)''
|
|
This property specifies the base name, including the path, of where to write the
This property specifies the base name, including the path, of where to write the
statistics and geometry output text files. It follows the pattern
statistics and geometry output text files. It follows the pattern
"/path/to/folder/and/file", where the file has no extension, as the filter
"/path/to/folder/and/file", where the file has no extension, as the filter
will generate a unique extension.
will generate a unique extension.


| �
| �
| �
| �
|-
|-
| '''Select Mass Arrays'''<br>''(SelectMassArray)''
| '''Select Mass Arrays'''<br>''(SelectMassArray)''
|
|
Mass arrays are paired with material fraction arrays. This means that
Mass arrays are paired with material fraction arrays. This means that
the first selected material fraction array is paired with the first
the first selected material fraction array is paired with the first
selected mass array, and so on sequentially. As the filter identifies
selected mass array, and so on sequentially. As the filter identifies
voxels meeting the minimum material fraction threshold, these voxels'
voxels meeting the minimum material fraction threshold, these voxels'
mass will be used in the fragment center-of-mass and mass calculations.
mass will be used in the fragment center-of-mass and mass calculations.


A warning is generated if no mass array is selected for an individual
A warning is generated if no mass array is selected for an individual
material fraction array. However, in that case the filter will run
material fraction array. However, in that case the filter will run
without issue because the statistics output can be generated using
without issue because the statistics output can be generated using
fragments' centers computed from axis aligned bounding boxes.
fragments' centers computed from axis aligned bounding boxes.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Compute mass weighted average over:'''<br>''(SelectMassWtdAvgArray)''
| '''Compute mass weighted average over:'''<br>''(SelectMassWtdAvgArray)''
|
|
For the arrays selected, a mass-weighted average is computed. These arrays
For the arrays selected, a mass-weighted average is computed. These arrays
are also copied into fragment geometry cell data as the fragment
are also copied into fragment geometry cell data as the fragment
surfaces are generated.
surfaces are generated.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Select Material Fraction Arrays'''<br>''(SelectMaterialArray)''
| '''Select Material Fraction Arrays'''<br>''(SelectMaterialArray)''
|
|
Material fraction is defined as the normalized amount of material per
Material fraction is defined as the normalized amount of material per
voxel. It is expected that arrays containing material fraction data have
voxel. It is expected that arrays containing material fraction data have
been down-converted to unsigned char.
been down-converted to unsigned char.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Compute volume weighted average over:'''<br>''(SelectVolumeWtdAvgArray)''
| '''Compute volume weighted average over:'''<br>''(SelectVolumeWtdAvgArray)''
|
|
For the arrays selected, a volume-weighted average is computed. The values
For the arrays selected, a volume-weighted average is computed. The values
of these arrays are also copied into fragment geometry cell data as
of these arrays are also copied into fragment geometry cell data as
the fragment surfaces are generated.
the fragment surfaces are generated.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Write Geometry Output'''<br>''(WriteGeometryOutput)''
| '''Write Geometry Output'''<br>''(WriteGeometryOutput)''
|
|
If this property is set, then the geometry output is written to a text
If this property is set, then the geometry output is written to a text
file. The file name will be constructed using the path in the "Output
file. The file name will be constructed using the path in the "Output
Base Name" widget.
Base Name" widget.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Write Statistics Output'''<br>''(WriteStatisticsOutput)''
| '''Write Statistics Output'''<br>''(WriteStatisticsOutput)''
|
|
If this property is set, then the statistics output is written to a
If this property is set, then the statistics output is written to a
text file. The file name will be constructed using the path in the
text file. The file name will be constructed using the path in the
"Output Base Name" widget.
"Output Base Name" widget.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|}
|}
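
Every entry in the table above corresponds to a server-manager property, so the fragment-analysis filter described above can also be driven from pvpython. The following is only a sketch: the proxy name ''MaterialInterfaceFilter'', the input file, and the array names are assumptions and may differ between ParaView versions.

<source lang="python">
# Hedged pvpython sketch for the fragment-analysis filter described above.
# Assumptions: the filter is exposed in paraview.simple as MaterialInterfaceFilter,
# 'input.vth' is a hierarchical box / AMR dataset, and the cell arrays are named
# 'Material' (volume fraction) and 'Mass'.
from paraview.simple import *

reader = OpenDataFile('input.vth')

frag = MaterialInterfaceFilter(Input=reader)
frag.SelectMaterialArray = ['Material']    # volume-fraction array(s), assumed name
frag.SelectMassArray = ['Mass']            # mass array paired with the fraction array
frag.MaterialFractionThreshold = 0.5       # table default
frag.ComputeOBB = 1                        # copy OBB results into the statistics output
frag.InvertVolumeFraction = 0              # set to 1 when measuring craters

Show(frag)
Render()
</source>
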




==Median==
==Median==




Compute the median scalar values in a specified neighborhood for image/volume datasets.
Compute the median scalar values in a specified neighborhood for image/volume datasets.


The Median filter operates on uniform rectilinear (image or volume) data and produces uniform rectilinear output. It replaces the scalar value at each pixel / voxel with the median scalar value in the specified surrounding neighborhood. Since the median operation removes outliers, this filter is useful for removing high-intensity, low-probability noise (shot noise).<br>
The Median filter operates on uniform rectilinear (image or volume) data and produces uniform rectilinear output. It replaces the scalar value at each pixel / voxel with the median scalar value in the specified surrounding neighborhood. Since the median operation removes outliers, this filter is useful for removing high-intensity, low-probability noise (shot noise).<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Median filter.
This property specifies the input to the Median filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point array with 1 components.
The dataset must contain a point array with 1 components.




The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData.


|-
|-
| '''Kernel Size'''<br>''(KernelSize)''
| '''Kernel Size'''<br>''(KernelSize)''
|
|
The value of this property specifies the number of pixels/voxels in each dimension to use in computing the median to assign to each pixel/voxel. If the kernel size in a particular dimension is 1, then the median will not be computed in that direction.
The value of this property specifies the number of pixels/voxels in each dimension to use in computing the median to assign to each pixel/voxel. If the kernel size in a particular dimension is 1, then the median will not be computed in that direction.


| 1 1 1
| 1 1 1
| �
| �
|-
|-
| '''Select Input Scalars'''<br>''(SelectInputScalars)''
| '''Select Input Scalars'''<br>''(SelectInputScalars)''
|
|
The value of this property lists the name of the scalar array to use in computing the median.
The value of this property lists the name of the scalar array to use in computing the median.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|}
|}
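
As a quick scripting illustration, the sketch below runs the Median filter from pvpython on the built-in Wavelet source. Property names follow the table above, but the exact selection syntax for ''Select Input Scalars'' can vary by ParaView version.

<source lang="python">
# Median filter sketch: remove shot noise from uniform rectilinear (image) data.
from paraview.simple import *

image = Wavelet()                    # image data with an 'RTData' point scalar array

median = Median(Input=image)
median.SelectInputScalars = 'RTData' # scalar array to filter (assumed selection syntax)
median.KernelSize = [3, 3, 3]        # 3x3x3 neighborhood; a value of 1 skips that direction

Show(median)
Render()
</source>
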




==Merge Blocks==
==Merge Blocks==






vtkCompositeDataToUnstructuredGridFilter appends all vtkDataSet<br>
vtkCompositeDataToUnstructuredGridFilter appends all vtkDataSet<br>
leaves of the input composite dataset to a single unstructured grid. The<br>
leaves of the input composite dataset to a single unstructured grid. The<br>
subtree to be combined can be chosen using the SubTreeCompositeIndex. If<br>
subtree to be combined can be chosen using the SubTreeCompositeIndex. If<br>
the SubTreeCompositeIndex is a leaf node, then no appending is required.<br>
the SubTreeCompositeIndex is a leaf node, then no appending is required.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
Set the input composite dataset.
Set the input composite dataset.


| �
| �
|
|
The selected dataset must be one of the following types (or a subclass of one of them): vtkCompositeDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkCompositeDataSet.


|}
|}
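
A minimal pvpython sketch of the same operation; the multi-block file name is only a placeholder.

<source lang="python">
# Merge Blocks sketch: collapse all vtkDataSet leaves of a composite dataset
# into one unstructured grid. 'multiblock.vtm' is a hypothetical input file.
from paraview.simple import *

composite = OpenDataFile('multiblock.vtm')

merged = MergeBlocks(Input=composite)

Show(merged)
Render()
</source>
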




==Mesh Quality==
==Mesh Quality==




This filter creates a new cell array containing a geometric measure of each cell's fitness. Different quality measures can be chosen for different cell shapes.
This filter creates a new cell array containing a geometric measure of each cell's fitness. Different quality measures can be chosen for different cell shapes.


This filter creates a new cell array containing a geometric measure of each cell's fitness. Different quality measures can be chosen for different cell shapes. Supported shapes include triangles, quadrilaterals, tetrahedra, and hexahedra. For other shapes, a value of 0 is assigned.<br>
This filter creates a new cell array containing a geometric measure of each cell's fitness. Different quality measures can be chosen for different cell shapes. Supported shapes include triangles, quadrilaterals, tetrahedra, and hexahedra. For other shapes, a value of 0 is assigned.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Hex Quality Measure'''<br>''(HexQualityMeasure)''
| '''Hex Quality Measure'''<br>''(HexQualityMeasure)''
|
|
This property indicates which quality measure will be used to evaluate hexahedral quality.
This property indicates which quality measure will be used to evaluate hexahedral quality.


| 5
| 5
|
|
The value must be one of the following: Diagonal (21), Dimension (22), Distortion (15), Edge Ratio (0), Jacobian (25), Maximum Edge Ratio (16), Maximum Aspect Frobenius (5), Mean Aspect Frobenius (4), Oddy (23), Relative Size Squared (12), Scaled Jacobian (10), Shape (13), Shape and Size (14), Shear (11), Shear and Size (24), Skew (17), Stretch (20), Taper (18), Volume (19).
The value must be one of the following: Diagonal (21), Dimension (22), Distortion (15), Edge Ratio (0), Jacobian (25), Maximum Edge Ratio (16), Maximum Aspect Frobenius (5), Mean Aspect Frobenius (4), Oddy (23), Relative Size Squared (12), Scaled Jacobian (10), Shape (13), Shape and Size (14), Shear (11), Shear and Size (24), Skew (17), Stretch (20), Taper (18), Volume (19).


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Mesh Quality filter.
This property specifies the input to the Mesh Quality filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Quad Quality Measure'''<br>''(QuadQualityMeasure)''
| '''Quad Quality Measure'''<br>''(QuadQualityMeasure)''
|
|
This property indicates which quality measure will be used to evaluate quadrilateral quality.
This property indicates which quality measure will be used to evaluate quadrilateral quality.


| 0
| 0
|
|
The value must be one of the following: Area (28), Aspect Ratio (1), Condition (9), Distortion (15), Edge Ratio (0), Jacobian (25), Maximum Aspect Frobenius (5), Maximum Edge Ratio (16), Mean Aspect Frobenius (4), Minimum Angle (6), Oddy (23), Radius Ratio (2), Relative Size Squared (12), Scaled Jacobian (10), Shape (13), Shape and Size (14), Shear (11), Shear and Size (24), Skew (17), Stretch (20), Taper (18), Warpage (26).
The value must be one of the following: Area (28), Aspect Ratio (1), Condition (9), Distortion (15), Edge Ratio (0), Jacobian (25), Maximum Aspect Frobenius (5), Maximum Edge Ratio (16), Mean Aspect Frobenius (4), Minimum Angle (6), Oddy (23), Radius Ratio (2), Relative Size Squared (12), Scaled Jacobian (10), Shape (13), Shape and Size (14), Shear (11), Shear and Size (24), Skew (17), Stretch (20), Taper (18), Warpage (26).


|-
|-
| '''Tet Quality Measure'''<br>''(TetQualityMeasure)''
| '''Tet Quality Measure'''<br>''(TetQualityMeasure)''
|
|
This property indicates which quality measure will be used to evaluate tetrahedral quality. The radius ratio is the size of a sphere circumscribed by a tetrahedron's 4 vertices divided by the size of a circle tangent to a tetrahedron's 4 faces. The edge ratio is the ratio of the longest edge length to the shortest edge length. The collapse ratio is the minimum ratio of height of a vertex above the triangle opposite it divided by the longest edge of the opposing triangle across all vertex/triangle pairs.
This property indicates which quality measure will be used to evaluate tetrahedral quality. The radius ratio is the size of a sphere circumscribed by a tetrahedron's 4 vertices divided by the size of a circle tangent to a tetrahedron's 4 faces. The edge ratio is the ratio of the longest edge length to the shortest edge length. The collapse ratio is the minimum ratio of height of a vertex above the triangle opposite it divided by the longest edge of the opposing triangle across all vertex/triangle pairs.


| 2
| 2
|
|
The value must be one of the following: Edge Ratio (0), Aspect Beta (29), Aspect Gamma (27), Aspect Frobenius (3), Aspect Ratio (1), Collapse Ratio (7), Condition (9), Distortion (15), Jacobian (25), Minimum Dihedral Angle (6), Radius Ratio (2), Relative Size Squared (12), Scaled Jacobian (10), Shape (13), Shape and Size (14), Volume (19).
The value must be one of the following: Edge Ratio (0), Aspect Beta (29), Aspect Gamma (27), Aspect Frobenius (3), Aspect Ratio (1), Collapse Ratio (7), Condition (9), Distortion (15), Jacobian (25), Minimum Dihedral Angle (6), Radius Ratio (2), Relative Size Squared (12), Scaled Jacobian (10), Shape (13), Shape and Size (14), Volume (19).


|-
|-
| '''Triangle Quality Measure'''<br>''(TriangleQualityMeasure)''
| '''Triangle Quality Measure'''<br>''(TriangleQualityMeasure)''
|
|
This property indicates which quality measure will be used to evaluate triangle quality. The radius ratio is the size of a circle circumscribed by a triangle's 3 vertices divided by the size of a circle tangent to a triangle's 3 edges. The edge ratio is the ratio of the longest edge length to the shortest edge length.
This property indicates which quality measure will be used to evaluate triangle quality. The radius ratio is the size of a circle circumscribed by a triangle's 3 vertices divided by the size of a circle tangent to a triangle's 3 edges. The edge ratio is the ratio of the longest edge length to the shortest edge length.


| 2
| 2
|
|
The value must be one of the following: Area (28), Aspect Ratio (1), Aspect Frobenius (3), Condition (9), Distortion (15), Edge Ratio (0), Maximum Angle (8), Minimum Angle (6), Scaled Jacobian (10), Radius Ratio (2), Relative Size Squared (12), Shape (13), Shape and Size (14).
The value must be one of the following: Area (28), Aspect Ratio (1), Aspect Frobenius (3), Condition (9), Distortion (15), Edge Ratio (0), Maximum Angle (8), Minimum Angle (6), Scaled Jacobian (10), Radius Ratio (2), Relative Size Squared (12), Shape (13), Shape and Size (14).


|}
|}
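
The sketch below shows one way to apply the filter from pvpython. The file name is a placeholder, and enumerated properties can usually be set either by name or by the integer listed in the restrictions above; check your ParaView version for the exact accepted values.

<source lang="python">
# Mesh Quality sketch: attach a per-cell "Quality" array to an unstructured mesh.
from paraview.simple import *

mesh = OpenDataFile('mesh.vtu')                  # hypothetical unstructured grid

quality = MeshQuality(Input=mesh)
quality.TetQualityMeasure = 'Radius Ratio'       # value 2 in the table above
quality.TriangleQualityMeasure = 'Radius Ratio'  # value 2 in the table above

Show(quality)   # color by the new 'Quality' cell array to inspect poorly shaped cells
Render()
</source>
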




==Multicorrelative Statistics==
==Multicorrelative Statistics==




Compute a statistical model of a dataset and/or assess the dataset with a statistical model.
Compute a statistical model of a dataset and/or assess the dataset with a statistical model.


This filter either computes a statistical model of a dataset or takes such a model as its second input. Then, the model (however it is obtained) may optionally be used to assess the input dataset.
This filter either computes a statistical model of a dataset or takes such a model as its second input. Then, the model (however it is obtained) may optionally be used to assess the input dataset.
<br>
<br>
This filter computes the covariance matrix for all the arrays you select plus the mean of each array. The model is thus a multivariate Gaussian distribution with the mean vector and variances provided. Data is assessed using this model by computing the Mahalanobis distance for each input point. This distance will always be positive.
This filter computes the covariance matrix for all the arrays you select plus the mean of each array. The model is thus a multivariate Gaussian distribution with the mean vector and variances provided. Data is assessed using this model by computing the Mahalanobis distance for each input point. This distance will always be positive.


<br>
<br>
The learned model output format is rather dense and can be confusing, so it is discussed here. The first filter output is a multiblock dataset consisting of 2 tables:
The learned model output format is rather dense and can be confusing, so it is discussed here. The first filter output is a multiblock dataset consisting of 2 tables:
<br>
<br>
#  Raw covariance data.<br>
#  Raw covariance data.<br>
#  Covariance matrix and its Cholesky decomposition.
#  Covariance matrix and its Cholesky decomposition.
<br>
<br>
The raw covariance table has 3 meaningful columns: 2 titled "Column1" and "Column2" whose entries generally refer to the N arrays you selected when preparing the filter and 1 column titled "Entries" that contains numeric values. The first row will always contain the number of observations in the statistical analysis. The next N rows contain the mean for each of the N arrays you selected. The remaining rows contain covariances of pairs of arrays.
The raw covariance table has 3 meaningful columns: 2 titled "Column1" and "Column2" whose entries generally refer to the N arrays you selected when preparing the filter and 1 column titled "Entries" that contains numeric values. The first row will always contain the number of observations in the statistical analysis. The next N rows contain the mean for each of the N arrays you selected. The remaining rows contain covariances of pairs of arrays.
<br>
<br>
The second table (covariance matrix and Cholesky decomposition) contains information derived from the raw covariance data of the first table. The first N rows of the first column contain the name of one array you selected for analysis. These rows are followed by a single entry labeled "Cholesky" for a total of N+1 rows. The second column, Mean, contains the mean of each variable in the first N entries and the number of observations processed in the final (N+1) row.
The second table (covariance matrix and Cholesky decomposition) contains information derived from the raw covariance data of the first table. The first N rows of the first column contain the name of one array you selected for analysis. These rows are followed by a single entry labeled "Cholesky" for a total of N+1 rows. The second column, Mean, contains the mean of each variable in the first N entries and the number of observations processed in the final (N+1) row.


<br>
<br>
The remaining columns (there are N, one for each array) contain 2 matrices in triangular format. The upper right triangle contains the covariance matrix (which is symmetric, so its lower triangle may be inferred). The lower left triangle contains the Cholesky decomposition of the covariance matrix (which is triangular, so its upper triangle is zero). Because the diagonal must be stored for both matrices, an additional row is required, hence the N+1 rows and the final entry of the column named "Column".<br>
The remaining columns (there are N, one for each array) contain 2 matrices in triangular format. The upper right triangle contains the covariance matrix (which is symmetric, so its lower triangle may be inferred). The lower left triangle contains the Cholesky decomposition of the covariance matrix (which is triangular, so its upper triangle is zero). Because the diagonal must be stored for both matrices, an additional row is required, hence the N+1 rows and the final entry of the column named "Column".<br>




{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Attribute Mode'''<br>''(AttributeMode)''
| '''Attribute Mode'''<br>''(AttributeMode)''
|
|
Specify which type of field data the arrays will be drawn from.
Specify which type of field data the arrays will be drawn from.


| 0
| 0
|
|
Valid array names will be chosen from point and cell data.
Valid array names will be chosen from point and cell data.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
The input to the filter. Arrays from this dataset will be used for computing statistics and/or assessed by a statistical model.
The input to the filter. Arrays from this dataset will be used for computing statistics and/or assessed by a statistical model.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point or cell array.
The dataset must contain a point or cell array.




The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkStructuredGrid, vtkPolyData, vtkUnstructuredGrid, vtkTable, vtkGraph.
The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkStructuredGrid, vtkPolyData, vtkUnstructuredGrid, vtkTable, vtkGraph.


|-
|-
| '''Model Input'''<br>''(ModelInput)''
| '''Model Input'''<br>''(ModelInput)''
|
|
A previously-calculated model with which to assess a separate dataset. This input is optional.
A previously-calculated model with which to assess a separate dataset. This input is optional.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkTable, vtkMultiBlockDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkTable, vtkMultiBlockDataSet.


|-
|-
| '''Variables of Interest'''<br>''(SelectArrays)''
| '''Variables of Interest'''<br>''(SelectArrays)''
|
|
Choose arrays whose entries will be used to form observations for statistical analysis.
Choose arrays whose entries will be used to form observations for statistical analysis.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Task'''<br>''(Task)''
| '''Task'''<br>''(Task)''
|
|
Specify the task to be performed: modeling and/or assessment.
Specify the task to be performed: modeling and/or assessment.
#  "Statistics of all the data," creates an output table (or tables) summarizing the '''entire''' input dataset;
#  "Statistics of all the data," creates an output table (or tables) summarizing the '''entire''' input dataset;
#  "Model a subset of the data," creates an output table (or tables) summarizing a '''randomly-chosen subset''' of the input dataset;
#  "Model a subset of the data," creates an output table (or tables) summarizing a '''randomly-chosen subset''' of the input dataset;
#  "Assess the data with a model," adds attributes to the first input dataset using a model provided on the second input port; and
#  "Assess the data with a model," adds attributes to the first input dataset using a model provided on the second input port; and
#  "Model and assess the same data," is really just operations 2 and 3 above applied to the same input dataset.  The model is first trained using a fraction of the input data and then the entire dataset is assessed using that model.
#  "Model and assess the same data," is really just operations 2 and 3 above applied to the same input dataset.  The model is first trained using a fraction of the input data and then the entire dataset is assessed using that model.


When the task includes creating a model (i.e., tasks 2 and 4), you may adjust the fraction of the input dataset used for training. You should avoid using a large fraction of the input data for training as you will then not be able to detect overfitting. The ''Training fraction'' setting will be ignored for tasks 1 and 3.
When the task includes creating a model (i.e., tasks 2 and 4), you may adjust the fraction of the input dataset used for training. You should avoid using a large fraction of the input data for training as you will then not be able to detect overfitting. The ''Training fraction'' setting will be ignored for tasks 1 and 3.


| 3
| 3
|
|
The value must be one of the following: Statistics of all the data (0), Model a subset of the data (1), Assess the data with a model (2), Model and assess the same data (3).
The value must be one of the following: Statistics of all the data (0), Model a subset of the data (1), Assess the data with a model (2), Model and assess the same data (3).


|-
|-
| '''Training Fraction'''<br>''(TrainingFraction)''
| '''Training Fraction'''<br>''(TrainingFraction)''
|
|
Specify the fraction of values from the input dataset to be used for model fitting. The exact set of values is chosen at random from the dataset.
Specify the fraction of values from the input dataset to be used for model fitting. The exact set of values is chosen at random from the dataset.


| 0.1
| 0.1
|
|
The value must be greater than or equal to 0 and less than or equal to 1.
The value must be greater than or equal to 0 and less than or equal to 1.


|}
|}
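
For scripting, a minimal sketch follows. The Wavelet source and the selection syntax for ''Variables of Interest'' are assumptions; in practice you would select several arrays so that the covariance matrix is non-trivial.

<source lang="python">
# Multicorrelative Statistics sketch: model an array and assess the same data.
from paraview.simple import *

data = Wavelet()                              # provides an 'RTData' point array

stats = MulticorrelativeStatistics(Input=data)
stats.AttributeMode = 0                       # draw arrays from point data (table default)
stats.SelectArrays = ['RTData']               # variables of interest (assumed selection syntax)
stats.Task = 3                                # "Model and assess the same data"
stats.TrainingFraction = 0.1

UpdatePipeline(proxy=stats)   # first output: model tables (raw covariance, Cholesky)
</source>
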




==Normal Glyphs==
==Normal Glyphs==




Filter computing surface normals.
Filter computing surface normals.


Filter computing surface normals.<br>
Filter computing surface normals.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Maximum Number of Points'''<br>''(Glyph Max. Points)''
| '''Maximum Number of Points'''<br>''(Glyph Max. Points)''
|
|
The value of this property specifies the maximum number of glyphs that should appear in the output dataset if the value of the UseMaskPoints property is 1. (See the UseMaskPoints property.)
The value of this property specifies the maximum number of glyphs that should appear in the output dataset if the value of the UseMaskPoints property is 1. (See the UseMaskPoints property.)


| 5000
| 5000
|
|
The value must be greater than or equal to 0.
The value must be greater than or equal to 0.


|-
|-
| '''Random Mode'''<br>''(Glyph Random Mode)''
| '''Random Mode'''<br>''(Glyph Random Mode)''
|
|
If the value of this property is 1, then the points to glyph are chosen randomly. Otherwise the point ids chosen are evenly spaced.
If the value of this property is 1, then the points to glyph are chosen randomly. Otherwise the point ids chosen are evenly spaced.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Set Scale Factor'''<br>''(Glyph Scale Factor)''
| '''Set Scale Factor'''<br>''(Glyph Scale Factor)''
|
|
The value of this property will be used as a multiplier for scaling the glyphs before adding them to the output.
The value of this property will be used as a multiplier for scaling the glyphs before adding them to the output.


| 1
| 1
|
|
The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.1.
The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.1.




The value must lie within the range of the selected data array.
The value must lie within the range of the selected data array.




The value must lie within the range of the selected data array.
The value must lie within the range of the selected data array.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Normal Glyphs filter.
This property specifies the input to the Normal Glyphs filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Invert'''<br>''(InvertArrow)''
| '''Invert'''<br>''(InvertArrow)''
|
|
Inverts the arrow direction.
Inverts the arrow direction.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|}
|}




==Octree Depth Limit==
==Octree Depth Limit==




This filter takes in an octree and produces a new octree which is no deeper than the maximum specified depth level.
This filter takes in an octree and produces a new octree which is no deeper than the maximum specified depth level.


The Octree Depth Limit filter takes in an octree and produces a new octree that is nowhere deeper than the maximum specified depth level. The attribute data of pruned leaf cells are integrated into their ancestors at the cut level.<br>
The Octree Depth Limit filter takes in an octree and produces a new octree that is nowhere deeper than the maximum specified depth level. The attribute data of pruned leaf cells are integrated into their ancestors at the cut level.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Octree Depth Limit filter.
This property specifies the input to the Octree Depth Limit filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkHyperOctree.
The selected dataset must be one of the following types (or a subclass of one of them): vtkHyperOctree.


|-
|-
| '''Maximum Level'''<br>''(MaximumLevel)''
| '''Maximum Level'''<br>''(MaximumLevel)''
|
|
The value of this property specifies the maximum depth of the output octree.
The value of this property specifies the maximum depth of the output octree.


| 4
| 4
|
|
The value must be greater than or equal to 3 and less than or equal to 255.
The value must be greater than or equal to 3 and less than or equal to 255.


|}
|}




==Octree Depth Scalars==
==Octree Depth Scalars==




This filter adds a scalar to each leaf of the octree that represents the leaf's depth within the tree.
This filter adds a scalar to each leaf of the octree that represents the leaf's depth within the tree.


The vtkHyperOctreeDepth filter adds a scalar to each leaf of the octree that represents the leaf's depth within the tree.<br>
The vtkHyperOctreeDepth filter adds a scalar to each leaf of the octree that represents the leaf's depth within the tree.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Octree Depth Scalars filter.
This property specifies the input to the Octree Depth Scalars filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkHyperOctree.
The selected dataset must be one of the following types (or a subclass of one of them): vtkHyperOctree.


|}
|}




==Outline==
==Outline==




This filter generates a bounding box representation of the input.
This filter generates a bounding box representation of the input.


The Outline filter generates an axis-aligned bounding box for the input dataset. This filter operates on any type of dataset and produces polygonal output.<br>
The Outline filter generates an axis-aligned bounding box for the input dataset. This filter operates on any type of dataset and produces polygonal output.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Outline filter.
This property specifies the input to the Outline filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|}
|}
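
A minimal pvpython sketch using the built-in Wavelet source:

<source lang="python">
# Outline sketch: draw the axis-aligned bounding box of any dataset.
from paraview.simple import *

data = Wavelet()

box = Outline(Input=data)   # polygonal bounding-box edges

Show(data)
Show(box)                   # typically displayed together with the data it frames
Render()
</source>
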




==Outline Corners==
==Outline Corners==




This filter generates a bounding box representation of the input. It only displays the corners of the bounding box.
This filter generates a bounding box representation of the input. It only displays the corners of the bounding box.


The Outline Corners filter generates the corners of a bounding box for the input dataset. This filter operates on any type of dataset and produces polygonal output.<br>
The Outline Corners filter generates the corners of a bounding box for the input dataset. This filter operates on any type of dataset and produces polygonal output.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Corner Factor'''<br>''(CornerFactor)''
| '''Corner Factor'''<br>''(CornerFactor)''
|
|
The value of this property sets the size of the corners as a percentage of the length of the corresponding bounding box edge.
The value of this property sets the size of the corners as a percentage of the length of the corresponding bounding box edge.


| 0.2
| 0.2
|
|
The value must be greater than or equal to 0.001 and less than or equal to 0.5.
The value must be greater than or equal to 0.001 and less than or equal to 0.5.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Outline Corners filter.
This property specifies the input to the Outline Corners filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|}
|}




==Outline Curvilinear DataSet==
==Outline Curvilinear DataSet==




This filter generates an outline representation of the input.
This filter generates an outline representation of the input.


The Outline filter generates an outline of the outside edges of the input dataset, rather than the dataset's bounding box. This filter operates on structured grid datasets and produces polygonal output.<br>
The Outline filter generates an outline of the outside edges of the input dataset, rather than the dataset's bounding box. This filter operates on structured grid datasets and produces polygonal output.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the outline (curvilinear) filter.
This property specifies the input to the outline (curvilinear) filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkStructuredGrid.
The selected dataset must be one of the following types (or a subclass of one of them): vtkStructuredGrid.


|}
|}




==Particle Pathlines==
==Particle Pathlines==




Creates polylines representing pathlines of animating particles
Creates polylines representing pathlines of animating particles


Particle Pathlines takes any dataset as input; it extracts the<br>
Particle Pathlines takes any dataset as input; it extracts the<br>
point locations of all cells over time to build up a polyline<br>
point locations of all cells over time to build up a polyline<br>
trail. The point number (index) is used as the 'key'. If the points<br>
trail. The point number (index) is used as the 'key'. If the points<br>
are randomly changing their respective order in the points list,<br>
are randomly changing their respective order in the points list,<br>
then you should specify a scalar that represents the unique<br>
then you should specify a scalar that represents the unique<br>
ID. This is intended to handle the output of a filter such as the<br>
ID. This is intended to handle the output of a filter such as the<br>
TemporalStreamTracer.<br>
TemporalStreamTracer.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Id Channel Array'''<br>''(IdChannelArray)''
| '''Id Channel Array'''<br>''(IdChannelArray)''
|
|
Specify the name of a scalar array which will be used to fetch
Specify the name of a scalar array which will be used to fetch
the index of each point. This is necessary only if the particles
the index of each point. This is necessary only if the particles
change position (Id order) on each time step. The Id can be used
change position (Id order) on each time step. The Id can be used
to identify particles at each step and hence track them properly.
to identify particles at each step and hence track them properly.
If this array is set to "Global or Local IDs", the global point
If this array is set to "Global or Local IDs", the global point
ids are used if they exist; otherwise the point index is used.
ids are used if they exist; otherwise the point index is used.


| Global or Local IDs
| Global or Local IDs
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
The input cells to create pathlines for.
The input cells to create pathlines for.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.


|-
|-
| '''Mask Points'''<br>''(MaskPoints)''
| '''Mask Points'''<br>''(MaskPoints)''
|
|
Set the number of particles to track as a ratio of the input.
Set the number of particles to track as a ratio of the input.
Example: setting MaskPoints to 10 will track every 10th point.
Example: setting MaskPoints to 10 will track every 10th point.


| 100
| 100
| �
| �
|-
|-
| '''Max Step Distance'''<br>''(MaxStepDistance)''
| '''Max Step Distance'''<br>''(MaxStepDistance)''
|
|
If a particle disappears from one end of a simulation and
If a particle disappears from one end of a simulation and
reappears on the other side, the track left will be
reappears on the other side, the track left will be
unrepresentative.  Set a MaxStepDistance{x,y,z} which acts as a
unrepresentative.  Set a MaxStepDistance{x,y,z} which acts as a
threshold above which if a step occurs larger than the value (for
threshold above which if a step occurs larger than the value (for
the dimension), the track will be dropped and restarted after the
the dimension), the track will be dropped and restarted after the
step (i.e., the part before the wrap-around will be dropped and the
step (i.e., the part before the wrap-around will be dropped and the
newer part kept).
newer part kept).


| 1 1 1
| 1 1 1
| �
| �
|-
|-
| '''Max Track Length'''<br>''(MaxTrackLength)''
| '''Max Track Length'''<br>''(MaxTrackLength)''
|
|
If the Particles being traced animate for a long time, the trails
If the Particles being traced animate for a long time, the trails
or traces will become long and stringy. Setting the
or traces will become long and stringy. Setting the
MaxTraceTimeLength will limit how much of the trace is
MaxTraceTimeLength will limit how much of the trace is
displayed. Tracks longer than the Max will disappear and the
displayed. Tracks longer than the Max will disappear and the
trace will appear like a snake of fixed length which progresses
trace will appear like a snake of fixed length which progresses
as the particle moves.  This length is given with respect to
as the particle moves.  This length is given with respect to
timesteps.
timesteps.


| 25
| 25
| �
| �
|-
|-
| '''Selection'''<br>''(Selection)''
| '''Selection'''<br>''(Selection)''
|
|
Set a second input, which is a selection. Particles with the same
Set a second input, which is a selection. Particles with the same
Id in the selection as the primary input will be chosen for
Id in the selection as the primary input will be chosen for
pathlines. Note that you must have the same IdChannelArray in the
pathlines. Note that you must have the same IdChannelArray in the
selection as in the input.
selection as in the input.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|}
|}
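
The sketch below is a hedged pvpython example. The exposure of the filter as ''ParticlePathlines'' in paraview.simple and the input file are assumptions; any time-varying point set (for example the output of a temporal particle tracer) would serve as input.

<source lang="python">
# Particle Pathlines sketch: build polyline trails from a time-dependent point set.
from paraview.simple import *

particles = OpenDataFile('particles.pvd')      # hypothetical time-varying vtkPointSet

trails = ParticlePathlines(Input=particles)
trails.IdChannelArray = 'Global or Local IDs'  # use global ids if present, else point index
trails.MaskPoints = 10                         # track every 10th particle
trails.MaxTrackLength = 25                     # trail length, measured in timesteps

Show(trails)
Render()
</source>
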




==ParticleTracer==
==ParticleTracer==




Trace Particles through time in a vector field.
Trace Particles through time in a vector field.


The Particle Trace filter generates pathlines in a vector field from a collection of seed points. The vector field used is selected from the Vectors menu, so the input data set is required to have point-centered vectors. The Seed portion of the interface allows you to select whether the seed points for this integration lie in a point cloud or along a line. Depending on which is selected, the appropriate 3D widget (point or line widget) is displayed along with traditional user interface controls for positioning the point cloud or line within the data set. Instructions for using the 3D widgets and the corresponding manual controls can be found in section 7.4.<br>
The Particle Trace filter generates pathlines in a vector field from a collection of seed points. The vector field used is selected from the Vectors menu, so the input data set is required to have point-centered vectors. The Seed portion of the interface allows you to select whether the seed points for this integration lie in a point cloud or along a line. Depending on which is selected, the appropriate 3D widget (point or line widget) is displayed along with traditional user interface controls for positioning the point cloud or line within the data set. Instructions for using the 3D widgets and the corresponding manual controls can be found in section 7.4.<br>
This filter operates on any type of data set, provided it has point-centered vectors. The output is polygonal data containing polylines. This filter is available on the Toolbar.<br>
This filter operates on any type of data set, provided it has point-centered vectors. The output is polygonal data containing polylines. This filter is available on the Toolbar.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Compute Vorticity'''<br>''(ComputeVorticity)''
| '''Compute Vorticity'''<br>''(ComputeVorticity)''
|
|
Compute vorticity and angular rotation of particles as they progress
Compute vorticity and angular rotation of particles as they progress


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Enable Particle Writing'''<br>''(EnableParticleWriting)''
| '''Enable Particle Writing'''<br>''(EnableParticleWriting)''
|
|
Turn On/Off particle writing
Turn On/Off particle writing


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Force Reinjection Every NSteps'''<br>''(ForceReinjectionEveryNSteps)''
| '''Force Reinjection Every NSteps'''<br>''(ForceReinjectionEveryNSteps)''
| �
| �
| 1
| 1
| �
| �
|-
|-
| '''Ignore Pipeline Time'''<br>''(IgnorePipelineTime)''
| '''Ignore Pipeline Time'''<br>''(IgnorePipelineTime)''
|
|
Ignore the TIME_ requests made by the pipeline and only use the TimeStep set manually
Ignore the TIME_ requests made by the pipeline and only use the TimeStep set manually


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Initial Integration Step'''<br>''(InitialIntegrationStep)''
| '''Initial Integration Step'''<br>''(InitialIntegrationStep)''
| �
| �
| 0.25
| 0.25
| �
| �
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
| �
| �
| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point array with 3 components.
The dataset must contain a point array with 3 components.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.


|-
|-
| '''Particle File Name'''<br>''(ParticleFileName)''
| '''Particle File Name'''<br>''(ParticleFileName)''
|
|
Provide a name for the particle file generated if writing is enabled
Provide a name for the particle file generated if writing is enabled


| /project/csvis/biddisco/ptracer/run-1
| /project/csvis/biddisco/ptracer/run-1
| �
| �
|-
|-
| '''Select Input Vectors'''<br>''(SelectInputVectors)''
| '''Select Input Vectors'''<br>''(SelectInputVectors)''
| �
| �
| �
| �
|
|
An array of vectors is required.
An array of vectors is required.


|-
|-
| '''Source'''<br>''(Source)''
| '''Source'''<br>''(Source)''
| �
| �
| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Static Mesh'''<br>''(StaticMesh)''
| '''Static Mesh'''<br>''(StaticMesh)''
|
|
Force the use of static mesh optimizations
Force the use of static mesh optimizations


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Static Seeds'''<br>''(StaticSeeds)''
| '''Static Seeds'''<br>''(StaticSeeds)''
|
|
Force the use of static seed optimizations
Force the use of static seed optimizations


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Term. Speed'''<br>''(TerminalSpeed)''
| '''Term. Speed'''<br>''(TerminalSpeed)''
|
|
If at any point the speed is below the value of this property, the integration is terminated.
If at any point the speed is below the value of this property, the integration is terminated.


| 1e-12
| 1e-12
| �
| �
|-
|-
| '''Termination Time'''<br>''(TerminationTime)''
| '''Termination Time'''<br>''(TerminationTime)''
| �
| �
| 0
| 0
| �
| �
|-
|-
| '''Termination Time Unit'''<br>''(TerminationTimeUnit)''
| '''Termination Time Unit'''<br>''(TerminationTimeUnit)''
|
|
The termination time may be specified as TimeSteps or Simulation time.
The termination time may be specified as TimeSteps or Simulation time.


| 1
| 1
|
|
The value must be one of the following: Simulation Time (0), TimeSteps (1).
The value must be one of the following: Simulation Time (0), TimeSteps (1).


|-
|-
| '''Time Step'''<br>''(TimeStep)''
| '''Time Step'''<br>''(TimeStep)''
| �
| �
| 0
| 0
| �
| �
|}
|}
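
A hedged pvpython sketch of a typical setup follows. The reader file, the velocity array name, and the array-selection syntax are assumptions; the seed source is an ordinary Point Source.

<source lang="python">
# ParticleTracer sketch: advect seed points through a time-varying vector field.
from paraview.simple import *

flow = OpenDataFile('flow.pvd')                # hypothetical data with point-centered vectors

seeds = PointSource(Center=[0.0, 0.0, 0.0], NumberOfPoints=50, Radius=0.1)

tracer = ParticleTracer(Input=flow, Source=seeds)
tracer.SelectInputVectors = 'velocity'     # vector field to integrate (assumed array name)
tracer.ComputeVorticity = 1                # also record vorticity and angular rotation
tracer.ForceReinjectionEveryNSteps = 1     # re-seed at every step
tracer.TerminationTimeUnit = 1             # interpret TerminationTime in TimeSteps

Show(tracer)
Render()
</source>
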




==Plot Data==
==Plot Data==






This filter prepares arbitrary data to be plotted in any of the plots.<br>
This filter prepares arbitrary data to be plotted in any of the plots.<br>
By default the data is shown in an XY line plot.<br>
By default the data is shown in an XY line plot.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
The input.
The input.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.


|}
|}




==Plot Global Variables Over Time==
==Plot Global Variables Over Time==




Extracts and plots data in field data over time.
Extracts and plots data in field data over time.


This filter extracts the variables that reside in a dataset's field data and are<br>
This filter extracts the variables that reside in a dataset's field data and are<br>
defined over time. The output is a 1D rectilinear grid where the x coordinates<br>
defined over time. The output is a 1D rectilinear grid where the x coordinates<br>
correspond to time (the same array is also copied to a point array named Time or<br>
correspond to time (the same array is also copied to a point array named Time or<br>
TimeData (if Time exists in the input)).<br>
TimeData (if Time exists in the input)).<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
The input from which the selection is extracted.
The input from which the selection is extracted.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|}
|}
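
A short pvpython sketch; the Exodus file name is a placeholder for any reader whose output carries time-dependent field-data (global) variables.

<source lang="python">
# Plot Global Variables Over Time sketch.
from paraview.simple import *

reader = OpenDataFile('run.exo')   # hypothetical file with global (field-data) variables

over_time = PlotGlobalVariablesOverTime(Input=reader)
UpdatePipeline(proxy=over_time)    # output: 1D rectilinear grid with x coordinates = time
</source>
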




==Plot On Intersection Curves==
==Plot On Intersection Curves==




Extracts the edges in a 2D plane and plots them
Extracts the edges in a 2D plane and plots them


Extracts the surface and intersects it with a 2D plane.<br>
Extracts the surface and intersects it with a 2D plane.<br>
Plots the resulting polylines.<br>
Plots the resulting polylines.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Plot On Intersection Curves filter.
This property specifies the input to the Plot On Intersection Curves filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Slice Type'''<br>''(Slice Type)''
| '''Slice Type'''<br>''(Slice Type)''
|
|
This property sets the parameters of the slice function.
This property sets the parameters of the slice function.


| �
| �
|
|
The value must be set to one of the following: Plane, Box, Sphere.
The value must be set to one of the following: Plane, Box, Sphere.


|}
|}




==Plot On Sorted Lines==




{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Plot Edges filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.

|}




==Plot Over Line==


Sample data attributes at the points along a line. Probed lines will be displayed in a graph of the attributes.

The Plot Over Line filter samples the data set attributes of the current<br>
data set at the points along a line. The values of the point-centered variables<br>
along that line will be displayed in an XY Plot. This filter uses interpolation<br>
to determine the values at the selected point, whether or not it lies at an<br>
input point. The Probe filter operates on any type of data and produces<br>
polygonal output (a line).<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the dataset from which to obtain probe values.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet, vtkCompositeDataSet.

|-
| '''Pass Partial Arrays'''<br>''(PassPartialArrays)''
|
When dealing with composite datasets, partial arrays are common, i.e.,
data arrays that are not available in all of the blocks. By default,
this filter only passes those point and cell data arrays that are
available in all the blocks, i.e., partial arrays are removed. When
PassPartialArrays is turned on, this behavior is changed to take the
union of all arrays present; thus partial arrays are passed as well.
However, for composite dataset input, this filter still produces a
non-composite output. For all those locations in a block where a
particular data array is missing, this filter uses vtkMath::Nan() for
double and float arrays, and 0 for all other array types (int, char, etc.).

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Probe Type'''<br>''(Source)''
|
This property specifies the dataset whose geometry will be used in determining positions to probe.

|
|
The selected object must be the result of the following: sources (includes readers).


The value must be set to one of the following: HighResLineSource.

|}


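For orientation only, a minimal paraview.simple sketch of this filter; the line endpoints are set on the line-source sub-proxy held by the Source property, and exact property names may differ between ParaView versions:

<source lang="python">
from paraview.simple import *

# A small image-data volume with a point-centered scalar array to sample.
wavelet = Wavelet()

# Probe the point-centered attributes along a diagonal line through the volume.
plot = PlotOverLine(Input=wavelet)
plot.Source.Point1 = [-10.0, -10.0, -10.0]
plot.Source.Point2 = [10.0, 10.0, 10.0]
plot.UpdatePipeline()
</source>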


==Plot Selection Over Time==


Extracts selection over time and then plots it.

This filter extracts the selection over time, i.e., the cell and/or point<br>
variables at the selected cells/points are extracted over time.<br>
The output multi-block consists of 1D rectilinear grids where the x coordinate<br>
corresponds to time (the same array is also copied to a point array named<br>
Time or TimeData (if Time exists in the input)).<br>
If the selection input is a Location-based selection, then the point values are<br>
interpolated from the nearby cells, i.e., those of the cell in which the location<br>
lies.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
The input from which the selection is extracted.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet, vtkTable, vtkCompositeDataSet.

|-
| '''Selection'''<br>''(Selection)''
|
The input that provides the selection object.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkSelection.

|}




==Point Data to Cell Data==


Create cell attributes by averaging point attributes.

The Point Data to Cell Data filter averages the values of the point attributes of the points of a cell to compute cell attributes. This filter operates on any type of dataset, and the output dataset is the same type as the input.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Point Data to Cell Data filter.

|
|
Once set, the input dataset type cannot be changed.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|-
| '''Pass Point Data'''<br>''(PassPointData)''
|
The value of this property controls whether the input point data will be passed to the output. If set to 1, then the input point data is passed through to the output; otherwise, only generated cell data is placed into the output.

| 0
|
Only the values 0 and 1 are accepted.

|}


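A minimal sketch of scripted use, assuming the auto-generated paraview.simple name PointDatatoCellData (derived from the filter label); property names follow the table above:

<source lang="python">
from paraview.simple import *

# Wavelet() provides a point-centered scalar array (RTData) to average.
wavelet = Wavelet()

# Average the point attributes of each cell into cell attributes and also
# keep the original point data on the output (PassPointData = 1).
p2c = PointDatatoCellData(Input=wavelet)
p2c.PassPointData = 1
p2c.UpdatePipeline()
</source>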


==Principal Component Analysis==


Compute a statistical model of a dataset and/or assess the dataset with a statistical model.

This filter either computes a statistical model of a dataset or takes such a model as its second input. Then, the model (however it is obtained) may optionally be used to assess the input dataset.
<br>
This filter performs additional analysis above and beyond the multicorrelative filter. It computes the eigenvalues and eigenvectors of the covariance matrix from the multicorrelative filter. Data is then assessed by projecting the original tuples into a possibly lower-dimensional space.

<br>
Since the PCA filter uses the multicorrelative filter's analysis, it shares the same raw covariance table specified in the multicorrelative documentation. The second table in the multiblock dataset comprising the model output is an expanded version of the multicorrelative version.

<br>
As with the multicorrelative filter, the second model table contains the mean values, the upper-triangular portion of the symmetric covariance matrix, and the non-zero lower-triangular portion of the Cholesky decomposition of the covariance matrix. Below these entries are the eigenvalues of the covariance matrix (in the column labeled "Mean") and the eigenvectors (as row vectors) in an additional NxN matrix.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Attribute Mode'''<br>''(AttributeMode)''
|
Specify which type of field data the arrays will be drawn from.

| 0
|
Valid array names will be chosen from point and cell data.

|-
| '''Basis Energy'''<br>''(BasisEnergy)''
|
The minimum energy to use when determining the dimensionality of the new space into which the assessment will project tuples.

| 0.1
|
The value must be greater than or equal to 0 and less than or equal to 1.

|-
| '''Basis Scheme'''<br>''(BasisScheme)''
|
When reporting assessments, should the full eigenvector decomposition be used to project the original vector into the new space (Full basis), or should a fixed subset of the decomposition be used (Fixed-size basis), or should the projection be clipped to preserve at least some fixed "energy" (Fixed-energy basis)?


As an example, suppose the variables of interest were {A,B,C,D,E} and that the eigenvalues of the covariance matrix for these were {5,2,1.5,1,.5}. If the "Full basis" scheme is used, then all 5 components of the eigenvectors will be used to project each {A,B,C,D,E}-tuple in the original data into a new 5-component space.



If the "Fixed-size" scheme is used and the "Basis Size" property is set to 4, then only the first 4 eigenvector components will be used to project each {A,B,C,D,E}-tuple into the new space and that space will be of dimension 4, not 5.



If the "Fixed-energy basis" scheme is used and the "Basis Energy" property is set to 0.8, then only the first 3 eigenvector components will be used to project each {A,B,C,D,E}-tuple into the new space, which will be of dimension 3. The number 3 is chosen because 3 is the lowest N for which the sum of the first N eigenvalues divided by the sum of all eigenvalues is larger than the specified "Basis Energy" (i.e., (5+2+1.5)/10 = 0.85 > 0.8).


| 0
|
The value must be one of the following: Full basis (0), Fixed-size basis (1), Fixed-energy basis (2).

|-
| '''Basis Size'''<br>''(BasisSize)''
|
The maximum number of eigenvector components to use when projecting into the new space.

| 2
|
The value must be greater than or equal to 1.

|-
| '''Input'''<br>''(Input)''
|
The input to the filter. Arrays from this dataset will be used for computing statistics and/or assessed by a statistical model.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkStructuredGrid, vtkPolyData, vtkUnstructuredGrid, vtkTable, vtkGraph.

|-
| '''Model Input'''<br>''(ModelInput)''
|
A previously-calculated model with which to assess a separate dataset. This input is optional.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkTable, vtkMultiBlockDataSet.

|-
| '''Normalization Scheme'''<br>''(NormalizationScheme)''
|
Before the eigenvector decomposition of the covariance matrix takes place, you may normalize each (i,j) entry by sqrt( cov(i,i) * cov(j,j) ). This implies that the variance of each variable of interest should be of equal importance.

| 2
|
The value must be one of the following: No normalization (0), Normalize using covariances (3).

|-
| '''Variables of Interest'''<br>''(SelectArrays)''
|
Choose arrays whose entries will be used to form observations for statistical analysis.

|
|
An array of scalars is required.

|-
| '''Task'''<br>''(Task)''
|
Specify the task to be performed: modeling and/or assessment.
#  "Statistics of all the data," creates an output table (or tables) summarizing the '''entire''' input dataset;
#  "Model a subset of the data," creates an output table (or tables) summarizing a '''randomly-chosen subset''' of the input dataset;
#  "Assess the data with a model," adds attributes to the first input dataset using a model provided on the second input port; and
#  "Model and assess the same data," is really just operations 2 and 3 above applied to the same input dataset.  The model is first trained using a fraction of the input data and then the entire dataset is assessed using that model.

When the task includes creating a model (i.e., tasks 2 and 4), you may adjust the fraction of the input dataset used for training.  You should avoid using a large fraction of the input data for training as you will then not be able to detect overfitting.  The ''Training fraction'' setting will be ignored for tasks 1 and 3.

| 3
|
The value must be one of the following: Statistics of all the data (0), Model a subset of the data (1), Assess the data with a model (2), Model and assess the same data (3).

|-
| '''Training Fraction'''<br>''(TrainingFraction)''
|
Specify the fraction of values from the input dataset to be used for model fitting. The exact set of values is chosen at random from the dataset.

| 0.1
|
The value must be greater than or equal to 0 and less than or equal to 1.

|}


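The fixed-energy rule from the Basis Scheme description can be restated as a short NumPy sketch (illustration only, not part of the filter): the dimensionality kept is the smallest N whose leading eigenvalues account for more than the requested fraction of the total energy.

<source lang="python">
import numpy

# Eigenvalues of the covariance matrix from the worked example above.
eigenvalues = numpy.array([5.0, 2.0, 1.5, 1.0, 0.5])
basis_energy = 0.8

# Fraction of the total energy captured by the first N eigenvalues.
energy_fraction = numpy.cumsum(eigenvalues) / numpy.sum(eigenvalues)

# Smallest N exceeding the threshold: (5+2+1.5)/10 = 0.85 > 0.8, so N = 3.
n_kept = int(numpy.searchsorted(energy_fraction, basis_energy) + 1)
print(n_kept)  # prints 3
</source>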


==Probe Location==


Sample data attributes at the points in a point cloud.

The Probe filter samples the data set attributes of the current data set at the points in a point cloud. The Probe filter uses interpolation to determine the values at the selected point, whether or not it lies at an input point. The Probe filter operates on any type of data and produces polygonal output (a point cloud).<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the dataset from which to obtain probe values.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet, vtkCompositeDataSet.

|-
| '''Probe Type'''<br>''(Source)''
|
This property specifies the dataset whose geometry will be used in determining positions to probe.

|
|
The selected object must be the result of the following: sources (includes readers).


The value must be set to one of the following: FixedRadiusPointSource.

|}




==Process Id Scalars==


This filter uses colors to show how data is partitioned across processes.

The Process Id Scalars filter assigns a unique scalar value to each piece of the input according to which processor it resides on. This filter operates on any type of data when ParaView is run in parallel. It is useful for determining whether your data is load-balanced across the processors being used. The output data set type is the same as that of the input.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Process Id Scalars filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|-
| '''Random Mode'''<br>''(RandomMode)''
|
The value of this property determines whether to use random id values for the various pieces. If set to 1, the unique value per piece will be chosen at random; otherwise the unique value will match the id of the process.

| 0
|
Only the values 0 and 1 are accepted.

|}


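A brief sketch of scripted use (only meaningful when ParaView runs with several MPI processes); function and property names are assumed from the table above:

<source lang="python">
from paraview.simple import *

sphere = Sphere(ThetaResolution=64, PhiResolution=64)

# Tag each piece of the distributed dataset with the rank that owns it;
# coloring by the resulting scalar array shows the partitioning.
ids = ProcessIdScalars(Input=sphere)
ids.UpdatePipeline()
</source>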


==Programmable Filter==


Executes a user-supplied Python script on its input dataset to produce an output dataset.

This filter will execute a Python script to produce an output dataset.<br>
The filter keeps a copy of the Python script in Script, and creates<br>
Interpretor, a Python interpreter to run the script upon the first<br>
execution.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Copy Arrays'''<br>''(CopyArrays)''
|
If this property is set to true, all the cell and point arrays from the
first input are copied to the output.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''RequestInformation Script'''<br>''(InformationScript)''
|
This property is a Python script that is executed during the RequestInformation pipeline pass. Use this to provide information such as WHOLE_EXTENT to the pipeline downstream.

|
|
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input(s) to the programmable filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|-
| '''Output Data Set Type'''<br>''(OutputDataSetType)''
|
The value of this property determines the dataset type for the output of the programmable filter.

| 8
|
The value must be one of the following: Same as Input (8), vtkPolyData (0), vtkStructuredGrid (2), vtkRectilinearGrid (3), vtkUnstructuredGrid (4), vtkImageData (6), vtkUniformGrid (10), vtkMultiblockDataSet (13), vtkHierarchicalBoxDataSet (15), vtkTable (19).

|-
| '''Python Path'''<br>''(PythonPath)''
|
A semi-colon (;) separated list of directories to add to the Python library
search path.

|
|
|-
| '''Script'''<br>''(Script)''
|
This property contains the text of a Python program that the programmable filter runs.

|
|
|-
| '''RequestUpdateExtent Script'''<br>''(UpdateExtentScript)''
|
This property is a Python script that is executed during the RequestUpdateExtent pipeline pass. Use this to modify the update extent that your filter asks upstream for.

|
|
|}


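As a hedged example of what the Script property may contain, the snippet below shallow-copies the input and appends a point array, using only the VTK-level accessors available to the script (self refers to the underlying filter); the array name is arbitrary and Output Data Set Type is assumed to be "Same as Input":

<source lang="python">
# Example contents of the 'Script' property.
from paraview.vtk import vtkDoubleArray

pdi = self.GetInputDataObject(0, 0)   # first input
pdo = self.GetOutputDataObject(0)     # output
pdo.ShallowCopy(pdi)

# Append a point array holding each point's index (purely illustrative).
arr = vtkDoubleArray()
arr.SetName("PointIndex")
arr.SetNumberOfTuples(pdi.GetNumberOfPoints())
for i in range(pdi.GetNumberOfPoints()):
    arr.SetValue(i, float(i))
pdo.GetPointData().AddArray(arr)
</source>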


==Python Calculator==


This filter evaluates a Python expression.

This filter uses Python to calculate an expression.<br>
It depends heavily on the numpy and paraview.vtk modules.<br>
To use the parallel functions, mpi4py is also necessary. The expression<br>
is evaluated and the resulting scalar value or numpy array is added<br>
to the output as an array. See numpy and paraview.vtk documentation<br>
for the list of available functions.<br><br><br>
This filter tries to make it easy for the user to write expressions<br>
by defining certain variables. The filter tries to assign each array<br>
to a variable of the same name. If the name of the array is not a<br>
valid Python variable, it has to be accessed through a dictionary called<br>
arrays (e.g. arrays['array_name']). The points can be accessed using the<br>
points variable.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Array Association'''<br>''(ArrayAssociation)''
|
This property controls the association of the output array as well as
which arrays are defined as variables.

| 0
|
The value must be one of the following: Point Data (0), Cell Data (1).

|-
| '''Array Name'''<br>''(ArrayName)''
|
The name of the output array.

| result
|
|-
| '''Copy Arrays'''<br>''(CopyArrays)''
|
If this property is set to true, all the cell and point arrays from the
first input are copied to the output.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Expression'''<br>''(Expression)''
|
The Python expression evaluated during execution.

|
|
|-
| '''Input'''<br>''(Input)''
|
Set the input of the filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|}


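A minimal scripted sketch, using the documented points variable and plain numpy operations in the Expression; property names follow the table above but may vary slightly between ParaView versions:

<source lang="python">
from paraview.simple import *

sphere = Sphere()

# Add a point array 'dist' holding each point's distance from the origin.
calc = PythonCalculator(Input=sphere)
calc.Expression = "sqrt(points[:,0]**2 + points[:,1]**2 + points[:,2]**2)"
calc.ArrayName = "dist"
calc.ArrayAssociation = "Point Data"
calc.UpdatePipeline()
</source>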


==Quadric Clustering==


This filter is the same filter used to generate level of detail for ParaView. It uses a structured grid of bins and merges all points contained in each bin.

The Quadric Clustering filter produces a reduced-resolution polygonal approximation of the input polygonal dataset. This filter is the one used by ParaView for computing LODs. It uses spatial binning to reduce the number of points in the data set; points that lie within the same spatial bin are collapsed into one representative point.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Copy Cell Data'''<br>''(CopyCellData)''
|
If this property is set to 1, the cell data from the input will be copied to the output.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Quadric Clustering filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.

|-
| '''Number of Dimensions'''<br>''(NumberOfDivisions)''
|
This property specifies the number of bins along the X, Y, and Z axes of the data set.

| 50 50 50
|
|-
| '''Use Feature Edges'''<br>''(UseFeatureEdges)''
|
If this property is set to 1, feature edge quadrics will be used to maintain the boundary edges along processor divisions.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Use Feature Points'''<br>''(UseFeaturePoints)''
|
If this property is set to 1, feature point quadrics will be used to maintain the boundary points along processor divisions.

| 0
|
Only the values 0 and 1 are accepted.

|-
| '''Use Input Points'''<br>''(UseInputPoints)''
|
If the value of this property is set to 1, the representative point for each bin is selected from one of the input points that lies in that bin; the input point that produces the least error is chosen. If the value of this property is 0, the location of the representative point is calculated to produce the least error possible for that bin, but the point will most likely not be one of the input points.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Use Internal Triangles'''<br>''(UseInternalTriangles)''
|
If this property is set to 1, triangles completely contained in a spatial bin will be included in the computation of the bin's quadrics. When this property is set to 0, the filter operates faster, but the resulting surface may not be as well-behaved.

| 0
|
Only the values 0 and 1 are accepted.

|}


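A short sketch of scripted use with the default 50x50x50 binning (the bin counts can be lowered through the "Number of Dimensions" property in the table above for stronger decimation); function and property names are assumed to follow the filter labels:

<source lang="python">
from paraview.simple import *

# A finely tessellated sphere to decimate.
sphere = Sphere(ThetaResolution=128, PhiResolution=128)

# Merge all points that fall into the same spatial bin, keeping cell data.
decimated = QuadricClustering(Input=sphere, CopyCellData=1)
decimated.UpdatePipeline()
</source>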


==Random Vectors==


This filter creates a new 3-component point data array and sets it as the default vector array. It uses a random number generator to create values.

The Random Vectors filter generates a point-centered array of random vectors. It uses a random number generator to determine the components of the vectors. This filter operates on any type of data set, and the output data set will be of the same type as the input.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Random Vectors filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|-
| '''Maximum Speed'''<br>''(MaximumSpeed)''
|
This property specifies the maximum length of the random point vectors generated.

| 1
|
The value must be greater than or equal to 0.

|-
| '''Minimum Speed'''<br>''(MinimumSpeed)''
|
This property specifies the minimum length of the random point vectors generated.

| 0
|
The value must be greater than or equal to 0.

|}




==Rectilinear Grid Connectivity==


Parallel fragments extraction and attributes integration on rectilinear grids.

Extracts material fragments from multi-block vtkRectilinearGrid datasets<br>
based on the selected volume fraction array(s) and a fraction isovalue and<br>
integrates the associated attributes.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Double Volume Arrays'''<br>''(AddDoubleVolumeArrayName)''
|
This property specifies the name(s) of the volume fraction array(s) for generating parts.

|
|
An array of scalars is required.

|-
| '''Float Volume Arrays'''<br>''(AddFloatVolumeArrayName)''
|
This property specifies the name(s) of the volume fraction array(s) for generating parts.

|
|
An array of scalars is required.

|-
| '''Unsigned Character Volume Arrays'''<br>''(AddUnsignedCharVolumeArrayName)''
|
This property specifies the name(s) of the volume fraction array(s) for generating parts.

|
|
An array of scalars is required.

|-
| '''Input'''<br>''(Input)''
|
|
|
The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a cell array with 1 component.


The selected dataset must be one of the following types (or a subclass of one of them): vtkRectilinearGrid, vtkCompositeDataSet.

|-
| '''Volume Fraction Value'''<br>''(VolumeFractionSurfaceValue)''
|
The value of this property is the volume fraction value for the surface.

| 0.1
|
The value must be greater than or equal to 0 and less than or equal to 1.

|}




==Reflect==


This filter takes the union of the input and its reflection over an axis-aligned plane.

The Reflect filter reflects the input dataset across the specified plane. This filter operates on any type of data set and produces an unstructured grid output.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Center'''<br>''(Center)''
|
If the value of the Plane property is X, Y, or Z, then the value of this property specifies the center of the reflection plane.

| 0
|
|-
| '''Copy Input'''<br>''(CopyInput)''
|
If this property is set to 1, the output will contain the union of the input dataset and its reflection. Otherwise the output will contain only the reflection of the input data.

| 1
|
Only the values 0 and 1 are accepted.

|-
| '''Input'''<br>''(Input)''
|
This property specifies the input to the Reflect filter.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|-
| '''Plane'''<br>''(Plane)''
|
The value of this property determines which plane to reflect across. If the value is X, Y, or Z, the value of the Center property determines where the plane is placed along the specified axis. The other six options (X Min, X Max, etc.) place the reflection plane at the specified face of the bounding box of the input dataset.

| 0
|
The value must be one of the following: X Min (0), Y Min (1), Z Min (2), X Max (3), Y Max (4), Z Max (5), X (6), Y (7), Z (8).

|}


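For illustration, a minimal paraview.simple sketch that mirrors a dataset across the plane x = 0 and keeps the original geometry (property names as in the table above):

<source lang="python">
from paraview.simple import *

cone = Cone()

# Reflect across the plane x = Center and keep the input, so the output is
# the union of the cone and its mirror image.
reflected = Reflect(Input=cone)
reflected.Plane = 'X'
reflected.Center = 0.0
reflected.CopyInput = 1
reflected.UpdatePipeline()
</source>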


==Resample With Dataset==


Sample data attributes at the points of a dataset.

Probe is a filter that computes point attributes at specified point positions. The filter has two inputs: the Input and Source. The Input geometric structure is passed through the filter. The point attributes are computed at the Input point positions by interpolating into the source data. For example, we can compute data values on a plane (plane specified as Input) from a volume (Source). The cell data of the source data is copied to the output based on which source cell each input point lies in. If an array of the same name exists both in the source's point and cell data, only the one from the point data is probed.<br>

{| class="PropertiesTable" border="1" cellpadding="5"
|-
| '''Property'''
| '''Description'''
| '''Default Value(s)'''
| '''Restrictions'''
|-
| '''Input'''<br>''(Input)''
|
This property specifies the dataset from which to obtain probe values.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet, vtkCompositeDataSet.

|-
| '''Source'''<br>''(Source)''
|
This property specifies the dataset whose geometry will be used in determining positions to probe.

|
|
The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.

|}


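Following the plane-from-volume example in the description above, a minimal sketch (the Input/Source roles are taken from that description; they were renamed in later ParaView versions):

<source lang="python">
from paraview.simple import *

# Geometry to sample onto (Input) and the volume supplying the attribute
# values (Source), matching the plane-from-volume example above.
plane = Plane(Origin=[-10, -10, 0], Point1=[10, -10, 0], Point2=[-10, 10, 0])
volume = Wavelet()

resampled = ResampleWithDataset(Input=plane, Source=volume)
resampled.UpdatePipeline()
</source>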


==Ribbon==
==Ribbon==




This filter generates ribbon surface from lines.  It is useful for displaying streamlines.
This filter generates ribbon surface from lines.  It is useful for displaying streamlines.


The Ribbon filter creates ribbons from the lines in the input data set. This filter is useful for visualizing streamlines. Both the input and output of this filter are polygonal data. The input data set must also have at least one point-centered vector array.<br>
The Ribbon filter creates ribbons from the lines in the input data set. This filter is useful for visualizing streamlines. Both the input and output of this filter are polygonal data. The input data set must also have at least one point-centered vector array.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Angle'''<br>''(Angle)''
| '''Angle'''<br>''(Angle)''
|
|
The value of this property specifies the offset angle (in degrees) of the ribbon from the line normal.
The value of this property specifies the offset angle (in degrees) of the ribbon from the line normal.


| 0
| 0
|
|
The value must be greater than or equal to 0 and less than or equal to 360.
The value must be greater than or equal to 0 and less than or equal to 360.


|-
|-
| '''Default Normal'''<br>''(DefaultNormal)''
| '''Default Normal'''<br>''(DefaultNormal)''
|
|
The value of this property specifies the normal to use when the UseDefaultNormal property is set to 1 or the input contains no vector array (SelectInputVectors property).
The value of this property specifies the normal to use when the UseDefaultNormal property is set to 1 or the input contains no vector array (SelectInputVectors property).


| 0 0 1
| 0 0 1
| �
| �
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Ribbon filter.
This property specifies the input to the Ribbon filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


|-
|-
| '''Scalars'''<br>''(SelectInputScalars)''
| '''Scalars'''<br>''(SelectInputScalars)''
|
|
The value of this property indicates the name of the input scalar array used by this filter. The width of the ribbons will be varied based on the values in the specified array if the value of the Width property is 1.
The value of this property indicates the name of the input scalar array used by this filter. The width of the ribbons will be varied based on the values in the specified array if the value of the Width property is 1.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Vectors'''<br>''(SelectInputVectors)''
| '''Vectors'''<br>''(SelectInputVectors)''
|
|
The value of this property indicates the name of the input vector array used by this filter. If the UseDefaultNormal property is set to 0, the normal vectors for the ribbons come from the specified vector array.
The value of this property indicates the name of the input vector array used by this filter. If the UseDefaultNormal property is set to 0, the normal vectors for the ribbons come from the specified vector array.


| 1
| 1
|
|
An array of vectors is required.
An array of vectors is required.


|-
|-
| '''Use Default Normal'''<br>''(UseDefaultNormal)''
| '''Use Default Normal'''<br>''(UseDefaultNormal)''
|
|
If this property is set to 0, and the input contains no vector array, then default ribbon normals will be generated (DefaultNormal property); if a vector array has been set (SelectInputVectors property), the ribbon normals will be set from the specified array. If this property is set to 1, the default normal (DefaultNormal property) will be used, regardless of whether the SelectInputVectors property has been set.
If this property is set to 0, and the input contains no vector array, then default ribbon normals will be generated (DefaultNormal property); if a vector array has been set (SelectInputVectors property), the ribbon normals will be set from the specified array. If this property is set to 1, the default normal (DefaultNormal property) will be used, regardless of whether the SelectInputVectors property has been set.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Vary Width'''<br>''(VaryWidth)''
| '''Vary Width'''<br>''(VaryWidth)''
|
|
If this property is set to 1, the ribbon width will be scaled according to the scalar array specified in the SelectInputScalars property.
If this property is set to 1, the ribbon width will be scaled according to the scalar array specified in the SelectInputScalars property.
Toggle the variation of ribbon width with scalar value.
Toggle the variation of ribbon width with scalar value.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Width'''<br>''(Width)''
| '''Width'''<br>''(Width)''
|
|
If the VaryWidth property is set to 1, the value of this property is the minimum ribbon width. If the VaryWidth property is set to 0, the value of this property is half the width of the ribbon.
If the VaryWidth property is set to 1, the value of this property is the minimum ribbon width. If the VaryWidth property is set to 0, the value of this property is half the width of the ribbon.


| 1
| 1
|
|
The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.01.
The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.01.


|}
|}
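
For example, the Ribbon filter can be applied from a pvpython script. This is a minimal sketch assuming the paraview.simple module; the filter and property names follow ParaView's Python naming conventions and may differ between versions.

<source lang="python">
from paraview.simple import *

# Any polygonal source producing lines will do; stream tracer output is the
# typical input. A simple Line source is used here, so the default normal is
# enabled because the source carries no point-centered vector array.
line = Line(Point1=[0, 0, 0], Point2=[1, 1, 1], Resolution=50)

ribbon = Ribbon(Input=line)
ribbon.Width = 0.05              # half-width, since Vary Width is off
ribbon.UseDefaultNormal = 1
ribbon.DefaultNormal = [0, 0, 1]

Show(ribbon)
Render()
</source>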




==Rotational Extrusion==
==Rotational Extrusion==




This filter generates a swept surface by sweeping the input along a circular path about the Z axis.
This filter generates a swept surface by sweeping the input along a circular path about the Z axis.


The Rotational Extrusion filter forms a surface by rotating the input about the Z axis. This filter is intended to operate on 2D polygonal data. It produces polygonal output.<br>
The Rotational Extrusion filter forms a surface by rotating the input about the Z axis. This filter is intended to operate on 2D polygonal data. It produces polygonal output.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Angle'''<br>''(Angle)''
| '''Angle'''<br>''(Angle)''
|
|
This property specifies the angle of rotation in degrees. The surface is swept from 0 to the value of this property.
This property specifies the angle of rotation in degrees. The surface is swept from 0 to the value of this property.


| 360
| 360
| �
| �
|-
|-
| '''Capping'''<br>''(Capping)''
| '''Capping'''<br>''(Capping)''
|
|
If this property is set to 1, the open ends of the swept surface will be capped with a copy of the input dataset. This works properly if the input is a 2D surface composed of filled polygons. If the input dataset is a closed solid (e.g., a sphere), then either two copies of the dataset will be drawn or no surface will be drawn. No surface is drawn if either this property is set to 0 or if the two surfaces would occupy exactly the same 3D space (i.e., the Angle property's value is a multiple of 360, and the values of the Translation and DeltaRadius properties are 0).
If this property is set to 1, the open ends of the swept surface will be capped with a copy of the input dataset. This works properly if the input is a 2D surface composed of filled polygons. If the input dataset is a closed solid (e.g., a sphere), then either two copies of the dataset will be drawn or no surface will be drawn. No surface is drawn if either this property is set to 0 or if the two surfaces would occupy exactly the same 3D space (i.e., the Angle property's value is a multiple of 360, and the values of the Translation and DeltaRadius properties are 0).


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Delta Radius'''<br>''(DeltaRadius)''
| '''Delta Radius'''<br>''(DeltaRadius)''
|
|
The value of this property specifies the change in radius during the sweep process.
The value of this property specifies the change in radius during the sweep process.


| 0
| 0
| �
| �
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Rotational Extrusion filter.
This property specifies the input to the Rotational Extrusion filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


|-
|-
| '''Resolution'''<br>''(Resolution)''
| '''Resolution'''<br>''(Resolution)''
|
|
The value of this property controls the number of intermediate node points used in performing the sweep (rotating from 0 degrees to the value specified by the Angle property).
The value of this property controls the number of intermediate node points used in performing the sweep (rotating from 0 degrees to the value specified by the Angle property).


| 12
| 12
|
|
The value must be greater than or equal to 1.
The value must be greater than or equal to 1.


|-
|-
| '''Translation'''<br>''(Translation)''
| '''Translation'''<br>''(Translation)''
|
|
The value of this property specifies the total amount of translation along the Z axis during the sweep process. Specifying a non-zero value for this property allows you to create a corkscrew (value of DeltaRadius > 0) or spring effect.
The value of this property specifies the total amount of translation along the Z axis during the sweep process. Specifying a non-zero value for this property allows you to create a corkscrew (value of DeltaRadius > 0) or spring effect.


| 0
| 0
| �
| �
|}
|}
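
A short pvpython sketch of this filter follows, assuming the paraview.simple module; the property names mirror the table above and may differ between ParaView versions.

<source lang="python">
from paraview.simple import *

# Sweep a 2D polygonal profile about the Z axis to create a surface of
# revolution with a slight spring effect.
profile = Line(Point1=[1, 0, 0], Point2=[2, 0, 1], Resolution=20)

extrude = RotationalExtrusion(Input=profile)
extrude.Resolution = 64     # intermediate points used during the sweep
extrude.Angle = 270         # sweep from 0 to 270 degrees
extrude.Translation = 0.5   # translate along Z while sweeping
extrude.Capping = 0

Show(extrude)
Render()
</source>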




==Scatter Plot==
==Scatter Plot==




Creates a scatter plot from a dataset.
Creates a scatter plot from a dataset.


This filter creates a scatter plot from a dataset. In point data mode,<br>
This filter creates a scatter plot from a dataset. In point data mode,<br>
it uses the X point coordinates as the default X array. All other arrays<br>
it uses the X point coordinates as the default X array. All other arrays<br>
are passed to the output and can be used in the scatter plot. In cell<br>
are passed to the output and can be used in the scatter plot. In cell<br>
data mode, the first single component array is used as the default X<br>
data mode, the first single component array is used as the default X<br>
array.<br>
array.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the filter.
This property specifies the input to the filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|}
|}




==Shrink==
==Shrink==




This filter shrinks each input cell so they pull away from their neighbors.
This filter shrinks each input cell so they pull away from their neighbors.


The Shrink filter causes the individual cells of a dataset to break apart from each other by moving each cell's points toward the centroid of the cell. (The centroid of a cell is the average position of its points.) This filter operates on any type of dataset and produces unstructured grid output.<br>
The Shrink filter causes the individual cells of a dataset to break apart from each other by moving each cell's points toward the centroid of the cell. (The centroid of a cell is the average position of its points.) This filter operates on any type of dataset and produces unstructured grid output.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Shrink filter.
This property specifies the input to the Shrink filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Shrink Factor'''<br>''(ShrinkFactor)''
| '''Shrink Factor'''<br>''(ShrinkFactor)''
|
|
The value of this property determines how far the points will move. A value of 0 positions the points at the centroid of the cell; a value of 1 leaves them at their original positions.
The value of this property determines how far the points will move. A value of 0 positions the points at the centroid of the cell; a value of 1 leaves them at their original positions.


| 0.5
| 0.5
|
|
The value must be greater than or equal to 0 and less than or equal to 1.
The value must be greater than or equal to 0 and less than or equal to 1.


|}
|}
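
As an illustration, a minimal pvpython sketch (assuming the paraview.simple module) that shrinks the cells of a small test volume:

<source lang="python">
from paraview.simple import *

# Pull every cell of a small volume toward its centroid so the cells
# separate visually.
wavelet = Wavelet()

shrink = Shrink(Input=wavelet)
shrink.ShrinkFactor = 0.25   # 0 collapses cells to points, 1 leaves them unchanged

Show(shrink)
Render()
</source>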




==Slice==
==Slice==




This filter slices a data set with a plane. Slicing is similar to a contour. It creates surfaces from volumes and lines from surfaces.
This filter slices a data set with a plane. Slicing is similar to a contour. It creates surfaces from volumes and lines from surfaces.


This filter extracts the portion of the input dataset that lies along the specified plane. The Slice filter takes any type of dataset as input. The output of this filter is polygonal data.<br>
This filter extracts the portion of the input dataset that lies along the specified plane. The Slice filter takes any type of dataset as input. The output of this filter is polygonal data.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Slice Offset Values'''<br>''(ContourValues)''
| '''Slice Offset Values'''<br>''(ContourValues)''
|
|
The values in this property specify a list of current offset values. This can be used to create multiple slices with different centers. Each entry represents a new slice with its center shifted by the offset value.
The values in this property specify a list of current offset values. This can be used to create multiple slices with different centers. Each entry represents a new slice with its center shifted by the offset value.


| �
| �
|
|
Each offset value must lie within the range from minus to plus the length of the dataset's diagonal.
Each offset value must lie within the range from minus to plus the length of the dataset's diagonal.


|-
|-
| '''Slice Type'''<br>''(CutFunction)''
| '''Slice Type'''<br>''(CutFunction)''
|
|
This property sets the parameters of the slice function.
This property sets the parameters of the slice function.


| �
| �
|
|
The value must be set to one of the following: Plane, Box, Sphere.
The value must be set to one of the following: Plane, Box, Sphere.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Slice filter.
This property specifies the input to the Slice filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|}
|}
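
For example, a plane slice with two additional offset slices can be set up from pvpython. This is a minimal sketch assuming the paraview.simple module; property names follow the table above.

<source lang="python">
from paraview.simple import *

# Slice a volume with a plane and add two parallel slices via offsets.
wavelet = Wavelet()   # image data with extent -10..10 in each direction

slc = Slice(Input=wavelet)
slc.SliceType = 'Plane'
slc.SliceType.Origin = [0.0, 0.0, 0.0]
slc.SliceType.Normal = [0.0, 0.0, 1.0]
slc.SliceOffsetValues = [-5.0, 0.0, 5.0]

Show(slc)
Render()
</source>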




==Smooth==
==Smooth==




This filter smooths a polygonal surface by iteratively moving points toward their neighbors.
This filter smooths a polygonal surface by iteratively moving points toward their neighbors.


The Smooth filter operates on a polygonal data set by iteratively adjusting the position of the points using Laplacian smoothing. (Because this filter only adjusts point positions, the output data set is also polygonal.) This results in better-shaped cells and more evenly distributed points.<br><br><br>
The Smooth filter operates on a polygonal data set by iteratively adjusting the position of the points using Laplacian smoothing. (Because this filter only adjusts point positions, the output data set is also polygonal.) This results in better-shaped cells and more evenly distributed points.<br><br><br>
The Convergence slider limits the maximum motion of any point. It is expressed as a fraction of the length of the diagonal of the bounding box of the data set. If the maximum point motion during a smoothing iteration is less than the Convergence value, the smoothing operation terminates.<br>
The Convergence slider limits the maximum motion of any point. It is expressed as a fraction of the length of the diagonal of the bounding box of the data set. If the maximum point motion during a smoothing iteration is less than the Convergence value, the smoothing operation terminates.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Convergence'''<br>''(Convergence)''
| '''Convergence'''<br>''(Convergence)''
|
|
The value of this property limits the maximum motion of any point. It is expressed as a fraction of the length of the diagonal of the bounding box of the input dataset. If the maximum point motion during a smoothing iteration is less than the value of this property, the smoothing operation terminates.
The value of this property limits the maximum motion of any point. It is expressed as a fraction of the length of the diagonal of the bounding box of the input dataset. If the maximum point motion during a smoothing iteration is less than the value of this property, the smoothing operation terminates.


| 0
| 0
|
|
The value must be greater than or equal to 0 and less than or equal to 1.
The value must be greater than or equal to 0 and less than or equal to 1.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Smooth filter.
This property specifies the input to the Smooth filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


|-
|-
| '''Number of Iterations'''<br>''(NumberOfIterations)''
| '''Number of Iterations'''<br>''(NumberOfIterations)''
|
|
This property sets the maximum number of smoothing iterations to perform. More iterations produce better smoothing.
This property sets the maximum number of smoothing iterations to perform. More iterations produce better smoothing.


| 20
| 20
|
|
The value must be greater than or equal to 0.
The value must be greater than or equal to 0.


|}
|}
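
A minimal pvpython sketch of Laplacian smoothing, assuming the paraview.simple module; the Python spelling of the "Number of Iterations" property is an assumption and may vary between versions.

<source lang="python">
from paraview.simple import *

# Laplacian-smooth a coarse polygonal sphere.
sphere = Sphere(ThetaResolution=16, PhiResolution=16)

smooth = Smooth(Input=sphere)
smooth.NumberofIterations = 50   # assumed Python spelling of "Number of Iterations"
smooth.Convergence = 0.0         # 0 means all iterations are performed

Show(smooth)
Render()
</source>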




==Stream Tracer==
==Stream Tracer==




Integrate streamlines in a vector field.
Integrate streamlines in a vector field.


The Stream Tracer filter generates streamlines in a vector field from a collection of seed points. Production of streamlines terminates if a streamline crosses the exterior boundary of the input dataset. Other reasons for termination are listed for the MaximumNumberOfSteps, TerminalSpeed, and MaximumPropagation properties. This filter operates on any type of dataset, provided it has point-centered vectors. The output is polygonal data containing polylines.<br>
The Stream Tracer filter generates streamlines in a vector field from a collection of seed points. Production of streamlines terminates if a streamline crosses the exterior boundary of the input dataset. Other reasons for termination are listed for the MaximumNumberOfSteps, TerminalSpeed, and MaximumPropagation properties. This filter operates on any type of dataset, provided it has point-centered vectors. The output is polygonal data containing polylines.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Initial Step Length'''<br>''(InitialIntegrationStep)''
| '''Initial Step Length'''<br>''(InitialIntegrationStep)''
|
|
This property specifies the initial integration step size. For non-adaptive integrators (Runge-Kutta 2 and Runge-Kutta 4), it is fixed (always equal to this initial value) throughout the integration. For an adaptive integrator (Runge-Kutta 4-5), the actual step size varies such that the numerical error is less than a specified threshold.
This property specifies the initial integration step size. For non-adaptive integrators (Runge-Kutta 2 and Runge-Kutta 4), it is fixed (always equal to this initial value) throughout the integration. For an adaptive integrator (Runge-Kutta 4-5), the actual step size varies such that the numerical error is less than a specified threshold.


| 0.2
| 0.2
| �
| �
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Stream Tracer filter.
This property specifies the input to the Stream Tracer filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point array with 3 components.
The dataset must contain a point array with 3 components.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Integration Direction'''<br>''(IntegrationDirection)''
| '''Integration Direction'''<br>''(IntegrationDirection)''
|
|
This property determines in which direction(s) a streamline is generated.
This property determines in which direction(s) a streamline is generated.


| 2
| 2
|
|
The value must be one of the following: FORWARD (0), BACKWARD (1), BOTH (2).
The value must be one of the following: FORWARD (0), BACKWARD (1), BOTH (2).


|-
|-
| '''Integration Step Unit'''<br>''(IntegrationStepUnit)''
| '''Integration Step Unit'''<br>''(IntegrationStepUnit)''
|
|
This property specifies the unit for Minimum/Initial/Maximum integration step size. The Length unit refers to the arc length that a particle travels/advects within a single step. The Cell Length unit represents the step size as a number of cells.
This property specifies the unit for Minimum/Initial/Maximum integration step size. The Length unit refers to the arc length that a particle travels/advects within a single step. The Cell Length unit represents the step size as a number of cells.


| 2
| 2
|
|
The value must be one of the following: Length (1), Cell Length (2).
The value must be one of the following: Length (1), Cell Length (2).


|-
|-
| '''Integrator Type'''<br>''(IntegratorType)''
| '''Integrator Type'''<br>''(IntegratorType)''
|
|
This property determines which integrator (with increasing accuracy) to use for creating streamlines.
This property determines which integrator (with increasing accuracy) to use for creating streamlines.


| 2
| 2
|
|
The value must be one of the following: Runge-Kutta 2 (0), Runge-Kutta 4 (1), Runge-Kutta 4-5 (2).
The value must be one of the following: Runge-Kutta 2 (0), Runge-Kutta 4 (1), Runge-Kutta 4-5 (2).


|-
|-
| '''Interpolator Type'''<br>''(InterpolatorType)''
| '''Interpolator Type'''<br>''(InterpolatorType)''
|
|
This property determines which interpolator to use for evaluating the velocity vector field. The first is faster though the second is more robust in locating cells during streamline integration.
This property determines which interpolator to use for evaluating the velocity vector field. The first is faster though the second is more robust in locating cells during streamline integration.


| 0
| 0
|
|
The value must be one of the following: Interpolator with Point Locator (0), Interpolator with Cell Locator (1).
The value must be one of the following: Interpolator with Point Locator (0), Interpolator with Cell Locator (1).


|-
|-
| '''Maximum Error'''<br>''(MaximumError)''
| '''Maximum Error'''<br>''(MaximumError)''
|
|
This property specifies the maximum error (for Runge-Kutta 4-5) tolerated throughout streamline integration. The Runge-Kutta 4-5 integrator tries to adjust the step size such that the estimated error is less than this threshold.
This property specifies the maximum error (for Runge-Kutta 4-5) tolerated throughout streamline integration. The Runge-Kutta 4-5 integrator tries to adjust the step size such that the estimated error is less than this threshold.


| 1e-06
| 1e-06
| �
| �
|-
|-
| '''Maximum Step Length'''<br>''(MaximumIntegrationStep)''
| '''Maximum Step Length'''<br>''(MaximumIntegrationStep)''
|
|
When using the Runge-Kutta 4-5 integrator, this property specifies the maximum integration step size.
When using the Runge-Kutta 4-5 integrator, this property specifies the maximum integration step size.


| 0.5
| 0.5
| �
| �
|-
|-
| '''Maximum Steps'''<br>''(MaximumNumberOfSteps)''
| '''Maximum Steps'''<br>''(MaximumNumberOfSteps)''
|
|
This property specifies the maximum number of steps, beyond which streamline integration is terminated.
This property specifies the maximum number of steps, beyond which streamline integration is terminated.


| 2000
| 2000
| �
| �
|-
|-
| '''Maximum Streamline Length'''<br>''(MaximumPropagation)''
| '''Maximum Streamline Length'''<br>''(MaximumPropagation)''
|
|
This property specifies the maximum streamline length (i.e., physical arc length), beyond which line integration is terminated.
This property specifies the maximum streamline length (i.e., physical arc length), beyond which line integration is terminated.


| 1
| 1
|
|
The value must be less than the largest dimension of the dataset multiplied by a scale factor of 1.
The value must be less than the largest dimension of the dataset multiplied by a scale factor of 1.


|-
|-
| '''Minimum Step Length'''<br>''(MinimumIntegrationStep)''
| '''Minimum Step Length'''<br>''(MinimumIntegrationStep)''
|
|
When using the Runge-Kutta 4-5 integrator, this property specifies the minimum integration step size.
When using the Runge-Kutta 4-5 integrator, this property specifies the minimum integration step size.


| 0.01
| 0.01
| �
| �
|-
|-
| '''Vectors'''<br>''(SelectInputVectors)''
| '''Vectors'''<br>''(SelectInputVectors)''
|
|
This property contains the name of the vector array from which to generate streamlines.
This property contains the name of the vector array from which to generate streamlines.


| �
| �
|
|
An array of vectors is required.
An array of vectors is required.


|-
|-
| '''Seed Type'''<br>''(Source)''
| '''Seed Type'''<br>''(Source)''
|
|
The value of this property determines how the seeds for the streamlines will be generated.
The value of this property determines how the seeds for the streamlines will be generated.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers).
The selected object must be the result of the following: sources (includes readers).




The value must be set to one of the following: PointSource, HighResLineSource.
The value must be set to one of the following: PointSource, HighResLineSource.


|-
|-
| '''Terminal Speed'''<br>''(TerminalSpeed)''
| '''Terminal Speed'''<br>''(TerminalSpeed)''
|
|
This property specifies the terminal speed, below which particle advection/integration is terminated.
This property specifies the terminal speed, below which particle advection/integration is terminated.


| 1e-12
| 1e-12
| �
| �
|}
|}
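
The sketch below shows how streamlines might be seeded from a point cloud in pvpython, assuming the paraview.simple module. The file name 'flow.vtk' and the point vector array 'velocity' are hypothetical placeholders, and the property names follow ParaView's Python naming conventions.

<source lang="python">
from paraview.simple import *

# 'flow.vtk' and its point vector array 'velocity' stand in for any dataset
# with point-centered vectors.
reader = OpenDataFile('flow.vtk')

tracer = StreamTracer(Input=reader, SeedType='Point Source')
tracer.Vectors = ['POINTS', 'velocity']
tracer.IntegrationDirection = 'BOTH'
tracer.IntegratorType = 'Runge-Kutta 4-5'
tracer.SeedType.Center = [0.0, 0.0, 0.0]
tracer.SeedType.NumberOfPoints = 100

Show(tracer)
Render()
</source>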




==Stream Tracer With Custom Source==
==Stream Tracer With Custom Source==




Integrate streamlines in a vector field.
Integrate streamlines in a vector field.


The Stream Tracer filter generates streamlines in a vector field from a collection of seed points. Production of streamlines terminates if a streamline crosses the exterior boundary of the input dataset. Other reasons for termination are listed for the MaximumNumberOfSteps, TerminalSpeed, and MaximumPropagation properties. This filter operates on any type of dataset, provided it has point-centered vectors. The output is polygonal data containing polylines. This filter takes a Source input that provides the seed points.<br>
The Stream Tracer filter generates streamlines in a vector field from a collection of seed points. Production of streamlines terminates if a streamline crosses the exterior boundary of the input dataset. Other reasons for termination are listed for the MaximumNumberOfSteps, TerminalSpeed, and MaximumPropagation properties. This filter operates on any type of dataset, provided it has point-centered vectors. The output is polygonal data containing polylines. This filter takes a Source input that provides the seed points.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Initial Step Length'''<br>''(InitialIntegrationStep)''
| '''Initial Step Length'''<br>''(InitialIntegrationStep)''
|
|
This property specifies the initial integration step size. For non-adaptive integrators (Runge-Kutta 2 and Runge-Kutta 4), it is fixed (always equal to this initial value) throughout the integration. For an adaptive integrator (Runge-Kutta 4-5), the actual step size varies such that the numerical error is less than a specified threshold.
This property specifies the initial integration step size. For non-adaptive integrators (Runge-Kutta 2 and Runge-Kutta 4), it is fixed (always equal to this initial value) throughout the integration. For an adaptive integrator (Runge-Kutta 4-5), the actual step size varies such that the numerical error is less than a specified threshold.


| 0.2
| 0.2
| �
| �
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Stream Tracer filter.
This property specifies the input to the Stream Tracer filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point array with 3 components.
The dataset must contain a point array with 3 components.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Integration Direction'''<br>''(IntegrationDirection)''
| '''Integration Direction'''<br>''(IntegrationDirection)''
|
|
This property determines in which direction(s) a streamline is generated.
This property determines in which direction(s) a streamline is generated.


| 2
| 2
|
|
The value must be one of the following: FORWARD (0), BACKWARD (1), BOTH (2).
The value must be one of the following: FORWARD (0), BACKWARD (1), BOTH (2).


|-
|-
| '''Integration Step Unit'''<br>''(IntegrationStepUnit)''
| '''Integration Step Unit'''<br>''(IntegrationStepUnit)''
|
|
This property specifies the unit for Minimum/Initial/Maximum integration step size. The Length unit refers to the arc length that a particle travels/advects within a single step. The Cell Length unit represents the step size as a number of cells.
This property specifies the unit for Minimum/Initial/Maximum integration step size. The Length unit refers to the arc length that a particle travels/advects within a single step. The Cell Length unit represents the step size as a number of cells.


| 2
| 2
|
|
The value must be one of the following: Length (1), Cell Length (2).
The value must be one of the following: Length (1), Cell Length (2).


|-
|-
| '''Integrator Type'''<br>''(IntegratorType)''
| '''Integrator Type'''<br>''(IntegratorType)''
|
|
This property determines which integrator (with increasing accuracy) to use for creating streamlines.
This property determines which integrator (with increasing accuracy) to use for creating streamlines.


| 2
| 2
|
|
The value must be one of the following: Runge-Kutta 2 (0), Runge-Kutta 4 (1), Runge-Kutta 4-5 (2).
The value must be one of the following: Runge-Kutta 2 (0), Runge-Kutta 4 (1), Runge-Kutta 4-5 (2).


|-
|-
| '''Maximum Error'''<br>''(MaximumError)''
| '''Maximum Error'''<br>''(MaximumError)''
|
|
This property specifies the maximum error (for Runge-Kutta 4-5) tolerated throughout streamline integration. The Runge-Kutta 4-5 integrator tries to adjust the step size such that the estimated error is less than this threshold.
This property specifies the maximum error (for Runge-Kutta 4-5) tolerated throughout streamline integration. The Runge-Kutta 4-5 integrator tries to adjust the step size such that the estimated error is less than this threshold.


| 1e-06
| 1e-06
| �
| �
|-
|-
| '''Maximum Step Length'''<br>''(MaximumIntegrationStep)''
| '''Maximum Step Length'''<br>''(MaximumIntegrationStep)''
|
|
When using the Runge-Kutta 4-5 integrator, this property specifies the maximum integration step size.
When using the Runge-Kutta 4-5 integrator, this property specifies the maximum integration step size.


| 0.5
| 0.5
| �
| �
|-
|-
| '''Maximum Steps'''<br>''(MaximumNumberOfSteps)''
| '''Maximum Steps'''<br>''(MaximumNumberOfSteps)''
|
|
This property specifies the maximum number of steps, beyond which streamline integration is terminated.
This property specifies the maximum number of steps, beyond which streamline integration is terminated.


| 2000
| 2000
| �
| �
|-
|-
| '''Maximum Streamline Length'''<br>''(MaximumPropagation)''
| '''Maximum Streamline Length'''<br>''(MaximumPropagation)''
|
|
This property specifies the maximum streamline length (i.e., physical arc length), beyond which line integration is terminated.
This property specifies the maximum streamline length (i.e., physical arc length), beyond which line integration is terminated.


| 1
| 1
|
|
The value must be less than the largest dimension of the dataset multiplied by a scale factor of 1.
The value must be less than the largest dimension of the dataset multiplied by a scale factor of 1.


|-
|-
| '''Minimum Step Length'''<br>''(MinimumIntegrationStep)''
| '''Minimum Step Length'''<br>''(MinimumIntegrationStep)''
|
|
When using the Runge-Kutta 4-5 integrator, this property specifies the minimum integration step size.
When using the Runge-Kutta 4-5 integrator, this property specifies the minimum integration step size.


| 0.01
| 0.01
| �
| �
|-
|-
| '''Vectors'''<br>''(SelectInputVectors)''
| '''Vectors'''<br>''(SelectInputVectors)''
|
|
This property contains the name of the vector array from which to generate streamlines.
This property contains the name of the vector array from which to generate streamlines.


| �
| �
|
|
An array of vectors is required.
An array of vectors is required.


|-
|-
| '''Source'''<br>''(Source)''
| '''Source'''<br>''(Source)''
|
|
This property specifies the input used to obtain the seed points.
This property specifies the input used to obtain the seed points.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers).
The selected object must be the result of the following: sources (includes readers).


|-
|-
| '''Terminal Speed'''<br>''(TerminalSpeed)''
| '''Terminal Speed'''<br>''(TerminalSpeed)''
|
|
This property specifies the terminal speed, below which particle advection/integration is terminated.
This property specifies the terminal speed, below which particle advection/integration is terminated.


| 1e-12
| 1e-12
| �
| �
|}
|}
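
A minimal pvpython sketch of the custom-source variant, assuming the paraview.simple module; 'flow.vtk' and the 'velocity' array are hypothetical placeholders, and the Source keyword follows the property name in the table above.

<source lang="python">
from paraview.simple import *

# Use the points of a coarse Sphere source as streamline seeds.
reader = OpenDataFile('flow.vtk')
seeds = Sphere(Radius=0.5, ThetaResolution=8, PhiResolution=8)

tracer = StreamTracerWithCustomSource(Input=reader, Source=seeds)
tracer.Vectors = ['POINTS', 'velocity']
tracer.IntegrationDirection = 'FORWARD'

Show(tracer)
Render()
</source>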




==Subdivide==
==Subdivide==




This filter iteratively divides triangles into four smaller triangles.  New points are placed linearly so the output surface matches the input surface.
This filter iteratively divides triangles into four smaller triangles.  New points are placed linearly so the output surface matches the input surface.


The Subdivide filter iteratively divides each triangle in the input dataset into 4 new triangles. Three new points are added per triangle -- one at the midpoint of each edge. This filter operates only on polygonal data containing triangles, so run your polygonal data through the Triangulate filter first if it is not composed of triangles. The output of this filter is also polygonal.<br>
The Subdivide filter iteratively divides each triangle in the input dataset into 4 new triangles. Three new points are added per triangle -- one at the midpoint of each edge. This filter operates only on polygonal data containing triangles, so run your polygonal data through the Triangulate filter first if it is not composed of triangles. The output of this filter is also polygonal.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This parameter specifies the input to the Subdivide filter.
This parameter specifies the input to the Subdivide filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


|-
|-
| '''Number of Subdivisions'''<br>''(NumberOfSubdivisions)''
| '''Number of Subdivisions'''<br>''(NumberOfSubdivisions)''
|
|
The value of this property specifies the number of subdivision iterations to perform.
The value of this property specifies the number of subdivision iterations to perform.


| 1
| 1
|
|
The value must be greater than or equal to 1 and less than or equal to 4.
The value must be greater than or equal to 1 and less than or equal to 4.


|}
|}
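
As a sketch (assuming the paraview.simple module; the Python spelling of "Number of Subdivisions" is an assumption), two subdivision passes turn each input triangle into 16 triangles:

<source lang="python">
from paraview.simple import *

# The Sphere source already produces triangles, so no Triangulate step is needed.
sphere = Sphere(ThetaResolution=8, PhiResolution=8)

subdivide = Subdivide(Input=sphere)
subdivide.NumberofSubdivisions = 2   # assumed Python spelling of "Number of Subdivisions"

Show(subdivide)
Render()
</source>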




==Surface Flow==
==Surface Flow==




This filter integrates flow through a surface.
This filter integrates flow through a surface.


The flow integration filter integrates the dot product of a point flow vector field and the surface normal. It computes the net flow across the 2D surface. It operates on any type of dataset and produces an unstructured grid output.<br>
The flow integration filter integrates the dot product of a point flow vector field and the surface normal. It computes the net flow across the 2D surface. It operates on any type of dataset and produces an unstructured grid output.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Surface Flow filter.
This property specifies the input to the Surface Flow filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point array with 3 components.
The dataset must contain a point array with 3 components.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Select Input Vectors'''<br>''(SelectInputVectors)''
| '''Select Input Vectors'''<br>''(SelectInputVectors)''
|
|
The value of this property specifies the name of the input vector array containing the flow vector field.
The value of this property specifies the name of the input vector array containing the flow vector field.


| �
| �
|
|
An array of vectors is required.
An array of vectors is required.


|}
|}
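
A possible pvpython usage, assuming the paraview.simple module; 'surface.vtk', the 'velocity' array, and the index of the output array are hypothetical placeholders.

<source lang="python">
from paraview.simple import *
from paraview import servermanager

# Integrate the flow vector field through a surface dataset.
surface = OpenDataFile('surface.vtk')

flow = SurfaceFlow(Input=surface)
flow.SelectInputVectors = ['POINTS', 'velocity']

# The output is a small unstructured grid holding the integrated flow;
# fetch it to the client and print the (assumed) first cell array value.
result = servermanager.Fetch(flow)
print(result.GetCellData().GetArray(0).GetValue(0))
</source>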




==Surface Vectors==
==Surface Vectors==




This filter constrains vectors to lie on a surface.
This filter constrains vectors to lie on a surface.


The Surface Vectors filter is used for 2D data sets. It constrains vectors to lie in a surface by removing components of the vectors normal to the local surface.<br>
The Surface Vectors filter is used for 2D data sets. It constrains vectors to lie in a surface by removing components of the vectors normal to the local surface.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Constraint Mode'''<br>''(ConstraintMode)''
| '''Constraint Mode'''<br>''(ConstraintMode)''
|
|
This property specifies whether the vectors will be parallel or perpendicular to the surface. If the value is set to PerpendicularScale (2), then the output will contain a scalar array with the dot product of the surface normal and the vector at each point.
This property specifies whether the vectors will be parallel or perpendicular to the surface. If the value is set to PerpendicularScale (2), then the output will contain a scalar array with the dot product of the surface normal and the vector at each point.


| 0
| 0
|
|
The value must be one of the following: Parallel (0), Perpendicular (1), PerpendicularScale (2).
The value must be one of the following: Parallel (0), Perpendicular (1), PerpendicularScale (2).


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Surface Vectors filter.
This property specifies the input to the Surface Vectors filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point array with 3 components.
The dataset must contain a point array with 3 components.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Select Input Vectors'''<br>''(SelectInputVectors)''
| '''Select Input Vectors'''<br>''(SelectInputVectors)''
|
|
This property specifies the name of the input vector array to process.
This property specifies the name of the input vector array to process.


| �
| �
|
|
An array of vectors is required.
An array of vectors is required.


|}
|}
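
A minimal pvpython sketch, assuming the paraview.simple module; 'surface.vtk' and its point vector array 'velocity' are hypothetical placeholders.

<source lang="python">
from paraview.simple import *

# Constrain a point vector field to a surface by removing the component
# normal to it.
surface = OpenDataFile('surface.vtk')

sv = SurfaceVectors(Input=surface)
sv.SelectInputVectors = ['POINTS', 'velocity']
sv.ConstraintMode = 'Parallel'   # keep only the tangential component

Show(sv)
Render()
</source>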




==Table To Points==
==Table To Points==




Converts a table to a set of points.
Converts a table to a set of points.


The TableToPolyData filter converts a vtkTable to a set of points in a<br>
The TableToPolyData filter converts a vtkTable to a set of points in a<br>
vtkPolyData. One must specify the columns in the input table to use as<br>
vtkPolyData. One must specify the columns in the input table to use as<br>
the X, Y and Z coordinates for the points in the output.<br>
the X, Y and Z coordinates for the points in the output.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input.
This property specifies the input.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain an array with 1 component.
The dataset must contain an array with 1 component.




The selected dataset must be one of the following types (or a subclass of one of them): vtkTable.
The selected dataset must be one of the following types (or a subclass of one of them): vtkTable.


|-
|-
| '''X Column'''<br>''(XColumn)''
| '''X Column'''<br>''(XColumn)''
| �
| �
| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Y Column'''<br>''(YColumn)''
| '''Y Column'''<br>''(YColumn)''
| �
| �
| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Z Column'''<br>''(ZColumn)''
| '''Z Column'''<br>''(ZColumn)''
| �
| �
| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|}
|}
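
For example, a CSV table can be converted to a point cloud in pvpython. This is a minimal sketch assuming the paraview.simple module; 'points.csv' and its column names are hypothetical placeholders.

<source lang="python">
from paraview.simple import *

# Read a table of coordinates and turn each row into a point.
table = CSVReader(FileName=['points.csv'])

points = TableToPoints(Input=table)
points.XColumn = 'x'
points.YColumn = 'y'
points.ZColumn = 'z'

Show(points)
Render()
</source>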




==Table To Structured Grid==
==Table To Structured Grid==




Converts a table to a structured grid.
Converts a table to a structured grid.


The TableToStructuredGrid filter converts a vtkTable to a<br>
The TableToStructuredGrid filter converts a vtkTable to a<br>
vtkStructuredGrid.  One must specify the columns in the input table to<br>
vtkStructuredGrid.  One must specify the columns in the input table to<br>
use as the X, Y and Z coordinates for the points in the output, and the<br>
use as the X, Y and Z coordinates for the points in the output, and the<br>
whole extent.<br>
whole extent.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input.
This property specifies the input.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain an array with 1 component.
The dataset must contain an array with 1 component.




The selected dataset must be one of the following types (or a subclass of one of them): vtkTable.
The selected dataset must be one of the following types (or a subclass of one of them): vtkTable.


|-
|-
| '''Whole Extent'''<br>''(WholeExtent)''
| '''Whole Extent'''<br>''(WholeExtent)''
| �
| �
| 0 0 0 0 0 0
| 0 0 0 0 0 0
| �
| �
|-
|-
| '''X Column'''<br>''(XColumn)''
| '''X Column'''<br>''(XColumn)''
| �
| �
| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Y Column'''<br>''(YColumn)''
| '''Y Column'''<br>''(YColumn)''
| �
| �
| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Z Column'''<br>''(ZColumn)''
| '''Z Column'''<br>''(ZColumn)''
| �
| �
| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|}
|}
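
A minimal pvpython sketch, assuming the paraview.simple module; 'grid.csv' and the column names are hypothetical placeholders, and the whole extent must agree with the number of table rows.

<source lang="python">
from paraview.simple import *

# Build a 10x10x1 structured grid, so the table is expected to have
# (9-0+1)*(9-0+1)*(0-0+1) = 100 rows ordered with i varying fastest.
table = CSVReader(FileName=['grid.csv'])

grid = TableToStructuredGrid(Input=table)
grid.WholeExtent = [0, 9, 0, 9, 0, 0]
grid.XColumn = 'x'
grid.YColumn = 'y'
grid.ZColumn = 'z'

Show(grid)
Render()
</source>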




==Temporal Cache==
==Temporal Cache==




Saves a copy of the data set for a fixed number of time steps.
Saves a copy of the data set for a fixed number of time steps.


The Temporal Cache can be used to save multiple copies of a data set at different time steps to prevent thrashing in the pipeline caused by downstream filters that adjust the requested time step.  For example, assume that there is a downstream Temporal Interpolator filter.  This filter will (usually) request two time steps from the upstream filters, which in turn (usually) causes the upstream filters to run twice, once for each time step.  The next time the interpolator requests the same two time steps, they might force the upstream filters to re-evaluate the same two time steps.  The Temporal Cache can keep copies of both of these time steps and provide the requested data without having to run upstream filters.<br>
The Temporal Cache can be used to save multiple copies of a data set at different time steps to prevent thrashing in the pipeline caused by downstream filters that adjust the requested time step.  For example, assume that there is a downstream Temporal Interpolator filter.  This filter will (usually) request two time steps from the upstream filters, which in turn (usually) causes the upstream filters to run twice, once for each time step.  The next time the interpolator requests the same two time steps, they might force the upstream filters to re-evaluate the same two time steps.  The Temporal Cache can keep copies of both of these time steps and provide the requested data without having to run upstream filters.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Cache Size'''<br>''(CacheSize)''
| '''Cache Size'''<br>''(CacheSize)''
|
|
The cache size determines the number of time steps that can be cached at one time.  The maximum number is 10.  The minimum is 2 (since it makes little sense to cache less than that).
The cache size determines the number of time steps that can be cached at one time.  The maximum number is 10.  The minimum is 2 (since it makes little sense to cache less than that).


| 2
| 2
|
|
The value must be greater than or equal to 2 and less than or equal to 10.
The value must be greater than or equal to 2 and less than or equal to 10.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input of the Temporal Cache filter.
This property specifies the input of the Temporal Cache filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.


|}
|}




==Temporal Interpolator==
==Temporal Interpolator==




Interpolate between time steps.
Interpolate between time steps.


The Temporal Interpolator converts data that is defined at discrete time steps to one that is defined over a continuum of time by linearly interpolating the data's field data between two adjacent time steps.  The interpolated values are a simple approximation and should not be interpreted as anything more.  The Temporal Interpolator assumes that the topology between adjacent time steps does not change.<br>
The Temporal Interpolator converts data that is defined at discrete time steps to one that is defined over a continuum of time by linearly interpolating the data's field data between two adjacent time steps.  The interpolated values are a simple approximation and should not be interpreted as anything more.  The Temporal Interpolator assumes that the topology between adjacent time steps does not change.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Discrete Time Step Interval'''<br>''(DiscreteTimeStepInterval)''
| '''Discrete Time Step Interval'''<br>''(DiscreteTimeStepInterval)''
|
|
If Discrete Time Step Interval is set to 0, then the Temporal Interpolator will provide a continuous region of time on its output.  If set to anything else, then the output will define a finite set of time points on its output, each spaced by the Discrete Time Step Interval.  The output will have (time range)/(discrete time step interval) time steps.  (Note that the time range is defined by the time range of the data of the input filter, which may be different from other pipeline objects or the range defined in the animation inspector.)  This is a useful option if you have a dataset with one missing time step and wish to 'fill in' the missing data with an interpolated value from the steps on either side.
If Discrete Time Step Interval is set to 0, then the Temporal Interpolator will provide a continuous region of time on its output.  If set to anything else, then the output will define a finite set of time points on its output, each spaced by the Discrete Time Step Interval.  The output will have (time range)/(discrete time step interval) time steps.  (Note that the time range is defined by the time range of the data of the input filter, which may be different from other pipeline objects or the range defined in the animation inspector.)  This is a useful option if you have a dataset with one missing time step and wish to 'fill in' the missing data with an interpolated value from the steps on either side.


| 0
| 0
| �
| �
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input of the Temporal Interpolator.
This property specifies the input of the Temporal Interpolator.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.


|}
|}
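
A minimal pvpython sketch, assuming the paraview.simple module; 'transient.pvd' is a hypothetical placeholder for any time-aware reader or source.

<source lang="python">
from paraview.simple import *

# Interpolate a time-varying dataset onto a fixed set of intermediate times.
reader = OpenDataFile('transient.pvd')

interp = TemporalInterpolator(Input=reader)
interp.DiscreteTimeStepInterval = 0.1   # 0 would expose a continuous time range

Show(interp)
Render()
</source>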




==Temporal Shift Scale==
==Temporal Shift Scale==




Shift and scale time values.
Shift and scale time values.


The Temporal Shift Scale filter linearly transforms the time values of a pipeline object by applying a shift and then a scale.  Given data at time t on the input, it will be transformed to time (t + Shift)*Scale on the output.  Inversely, if this filter receives a request for time t, it will request time t/Scale - Shift on its input.<br>
The Temporal Shift Scale filter linearly transforms the time values of a pipeline object by applying a shift and then a scale.  Given data at time t on the input, it will be transformed to time (t + Shift)*Scale on the output.  Inversely, if this filter receives a request for time t, it will request time t/Scale - Shift on its input.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
The input to the Temporal Shift Scale filter.
The input to the Temporal Shift Scale filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.


|-
|-
| '''Maximum Number Of Periods'''<br>''(MaximumNumberOfPeriods)''
| '''Maximum Number Of Periods'''<br>''(MaximumNumberOfPeriods)''
| �
| �
| 1
| 1
|
|
The value must be greater than or equal to 0 and less than or equal to 100.
The value must be greater than or equal to 0 and less than or equal to 100.


|-
|-
| '''Periodic'''<br>''(Periodic)''
| '''Periodic'''<br>''(Periodic)''
| �
| �
| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Periodic End Correction'''<br>''(PeriodicEndCorrection)''
| '''Periodic End Correction'''<br>''(PeriodicEndCorrection)''
| �
| �
| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Post Shift'''<br>''(PostShift)''
| '''Post Shift'''<br>''(PostShift)''
|
|
The amount of time the input is shifted.
The amount of time the input is shifted.


| 0
| 0
| �
| �
|-
|-
| '''Pre Shift'''<br>''(PreShift)''
| '''Pre Shift'''<br>''(PreShift)''
| �
| �
| 0
| 0
| �
| �
|-
|-
| '''Scale'''<br>''(Scale)''
| '''Scale'''<br>''(Scale)''
|
|
The factor by which the input time is scaled.
The factor by which the input time is scaled.


| 1
| 1
| �
| �
|}
|}
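
A minimal pvpython sketch, assuming the paraview.simple module and that the Pre Shift, Scale, and Post Shift properties compose as t_out = (t_in + PreShift)*Scale + PostShift; 'transient.pvd' is a hypothetical placeholder for any time-aware source.

<source lang="python">
from paraview.simple import *

# Remap the time axis of a temporal reader.
reader = OpenDataFile('transient.pvd')

tss = TemporalShiftScale(Input=reader)
tss.PreShift = 0.0     # added to the input time before scaling (assumed ordering)
tss.Scale = 2.0        # stretches the time axis by a factor of two
tss.PostShift = 10.0   # added afterwards, so t_out = (t_in + 0.0)*2.0 + 10.0

Show(tss)
Render()
</source>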




==Temporal Snap-to-Time-Step==
==Temporal Snap-to-Time-Step==




Modifies the time range/steps of temporal data.
Modifies the time range/steps of temporal data.


This filter modifies the time range or time steps of<br>
This filter modifies the time range or time steps of<br>
the data without changing the data itself. The data is not resampled<br>
the data without changing the data itself. The data is not resampled<br>
by this filter, only the information accompanying the data is modified.<br>
by this filter, only the information accompanying the data is modified.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
| �
| �
| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.


|-
|-
| '''Snap Mode'''<br>''(SnapMode)''
| '''Snap Mode'''<br>''(SnapMode)''
|
|
Determine which time step to snap to.
Determine which time step to snap to.


| 0
| 0
|
|
The value must be one of the following: Nearest (0), NextBelowOrEqual (1), NextAboveOrEqual (2).
The value must be one of the following: Nearest (0), NextBelowOrEqual (1), NextAboveOrEqual (2).


|}
|}
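
A hedged pvpython sketch of the same idea follows. The Python name TemporalSnaptoTimeStep is assumed from the filter label with spaces and hyphens removed, and the input file is a placeholder.

<source lang="python">
from paraview.simple import *

reader = OpenDataFile('transient.ex2')  # placeholder time-aware reader

# Snap any requested time to one of the reader's discrete time steps.
# SnapMode: 0 = Nearest, 1 = NextBelowOrEqual, 2 = NextAboveOrEqual.
snap = TemporalSnaptoTimeStep(Input=reader)
snap.SnapMode = 0

Show(snap)
Render()
</source>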




==Temporal Statistics==
==Temporal Statistics==




Loads in all time steps of a data set and computes some statistics about how each point and cell variable changes over time.
Loads in all time steps of a data set and computes some statistics about how each point and cell variable changes over time.


Given an input that changes over time, vtkTemporalStatistics looks<br>
Given an input that changes over time, vtkTemporalStatistics looks<br>
at the data for each time step and computes some statistical<br>
at the data for each time step and computes some statistical<br>
information of how a point or cell variable changes over time.  For<br>
information of how a point or cell variable changes over time.  For<br>
example, vtkTemporalStatistics can compute the average value of<br>
example, vtkTemporalStatistics can compute the average value of<br>
"pressure" over time of each point.<br><br><br>
"pressure" over time of each point.<br><br><br>
Note that this filter will require the upstream filter to be run on<br>
Note that this filter will require the upstream filter to be run on<br>
every time step that it reports that it can compute.  This may be a<br>
every time step that it reports that it can compute.  This may be a<br>
time consuming operation.<br><br><br>
time consuming operation.<br><br><br>
vtkTemporalStatistics ignores the temporal spacing.  Each timestep<br>
vtkTemporalStatistics ignores the temporal spacing.  Each timestep<br>
is weighted equally, regardless of the length of the interval to the<br>
is weighted equally, regardless of the length of the interval to the<br>
next timestep.  Thus, the average statistic may be quite<br>
next timestep.  Thus, the average statistic may be quite<br>
different from an integration of the variable if the time spacing<br>
different from an integration of the variable if the time spacing<br>
varies.<br>
varies.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Compute Average'''<br>''(ComputeAverage)''
| '''Compute Average'''<br>''(ComputeAverage)''
|
|
Compute the average of each point and cell variable over time.
Compute the average of each point and cell variable over time.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Compute Maximum'''<br>''(ComputeMaximum)''
| '''Compute Maximum'''<br>''(ComputeMaximum)''
|
|
Compute the maximum of each point and cell variable over time.
Compute the maximum of each point and cell variable over time.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Compute Minimum'''<br>''(ComputeMinimum)''
| '''Compute Minimum'''<br>''(ComputeMinimum)''
|
|
Compute the minimum of each point and cell variable over time.
Compute the minimum of each point and cell variable over time.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Compute Standard Deviation'''<br>''(ComputeStandardDeviation)''
| '''Compute Standard Deviation'''<br>''(ComputeStandardDeviation)''
|
|
Compute the standard deviation of each point and cell variable over time.
Compute the standard deviation of each point and cell variable over time.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
Set the input to the Temporal Statistics filter.
Set the input to the Temporal Statistics filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|}
|}
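
The sketch below shows how the toggles above map to a pvpython script. It is illustrative only: the reader file name is a placeholder and the property names are assumed to match the labels with spaces removed.

<source lang="python">
from paraview.simple import *

reader = OpenDataFile('transient.ex2')  # placeholder transient dataset

# Accumulate per-point and per-cell statistics over every time step.
stats = TemporalStatistics(Input=reader)
stats.ComputeAverage = 1
stats.ComputeMinimum = 1
stats.ComputeMaximum = 1
stats.ComputeStandardDeviation = 0  # skip the extra arrays if they are not needed

Show(stats)
Render()
</source>

Keep in mind, as noted above, that the upstream pipeline executes once per time step, so this can be slow for large transient datasets.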




==Tessellate==
==Tessellate==




Tessellate nonlinear curves, surfaces, and volumes with lines, triangles, and tetrahedra.
Tessellate nonlinear curves, surfaces, and volumes with lines, triangles, and tetrahedra.


The Tessellate filter tessellates cells with nonlinear geometry and/or scalar fields into a simplicial complex with linearly interpolated field values that more closely approximate the original field. This is useful for datasets containing quadratic cells.<br>
The Tessellate filter tessellates cells with nonlinear geometry and/or scalar fields into a simplicial complex with linearly interpolated field values that more closely approximate the original field. This is useful for datasets containing quadratic cells.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Chord Error'''<br>''(ChordError)''
| '''Chord Error'''<br>''(ChordError)''
|
|
This property controls the maximum chord error allowed at any edge midpoint in the output tessellation. The chord error is measured as the distance between the midpoint of any output edge and the original nonlinear geometry.
This property controls the maximum chord error allowed at any edge midpoint in the output tessellation. The chord error is measured as the distance between the midpoint of any output edge and the original nonlinear geometry.


| 0.001
| 0.001
| �
| �
|-
|-
| '''Field Error'''<br>''(FieldError2)''
| '''Field Error'''<br>''(FieldError2)''
|
|
This property controls the maximum field error allowed at any edge midpoint in the output tessellation. The field error is measured as the difference between a field value at the midpoint of an output edge and the value of the corresponding field in the original nonlinear geometry.
This property controls the maximum field error allowed at any edge midpoint in the output tessellation. The field error is measured as the difference between a field value at the midpoint of an output edge and the value of the corresponding field in the original nonlinear geometry.


| �
| �
| �
| �
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Tessellate filter.
This property specifies the input to the Tessellate filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData, vtkDataSet, vtkUnstructuredGrid.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData, vtkDataSet, vtkUnstructuredGrid.


|-
|-
| '''Maximum Number of Subdivisions'''<br>''(MaximumNumberOfSubdivisions)''
| '''Maximum Number of Subdivisions'''<br>''(MaximumNumberOfSubdivisions)''
|
|
This property specifies the maximum number of times an edge may be subdivided. Increasing this number allows further refinement but can drastically increase the computational and storage requirements, especially when the value of the OutputDimension property is 3.
This property specifies the maximum number of times an edge may be subdivided. Increasing this number allows further refinement but can drastically increase the computational and storage requirements, especially when the value of the OutputDimension property is 3.


| 3
| 3
|
|
The value must be greater than or equal to 0 and less than or equal to 8.
The value must be greater than or equal to 0 and less than or equal to 8.


|-
|-
| '''Merge Points'''<br>''(MergePoints)''
| '''Merge Points'''<br>''(MergePoints)''
|
|
If the value of this property is set to 1, coincident vertices will be merged after tessellation has occurred. Only geometry is considered during the merge and the first vertex encountered is the one whose point attributes will be used. Any discontinuities in point fields will be lost. On the other hand, many operations, such as streamline generation, require coincident vertices to be merged.
If the value of this property is set to 1, coincident vertices will be merged after tessellation has occurred. Only geometry is considered during the merge and the first vertex encountered is the one whose point attributes will be used. Any discontinuities in point fields will be lost. On the other hand, many operations, such as streamline generation, require coincident vertices to be merged.
Toggle whether to merge coincident vertices.
Toggle whether to merge coincident vertices.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Output Dimension'''<br>''(OutputDimension)''
| '''Output Dimension'''<br>''(OutputDimension)''
|
|
The value of this property sets the maximum dimensionality of the output tessellation. When the value of this property is 3, 3D cells produce tetrahedra, 2D cells produce triangles, and 1D cells produce line segments. When the value is 2, 3D cells will have their boundaries tessellated with triangles. When the value is 1, all cells except points produce line segments.
The value of this property sets the maximum dimensionality of the output tessellation. When the value of this property is 3, 3D cells produce tetrahedra, 2D cells produce triangles, and 1D cells produce line segments. When the value is 2, 3D cells will have their boundaries tessellated with triangles. When the value is 1, all cells except points produce line segments.


| 3
| 3
|
|
The value must be greater than or equal to 1 and less than or equal to 3.
The value must be greater than or equal to 1 and less than or equal to 3.


|}
|}
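
As a rough pvpython illustration (the file name is a placeholder and the property names are assumed from the labels above), a tessellation with a tighter chord error might look like this:

<source lang="python">
from paraview.simple import *

reader = OpenDataFile('quadratic_cells.vtu')  # placeholder dataset with nonlinear cells

tess = Tessellate(Input=reader)
tess.ChordError = 0.0001              # tighter geometric tolerance than the default 0.001
tess.MaximumNumberofSubdivisions = 4  # allow one extra level of edge subdivision
tess.OutputDimension = 3
tess.MergePoints = 1

Show(tess)
Render()
</source>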




==Tetrahedralize==
==Tetrahedralize==




This filter converts 3-d cells to tetrahedrons and polygons to triangles.  The output is always of type unstructured grid.
This filter converts 3-d cells to tetrahedrons and polygons to triangles.  The output is always of type unstructured grid.


The Tetrahedralize filter converts the 3D cells of any type of dataset to tetrahedrons and the 2D ones to triangles. This filter always produces unstructured grid output.<br>
The Tetrahedralize filter converts the 3D cells of any type of dataset to tetrahedrons and the 2D ones to triangles. This filter always produces unstructured grid output.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Tetrahedralize filter.
This property specifies the input to the Tetrahedralize filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|}
|}
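
Because the filter has only an input, the pvpython form is essentially a one-liner; the sketch below uses the built-in Wavelet source purely as a convenient example input.

<source lang="python">
from paraview.simple import *

# Any vtkDataSet works as input; Wavelet's voxels are split into tetrahedra.
wavelet = Wavelet()
tets = Tetrahedralize(Input=wavelet)

Show(tets)
Render()
</source>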




==Texture Map to Cylinder==
==Texture Map to Cylinder==




Generate texture coordinates by mapping points to cylinder.
Generate texture coordinates by mapping points to cylinder.


This is a filter that generates 2D texture coordinates by mapping input<br>
This is a filter that generates 2D texture coordinates by mapping input<br>
dataset points onto a cylinder. The cylinder is generated automatically<br>
dataset points onto a cylinder. The cylinder is generated automatically<br>
by computing its axis. Note that the generated s-coordinate ranges<br>
by computing its axis. Note that the generated s-coordinate ranges<br>
from 0->1 (corresponding to an angle of 0->360 degrees around the axis), while the<br>
from 0->1 (corresponding to an angle of 0->360 degrees around the axis), while the<br>
mapping of the t-coordinate is controlled by the projection of points along<br>
mapping of the t-coordinate is controlled by the projection of points along<br>
the axis.<br>
the axis.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
Set the input to the Texture Map to Cylinder filter.
Set the input to the Texture Map to Cylinder filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Prevent Seam'''<br>''(PreventSeam)''
| '''Prevent Seam'''<br>''(PreventSeam)''
|
|
Control how the texture coordinates are generated. If Prevent Seam
Control how the texture coordinates are generated. If Prevent Seam
is set, the s-coordinate ranges from 0->1 and 1->0 corresponding
is set, the s-coordinate ranges from 0->1 and 1->0 corresponding
to the theta angle variation between 0->180 and 180->0
to the theta angle variation between 0->180 and 180->0
degrees. Otherwise, the s-coordinate ranges from 0->1 between
degrees. Otherwise, the s-coordinate ranges from 0->1 between
0->360 degrees.
0->360 degrees.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|}
|}
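
A minimal pvpython sketch follows. The Python name TextureMaptoCylinder is assumed from the filter label with spaces removed, and the Cylinder source is used only as a convenient input.

<source lang="python">
from paraview.simple import *

geometry = Cylinder()  # any dataset works; a cylinder just makes the mapping easy to see

tcoords = TextureMaptoCylinder(Input=geometry)
tcoords.PreventSeam = 1  # mirror the s-coordinate so no seam appears at 360 degrees

Show(tcoords)
Render()
</source>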




==Texture Map to Plane==
==Texture Map to Plane==




Generate texture coordinates by mapping points to plane.
Generate texture coordinates by mapping points to plane.


TextureMapToPlane is a filter that generates 2D texture coordinates by<br>
TextureMapToPlane is a filter that generates 2D texture coordinates by<br>
mapping input dataset points onto a plane. The plane is generated<br>
mapping input dataset points onto a plane. The plane is generated<br>
automatically using a least squares method.<br>
automatically using a least squares method.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
Set the input to the Texture Map to Plane filter.
Set the input to the Texture Map to Plane filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|}
|}
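
The pvpython form is analogous to the cylinder case; the Python name TextureMaptoPlane is assumed from the label, and the Sphere source is just a stand-in input.

<source lang="python">
from paraview.simple import *

surface = Sphere()  # a least squares plane is fit to the input points automatically

tcoords = TextureMaptoPlane(Input=surface)

Show(tcoords)
Render()
</source>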




==Texture Map to Sphere==
==Texture Map to Sphere==




Generate texture coordinates by mapping points to sphere.
Generate texture coordinates by mapping points to sphere.


This is a filter that generates 2D texture coordinates by mapping input<br>
This is a filter that generates 2D texture coordinates by mapping input<br>
dataset points onto a sphere. The sphere is generated automatically by<br>
dataset points onto a sphere. The sphere is generated automatically by<br>
computing its center (i.e., the average of the point coordinates). Note that the generated texture coordinates<br>
computing its center (i.e., the average of the point coordinates). Note that the generated texture coordinates<br>
range between (0,1). The s-coordinate lies in the angular direction around<br>
range between (0,1). The s-coordinate lies in the angular direction around<br>
the z-axis, measured counter-clockwise from the x-axis. The t-coordinate<br>
the z-axis, measured counter-clockwise from the x-axis. The t-coordinate<br>
lies in the angular direction measured down from the north pole towards<br>
lies in the angular direction measured down from the north pole towards<br>
the south pole.<br>
the south pole.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
Set the input to the Texture Map to Sphere filter.
Set the input to the Texture Map to Sphere filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Prevent Seam'''<br>''(PreventSeam)''
| '''Prevent Seam'''<br>''(PreventSeam)''
|
|
Control how the texture coordinates are generated. If Prevent Seam
Control how the texture coordinates are generated. If Prevent Seam
is set, the s-coordinate ranges from 0->1 and 1->0 corresponding
is set, the s-coordinate ranges from 0->1 and 1->0 corresponding
to the theta angle variation between 0->180 and 180->0
to the theta angle variation between 0->180 and 180->0
degrees. Otherwise, the s-coordinate ranges from 0->1 between
degrees. Otherwise, the s-coordinate ranges from 0->1 between
0->360 degrees.
0->360 degrees.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|}
|}
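
Again a minimal, hedged sketch; TextureMaptoSphere is the assumed Python name derived from the label.

<source lang="python">
from paraview.simple import *

geometry = Sphere()

tcoords = TextureMaptoSphere(Input=geometry)
tcoords.PreventSeam = 1  # avoid a visible seam where the angle wraps around

Show(tcoords)
Render()
</source>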




==Threshold==
==Threshold==




This filter extracts cells that have point or cell scalars in the specified range.
This filter extracts cells that have point or cell scalars in the specified range.


The Threshold filter extracts the portions of the input dataset whose scalars lie within the specified range. This filter operates on either point-centered or cell-centered data. This filter operates on any type of dataset and produces unstructured grid output.<br><br><br>
The Threshold filter extracts the portions of the input dataset whose scalars lie within the specified range. This filter operates on either point-centered or cell-centered data. This filter operates on any type of dataset and produces unstructured grid output.<br><br><br>
To select between these two options, select either Point Data or Cell Data from the Attribute Mode menu. Once the Attribute Mode has been selected, choose the scalar array from which to threshold the data from the Scalars menu. The Lower Threshold and Upper Threshold sliders determine the range of the scalars to retain in the output. The All Scalars check box only takes effect when the Attribute Mode is set to Point Data. If the All Scalars option is checked, then a cell will only be passed to the output if the scalar values of all of its points lie within the range indicated by the Lower Threshold and Upper Threshold sliders. If unchecked, then a cell will be added to the output if the specified scalar value for any of its points is within the chosen range.<br>
To select between these two options, select either Point Data or Cell Data from the Attribute Mode menu. Once the Attribute Mode has been selected, choose the scalar array from which to threshold the data from the Scalars menu. The Lower Threshold and Upper Threshold sliders determine the range of the scalars to retain in the output. The All Scalars check box only takes effect when the Attribute Mode is set to Point Data. If the All Scalars option is checked, then a cell will only be passed to the output if the scalar values of all of its points lie within the range indicated by the Lower Threshold and Upper Threshold sliders. If unchecked, then a cell will be added to the output if the specified scalar value for any of its points is within the chosen range.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''All Scalars'''<br>''(AllScalars)''
| '''All Scalars'''<br>''(AllScalars)''
|
|
If the value of this property is 1, then a cell is only included in the output if the value of the selected array for all its points is within the threshold. This is only relevant when thresholding by a point-centered array.
If the value of this property is 1, then a cell is only included in the output if the value of the selected array for all its points is within the threshold. This is only relevant when thresholding by a point-centered array.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Threshold filter.
This property specifies the input to the Threshold filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point or cell array with 1 components.
The dataset must contain a point or cell array with 1 components.




The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


|-
|-
| '''Scalars'''<br>''(SelectInputScalars)''
| '''Scalars'''<br>''(SelectInputScalars)''
|
|
The value of this property contains the name of the scalar array from which to perform thresholding.
The value of this property contains the name of the scalar array from which to perform thresholding.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.




Valid array names will be chosen from point and cell data.
Valid array names will be chosen from point and cell data.


|-
|-
| '''Threshold Range'''<br>''(ThresholdBetween)''
| '''Threshold Range'''<br>''(ThresholdBetween)''
|
|
The values of this property specify the upper and lower bounds of the thresholding operation.
The values of this property specify the upper and lower bounds of the thresholding operation.


| 0 0
| 0 0
|
|
The value must lie within the range of the selected data array.
The value must lie within the range of the selected data array.


|}
|}
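
The sketch below shows typical pvpython usage of these properties. It assumes the Python property names follow the labels above (Scalars, ThresholdRange, AllScalars) and uses the built-in Wavelet source, whose point array is named RTData.

<source lang="python">
from paraview.simple import *

wavelet = Wavelet()  # produces a point-centered array named 'RTData'

# Keep only the cells whose 'RTData' values at every point fall inside [100, 200].
thresh = Threshold(Input=wavelet)
thresh.Scalars = ['POINTS', 'RTData']
thresh.ThresholdRange = [100.0, 200.0]
thresh.AllScalars = 1

Show(thresh)
Render()
</source>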




==Transform==
==Transform==




This filter applies a transformation to the input dataset.
This filter applies a transformation to the input dataset.


The Transform filter allows you to specify the position, size, and orientation of polygonal, unstructured grid, and curvilinear data sets.<br>
The Transform filter allows you to specify the position, size, and orientation of polygonal, unstructured grid, and curvilinear data sets.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Transform filter.
This property specifies the input to the Transform filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.


|-
|-
| '''Transform'''<br>''(Transform)''
| '''Transform'''<br>''(Transform)''
|
|
The values in this property allow you to specify the transform (translation, rotation, and scaling) to apply to the input dataset.
The values in this property allow you to specify the transform (translation, rotation, and scaling) to apply to the input dataset.


| �
| �
|
|
The selected object must be the result of the following: transforms.
The selected object must be the result of the following: transforms.




The value must be set to one of the following: Transform3.
The value must be set to one of the following: Transform3.


|}
|}
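
In pvpython the Transform property is itself a proxy holding the translation, rotation, and scale. The sketch below assumes the usual Translate/Rotate/Scale sub-properties of the Transform3 proxy.

<source lang="python">
from paraview.simple import *

source = Sphere()

xform = Transform(Input=source)
xform.Transform.Translate = [1.0, 0.0, 0.0]  # move one unit along +X
xform.Transform.Rotate = [0.0, 0.0, 45.0]    # rotate 45 degrees about Z
xform.Transform.Scale = [1.0, 1.0, 2.0]      # stretch along Z

Show(xform)
Render()
</source>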




==Triangle Strips==
==Triangle Strips==




This filter uses a greedy algorithm to convert triangles into triangle strips.
This filter uses a greedy algorithm to convert triangles into triangle strips.


The Triangle Strips filter converts triangles into triangle strips and lines into polylines. This filter operates on polygonal data sets and produces polygonal output.<br>
The Triangle Strips filter converts triangles into triangle strips and lines into polylines. This filter operates on polygonal data sets and produces polygonal output.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Triangle Strips filter.
This property specifies the input to the Triangle Strips filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


|-
|-
| '''Maximum Length'''<br>''(MaximumLength)''
| '''Maximum Length'''<br>''(MaximumLength)''
|
|
This property specifies the maximum number of triangles/lines to include in a triangle strip or polyline.
This property specifies the maximum number of triangles/lines to include in a triangle strip or polyline.


| 1000
| 1000
|
|
The value must be greater than or equal to 4 and less than or equal to 100000.
The value must be greater than or equal to 4 and less than or equal to 100000.


|}
|}
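
A short, hedged pvpython example (the Python name TriangleStrips and the property name MaximumLength are assumed from the labels above):

<source lang="python">
from paraview.simple import *

surface = Sphere()  # polygonal input made of triangles

strips = TriangleStrips(Input=surface)
strips.MaximumLength = 1000  # longest strip or polyline to build

Show(strips)
Render()
</source>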




==Triangulate==
==Triangulate==




This filter converts polygons and triangle strips to basic triangles.
This filter converts polygons and triangle strips to basic triangles.


The Triangulate filter decomposes polygonal data into only triangles, points, and lines. It separates triangle strips and polylines into individual triangles and lines, respectively. The output is polygonal data. Some filters that take polygonal data as input require that the data be composed of triangles rather than other polygons, so passing your data through this filter first is useful in such situations. You should use this filter in these cases rather than the Tetrahedralize filter because they produce different output dataset types. The filters referenced require polygonal input, and the Tetrahedralize filter produces unstructured grid output.<br>
The Triangulate filter decomposes polygonal data into only triangles, points, and lines. It separates triangle strips and polylines into individual triangles and lines, respectively. The output is polygonal data. Some filters that take polygonal data as input require that the data be composed of triangles rather than other polygons, so passing your data through this filter first is useful in such situations. You should use this filter in these cases rather than the Tetrahedralize filter because they produce different output dataset types. The filters referenced require polygonal input, and the Tetrahedralize filter produces unstructured grid output.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Triangulate filter.
This property specifies the input to the Triangulate filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


|}
|}
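
The pvpython form mirrors the Tetrahedralize example, but the output stays polygonal; the Sphere source below stands in for any polygonal input.

<source lang="python">
from paraview.simple import *

surface = Sphere()  # stands in for any polygonal dataset containing polygons or strips

tris = Triangulate(Input=surface)

Show(tris)
Render()
</source>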




==Tube==
==Tube==




Convert lines into tubes. Normals are used to avoid cracks between tube segments.
Convert lines into tubes. Normals are used to avoid cracks between tube segments.


The Tube filter creates tubes around the lines in the input polygonal dataset. The output is also polygonal.<br>
The Tube filter creates tubes around the lines in the input polygonal dataset. The output is also polygonal.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Capping'''<br>''(Capping)''
| '''Capping'''<br>''(Capping)''
|
|
If this property is set to 1, endcaps will be drawn on the tube. Otherwise the ends of the tube will be open.
If this property is set to 1, endcaps will be drawn on the tube. Otherwise the ends of the tube will be open.


| 1
| 1
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Default Normal'''<br>''(DefaultNormal)''
| '''Default Normal'''<br>''(DefaultNormal)''
|
|
The value of this property specifies the normal to use when the UseDefaultNormal property is set to 1 or the input contains no vector array (SelectInputVectors property).
The value of this property specifies the normal to use when the UseDefaultNormal property is set to 1 or the input contains no vector array (SelectInputVectors property).


| 0 0 1
| 0 0 1
| �
| �
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Tube filter.
This property specifies the input to the Tube filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


|-
|-
| '''Number of Sides'''<br>''(NumberOfSides)''
| '''Number of Sides'''<br>''(NumberOfSides)''
|
|
The value of this property indicates the number of faces around the circumference of the tube.
The value of this property indicates the number of faces around the circumference of the tube.


| 6
| 6
|
|
The value must be greater than or equal to 3.
The value must be greater than or equal to 3.


|-
|-
| '''Radius'''<br>''(Radius)''
| '''Radius'''<br>''(Radius)''
|
|
The value of this property sets the radius of the tube. If the radius is varying (VaryRadius property), then this value is the minimum radius.
The value of this property sets the radius of the tube. If the radius is varying (VaryRadius property), then this value is the minimum radius.


| 1
| 1
|
|
The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.01.
The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.01.


|-
|-
| '''Radius Factor'''<br>''(RadiusFactor)''
| '''Radius Factor'''<br>''(RadiusFactor)''
|
|
If varying the radius (VaryRadius property), the property sets the
If varying the radius (VaryRadius property), the property sets the
maximum tube radius in terms of a multiple of the minimum radius. If
maximum tube radius in terms of a multiple of the minimum radius. If
not varying the radius, this value has no effect.
not varying the radius, this value has no effect.


| 10
| 10
| �
| �
|-
|-
| '''Scalars'''<br>''(SelectInputScalars)''
| '''Scalars'''<br>''(SelectInputScalars)''
|
|
This property indicates the name of the scalar array on which to
This property indicates the name of the scalar array on which to
operate. The indicated array may be used for scaling the tubes.
operate. The indicated array may be used for scaling the tubes.
(See the VaryRadius property.)
(See the VaryRadius property.)


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Vectors'''<br>''(SelectInputVectors)''
| '''Vectors'''<br>''(SelectInputVectors)''
|
|
This property indicates the name of the vector array on which to
This property indicates the name of the vector array on which to
operate. The indicated array may be used for scaling and/or
operate. The indicated array may be used for scaling and/or
orienting the tubes. (See the VaryRadius property.)
orienting the tubes. (See the VaryRadius property.)


| 1
| 1
|
|
An array of vectors is required.
An array of vectors is required.


|-
|-
| '''Use Default Normal'''<br>''(UseDefaultNormal)''
| '''Use Default Normal'''<br>''(UseDefaultNormal)''
|
|
If this property is set to 0, and the input contains no vector array, then default ribbon normals will be generated (DefaultNormal property); if a vector array has been set (SelectInputVectors property), the ribbon normals will be set from the specified array. If this property is set to 1, the default normal (DefaultNormal property) will be used, regardless of whether the SelectInputVectors property has been set.
If this property is set to 0, and the input contains no vector array, then default ribbon normals will be generated (DefaultNormal property); if a vector array has been set (SelectInputVectors property), the ribbon normals will be set from the specified array. If this property is set to 1, the default normal (DefaultNormal property) will be used, regardless of whether the SelectInputVectors property has been set.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''Vary Radius'''<br>''(VaryRadius)''
| '''Vary Radius'''<br>''(VaryRadius)''
|
|
The property determines whether/how to vary the radius of the tube. If
The property determines whether/how to vary the radius of the tube. If
varying by scalar (1), the tube radius is based on the point-based
varying by scalar (1), the tube radius is based on the point-based
scalar values in the dataset. If it is varied by vector, the vector
scalar values in the dataset. If it is varied by vector, the vector
magnitude is used in varying the radius.
magnitude is used in varying the radius.


| 0
| 0
|
|
The value must be one of the following: Off (0), By Scalar (1), By Vector (2), By Absolute Scalar (3).
The value must be one of the following: Off (0), By Scalar (1), By Vector (2), By Absolute Scalar (3).


|}
|}
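
As a rough pvpython sketch (property names such as NumberofSides and VaryRadius are assumed from the labels above), a simple line can be turned into a tube like this:

<source lang="python">
from paraview.simple import *

# A straight Line source stands in for streamlines or any other polyline input.
line = Line(Point1=[0.0, 0.0, 0.0], Point2=[1.0, 0.0, 0.0], Resolution=50)

tube = Tube(Input=line)
tube.Radius = 0.02
tube.NumberofSides = 12
tube.Capping = 1
tube.VaryRadius = 'Off'  # or 'By Scalar' / 'By Vector' to modulate the radius

Show(tube)
Render()
</source>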




==Warp By Scalar==
==Warp By Scalar==




This filter moves point coordinates along a vector scaled by a point attribute.  It can be used to produce carpet plots.
This filter moves point coordinates along a vector scaled by a point attribute.  It can be used to produce carpet plots.


The Warp (scalar) filter translates the points of the input data set along a vector by a distance determined by the specified scalars. This filter operates on polygonal, curvilinear, and unstructured grid data sets containing single-component scalar arrays. Because it only changes the positions of the points, the output data set type is the same as that of the input. Any scalars in the input dataset are copied to the output, so the data can be colored by them.<br>
The Warp (scalar) filter translates the points of the input data set along a vector by a distance determined by the specified scalars. This filter operates on polygonal, curvilinear, and unstructured grid data sets containing single-component scalar arrays. Because it only changes the positions of the points, the output data set type is the same as that of the input. Any scalars in the input dataset are copied to the output, so the data can be colored by them.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Warp (scalar) filter.
This property specifies the input to the Warp (scalar) filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point array with 1 components.
The dataset must contain a point array with 1 components.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.


|-
|-
| '''Normal'''<br>''(Normal)''
| '''Normal'''<br>''(Normal)''
|
|
The values of this property specify the direction along which to warp the dataset if any normals contained in the input dataset are not being used for this purpose. (See the UseNormal property.)
The values of this property specify the direction along which to warp the dataset if any normals contained in the input dataset are not being used for this purpose. (See the UseNormal property.)


| 0 0 1
| 0 0 1
| �
| �
|-
|-
| '''Scale Factor'''<br>''(ScaleFactor)''
| '''Scale Factor'''<br>''(ScaleFactor)''
|
|
The scalar value at a given point is multiplied by the value of this property to determine the magnitude of the change vector for that point.
The scalar value at a given point is multiplied by the value of this property to determine the magnitude of the change vector for that point.


| 1
| 1
| �
| �
|-
|-
| '''Scalars'''<br>''(SelectInputScalars)''
| '''Scalars'''<br>''(SelectInputScalars)''
|
|
This property contains the name of the scalar array by which to warp the dataset.
This property contains the name of the scalar array by which to warp the dataset.


| �
| �
|
|
An array of scalars is required.
An array of scalars is required.


|-
|-
| '''Use Normal'''<br>''(UseNormal)''
| '''Use Normal'''<br>''(UseNormal)''
|
|
If point normals are present in the dataset, the value of this property toggles whether to use a single normal value (value = 1) or the normals from the dataset (value = 0).
If point normals are present in the dataset, the value of this property toggles whether to use a single normal value (value = 1) or the normals from the dataset (value = 0).


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|-
|-
| '''XY Plane'''<br>''(XYPlane)''
| '''XY Plane'''<br>''(XYPlane)''
|
|
If the value of this property is 1, then the Z-coordinates from the input are considered to be the scalar values, and the displacement is along the Z axis. This is useful for creating carpet plots.
If the value of this property is 1, then the Z-coordinates from the input are considered to be the scalar values, and the displacement is along the Z axis. This is useful for creating carpet plots.


| 0
| 0
|
|
Only the values 0 and 1 are accepted.
Only the values 0 and 1 are accepted.


|}
|}
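
The carpet-plot use case translates to pvpython roughly as sketched below. Extract Surface is used only to obtain a point set from the Wavelet image source, and the property names are assumed from the labels above.

<source lang="python">
from paraview.simple import *

wavelet = Wavelet()                      # image data with a point array named 'RTData'
surface = ExtractSurface(Input=wavelet)  # polygonal data that the warp filter accepts

warp = WarpByScalar(Input=surface)
warp.Scalars = ['POINTS', 'RTData']
warp.ScaleFactor = 0.01
warp.Normal = [0.0, 0.0, 1.0]  # displacement direction when dataset normals are not used

Show(warp)
Render()
</source>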




==Warp By Vector==
==Warp By Vector==




This filter displaces point coordinates along a vector attribute. It is useful for showing mechanical deformation.
This filter displaces point coordinates along a vector attribute. It is useful for showing mechanical deformation.


The Warp (vector) filter translates the points of the input dataset using a specified vector array. The vector array chosen specifies a vector per point in the input. Each point is translated along its vector by a given scale factor. This filter operates on polygonal, curvilinear, and unstructured grid datasets. Because this filter only changes the positions of the points, the output dataset type is the same as that of the input.<br>
The Warp (vector) filter translates the points of the input dataset using a specified vector array. The vector array chosen specifies a vector per point in the input. Each point is translated along its vector by a given scale factor. This filter operates on polygonal, curvilinear, and unstructured grid datasets. Because this filter only changes the positions of the points, the output dataset type is the same as that of the input.<br>


{| class="PropertiesTable" border="1" cellpadding="5"
{| class="PropertiesTable" border="1" cellpadding="5"
|-
|-
| '''Property'''
| '''Property'''
| '''Description'''
| '''Description'''
| '''Default Value(s)'''
| '''Default Value(s)'''
| '''Restrictions'''
| '''Restrictions'''
|-
|-
| '''Input'''<br>''(Input)''
| '''Input'''<br>''(Input)''
|
|
This property specifies the input to the Warp (vector) filter.
This property specifies the input to the Warp (vector) filter.


| �
| �
|
|
The selected object must be the result of the following: sources (includes readers), filters.
The selected object must be the result of the following: sources (includes readers), filters.




The dataset must contain a point array with 3 components.
The dataset must contain a point array with 3 components.




The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.
The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.


|-
|-
| '''Scale Factor'''<br>''(ScaleFactor)''
| '''Scale Factor'''<br>''(ScaleFactor)''
|
|
Each component of the selected vector array will be multiplied by the value of this property before being used to compute new point coordinates.
Each component of the selected vector array will be multiplied by the value of this property before being used to compute new point coordinates.


| 1
| 1
| �
| �
|-
|-
| '''Vectors'''<br>''(SelectInputVectors)''
| '''Vectors'''<br>''(SelectInputVectors)''
|
|
The value of this property contains the name of the vector array by which to warp the dataset's point coordinates.
The value of this property contains the name of the vector array by which to warp the dataset's point coordinates.


| �
| �
|
|
An array of vectors is required.
An array of vectors is required.


|}
|}
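
A final hedged sketch for the vector case: the file name and the 'displacement' array are placeholders for any point set carrying a three-component point array.

<source lang="python">
from paraview.simple import *

# Placeholder: e.g. a finite-element result with a 'displacement' point vector field.
reader = OpenDataFile('results.vtu')

warp = WarpByVector(Input=reader)
warp.Vectors = ['POINTS', 'displacement']
warp.ScaleFactor = 10.0  # exaggerate the deformation for display

Show(warp)
Render()
</source>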

Revision as of 15:40, 12 October 2010

ParaViewUsersGuide



AMR Contour


Property Description Default Value(s) Restrictions
Capping
(Capping)

If this property is on, the the boundary of the data set is capped.


1

Only the values 0 and 1 are accepted.


Isosurface
(ContourValue)

This property specifies the values at which to compute the isosurface.


1

The value must lie within the range of the selected data array.


Degenerate Cells
(DegenerateCells)

If this property is on, a transition mesh between levels is created.


1

Only the values 0 and 1 are accepted.


Input
(Input)

The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a cell array with 1 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkCompositeDataSet.


Merge Points
(MergePoints)

Use more memory to merge points on the boundaries of blocks.


1

Only the values 0 and 1 are accepted.


Multiprocess Communication
(MultiprocessCommunication)

If this property is off, each process executes independantly.


1

Only the values 0 and 1 are accepted.


Contour By
(SelectInputScalars)

This property specifies the name of the cell scalar array from which the contour filter will compute isolines and/or isosurfaces.


An array of scalars is required.


Skip Ghost Copy
(SkipGhostCopy)

A simple test to see if ghost values are already set properly.


1

Only the values 0 and 1 are accepted.


Triangulate
(Triangulate)

Use triangles instead of quads on capping surfaces.


1

Only the values 0 and 1 are accepted.



AMR Dual Clip

Clip with scalars. Tetrahedra.


Property Description Default Value(s) Restrictions
Degenerate Cells
(DegenerateCells)

If this property is on, a transition mesh between levels is created.


1

Only the values 0 and 1 are accepted.


Input
(Input)

The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a cell array with 1 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkCompositeDataSet.


Merge Points
(MergePoints)

Use more memory to merge points on the boundaries of blocks.


1

Only the values 0 and 1 are accepted.


Multiprocess Communication
(MultiprocessCommunication)

If this property is off, each process executes independantly.


1

Only the values 0 and 1 are accepted.


Select Material Arrays
(SelectMaterialArrays)

This property specifies the cell arrays from which the clip filter will

compute clipped cells.


An array of scalars is required.


Volume Fraction Value
(VolumeFractionSurfaceValue)

This property specifies the values at which to compute the isosurface.


0.1

The value must be greater than or equal to 0 and less than or equal to 1.



Annotate Time Filter

Shows input data time as text annnotation in the view.


The Annotate Time filter can be used to show the data time in a text annotation.


Property Description Default Value(s) Restrictions
Format
(Format)

The value of this property is a format string used to display the input time. The format string is specified using printf style.


Time: %f
Input
(Input)

This property specifies the input dataset for which to display the time.


The selected object must be the result of the following: sources (includes readers), filters.


Scale
(Scale)

The factor by which the input time is scaled.


1
Shift
(Shift)

The amount of time the input is shifted (after scaling).


0


Append Attributes

Copies geometry from first input. Puts all of the arrays into the output.


The Append Attributes filter takes multiple input data sets with the same geometry and merges their point and cell attributes to produce a single output containing all the point and cell attributes of the inputs. Any inputs without the same number of points and cells as the first input are ignored. The input data sets must already be collected together, either as a result of a reader that loads multiple parts (e.g., EnSight reader) or because the Group Parts filter has been run to form a collection of data sets.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Append Attributes filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Append Datasets

Takes an input of multiple datasets and output has only one unstructured grid.


The Append Datasets filter operates on multiple data sets of any type (polygonal, structured, etc.). It merges their geometry into a single data set. Only the point and cell attributes that all of the input data sets have in common will appear in the output. The input data sets must already be collected together, either as a result of a reader that loads multiple parts (e.g., EnSight reader) or because the Group Parts filter has been run to form a collection of data sets.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the datasets to be merged into a single dataset by the Append Datasets filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Append Geometry

Takes an input of multiple poly data parts and output has only one part.


The Append Geometry filter operates on multiple polygonal data sets. It merges their geometry into a single data set. Only the point and cell attributes that all of the input data sets have in common will appear in the output.


Property Description Default Value(s) Restrictions
Input
(Input)

Set the input to the Append Geometry filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.



Block Scalars

The Level Scalars filter uses colors to show levels of a multiblock dataset.


The Level Scalars filter uses colors to show levels of a multiblock dataset.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Level Scalars filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkMultiBlockDataSet.



Calculator

Compute new attribute arrays as function of existing arrays.


The Calculator filter computes a new data array or new point coordinates as a function of existing scalar or vector arrays. If point-centered arrays are used in the computation of a new data array, the resulting array will also be point-centered. Similarly, computations using cell-centered arrays will produce a new cell-centered array. If the function is computing point coordinates, the result of the function must be a three-component vector. The Calculator interface operates similarly to a scientific calculator. In creating the function to evaluate, the standard order of operations applies.

Each of the calculator functions is described below. Unless otherwise noted, enclose the operand in parentheses using the ( and ) buttons.

Clear: Erase the current function (displayed in the read-only text box above the calculator buttons).

/: Divide one scalar by another. The operands for this function are not required to be enclosed in parentheses.

  • Multiply two scalars, or multiply a vector by a scalar (scalar multiple). The operands for this function are not required to be enclosed in parentheses.

-: Negate a scalar or vector (unary minus), or subtract one scalar or vector from another. The operands for this function are not required to be enclosed in parentheses.

+: Add two scalars or two vectors. The operands for this function are not required to be enclosed in parentheses.

sin: Compute the sine of a scalar.

cos: Compute the cosine of a scalar.

tan: Compute the tangent of a scalar.

asin: Compute the arcsine of a scalar.

acos: Compute the arccosine of a scalar.

atan: Compute the arctangent of a scalar.

sinh: Compute the hyperbolic sine of a scalar.

cosh: Compute the hyperbolic cosine of a scalar.

tanh: Compute the hyperbolic tangent of a scalar.

min: Compute minimum of two scalars.

max: Compute maximum of two scalars.

x^y: Raise one scalar to the power of another scalar. The operands for this function are not required to be enclosed in parentheses.

sqrt: Compute the square root of a scalar.

e^x: Raise e to the power of a scalar.

log: Compute the logarithm of a scalar (deprecated. same as log10).

log10: Compute the logarithm of a scalar to the base 10.

ln: Compute the logarithm of a scalar to the base 'e'.

ceil: Compute the ceiling of a scalar.

floor: Compute the floor of a scalar.

abs: Compute the absolute value of a scalar.

v1.v2: Compute the dot product of two vectors. The operands for this function are not required to be enclosed in parentheses.

cross: Compute cross product of two vectors.

mag: Compute the magnitude of a vector.

norm: Normalize a vector.

The operands are described below.

The digits 0 - 9 and the decimal point are used to enter constant scalar values.

iHat, jHat, and kHat are vector constants representing unit vectors in the X, Y, and Z directions, respectively.

The scalars menu lists the names of the scalar arrays and the components of the vector arrays of either the point-centered or cell-centered data. The vectors menu lists the names of the point-centered or cell-centered vector arrays. The function will be computed for each point (or cell) using the scalar or vector value of the array at that point (or cell).

The filter operates on any type of data set, but the input data set must have at least one scalar or vector array. The arrays can be either point-centered or cell-centered. The Calculator filter's output is of the same data set type as the input.


Property Description Default Value(s) Restrictions
Attribute Mode
(AttributeMode)

This property determines whether the computation is to be performed on point-centered or cell-centered data.


0

The value must be one of the following: point_data (1), cell_data (2), field_data (5).


Coordinate Results
(CoordinateResults)

The value of this property determines whether the results of this computation should be used as point coordinates or as a new array.


0

Only the values 0 and 1 are accepted.


Function
(Function)

This property contains the equation for computing the new array.


Input
(Input)

This property specifies the input dataset to the Calculator filter. The scalar and vector variables may be chosen from this dataset's arrays.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Replace Invalid Results
(ReplaceInvalidValues)

This property determines whether invalid values in the computation will be replaced with a specific value. (See the ReplacementValue property.)


1

Only the values 0 and 1 are accepted.


Replacement Value
(ReplacementValue)

If invalid values in the computation are to be replaced with another value, this property contains that value.


0
Result Array Name
(ResultArrayName)

This property contains the name for the output array containing the result of this computation.


Result


Cell Centers

Create a point (no geometry) at the center of each input cell.


The Cell Centers filter places a point at the center of each cell in the input data set. The center computed is the parametric center of the cell, not necessarily the geometric or bounding box center. The cell attributes of the input will be associated with these newly created points of the output. You have the option of creating a vertex cell per point in the outpuut. This is useful because vertex cells are rendered, but points are not. The points themselves could be used for placing glyphs (using the Glyph filter). The Cell Centers filter takes any type of data set as input and produces a polygonal data set as output.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Cell Centers filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Vertex Cells
(VertexCells)

If set to 1, a vertex cell will be generated per point in the output. Otherwise only points will be generated.


0

Only the values 0 and 1 are accepted.



Cell Data to Point Data

Create point attributes by averaging cell attributes.


The Cell Data to Point Data filter averages the values of the cell attributes of the cells surrounding a point to compute point attributes. The Cell Data to Point Data filter operates on any type of data set, and the output data set is of the same type as the input.
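
A minimal pvpython sketch, assuming the paraview.simple scripting module; here the Point Data to Cell Data filter is used only to manufacture a cell-centered array for the example, and the generated function names mirror the filter labels but may differ between ParaView versions:

  from paraview.simple import *

  wavelet = Wavelet()                              # provides the RTData point array
  as_cells = PointDatatoCellData(Input=wavelet)    # make a cell-centered copy for the example
  as_points = CellDatatoPointData(Input=as_cells)  # average cell values back to the points
  as_points.PassCellData = 1                       # also keep the cell array in the output
  as_points.UpdatePipeline()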


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Cell Data to Point Data filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Pass Cell Data
(PassCellData)

If this property is set to 1, then the input cell data is passed through to the output; otherwise, only the generated point data will be available in the output.


0

Only the values 0 and 1 are accepted.



Clean

Merge coincident points if they do not meet a feature edge criteria.


The Clean filter takes polygonal data as input and generates polygonal data as output. This filter can merge duplicate points, remove unused points, and transform degenerate cells into their appropriate forms (e.g., a triangle is converted into a line if two of its points are merged).
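
A minimal pvpython sketch, assuming the paraview.simple scripting module (the Sphere source stands in for any polygonal input):

  from paraview.simple import *

  surface = Sphere()
  clean = Clean(Input=surface)
  clean.PointMerging = 1          # merge coincident points
  clean.ToleranceIsAbsolute = 0   # interpret Tolerance as a fraction of the bounding box diagonal
  clean.Tolerance = 0.0           # merge only exactly coincident points
  clean.UpdatePipeline()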


Property Description Default Value(s) Restrictions
Absolute Tolerance
(AbsoluteTolerance)

If merging nearby points (see PointMerging property) and using absolute tolerance (see ToleranceIsAbsolute property), this property specifies the tolerance for performing merging in the spatial units of the input data set.


1

The value must be greater than or equal to 0.


Convert Lines To Points
(ConvertLinesToPoints)

If this property is set to 1, degenerate lines (a "line" whose endpoints are at the same spatial location) will be converted to points.


1

Only the values 0 and 1 are accepted.


Convert Polys To Lines
(ConvertPolysToLines)

If this property is set to 1, degenerate polygons (a "polygon" with only two distinct point coordinates) will be converted to lines.


1

Only the values 0 and 1 are accepted.


Convert Strips To Polys
(ConvertStripsToPolys)

If this property is set to 1, degenerate triangle strips (a triangle "strip" containing only one triangle) will be converted to triangles.


1

Only the values 0 and 1 are accepted.


Input
(Input)

Set the input to the Clean filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Piece Invariant
(PieceInvariant)

If this property is set to 1, the whole data set will be processed at once so that cleaning the data set always produces the same results. If it is set to 0, the data set can be processed one piece at a time, so it is not necessary for the entire data set to fit into memory; however the results are not guaranteed to be the same as they would be if the Piece invariant option was on. Setting this option to 0 may produce seams in the output dataset when ParaView is run in parallel.


1

Only the values 0 and 1 are accepted.


Point Merging
(PointMerging)

If this property is set to 1, then points will be merged if they are within the specified Tolerance or AbsoluteTolerance (see the Tolerance and AbsoluteTolerance properties), depending on the value of the ToleranceIsAbsolute property. If this property is set to 0, points will not be merged.


1

Only the values 0 and 1 are accepted.


Tolerance
(Tolerance)

If merging nearby points (see PointMerging property) and not using absolute tolerance (see ToleranceIsAbsolute property), this property specifies the tolerance for performing merging as a fraction of the length of the diagonal of the bounding box of the input data set.


0

The value must be greater than or equal to 0 and less than or equal to 1.


Tolerance Is Absolute
(ToleranceIsAbsolute)

This property determines whether to use absolute or relative (a percentage of the bounding box) tolerance when performing point merging.


0

Only the values 0 and 1 are accepted.



Clean to Grid

This filter merges points and converts the data set to unstructured grid.


The Clean to Grid filter merges points that are exactly coincident. It also converts the data set to an unstructured grid. You may wish to do this if you want to apply a filter to your data set that is available for unstructured grids but not for the initial type of your data set (e.g., applying warp vector to volumetric data). The Clean to Grid filter operates on any type of data set.
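
A minimal pvpython sketch, assuming the paraview.simple scripting module; the generated function name (CleantoGrid) mirrors the filter label but may differ between ParaView versions:

  from paraview.simple import *

  surface = Sphere()                  # any data set type is accepted
  grid = CleantoGrid(Input=surface)   # output is an unstructured grid with merged points
  grid.UpdatePipeline()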


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Clean to Grid filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Clip

Clip with an implicit plane. Clipping does not reduce the dimensionality of the data set. The output data type of this filter is always an unstructured grid.


The Clip filter cuts away a portion of the input data set using an implicit plane. This filter operates on all types of data sets, and it returns unstructured grid data on output.
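
A minimal pvpython sketch, assuming the paraview.simple scripting module (the Wavelet source is only a stand-in; property and enumeration names may vary between ParaView versions):

  from paraview.simple import *

  wavelet = Wavelet()
  clip = Clip(Input=wavelet)
  clip.ClipType = 'Plane'                  # implicit function used for clipping
  clip.ClipType.Origin = [0.0, 0.0, 0.0]
  clip.ClipType.Normal = [1.0, 0.0, 0.0]
  clip.InsideOut = 0                       # 0 keeps the portion inside the clip function
  clip.UpdatePipeline()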


Property Description Default Value(s) Restrictions
Clip Type
(ClipFunction)

This property specifies the parameters of the clip function (an implicit plane) used to clip the dataset.


The value must be set to one of the following: Plane, Box, Sphere, Scalar.


Input
(Input)

This property specifies the dataset on which the Clip filter will operate.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Inside Out
(InsideOut)

If this property is set to 0, the clip filter will return that portion of the dataset that lies within the clip function. If set to 1, the portions of the dataset that lie outside the clip function will be returned instead.


0

Only the values 0 and 1 are accepted.


Scalars
(SelectInputScalars)

If clipping with scalars, this property specifies the name of the scalar array on which to perform the clip operation.


An array of scalars is required.


Valid array names will be chosen from point and cell data.


Use Value As Offset
(UseValueAsOffset)

If UseValueAsOffset is true, Value is used as an offset parameter to the implicit function. Otherwise, Value is used only when clipping using a scalar array.


0

Only the values 0 and 1 are accepted.


Value
(Value)

If clipping with scalars, this property sets the scalar value about which to clip the dataset based on the scalar array chosen. (See SelectInputScalars.) If clipping with a clip function, this property specifies an offset from the clip function to use in the clipping operation. Neither functionality is currently available in ParaView's user interface.


0

The value must lie within the range of the selected data array.



Clip Closed Surface

Clip a polygonal dataset with a plane to produce closed surfaces


This clip filter cuts away a portion of the input polygonal dataset using a plane to generate a new polygonal dataset.


Property Description Default Value(s) Restrictions
Base Color
(BaseColor)

Specify the color for the faces from the input.


0.1 0.1 1

The value must be greater than or equal to (0, 0, 0) and less than or equal to (1, 1, 1).


Clip Color
(ClipColor)

Specify the color for the capping faces (generated on the clipping interface).


1 0.11 0.1

The value must be greater than or equal to (0, 0, 0) and less than or equal to (1, 1, 1).


Clipping Plane
(ClippingPlane)

This property specifies the parameters of the clipping plane used to clip the polygonal data.


The value must be set to one of the following: Plane.


Generate Cell Origins
(GenerateColorScalars)

Generate (cell) data for coloring purposes such that the newly generated cells (including capping faces and clipping outlines) can be distinguished from the input cells.


0

Only the values 0 and 1 are accepted.


Generate Faces
(GenerateFaces)

Generate polygonal faces in the output.


1

Only the values 0 and 1 are accepted.


Generate Outline
(GenerateOutline)

Generate clipping outlines in the output wherever an input face is cut by the clipping plane.


0

Only the values 0 and 1 are accepted.


Input
(Input)

This property specifies the dataset on which the Clip filter will operate.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Inside Out
(InsideOut)

If this flag is turned off, the clipper will return the portion of the data that lies within the clipping plane. Otherwise, the clipper will return the portion of the data that lies outside the clipping plane.


0

Only the values 0 and 1 are accepted.


Clipping Tolerance
(Tolerance)

Specify the tolerance for creating new points. A small value might introduce degenerate triangles.


1e-06


Compute Derivatives

This filter computes derivatives of scalars and vectors.


CellDerivatives is a filter that computes derivatives of scalars and vectors at the center of cells. You can choose to generate different output including the scalar gradient (a vector), computed tensor vorticity (a vector), gradient of input vectors (a tensor), and strain matrix of the input vectors (a tensor); or you may choose to pass data through to the output.
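
A minimal pvpython sketch, assuming the paraview.simple scripting module (the Wavelet source supplies the RTData scalars; enumeration names are taken from the table below and may vary between ParaView versions):

  from paraview.simple import *

  wavelet = Wavelet()
  deriv = ComputeDerivatives(Input=wavelet)
  deriv.Scalars = ['POINTS', 'RTData']        # scalar array to differentiate
  deriv.OutputVectorType = 'Scalar Gradient'  # produce the gradient as cell-centered vectors
  deriv.OutputTensorType = 'Nothing'          # skip tensor output
  deriv.UpdatePipeline()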


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Output Tensor Type
(OutputTensorType)

This property controls how the filter works to generate tensor cell data. You can choose to compute the gradient of the input vectors, or compute the strain tensor of the vector gradient tensor. By default, the filter will take the gradient of the vector data to construct a tensor.


1

The value must be one of the following: Nothing (0), Vector Gradient (1), Strain (2).


Output Vector Type
(OutputVectorType)

This property controls how the filter works to generate vector cell data. You can choose to compute the gradient of the input scalars, or extract the vorticity of the computed vector gradient tensor. By default, the filter will take the gradient of the input scalar data.


1

The value must be one of the following: Nothing (0), Scalar Gradient (1), Vorticity (2).


Scalars
(SelectInputScalars)

This property indicates the name of the scalar array to differentiate.


An array of scalars is required.


Vectors
(SelectInputVectors)

This property indicates the name of the vector array to differentiate.


1

An array of vectors is required.



Connectivity

Mark connected components with integer point attribute array.


The Connectivity filter assigns a region id to connected components of the input data set. (The region id is assigned as a point scalar value.) This filter takes any data set type as input and produces unstructured grid output.
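
A minimal pvpython sketch, assuming the paraview.simple scripting module (a real multi-component data set would normally be used in place of the Sphere source):

  from paraview.simple import *

  sphere = Sphere()
  conn = Connectivity(Input=sphere)
  conn.ExtractionMode = 'Extract All Regions'
  conn.ColorRegions = 1          # assign a region id scalar to the points of each region
  conn.UpdatePipeline()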


Property Description Default Value(s) Restrictions
Color Regions
(ColorRegions)

Controls the coloring of the connected regions.


1

Only the values 0 and 1 are accepted.


Extraction Mode
(ExtractionMode)

Controls the extraction of connected surfaces.


5

The value must be one of the following: Extract Point Seeded Regions (1), Extract Cell Seeded Regions (2), Extract Specified Regions (3), Extract Largest Region (4), Extract All Regions (5), Extract Closest Point Region (6).


Input
(Input)

This property specifies the input to the Connectivity filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Contingency Statistics

Compute a statistical model of a dataset and/or assess the dataset with a statistical model.


This filter either computes a statistical model of a dataset or takes such a model as its second input. Then, the model (however it is obtained) may optionally be used to assess the input dataset.

This filter computes contingency tables between pairs of attributes. This result is a tabular bivariate probability distribution which serves as a Bayesian-style prior model. Data is assessed by computing

  • the probability of observing both variables simultaneously;
  • the probability of each variable conditioned on the other (the two values need not be identical); and
  • the pointwise mutual information (PMI).


Finally, the summary statistics include the information entropy of the observations.


Property Description Default Value(s) Restrictions
Attribute Mode
(AttributeMode)

Specify which type of field data the arrays will be drawn from.


0

Valid array names will be chosen from point and cell data.


Input
(Input)

The input to the filter. Arrays from this dataset will be used for computing statistics and/or assessed by a statistical model.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkStructuredGrid, vtkPolyData, vtkUnstructuredGrid, vtkTable, vtkGraph.


Model Input
(ModelInput)

A previously-calculated model with which to assess a separate dataset. This input is optional.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkTable, vtkMultiBlockDataSet.


Variables of Interest
(SelectArrays)

Choose arrays whose entries will be used to form observations for statistical analysis.


An array of scalars is required.


Task
(Task)

Specify the task to be performed: modeling and/or assessment.

  1. "Statistics of all the data," creates an output table (or tables) summarizing the entire input dataset;
  1. "Model a subset of the data," creates an output table (or tables) summarizing a randomly-chosen subset of the input dataset;
  1. "Assess the data with a model," adds attributes to the first input dataset using a model provided on the second input port; and
  1. "Model and assess the same data," is really just operations 2 and 3 above applied to the same input dataset. The model is first trained using a fraction of the input data and then the entire dataset is assessed using that model.


When the task includes creating a model (i.e., tasks 2 and 4), you may adjust the fraction of the input dataset used for training. You should avoid using a large fraction of the input data for training as you will then not be able to detect overfitting. The Training Fraction setting will be ignored for tasks 1 and 3.


3

The value must be one of the following: Statistics of all the data (0), Model a subset of the data (1), Assess the data with a model (2), Model and assess the same data (3).


Training Fraction
(TrainingFraction)

Specify the fraction of values from the input dataset to be used for model fitting. The exact set of values is chosen at random from the dataset.


0.1

The value must be greater than or equal to 0 and less than or equal to 1.



Contour

Generate isolines or isosurfaces using point scalars.


The Contour filter computes isolines or isosurfaces using a selected point-centered scalar array. The Contour filter operates on any type of data set, but the input is required to have at least one point-centered scalar (single-component) array. The output of this filter is polygonal.
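
A minimal pvpython sketch, assuming the paraview.simple scripting module (the Wavelet source provides the point-centered RTData scalars used here):

  from paraview.simple import *

  wavelet = Wavelet()
  contour = Contour(Input=wavelet)
  contour.ContourBy = ['POINTS', 'RTData']
  contour.Isosurfaces = [100.0, 150.0, 200.0]   # three isosurface values
  contour.ComputeNormals = 1
  contour.UpdatePipeline()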


Property Description Default Value(s) Restrictions
Compute Gradients
(ComputeGradients)

If this property is set to 1, a scalar array containing a gradient value at each point in the isosurface or isoline will be created by this filter; otherwise an array of gradients will not be computed. This operation is fairly expensive both in terms of computation time and memory required, so if the output dataset produced by the contour filter will be processed by filters that modify the dataset's topology or geometry, it may be wise to set the value of this property to 0. Note that if ComputeNormals is set to 1, then gradients will have to be calculated, but they will only be stored in the output dataset if ComputeGradients is also set to 1.


0

Only the values 0 and 1 are accepted.


Compute Normals
(ComputeNormals)

If this property is set to 1, a scalar array containing a normal value at each point in the isosurface or isoline will be created by the contour filter; otherwise an array of normals will not be computed. This operation is fairly expensive both in terms of computation time and memory required, so if the output dataset produced by the contour filter will be processed by filters that modify the dataset's topology or geometry, it may be wise to set the value of this property to 0.

Select whether to compute normals.


1

Only the values 0 and 1 are accepted.


Compute Scalars
(ComputeScalars)

If this property is set to 1, an array of scalars (containing the contour value) will be added to the output dataset. If set to 0, the output will not contain this array.


0

Only the values 0 and 1 are accepted.


Isosurfaces
(ContourValues)

This property specifies the values at which to compute isosurfaces/isolines and also the number of such values.


The value must lie within the range of the selected data array.


Input
(Input)

This property specifies the input dataset to be used by the contour filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array with 1 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Point Merge Method
(Locator)

This property specifies an incremental point locator for merging duplicate / coincident points.


The selected object must be the result of the following: incremental_point_locators.


The value must be set to one of the following: MergePoints, IncrementalOctreeMergePoints, NonMergingPointLocator.


Contour By
(SelectInputScalars)

This property specifies the name of the scalar array from which the contour filter will compute isolines and/or isosurfaces.


An array of scalars is required.


Valid array names will be chosen from point and cell data.



Cosmology FOF Halo Finder

Sorry, no help is currently available.



Property Description Default Value(s) Restrictions
bb (linking length/distance)
(BB)

Linking length measured in units of interparticle spacing and is dimensionless. Used to link particles into halos for the friend-of-a-friend algorithm.


0.2

The value must be greater than or equal to 0.


Compute the most bound particle for halos
(ComputeMostBoundParticle)

If checked, the most bound particle will be calculated. This can be very slow.


0

Only the values 0 and 1 are accepted.


Compute the most connected particle for halos
(ComputeMostConnectedParticle)

If checked, the most connected particle will be calculated. This can be very slow.


0

Only the values 0 and 1 are accepted.


Copy halo catalog information to original particles
(CopyHaloDataToParticles)

If checked, the halo catalog information will be copied to the original particles as well.


1

Only the values 0 and 1 are accepted.


Halo position for 3D visualization
(HaloPositionType)

This sets the position for the halo catalog particles (second output) in 3D space for visualization. Input particle positions (first output) will be unaltered by this. MBP and MCP for particle positions can potentially take a very long time to calculate.


0

The value must be one of the following: Average (0), Center of Mass (1), Most Bound Particle (2), Most Connected Particle (3).


Input
(Input)

The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkUnstructuredGrid.


np (number of seeded particles in one dimension, i.e., total particles = np^3)
(NP)

Number of seeded particles in one dimension. Therefore, the total number of simulation particles is np^3 (np cubed).


256

The value must be greater than or equal to 0.


overlap (shared point/ghost cell gap distance)
(Overlap)

The space in rL units to extend processor particle ownership for ghost particles/cells. Needed for correct halo calculation when halos cross processor boundaries in parallel computation.


5

The value must be greater than or equal to 0.


pmin (minimum particle threshold for a halo)
(PMin)

Minimum number of particles (threshold) needed before a group is called a halo.


10

The value must be greater than or equal to 1.


rL (physical box side length)
(RL)

The box side length used to wrap particles around if they exceed rL (or are less than 0) in any dimension (only positive positions are allowed in the input; otherwise they are wrapped around).


90.1408

The value must be greater than or equal to 0.



Curvature

This filter will compute the Gaussian or mean curvature of the mesh at each point.


The Curvature filter computes the curvature at each point in a polygonal data set. This filter supports both Gaussian and mean curvatures.


The type can be selected from the Curvature type menu button.


Property Description Default Value(s) Restrictions
Curvature Type
(CurvatureType)

This property specifies which type of curvature to compute.


0

The value must be one of the following: Gaussian (0), Mean (1).


Input
(Input)

This property specifies the input to the Curvature filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Invert Mean Curvature
(InvertMeanCurvature)

If this property is set to 1, the mean curvature calculation will be inverted. This is useful for meshes with inward-pointing normals.


0

Only the values 0 and 1 are accepted.



D3

Repartition a data set into load-balanced spatially convex regions. Create ghost cells if requested.


The D3 filter is available when ParaView is run in parallel. It operates on any type of data set to evenly divide it across the processors into spatially contiguous regions. The output of this filter is of type unstructured grid.


Property Description Default Value(s) Restrictions
Boundary Mode
(BoundaryMode)

This property determines how cells that lie on processor boundaries are handled. The "Assign cells uniquely" option assigns each boundary cell to exactly one process, which is useful for isosurfacing. Selecting "Duplicate cells" causes the cells on the boundaries to be copied to each process that shares that boundary. The "Divide cells" option breaks cells across process boundary lines so that pieces of the cell lie in different processes. This option is useful for volume rendering.


0

The value must be one of the following: Assign cells uniquely (0), Duplicate cells (1), Divide cells (2).


Input
(Input)

This property specifies the input to the D3 filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Minimal Memory
(UseMinimalMemory)

If this property is set to 1, the D3 filter's communication routines will use less memory than they would without this restriction.


0

Only the values 0 and 1 are accepted.



Decimate

Simplify a polygonal model using an adaptive edge collapse algorithm. This filter works with triangles only.


The Decimate filter reduces the number of triangles in a polygonal data set. Because this filter only operates on triangles, first run the Triangulate filter on a dataset that contains polygons other than triangles.
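
A minimal pvpython sketch, assuming the paraview.simple scripting module (a finely tessellated Sphere stands in for a real triangle mesh):

  from paraview.simple import *

  surface = Sphere(ThetaResolution=64, PhiResolution=64)
  decimate = Decimate(Input=surface)
  decimate.TargetReduction = 0.9    # aim for roughly 10% of the input triangle count
  decimate.PreserveTopology = 0
  decimate.UpdatePipeline()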


Property Description Default Value(s) Restrictions
Boundary Vertex Deletion
(BoundaryVertexDeletion)

If this property is set to 1, then vertices on the boundary of the dataset can be removed. Setting the value of this property to 0 preserves the boundary of the dataset, but it may cause the filter not to reach its reduction target.


1

Only the values 0 and 1 are accepted.


Feature Angle
(FeatureAngle)

The value of this property is used in determining where the data set may be split. If the angle between two adjacent triangles is greater than or equal to the FeatureAngle value, then their boundary is considered a feature edge where the dataset can be split.


15

The value must be greater than or equal to 0 and less than or equal to 180.


Input
(Input)

This property specifies the input to the Decimate filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Preserve Topology
(PreserveTopology)

If this property is set to 1, decimation will not split the dataset or produce holes, but it may keep the filter from reaching the reduction target. If it is set to 0, better reduction can occur (reaching the reduction target), but holes in the model may be produced.


0

Only the values 0 and 1 are accepted.


Target Reduction
(TargetReduction)

This property specifies the desired reduction in the total number of polygons in the output dataset. For example, if the TargetReduction value is 0.9, the Decimate filter will attempt to produce an output dataset that is 10% the size of the input.


0.9

The value must be greater than or equal to 0 and less than or equal to 1.



Delaunay 2D

Create 2D Delaunay triangulation of input points. It expects a vtkPointSet as input and produces vtkPolyData as output. The points are expected to be in a mostly planar distribution.


Delaunay2D is a filter that constructs a 2D Delaunay triangulation from a list of input points. These points may be represented by any dataset of type vtkPointSet and subclasses. The output of the filter is a polygonal dataset containing a triangle mesh.


The 2D Delaunay triangulation is defined as the triangulation that satisfies the Delaunay criterion for n-dimensional simplexes (in this case n=2 and the simplexes are triangles). This criterion states that a circumsphere of each simplex in a triangulation contains only the n+1 defining points of the simplex. In two dimensions, this translates into an optimal triangulation. That is, the maximum interior angle of any triangle is less than or equal to that of any possible triangulation.


Delaunay triangulations are used to build topological structures from unorganized (or unstructured) points. The input to this filter is a list of points specified in 3D, even though the triangulation is 2D. Thus the triangulation is constructed in the x-y plane, and the z coordinate is ignored (although carried through to the output). You can use the ProjectionPlaneMode option to compute the best-fitting plane to the set of points, project the points onto that plane, and then perform the triangulation using their projected positions.


The Delaunay triangulation can be numerically sensitive in some cases. To prevent problems, try to avoid injecting points that will result in triangles with bad aspect ratios (1000:1 or greater). In practice this means inserting points that are "widely dispersed", which enables a smooth transition of triangle sizes throughout the mesh. (You may even want to add extra points to create a better point distribution.) If numerical problems are present, you will see a warning message to this effect at the end of the triangulation process.


Warning:

Points arranged on a regular lattice (termed degenerate cases) can be triangulated in more than one way (at least according to the Delaunay criterion). The choice of triangulation (as implemented by this algorithm) depends on the order of the input points. The first three points will form a triangle; other degenerate points will not break this triangle.


Points that are coincident (or nearly so) may be discarded by the algorithm. This is because the Delaunay triangulation requires unique input points. The output of the Delaunay triangulation is supposedly a convex hull. In certain cases this implementation may not generate the convex hull.
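
A minimal pvpython sketch, assuming the paraview.simple scripting module (the Point Source is only a convenient way to get a vtkPointSet; the enumeration text for ProjectionPlaneMode is taken from the table below and may vary between ParaView versions):

  from paraview.simple import *

  pts = PointSource(NumberOfPoints=200, Radius=1.0)
  tri = Delaunay2D(Input=pts)
  tri.ProjectionPlaneMode = 'Best-Fitting Plane'   # project the points before triangulating
  tri.Tolerance = 1e-05
  tri.UpdatePipeline()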


Property Description Default Value(s) Restrictions
Alpha
(Alpha)

The value of this property controls the output of this filter. For a non-zero alpha value, only edges or triangles contained within a sphere centered at mesh vertices will be output. Otherwise, only triangles will be output.


0

The value must be greater than or equal to 0.


Bounding Triangulation
(BoundingTriangulation)

If this property is set to 1, bounding triangulation points (and associated triangles) are included in the output. These are introduced as an initial triangulation to begin the triangulation process. This feature is nice for debugging output.


0

Only the values 0 and 1 are accepted.


Input
(Input)

This property specifies the input dataset to the Delaunay 2D filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.


Offset
(Offset)

This property is a multiplier to control the size of the initial, bounding Delaunay triangulation.


1

The value must be greater than or equal to 0.75.


Projection Plane Mode
(ProjectionPlaneMode)

This property determines type of projection plane to use in performing the triangulation.


0

The value must be one of the following: XY Plane (0), Best-Fitting Plane (2).


Tolerance
(Tolerance)

This property specifies a tolerance to control discarding of closely spaced points. This tolerance is specified as a fraction of the diagonal length of the bounding box of the points.


1e-05

The value must be greater than or equal to 0 and less than or equal to 1.



Delaunay 3D

Create a 3D Delaunay triangulation of input points. It expects a vtkPointSet as input and produces vtkUnstructuredGrid as output.


Delaunay3D is a filter that constructs a 3D Delaunay triangulation from a list of input points. These points may be represented by any dataset of type vtkPointSet and subclasses. The output of the filter is an unstructured grid dataset. Usually the output is a tetrahedral mesh, but if a non-zero alpha distance value is specified (called the "alpha" value), then only tetrahedra, triangles, edges, and vertices lying within the alpha radius are output. In other words, non-zero alpha values may result in arbitrary combinations of tetrahedra, triangles, lines, and vertices. (The notion of alpha value is derived from Edelsbrunner's work on "alpha shapes".)


The 3D Delaunay triangulation is defined as the triangulation that satisfies the Delaunay criterion for n-dimensional simplexes (in this case n=3 and the simplexes are tetrahedra). This criterion states that a circumsphere of each simplex in a triangulation contains only the n+1 defining points of the simplex. (See text for more information.) While in two dimensions this translates into an "optimal" triangulation, this is not true in 3D, since a measurement for optimality in 3D is not agreed on.


Delaunay triangulations are used to build topological structures from unorganized (or unstructured) points. The input to this filter is a list of points specified in 3D. (If you wish to create 2D triangulations see Delaunay2D.) The output is an unstructured grid.


The Delaunay triangulation can be numerically sensitive. To prevent problems, try to avoid injecting points that will result in triangles with bad aspect ratios (1000:1 or greater). In practice this means inserting points that are "widely dispersed", which enables a smooth transition of triangle sizes throughout the mesh. (You may even want to add extra points to create a better point distribution.) If numerical problems are present, you will see a warning message to this effect at the end of the triangulation process.


Warning:

Points arranged on a regular lattice (termed degenerate cases) can be triangulated in more than one way (at least according to the Delaunay criterion). The choice of triangulation (as implemented by this algorithm) depends on the order of the input points. The first four points will form a tetrahedron; other degenerate points (relative to this initial tetrahedron) will not break it.


Points that are coincident (or nearly so) may be discarded by the algorithm. This is because the Delaunay triangulation requires unique input points. You can control the definition of coincidence with the "Tolerance" instance variable.


The output of the Delaunay triangulation is supposedly a convex hull. In certain cases this implementation may not generate the convex hull. This behavior can be controlled by the Offset instance variable. Offset is a multiplier used to control the size of the initial triangulation. The larger the offset value, the more likely you will generate a convex hull; and the more likely you are to see numerical problems.


The implementation of this algorithm varies from the 2D Delaunay algorithm (i.e., Delaunay2D) in an important way. When points are injected into the triangulation, the search for the enclosing tetrahedron is quite different. In the 3D case, the closest previously inserted point is found, and then the connected tetrahedra are searched to find the containing one. (In 2D, a "walk" towards the enclosing triangle is performed.) If the triangulation is Delaunay, then an enclosing tetrahedron will be found. However, in degenerate cases an enclosing tetrahedron may not be found and the point will be rejected.


Property Description Default Value(s) Restrictions
Alpha
(Alpha)

This property specifies the alpha (or distance) value to control the output of this filter. For a non-zero alpha value, only edges, faces, or tetra contained within the circumsphere (of radius alpha) will be output. Otherwise, only tetrahedra will be output.


0

The value must be greater than or equal to 0.


Bounding Triangulation
(BoundingTriangulation)

This boolean controls whether bounding triangulation points (and associated triangles) are included in the output. (These are introduced as an initial triangulation to begin the triangulation process. This feature is nice for debugging output.)


0

Only the values 0 and 1 are accepted.


Input
(Input)

This property specifies the input dataset to the Delaunay 3D filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.


Offset
(Offset)

This property specifies a multiplier to control the size of the initial, bounding Delaunay triangulation.


2.5

The value must be greater than or equal to 2.5.


Tolerance
(Tolerance)

This property specifies a tolerance to control discarding of closely spaced points. This tolerance is specified as a fraction of the diagonal length of the bounding box of the points.


0.001

The value must be greater than or equal to 0 and less than or equal to 1.



Descriptive Statistics

Compute a statistical model of a dataset and/or assess the dataset with a statistical model.


This filter either computes a statistical model of a dataset or takes such a model as its second input. Then, the model (however it is obtained) may optionally be used to assess the input dataset.


This filter computes the min, max, mean, raw moments M2 through M4, standard deviation, skewness, and kurtosis for each array you select.



The model is simply a univariate Gaussian distribution with the mean and standard deviation provided. Data is assessed using this model by detrending the data (i.e., subtracting the mean) and then dividing by the standard deviation. Thus the assessment is an array whose entries are the number of standard deviations from the mean that each input point lies.


Property Description Default Value(s) Restrictions
Attribute Mode
(AttributeMode)

Specify which type of field data the arrays will be drawn from.


0

Valid array names will be chosen from point and cell data.


Input
(Input)

The input to the filter. Arrays from this dataset will be used for computing statistics and/or assessed by a statistical model.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkStructuredGrid, vtkPolyData, vtkUnstructuredGrid, vtkTable, vtkGraph.


Model Input
(ModelInput)

A previously-calculated model with which to assess a separate dataset. This input is optional.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkTable, vtkMultiBlockDataSet.


Variables of Interest
(SelectArrays)

Choose arrays whose entries will be used to form observations for statistical analysis.


An array of scalars is required.


Deviations should be
(SignedDeviations)

Should the assessed values be signed deviations or unsigned?


0

The value must be one of the following: Unsigned (0), Signed (1).


Task
(Task)

Specify the task to be performed: modeling and/or assessment.

  1. "Statistics of all the data," creates an output table (or tables) summarizing the entire input dataset;
  1. "Model a subset of the data," creates an output table (or tables) summarizing a randomly-chosen subset of the input dataset;
  1. "Assess the data with a model," adds attributes to the first input dataset using a model provided on the second input port; and
  1. "Model and assess the same data," is really just operations 2 and 3 above applied to the same input dataset. The model is first trained using a fraction of the input data and then the entire dataset is assessed using that model.


When the task includes creating a model (i.e., tasks 2 and 4), you may adjust the fraction of the input dataset used for training. You should avoid using a large fraction of the input data for training as you will then not be able to detect overfitting. The Training Fraction setting will be ignored for tasks 1 and 3.


3

The value must be one of the following: Statistics of all the data (0), Model a subset of the data (1), Assess the data with a model (2), Model and assess the same data (3).


Training Fraction
(TrainingFraction)

Specify the fraction of values from the input dataset to be used for model fitting. The exact set of values is chosen at random from the dataset.


0.1

The value must be greater than or equal to 0 and less than or equal to 1.



Elevation

Create point attribute array by projecting points onto an elevation vector.


The Elevation filter generates point scalar values for an input dataset along a specified direction vector.


The Input menu allows the user to select the data set to which this filter will be applied. Use the Scalar range entry boxes to specify the minimum and maximum scalar value to be generated. The Low Point and High Point define a line onto which each point of the data set is projected. The minimum scalar value is associated with the Low Point, and the maximum scalar value is associated with the High Point. The scalar value for each point in the data set is determined by the location along the line to which that point projects.
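
A minimal pvpython sketch, assuming the paraview.simple scripting module (the Sphere source, with its default radius of 0.5, stands in for real data):

  from paraview.simple import *

  sphere = Sphere()
  elev = Elevation(Input=sphere)
  elev.LowPoint = [0.0, 0.0, -0.5]    # projects to the minimum of the scalar range
  elev.HighPoint = [0.0, 0.0, 0.5]    # projects to the maximum of the scalar range
  elev.ScalarRange = [0.0, 1.0]
  elev.UpdatePipeline()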


Property Description Default Value(s) Restrictions
High Point
(HighPoint)

This property defines the other end of the direction vector (large scalar values).


0 0 1

The coordinate must lie within the bounding box of the dataset. It will default to the maximum in each dimension.


Input
(Input)

This property specifies the input dataset to the Elevation filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Low Point
(LowPoint)

This property defines one end of the direction vector (small scalar values).


0 0 0

The coordinate must lie within the bounding box of the dataset. It will default to the minimum in each dimension.


Scalar Range
(ScalarRange)

This property determines the range into which scalars will be mapped.


0 1


Extract AMR Blocks

This filter extracts a list of datasets from hierarchical datasets.


This filter extracts a list of datasets from hierarchical datasets.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Extract Datasets filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkHierarchicalBoxDataSet.


Selected Data Sets
(SelectedDataSets)

This property provides a list of datasets to extract.



Extract Block

This filter extracts a range of blocks from a multiblock dataset.


This filter extracts a range of groups from a multiblock dataset


Property Description Default Value(s) Restrictions
Block Indices
(BlockIndices)

This property lists the ids of the blocks to extract from the input multiblock dataset.


Input
(Input)

This property specifies the input to the Extract Group filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkMultiBlockDataSet.


Maintain Structure
(MaintainStructure)

This is used only when PruneOutput is ON. By default, when pruning the output (i.e., removing empty blocks), if a node has only one non-null child block, that node is removed. To preserve these parent nodes, set this flag to true.


0

Only the values 0 and 1 are accepted.


Prune Output
(PruneOutput)

When set, the output multiblock dataset will be pruned to remove empty nodes. On by default.


1

Only the values 0 and 1 are accepted.



Extract CTH Parts

Create a surface from a CTH volume fraction.


Extract CTH Parts is a specialized filter for visualizing the data from a CTH simulation. It first converts the selected cell-centered arrays to point-centered ones. It then contours each array at a value of 0.5. The user has the option of clipping the resulting surface(s) with a plane. This filter only operates on unstructured data. It produces polygonal output.


Property Description Default Value(s) Restrictions
Double Volume Arrays
(AddDoubleVolumeArrayName)

This property specifies the name(s) of the volume fraction array(s) for generating parts.


An array of scalars is required.


Float Volume Arrays
(AddFloatVolumeArrayName)

This property specifies the name(s) of the volume fraction array(s) for generating parts.


An array of scalars is required.


Unsigned Character Volume Arrays
(AddUnsignedCharVolumeArrayName)

This property specifies the name(s) of the volume fraction array(s) for generating parts.


An array of scalars is required.


Clip Type
(ClipPlane)

This property specifies whether to clip the dataset, and if so, it also specifies the parameters of the plane with which to clip.


The value must be set to one of the following: None, Plane, Box, Sphere.


Input
(Input)

This property specifies the input to the Extract CTH Parts filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a cell array with 1 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Volume Fraction Value
(VolumeFractionSurfaceValue)

The value of this property is the volume fraction value for the surface.


0.1

The value must be greater than or equal to 0 and less than or equal to 1.



Extract Cells By Region

This filter extracts cells that are inside/outside a region or at a region boundary.


This filter extracts from its input dataset all cells that are either completely inside or outside of a specified region (implicit function). On output, the filter generates an unstructured grid.

To use this filter you must specify a region (implicit function). You must also specify whether to extract cells lying inside or outside of the region. An option exists to extract cells that are neither inside nor outside (i.e., boundary).


Property Description Default Value(s) Restrictions
Extract intersected
(Extract intersected)

This parameter controls whether to extract cells that are on the boundary of the region.


0

Only the values 0 and 1 are accepted.


Extract only intersected
(Extract only intersected)

This parameter controls whether to extract only cells that are on the boundary of the region. If this parameter is set, the Extraction Side parameter is ignored. If Extract Intersected is off, this parameter has no effect.


0

Only the values 0 and 1 are accepted.


Extraction Side
(ExtractInside)

This parameter controls whether to extract cells that are inside or outside the region.


1

The value must be one of the following: outside (0), inside (1).


Intersect With
(ImplicitFunction)

This property sets the region used to extract cells.


The value must be set to one of the following: Plane, Box, Sphere.


Input
(Input)

This property specifies the input to the Extract Cells By Region filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Extract Edges

Extract edges of 2D and 3D cells as lines.


The Extract Edges filter produces a wireframe version of the input dataset by extracting all the edges of the dataset's cells as lines. This filter operates on any type of data set and produces polygonal output.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Extract Edges filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Extract Level

This filter extracts a range of groups from a hierarchical dataset.


This filter extracts a range of levels from a hierarchical dataset


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Extract Group filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkHierarchicalBoxDataSet.


Levels
(Levels)

This property lists the levels to extract from the input hierarchical dataset.



Extract Selection

Extract different types of selections.


This filter extracts a set of cells/points given a selection.

The selection can be obtained from a rubber-band selection (either cell, visible or in a frustum) or threshold selection and passed to the filter or specified by providing an ID list.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input from which the selection is extracted.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet, vtkTable.


Preserve Topology
(PreserveTopology)

If this property is set to 1 the output preserves the topology of its input and adds an insidedness array to mark which cells are inside or out. If 0 then the output is an unstructured grid which contains only the subset of cells that are inside.


0

Only the values 0 and 1 are accepted.


Selection
(Selection)

The input that provides the selection object.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkSelection.


Show Bounds
(ShowBounds)

For frustum selection, if this property is set to 1 the output is the outline of the frustum instead of the contents of the input that lie within the frustum.


0

Only the values 0 and 1 are accepted.



Extract Subset

Extract a subgrid from a structured grid with the option of setting subsample strides.


The Extract Grid filter returns a subgrid of a structured input data set (uniform rectilinear, curvilinear, or nonuniform rectilinear). The output data set type of this filter is the same as the input type.
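
A minimal pvpython sketch, assuming the paraview.simple scripting module (the Wavelet source is uniform rectilinear data whose whole extent is -10 to 10 along each axis):

  from paraview.simple import *

  wavelet = Wavelet()
  subset = ExtractSubset(Input=wavelet)
  subset.VOI = [-10, 0, -10, 0, -10, 10]   # keep only part of the I and J extents
  subset.SampleRateI = 2                   # take every other sample along I
  subset.UpdatePipeline()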


Property Description Default Value(s) Restrictions
Include Boundary
(IncludeBoundary)

If the value of this property is 1, then if the sample rate in any dimension is greater than 1, the boundary indices of the input dataset will be passed to the output even if the boundary extent is not an even multiple of the sample rate in a given dimension.


0

Only the values 0 and 1 are accepted.


Input
(Input)

This property specifies the input to the Extract Grid filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkRectilinearGrid, vtkStructuredPoints, vtkStructuredGrid.


Sample Rate I
(SampleRateI)

This property indicates the sampling rate in the I dimension. A value greater than 1 results in subsampling; every nth index will be included in the output.


1

The value must be greater than or equal to 1.


Sample Rate J
(SampleRateJ)

This property indicates the sampling rate in the J dimension. A value greater than 1 results in subsampling; every nth index will be included in the output.


1

The value must be greater than or equal to 1.


Sample Rate K
(SampleRateK)

This property indicates the sampling rate in the K dimension. A value greater than 1 results in subsampling; every nth index will be included in the output.


1

The value must be greater than or equal to 1.


VOI
(VOI)

This property specifies the minimum and maximum point indices along each of the I, J, and K axes; these values indicate the volume of interest (VOI). The output will have the (I,J,K) extent specified here.


0 0 0 0 0 0

The values must lie within the extent of the input dataset.



Extract Surface

Extract a 2D boundary surface using neighbor relations to eliminate internal faces.


The Extract Surface filter extracts the polygons forming the outer surface of the input dataset. This filter operates on any type of data and produces polygonal data as output.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Extract Surface filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Nonlinear Subdivision Level
(NonlinearSubdivisionLevel)

If the input is an unstructured grid with nonlinear faces, this parameter determines how many times the face is subdivided into linear faces. If 0, the output is the equivalent of its linear counterpart (and the midpoints determining the nonlinear interpolation are discarded). If 1, the nonlinear face is triangulated based on the midpoints. If greater than 1, the triangulated pieces are recursively subdivided to reach the desired subdivision. Setting the value to greater than 1 may cause some point data to not be passed even if no quadratic faces exist. This option has no effect if the input is not an unstructured grid.


1

The value must be greater than or equal to 0 and less than or equal to 4.


Piece Invariant
(PieceInvariant)

If the value of this property is set to 1, internal surfaces along process boundaries will be removed. NOTE: Enabling this option might cause multiple executions of the data source because more information is needed to remove internal surfaces.


1

Only the values 0 and 1 are accepted.



FFT Of Selection Over Time

Extracts selection over time and plots the FFT


Extracts the data of a selection (e.g. points or cells) over time, takes the FFT of them, and plots them.


Property Description Default Value(s) Restrictions
Input
(Input)

The input from which the selection is extracted.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet, vtkTable, vtkCompositeDataSet.


Selection
(Selection)

The input that provides the selection object.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkSelection.



Feature Edges

This filter will extract edges along sharp edges of surfaces or boundaries of surfaces.


The Feature Edges filter extracts various subsets of edges from the input data set. This filter operates on polygonal data and produces polygonal output.
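
A minimal pvpython sketch, assuming the paraview.simple scripting module (the Cone source has a sharp edge around its base, which makes the result easy to check):

  from paraview.simple import *

  cone = Cone(Resolution=32)
  edges = FeatureEdges(Input=cone)
  edges.BoundaryEdges = 0
  edges.FeatureEdges = 1
  edges.FeatureAngle = 30.0
  edges.Coloring = 1        # tag each extracted edge with a scalar indicating its type
  edges.UpdatePipeline()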


Property Description Default Value(s) Restrictions
Boundary Edges
(BoundaryEdges)

If the value of this property is set to 1, boundary edges will be extracted. Boundary edges are defined as line cells or edges that are used by only one polygon.


1

Only the values 0 and 1 are accepted.


Coloring
(Coloring)

If the value of this property is set to 1, then the extracted edges are assigned a scalar value based on the type of the edge.


0

Only the values 0 and 1 are accepted.


Feature Angle
(FeatureAngle)

The value of this property is used to define a feature edge. If the angle between the normals of two adjacent triangles is at least as large as this Feature Angle, a feature edge exists. (See the FeatureEdges property.)


30

The value must be greater than or equal to 0 and less than or equal to 180.


Feature Edges
(FeatureEdges)

If the value of this property is set to 1, feature edges will be extracted. Feature edges are defined as edges that are used by two polygons whose dihedral angle is greater than the feature angle. (See the FeatureAngle property.)

Toggle whether to extract feature edges.


1

Only the values 0 and 1 are accepted.


Input
(Input)

This property specifies the input to the Feature Edges filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Manifold Edges
(ManifoldEdges)

If the value of this property is set to 1, manifold edges will be extracted. Manifold edges are defined as edges that are used by exactly two polygons.


0

Only the values 0 and 1 are accepted.


Non-Manifold Edges
(NonManifoldEdges)

If the value of this property is set to 1, non-manifold edges will be extracted. Non-manifold edges are defined as edges that are used by three or more polygons.


1

Only the values 0 and 1 are accepted.



Generate Ids

Generate scalars from point and cell ids.


This filter generates scalars using cell and point ids. That is, the point attribute data scalars are generated from the point ids, and the cell attribute data scalars or field data are generated from the cell ids.


Property Description Default Value(s) Restrictions
Array Name
(ArrayName)

The name of the array that will contain ids.


Ids
Input
(Input)

This property specifies the input to the Cell Data to Point Data filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Generate Quadrature Points

Create a point set with data at quadrature points.


"Create a point set with data at quadrature points."


Property Description Default Value(s) Restrictions
Input
(Input)

The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkUnstructuredGrid.


Select Source Array
(SelectSourceArray)

Specifies the offset array from which we generate quadrature points.


An array of scalars is required.



Generate Quadrature Scheme Dictionary

Generate quadrature scheme dictionaries in data sets that do not have them.


Generate quadrature scheme dictionaries in data sets that do not have them.


Property Description Default Value(s) Restrictions
Input
(Input)

The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkUnstructuredGrid.



Generate Surface Normals

This filter will produce surface normals used for smooth shading. Splitting is used to avoid smoothing across feature edges.


This filter generates surface normals at the points of the input polygonal dataset to provide smooth shading of the dataset. The resulting dataset is also polygonal. The filter works by calculating a normal vector for each polygon in the dataset and then averaging the normals at the shared points.
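
A short pvpython sketch follows; the GenerateSurfaceNormals function name and the property names are assumed from the labels below and may differ between ParaView versions.

<source lang="python">
# Minimal sketch (assumed API): compute point normals for smooth shading, splitting sharp edges.
from paraview.simple import *

surface = Sphere(ThetaResolution=32, PhiResolution=32)
normals = GenerateSurfaceNormals(Input=surface)   # assumed paraview.simple name
normals.FeatureAngle = 30        # edges sharper than this are split before averaging
normals.Splitting = 1            # duplicate points along feature edges
normals.ComputeCellNormals = 0   # point normals only
Show(normals)
Render()
</source>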


Property Description Default Value(s) Restrictions
Compute Cell Normals
(ComputeCellNormals)

This filter computes the normals at the points in the data set. In the process of doing this it computes polygon normals too. If you want these normals to be passed to the output of this filter, set the value of this property to 1.


0

Only the values 0 and 1 are accepted.


Consistency
(Consistency)

The value of this property controls whether consistent polygon ordering is enforced. Generally the normals for a data set should either all point inward or all point outward. If the value of this property is 1, then this filter will reorder the points of cells whose normal vectors are oriented in the opposite direction from the rest of those in the data set.


1

Only the values 0 and 1 are accepted.


Feature Angle
(FeatureAngle)

The value of this property defines a feature edge. If the angle between the normals of two adjacent triangles is at least as large as this Feature Angle, a feature edge exists. If Splitting is on, points are duplicated along these feature edges. (See the Splitting property.)


30

The value must be greater than or equal to 0 and less than or equal to 180.


Flip Normals
(FlipNormals)

If the value of this property is 1, this filter will reverse the normal direction (and reorder the points accordingly) for all polygons in the data set; this changes front-facing polygons to back-facing ones, and vice versa. You might want to do this if your viewing position will be inside the data set instead of outside of it.


0

Only the values 0 and 1 are accepted.


Input
(Input)

This property specifies the input to the Normals Generation filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Non-Manifold Traversal
(NonManifoldTraversal)

Turn on/off traversal across non-manifold edges. Not traversing non-manifold edges will prevent problems where the consistency of polygonal ordering is corrupted due to topological loops.


1

Only the values 0 and 1 are accepted.


Piece Invariant
(PieceInvariant)

Turn this option on to produce the same results regardless of the number of processors used (i.e., avoid seams along processor boundaries). Turn this off if you do want to process ghost levels and do not mind seams.


1

Only the values 0 and 1 are accepted.


Splitting
(Splitting)

This property controls the splitting of sharp edges. If sharp edges are split (property value = 1), then points are duplicated along these edges, and separate normals are computed for both sets of points to give crisp (rendered) surface definition.


1

Only the values 0 and 1 are accepted.



Glyph

This filter generates an arrow, cone, cube, cylinder, line, sphere, or 2D glyph at each point of the input data set. The glyphs can be oriented and scaled by point attributes of the input dataset.


The Glyph filter generates a glyph (i.e., an arrow, cone, cube, cylinder, line, sphere, or 2D glyph) at each point in the input dataset. The glyphs can be oriented and scaled by the input point-centered scalars and vectors. The Glyph filter operates on any type of data set. Its output is polygonal. This filter is available on the Toolbar.
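
The sketch below shows one way this might look in pvpython; the Glyph function signature and the internal property names (SetScaleFactor, SetScaleMode, and so on) are assumptions taken from the table below and may vary with the ParaView version.

<source lang="python">
# Minimal sketch (assumed API): arrow glyphs at a masked subset of the input points.
from paraview.simple import *

data = Wavelet()                                 # image data with the 'RTData' point scalars
glyphs = Glyph(Input=data, GlyphType='Arrow')    # assumed paraview.simple signature
glyphs.SetScaleFactor = 0.2                      # internal property name from the table above
glyphs.SetScaleMode = 'off'                      # do not scale by scalars/vectors here
glyphs.UseMaskPoints = 1                         # honor MaximumNumberOfPoints
glyphs.MaximumNumberOfPoints = 500
Show(glyphs)
Render()
</source>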


Property Description Default Value(s) Restrictions
Glyph Transform
(GlyphTransform)

The values in this property allow you to specify the transform

(translation, rotation, and scaling) to apply to the glyph source.


The value must be set to one of the following: Transform2.


Input
(Input)

This property specifies the input to the Glyph filter. This is the dataset to which the glyphs will be applied.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Maximum Number of Points
(MaximumNumberOfPoints)

The value of this property specifies the maximum number of glyphs that should appear in the output dataset if the value of the UseMaskPoints property is 1. (See the UseMaskPoints property.)


5000

The value must be greater than or equal to 0.


Random Mode
(RandomMode)

If the value of this property is 1, then the points to glyph are chosen randomly. Otherwise the point ids chosen are evenly spaced.


1

Only the values 0 and 1 are accepted.


Scalars
(SelectInputScalars)

This property indicates the name of the scalar array on which to operate. The indicated array may be used for scaling the glyphs. (See the SetScaleMode property.)


An array of scalars is required.


Vectors
(SelectInputVectors)

This property indicates the name of the vector array on which to operate. The indicated array may be used for scaling and/or orienting the glyphs. (See the SetScaleMode and SetOrient properties.)


1

An array of vectors is required.


Orient
(SetOrient)

If this property is set to 1, the glyphs will be oriented based on the selected vector array.


1

Only the values 0 and 1 are accepted.


Set Scale Factor
(SetScaleFactor)

The value of this property will be used as a multiplier for scaling the glyphs before adding them to the output.


1

The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.1.


The value must lie within the range of the selected data array.


The value must lie within the range of the selected data array.


Scale Mode
(SetScaleMode)

The value of this property specifies how/if the glyphs should be scaled based on the point-centered scalars/vectors in the input dataset.


1

The value must be one of the following: scalar (0), vector (1), vector_components (2), off (3).


Glyph Type
(Source)

This property determines which type of glyph will be placed at the points in the input dataset.


The selected object must be the result of the following: sources (includes readers), glyph_sources.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


The value must be set to one of the following: ArrowSource, ConeSource, CubeSource, CylinderSource, LineSource, SphereSource, GlyphSource2D.


Mask Points
(UseMaskPoints)

If the value of this property is set to 1, limit the maximum number of glyphs to the value indicated by MaximumNumberOfPoints. (See the MaximumNumberOfPoints property.)


1

Only the values 0 and 1 are accepted.



Glyph With Custom Source

This filter generates a glyph at each point of the input data set. The glyphs can be oriented and scaled by point attributes of the input dataset.


The Glyph filter generates a glyph at each point in the input dataset. The glyphs can be oriented and scaled by the input point-centered scalars and vectors. The Glyph filter operates on any type of data set. Its output is polygonal. This filter is available on the Toolbar.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Glyph filter. This is the dataset to which the glyphs will be applied.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Maximum Number of Points
(MaximumNumberOfPoints)

The value of this property specifies the maximum number of glyphs that should appear in the output dataset if the value of the UseMaskPoints property is 1. (See the UseMaskPoints property.)


5000

The value must be greater than or equal to 0.


Random Mode
(RandomMode)

If the value of this property is 1, then the points to glyph are chosen randomly. Otherwise the point ids chosen are evenly spaced.


1

Only the values 0 and 1 are accepted.


Scalars
(SelectInputScalars)

This property indicates the name of the scalar array on which to operate. The indicated array may be used for scaling the glyphs. (See the SetScaleMode property.)


An array of scalars is required.


Vectors
(SelectInputVectors)

This property indicates the name of the vector array on which to operate. The indicated array may be used for scaling and/or orienting the glyphs. (See the SetScaleMode and SetOrient properties.)


1

An array of vectors is required.


Orient
(SetOrient)

If this property is set to 1, the glyphs will be oriented based on the selected vector array.


1

Only the values 0 and 1 are accepted.


Set Scale Factor
(SetScaleFactor)

The value of this property will be used as a multiplier for scaling the glyphs before adding them to the output.


1

The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.1.


The value must lie within the range of the selected data array.


The value must lie within the range of the selected data array.


Scale Mode
(SetScaleMode)

The value of this property specifies how/if the glyphs should be scaled based on the point-centered scalars/vectors in the input dataset.


1

The value must be one of the following: scalar (0), vector (1), vector_components (2), off (3).


Glyph Type
(Source)

This property determines which type of glyph will be placed at the points in the input dataset.


The selected object must be the result of the following: sources (includes readers), glyph_sources.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Mask Points
(UseMaskPoints)

If the value of this property is set to 1, limit the maximum number of glyphs to the value indicated by MaximumNumberOfPoints. (See the MaximumNumberOfPoints property.)


1

Only the values 0 and 1 are accepted.



Gradient

This filter computes gradient vectors for an image/volume.


The Gradient filter computes the gradient vector at each point in an image or volume. This filter uses central differences to compute the gradients. The Gradient filter operates on uniform rectilinear (image) data and produces image data output.
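
A hedged pvpython sketch follows; the Gradient function name, the array-selection format, and the property names are assumptions and may differ between versions.

<source lang="python">
# Minimal sketch (assumed API): central-difference gradient of an image-data scalar array.
from paraview.simple import *

image = Wavelet()                       # uniform rectilinear source with 'RTData' point scalars
grad = Gradient(Input=image)            # assumed paraview.simple name
grad.SelectInputScalars = 'RTData'      # array to differentiate; selection format varies by version
grad.Dimensionality = 'Three'           # or 'Two' to use only the X and Y dimensions
Show(grad)
Render()
</source>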


Property Description Default Value(s) Restrictions
Dimensionality
(Dimensionality)

This property indicates whether to compute the gradient in two dimensions or in three. If the gradient is being computed in two dimensions, the X and Y dimensions are used.


3

The value must be one of the following: Two (2), Three (3).


Input
(Input)

This property specifies the input to the Gradient filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point array with 1 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData.


Select Input Scalars
(SelectInputScalars)

This property lists the name of the array from which to compute the gradient.


An array of scalars is required.



Gradient Of Unstructured DataSet

Estimate the gradient for each point or cell in any type of dataset.


The Gradient (Unstructured) filter estimates the gradient vector at each point or cell. It operates on any type of vtkDataSet, and the output is the same type as the input. If the dataset is a vtkImageData, use the Gradient filter instead; it will be more efficient for this type of dataset.
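
The following pvpython sketch is illustrative only; the GradientOfUnstructuredDataSet and Elevation function names and the property names are assumptions based on the labels below.

<source lang="python">
# Minimal sketch (assumed API): per-point gradient of a scalar on a point set.
from paraview.simple import *

surface = Elevation(Input=Sphere())     # Elevation adds a point scalar named 'Elevation'
grad = GradientOfUnstructuredDataSet(Input=surface)   # assumed paraview.simple name
grad.ScalarArray = 'Elevation'          # property label guess for SelectInputScalars
grad.ResultArrayName = 'Gradients'
Show(grad)
Render()
</source>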


Property Description Default Value(s) Restrictions
Compute Vorticity
(ComputeVorticity)

When this flag is on, the gradient filter will compute the

vorticity/curl of a 3 component array.


0

Only the values 0 and 1 are accepted.


Faster Approximation
(FasterApproximation)

When this flag is on, the gradient filter will provide a less

accurate (but close) algorithm that performs fewer derivative

calculations (and is therefore faster). The error contains some

smoothing of the output data and some possible errors on the

boundary. This parameter has no effect when performing the

gradient of cell data.


0

Only the values 0 and 1 are accepted.


Input
(Input)

This property specifies the input to the Gradient (Unstructured) filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.


Result Array Name
(ResultArrayName)

This property provides a name for the output array containing the gradient vectors.


Gradients
Scalar Array
(SelectInputScalars)

This property lists the name of the scalar array from which to compute the gradient.


An array of scalars is required.


Valid array names will be chosen from point and cell data.



Grid Connectivity

Mass properties of connected fragments for unstructured grids.


This filter works on multiblock unstructured grid inputs and also works in parallel. It ignores any cells with a cell data Status value of 0. It performs connectivity on distinct fragments separately and then integrates attributes of the fragments.


Property Description Default Value(s) Restrictions
Input
(Input)

The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkUnstructuredGrid, vtkCompositeDataSet.



Group Datasets

Group data sets.


Groups multiple datasets to create a multiblock dataset
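
A minimal pvpython sketch, assuming the filter is exposed as GroupDatasets and that its Input property accepts a list of pipeline objects:

<source lang="python">
# Minimal sketch (assumed API): combine two pipeline objects into one multiblock dataset.
from paraview.simple import *

a = Sphere()
b = Cone()
group = GroupDatasets(Input=[a, b])     # assumed paraview.simple name; Input takes a list
Show(group)
Render()
</source>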


Property Description Default Value(s) Restrictions
Input
(Input)

This property indicates the inputs to the Group Datasets filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.



Histogram

Extract a histogram from field data.
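
A short pvpython sketch follows; the Histogram function name, the array-selection format, and the property names are assumptions from the table below.

<source lang="python">
# Minimal sketch (assumed API): bin a point scalar array into a histogram table.
from paraview.simple import *

image = Wavelet()                        # provides the 'RTData' point array
hist = Histogram(Input=image)            # assumed paraview.simple name
hist.SelectInputArray = 'RTData'         # array to histogram; selection format varies by version
hist.BinCount = 32
hist.UseCustomBinRanges = 0              # bin over the full range of the selected array
Show(hist)                               # typically displayed in a bar chart view
Render()
</source>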


Property Description Default Value(s) Restrictions
Bin Count
(BinCount)

The value of this property specifies the number of bins for the histogram.


10

The value must be greater than or equal to 1 and less than or equal to 256.


Calculate Averages
(CalculateAverages)

This option controls whether the algorithm calculates averages

of variables other than the primary variable that fall into each

bin.


1

Only the values 0 and 1 are accepted.


Component
(Component)

The value of this property specifies the array component from which the histogram should be computed.


0
Custom Bin Ranges
(CustomBinRanges)

Set custom bin ranges to use. These are used only when

UseCustomBinRanges is set to true.


0 100

The value must lie within the range of the selected data array.


Input
(Input)

This property specifies the input to the Histogram filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Select Input Array
(SelectInputArray)

This property indicates the name of the array from which to compute the histogram.


An array of scalars is required.


Valid array names will be chosen from point and cell data.


Use Custom Bin Ranges
(UseCustomBinRanges)

When set to true, CustomBinRanges will be used instead of using the

full range for the selected array. By default, set to false.


0

Only the values 0 and 1 are accepted.



Integrate Variables

This filter integrates cell and point attributes.


The Integrate Attributes filter integrates point and cell data over lines and surfaces. It also computes length of lines, area of surface, or volume.
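
A minimal pvpython sketch, assuming the filter is exposed as IntegrateVariables; the 'Area' output column mentioned in the comment applies to surface input.

<source lang="python">
# Minimal sketch (assumed API): integrate attributes over a surface and read off its area.
from paraview.simple import *

surface = Sphere(ThetaResolution=64, PhiResolution=64)
integrated = IntegrateVariables(Input=surface)   # assumed paraview.simple name
integrated.UpdatePipeline()
# The output is a one-row table; for surface input its cell data includes an 'Area' column.
</source>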


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Integrate Attributes filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Interpolate to Quadrature Points

Create scalar/vector data arrays interpolated to quadrature points.


"Create scalar/vector data arrays interpolated to quadrature points."


Property Description Default Value(s) Restrictions
Input
(Input)

The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkUnstructuredGrid.


Select Source Array
(SelectSourceArray)

Specifies the offset array from which we interpolate values to quadrature points.


An array of scalars is required.



Intersect Fragments

The Intersect Fragments filter performs geometric intersections on sets of fragments.


The Intersect Fragments filter performs geometric intersections on sets of

fragments. The filter takes two inputs, the first containing fragment

geometry and the second containing fragment centers. The filter has two

outputs. The first is geometry that results from the intersection. The

second is a set of points that is an approximation of the center of where

each fragment has been intersected.


Property Description Default Value(s) Restrictions
Slice Type
(CutFunction)

This property sets the type of intersecting geometry, and

associated parameters.


The value must be set to one of the following: Plane, Box, Sphere.


Input
(Input)

This input must contain fragment geometry.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkMultiBlockDataSet.


Source
(Source)

This input must contain fragment centers.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkMultiBlockDataSet.



Iso Volume

This filter extracts cells by clipping cells that have point scalars not in the specified range.


This filter clips away cells using lower and upper thresholds.
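
A hedged pvpython sketch follows; the IsoVolume function name and the InputScalars/ThresholdRange property names are guesses based on the labels below.

<source lang="python">
# Minimal sketch (assumed API): keep only cells whose point scalars fall inside a range.
from paraview.simple import *

image = Wavelet()                           # 'RTData' point scalars, roughly in [37, 277]
iso = IsoVolume(Input=image)                # assumed paraview.simple name
iso.InputScalars = 'RTData'                 # property label guess for SelectInputScalars
iso.ThresholdRange = [100.0, 200.0]         # lower and upper bounds (ThresholdBetween)
Show(iso)
Render()
</source>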


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Threshold filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array with 1 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Input Scalars
(SelectInputScalars)

The value of this property contains the name of the scalar array from which to perform thresholding.


An array of scalars is required.


Valid array names will be chosen from point and cell data.


Threshold Range
(ThresholdBetween)

The values of this property specify the upper and lower bounds of the thresholding operation.


0 0

The value must lie within the range of the selected data array.



K Means

Compute a statistical model of a dataset and/or assess the dataset with a statistical model.


This filter either computes a statistical model of a dataset or takes such a model as its second input. Then, the model (however it is obtained) may optionally be used to assess the input dataset.


This filter iteratively computes the center of k clusters in a space whose coordinates are specified by the arrays you select. The clusters are chosen as local minima of the sum of square Euclidean distances from each point to its nearest cluster center. The model is then a set of cluster centers. Data is assessed by assigning a cluster center and distance to the cluster to each point in the input data set.
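
An illustrative pvpython sketch, with the caveat that the KMeans function name and every property name shown (VariablesofInterest, K, MaxIterations) are guesses based on the labels in the table below.

<source lang="python">
# Minimal sketch (assumed API): cluster the points of a dataset on one selected array.
from paraview.simple import *

data = Wavelet()
km = KMeans(Input=data)                 # assumed paraview.simple name
km.VariablesofInterest = ['RTData']     # attribute name guess for SelectArrays
km.K = 3                                # number of clusters
km.MaxIterations = 50                   # attribute name guess for MaxNumIterations
km.UpdatePipeline()
</source>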


Property Description Default Value(s) Restrictions
Attribute Mode
(AttributeMode)

Specify which type of field data the arrays will be drawn from.


0

Valid array names will be chosen from point and cell data.


Input
(Input)

The input to the filter. Arrays from this dataset will be used for computing statistics and/or assessed by a statistical model.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkStructuredGrid, vtkPolyData, vtkUnstructuredGrid, vtkTable, vtkGraph.


k
(K)

Specify the number of clusters.


5

The value must be greater than or equal to 1.


Max Iterations
(MaxNumIterations)

Specify the maximum number of iterations in which cluster centers are moved before the algorithm terminates.


50

The value must be greater than or equal to 1.


Model Input
(ModelInput)

A previously-calculated model with which to assess a separate dataset. This input is optional.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkTable, vtkMultiBlockDataSet.


Variables of Interest
(SelectArrays)

Choose arrays whose entries will be used to form observations for statistical analysis.


An array of scalars is required.


Task
(Task)

Specify the task to be performed: modeling and/or assessment.

  1. "Statistics of all the data," creates an output table (or tables) summarizing the entire input dataset;
  1. "Model a subset of the data," creates an output table (or tables) summarizing a randomly-chosen subset of the input dataset;
  1. "Assess the data with a model," adds attributes to the first input dataset using a model provided on the second input port; and
  1. "Model and assess the same data," is really just operations 2 and 3 above applied to the same input dataset. The model is first trained using a fraction of the input data and then the entire dataset is assessed using that model.


When the task includes creating a model (i.e., tasks 2 and 4), you may adjust the fraction of the input dataset used for training. You should avoid using a large fraction of the input data for training as you will then not be able to detect overfitting. The Training fraction setting will be ignored for tasks 1 and 3.


3

The value must be one of the following: Statistics of all the data (0), Model a subset of the data (1), Assess the data with a model (2), Model and assess the same data (3).


Tolerance
(Tolerance)

Specify the relative tolerance that will cause early termination.


0.01

The value must be greater than or equal to 0 and less than or equal to 1.


Training Fraction
(TrainingFraction)

Specify the fraction of values from the input dataset to be used for model fitting. The exact set of values is chosen at random from the dataset.


0.1

The value must be greater than or equal to 0 and less than or equal to 1.



Level Scalars

The Level Scalars filter uses colors to show levels of a hierarchical dataset.


The Level Scalars filter uses colors to show levels of a hierarchical dataset.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Level Scalars filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkHierarchicalBoxDataSet.



Linear Extrusion

This filter creates a swept surface defined by translating the input along a vector.


The Linear Extrusion filter creates a swept surface by translating the input dataset along a specified vector. This filter is intended to operate on 2D polygonal data. This filter operates on polygonal data and produces polygonal data output.
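
A minimal pvpython sketch, assuming the filter is exposed as LinearExtrusion and that a Plane source provides the 2D input:

<source lang="python">
# Minimal sketch (assumed API): sweep a flat polygonal sheet along +Z.
from paraview.simple import *

sheet = Plane()                          # 2D polygonal source
extruded = LinearExtrusion(Input=sheet)  # assumed paraview.simple name
extruded.Vector = [0, 0, 1]              # sweep direction
extruded.ScaleFactor = 2                 # translate twice the vector's length
extruded.Capping = 1                     # close the ends with copies of the input
Show(extruded)
Render()
</source>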


Property Description Default Value(s) Restrictions
Capping
(Capping)

The value of this property indicates whether to cap the ends of the swept surface. Capping works by placing a copy of the input dataset on either end of the swept surface, so it behaves properly if the input is a 2D surface composed of filled polygons. If the input dataset is a closed solid (e.g., a sphere), then if capping is on (i.e., this property is set to 1), two copies of the data set will be displayed on output (the second translated from the first one along the specified vector). If instead capping is off (i.e., this property is set to 0), then an input closed solid will produce no output.


1

Only the values 0 and 1 are accepted.


Input
(Input)

This property specifies the input to the Linear Extrusion filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Piece Invariant
(PieceInvariant)

The value of this property determines whether the output will be the same regardless of the number of processors used to compute the result. The difference is whether there are internal polygonal faces on the processor boundaries. A value of 1 will keep the results the same; a value of 0 will allow internal faces on processor boundaries.


0

Only the values 0 and 1 are accepted.


Scale Factor
(ScaleFactor)

The value of this property determines the distance along the vector the dataset will be translated. (A scale factor of 0.5 will move the dataset half the length of the vector, and a scale factor of 2 will move it twice the vector's length.)


1
Vector
(Vector)

The value of this property indicates the X, Y, and Z components of the vector along which to sweep the input dataset.


0 0 1


Loop Subdivision

This filter iteratively divides each triangle into four triangles. New points are placed so the output surface is smooth.


The Loop Subdivision filter increases the granularity of a polygonal mesh. It works by dividing each triangle in the input into four new triangles. It is named for Charles Loop, the person who devised this subdivision scheme. This filter only operates on triangles, so a data set that contains other types of polygons should be passed through the Triangulate filter before applying this filter to it. This filter only operates on polygonal data (specifically triangle meshes), and it produces polygonal output.
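
A minimal pvpython sketch, assuming the filter is exposed as LoopSubdivision:

<source lang="python">
# Minimal sketch (assumed API): refine a triangle mesh; each pass splits every triangle into four.
from paraview.simple import *

mesh = Sphere()                          # the sphere source already produces triangles
refined = LoopSubdivision(Input=mesh)    # assumed paraview.simple name
refined.NumberOfSubdivisions = 2         # between 1 and 4 passes are accepted
Show(refined)
Render()
</source>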


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Loop Subdivision filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Number of Subdivisions
(NumberOfSubdivisions)

Set the number of subdivision iterations to perform. Each subdivision divides single triangles into four new triangles.


1

The value must be greater than or equal to 1 and less than or equal to 4.



Mask Points

Reduce the number of points. This filter is often used before glyphing. Generating vertices is an option.


The Mask Points filter reduces the number of points in the dataset. It operates on any type of dataset, but produces only points / vertices as output. This filter is often used before the Glyph filter, but the basic point-masking functionality is also available on the Properties page for the Glyph filter.
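
A short pvpython sketch follows; the MaskPoints function and property names are assumed from the table below.

<source lang="python">
# Minimal sketch (assumed API): thin a dataset to a random subset of its points before glyphing.
from paraview.simple import *

dense = Wavelet()
masked = MaskPoints(Input=dense)      # assumed paraview.simple name
masked.OnRatio = 10                   # keep roughly every 10th point
masked.MaximumNumberOfPoints = 2000   # hard cap on output points
masked.RandomMode = 1                 # pick points at random to avoid striping
masked.GenerateVertices = 1           # emit vertex cells so the points render
Show(masked)
Render()
</source>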


Property Description Default Value(s) Restrictions
Generate Vertices
(GenerateVertices)

This property specifies whether to generate vertex cells as the topology of the output. If set to 1, the geometry (vertices) will be displayed in the rendering window; otherwise no geometry will be displayed.


0

Only the values 0 and 1 are accepted.


Input
(Input)

This property specifies the input to the Mask Points filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Maximum Number of Points
(MaximumNumberOfPoints)

The value of this property indicates the maximum number of points in the output dataset.


5000

The value must be greater than or equal to 0.


Offset
(Offset)

The value of this property indicates the point in the input dataset from which to start masking.


0

The value must be greater than or equal to 0.


On Ratio
(OnRatio)

The value of this property specifies the ratio of points to retain in the output. (For example, if the on ratio is 3, then the output will contain 1/3 as many points -- up to the value of the MaximumNumberOfPoints property -- as the input.)


2

The value must be greater than or equal to 1.


Random
(RandomMode)

If the value of this property is set to 0, then the points in the output will be randomly selected from the input; otherwise this filter will subsample regularly. Selecting points at random is helpful to avoid striping when masking the points of a structured dataset.


0

Only the values 0 and 1 are accepted.


Single Vertex Per Cell
(SingleVertexPerCell)

Tell the filter to generate only one vertex per cell instead of multiple vertices in one cell.


0

Only the values 0 and 1 are accepted.



Material Interface Filter

The Material Interface filter finds volumes in the input data containing material above a certain material fraction.


The Material Interface filter finds voxels inside of which a material fraction (or normalized amount of material) is higher than a given threshold. As these voxels are identified, surfaces enclosing adjacent voxels above the threshold are generated. The resulting volume and its surface are what we call a fragment. The filter has the ability to compute various volumetric attributes such as fragment volume, mass, and center of mass, as well as volume- and mass-weighted averages for any of the fields present. Any field selected for such computation will also be copied into the fragment surface's point data for visualization. The filter also has the ability to generate Oriented Bounding Boxes (OBB) for each fragment.


The data generated by the filter is organized in three outputs. The "geometry" output contains the fragment surfaces. The "statistics" output contains a point set of the centers of mass. The "obb representation" output contains OBB representations (poly data). All computed attributes are copied into the statistics and geometry outputs. The obb representation output is used for validation and debugging purposes and is turned off by default.


To measure the size of craters, the filter can invert a volume fraction and clip the volume fraction with a sphere and/or a plane.


Property Description Default Value(s) Restrictions
Clip Center
(ClipCenter)

This property specifies center of the clipping plane or sphere.


0 0 0
Clip Plane Vector
(ClipPlaneVector)

This property specifies the normal of the clipping plane.


0 0 1
Clip Radius
(ClipRadius)

This property specifies the radius of the clipping sphere.


1

The value must be greater than or equal to 0.


Clip With Plane
(ClipWithPlane)

This option masks all material on one side of a plane. It is useful for finding the properties of a crater.


0

Only the values 0 and 1 are accepted.


Clip With Sphere
(ClipWithSphere)

This option masks all material outside of a sphere.


0

Only the values 0 and 1 are accepted.


Compute OBB
(ComputeOBB)

Compute Object Oriented Bounding boxes (OBB). When active, the result of this computation is copied into the statistics output. In the case that the filter is built in its validation mode, the OBBs are rendered.


0

Only the values 0 and 1 are accepted.


Input
(Input)

Input to the filter can be a hierarchical box data set containing image

data or a multi-block of rectilinear grids.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkHierarchicalBoxDataSet.


Invert Volume Fraction
(InvertVolumeFraction)

Inverting the volume fraction generates the negative of the material.

It is useful for analyzing craters.


0

Only the values 0 and 1 are accepted.


Material Fraction Threshold
(MaterialFractionThreshold)

Material fraction is defined as normalized amount of material per

voxel. Any voxel in the input data set with a material fraction greater

than this value is included in the output data set.


0.5

The value must be greater than or equal to 0.08 and less than or equal to 1.


Output Base Name
(OutputBaseName)

This property specifies the base name, including the path, of where to write the statistics and geometry output text files. It follows the pattern "/path/to/folder/and/file", where the file has no extension, as the filter will generate a unique extension.


Select Mass Arrays
(SelectMassArray)

Mass arrays are paired with material fraction arrays. This means that

the first selected material fraction array is paired with the first

selected mass array, and so on sequentially. As the filter identifies

voxels meeting the minimum material fraction threshold, these voxels' mass will be used in the fragment center-of-mass and mass calculations.


A warning is generated if no mass array is selected for an individual

material fraction array. However, in that case the filter will run

without issue because the statistics output can be generated using

fragments' centers computed from axis aligned bounding boxes.


An array of scalars is required.


Compute mass weighted average over:
(SelectMassWtdAvgArray)

For the arrays selected, a mass-weighted average is computed. These arrays are also copied into the fragment geometry cell data as the fragment surfaces are generated.


An array of scalars is required.


Select Material Fraction Arrays
(SelectMaterialArray)

Material fraction is defined as normalized amount of material per

voxel. It is expected that arrays containing material fraction data have been down-converted to an unsigned char.


An array of scalars is required.


Compute volume weighted average over:
(SelectVolumeWtdAvgArray)

For the arrays selected, a volume-weighted average is computed. The values of these arrays are also copied into the fragment geometry cell data as the fragment surfaces are generated.


An array of scalars is required.


Write Geometry Output
(WriteGeometryOutput)

If this property is set, then the geometry output is written to a text

file. The file name will be constructed using the path in the "Output

Base Name" widget.


0

Only the values 0 and 1 are accepted.


Write Statistics Output
(WriteStatisticsOutput)

If this property is set, then the statistics output is written to a

text file. The file name will be constructed using the path in the

"Output Base Name" widget.


0

Only the values 0 and 1 are accepted.



Median

Compute the median scalar values in a specified neighborhood for image/volume datasets.


The Median filter operates on uniform rectilinear (image or volume) data and produces uniform rectilinear output. It replaces the scalar value at each pixel / voxel with the median scalar value in the specified surrounding neighborhood. Since the median operation removes outliers, this filter is useful for removing high-intensity, low-probability noise (shot noise).
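
A hedged pvpython sketch follows; the Median function name, the array-selection format, and the property names are assumptions.

<source lang="python">
# Minimal sketch (assumed API): 3x3x3 median smoothing of an image-data scalar array.
from paraview.simple import *

noisy = Wavelet()
median = Median(Input=noisy)              # assumed paraview.simple name
median.SelectInputScalars = 'RTData'      # array to smooth; selection format varies by version
median.KernelSize = [3, 3, 3]             # neighborhood per dimension; 1 disables that direction
Show(median)
Render()
</source>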


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Median filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point array with 1 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData.


Kernel Size
(KernelSize)

The value of this property specifies the number of pixels/voxels in each dimension to use in computing the median to assign to each pixel/voxel. If the kernel size in a particular dimension is 1, then the median will not be computed in that direction.


1 1 1
Select Input Scalars
(SelectInputScalars)

The value of this property lists the name of the scalar array to use in computing the median.


An array of scalars is required.



Merge Blocks


vtkCompositeDataToUnstructuredGridFilter appends all vtkDataSet leaves of the input composite dataset to a single unstructured grid. The subtree to be combined can be chosen using the SubTreeCompositeIndex. If the SubTreeCompositeIndex is a leaf node, then no appending is required.
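
A minimal pvpython sketch, assuming the filter is exposed as MergeBlocks and reusing GroupDatasets to build a small multiblock input:

<source lang="python">
# Minimal sketch (assumed API): flatten a multiblock dataset into a single unstructured grid.
from paraview.simple import *

blocks = GroupDatasets(Input=[Sphere(), Cone()])   # build a small multiblock input
merged = MergeBlocks(Input=blocks)                 # assumed paraview.simple name
Show(merged)
Render()
</source>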


Property Description Default Value(s) Restrictions
Input
(Input)

Set the input composite dataset.


The selected dataset must be one of the following types (or a subclass of one of them): vtkCompositeDataSet.



Mesh Quality

This filter creates a new cell array containing a geometric measure of each cell's fitness. Different quality measures can be chosen for different cell shapes.


This filter creates a new cell array containing a geometric measure of each cell's fitness. Different quality measures can be chosen for different cell shapes. Supported shapes include triangles, quadrilaterals, tetrahedra, and hexahedra. For other shapes, a value of 0 is assigned.
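
An illustrative pvpython sketch; the MeshQuality function name, the enumeration label, and the output array name noted in the comment are assumptions.

<source lang="python">
# Minimal sketch (assumed API): attach a per-cell quality measure to a triangle mesh.
from paraview.simple import *

mesh = Sphere()
quality = MeshQuality(Input=mesh)                 # assumed paraview.simple name
quality.TriangleQualityMeasure = 'Radius Ratio'   # enumeration label from the table above
quality.UpdatePipeline()
# The output carries a per-cell array (named 'Quality' in VTK) holding the selected measure.
</source>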


Property Description Default Value(s) Restrictions
Hex Quality Measure
(HexQualityMeasure)

This property indicates which quality measure will be used to evaluate hexahedral quality.


5

The value must be one of the following: Diagonal (21), Dimension (22), Distortion (15), Edge Ratio (0), Jacobian (25), Maximum Edge Ratio (16), Maximum Aspect Frobenius (5), Mean Aspect Frobenius (4), Oddy (23), Relative Size Squared (12), Scaled Jacobian (10), Shape (13), Shape and Size (14), Shear (11), Shear and Size (24), Skew (17), Stretch (20), Taper (18), Volume (19).


Input
(Input)

This property specifies the input to the Mesh Quality filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Quad Quality Measure
(QuadQualityMeasure)

This property indicates which quality measure will be used to evaluate quadrilateral quality.


0

The value must be one of the following: Area (28), Aspect Ratio (1), Condition (9), Distortion (15), Edge Ratio (0), Jacobian (25), Maximum Aspect Frobenius (5), Maximum Aspect Frobenius (5), Maximum Edge Ratio (16), Mean Aspect Frobenius (4), Minimum Angle (6), Oddy (23), Radius Ratio (2), Relative Size Squared (12), Scaled Jacobian (10), Shape (13), Shape and Size (14), Shear (11), Shear and Size (24), Skew (17), Stretch (20), Taper (18), Warpage (26).


Tet Quality Measure
(TetQualityMeasure)

This property indicates which quality measure will be used to evaluate tetrahedral quality. The radius ratio is the size of a sphere circumscribed by a tetrahedron's 4 vertices divided by the size of a sphere tangent to a tetrahedron's 4 faces. The edge ratio is the ratio of the longest edge length to the shortest edge length. The collapse ratio is the minimum ratio of height of a vertex above the triangle opposite it divided by the longest edge of the opposing triangle across all vertex/triangle pairs.


2

The value must be one of the following: Edge Ratio (0), Aspect Beta (29), Aspect Gamma (27), Aspect Frobenius (3), Aspect Ratio (1), Collapse Ratio (7), Condition (9), Distortion (15), Jacobian (25), Minimum Dihedral Angle (6), Radius Ratio (2), Relative Size Squared (12), Scaled Jacobian (10), Shape (13), Shape and Size (14), Volume (19).


Triangle Quality Measure
(TriangleQualityMeasure)

This property indicates which quality measure will be used to evaluate triangle quality. The radius ratio is the size of a circle circumscribed by a triangle's 3 vertices divided by the size of a circle tangent to a triangle's 3 edges. The edge ratio is the ratio of the longest edge length to the shortest edge length.


2

The value must be one of the following: Area (28), Aspect Ratio (1), Aspect Frobenius (3), Condition (9), Distortion (15), Edge Ratio (0), Maximum Angle (8), Minimum Angle (6), Scaled Jacobian (10), Radius Ratio (2), Relative Size Squared (12), Shape (13), Shape and Size (14).



Multicorrelative Statistics

Compute a statistical model of a dataset and/or assess the dataset with a statistical model.


This filter either computes a statistical model of a dataset or takes such a model as its second input. Then, the model (however it is obtained) may optionally be used to assess the input dataset.


This filter computes the covariance matrix for all the arrays you select plus the mean of each array. The model is thus a multivariate Gaussian distribution with the mean vector and variances provided. Data is assessed using this model by computing the Mahalanobis distance for each input point. This distance will always be positive.



The learned model output format is rather dense and can be confusing, so it is discussed here. The first filter output is a multiblock dataset consisting of 2 tables:


  1. Raw covariance data.
  1. Covariance matrix and its Cholesky decomposition.


The raw covariance table has 3 meaningful columns: 2 titled "Column1" and "Column2" whose entries generally refer to the N arrays you selected when preparing the filter and 1 column titled "Entries" that contains numeric values. The first row will always contain the number of observations in the statistical analysis. The next N rows contain the mean for each of the N arrays you selected. The remaining rows contain covariances of pairs of arrays.


The second table (covariance matrix and Cholesky decomposition) contains information derived from the raw covariance data of the first table. The first N rows of the first column contain the name of one array you selected for analysis. These rows are followed by a single entry labeled "Cholesky" for a total of N+1 rows. The second column, Mean, contains the mean of each variable in the first N entries and the number of observations processed in the final (N+1) row.



The remaining columns (there are N, one for each array) contain 2 matrices in triangular format. The upper right triangle contains the covariance matrix (which is symmetric, so its lower triangle may be inferred). The lower left triangle contains the Cholesky decomposition of the covariance matrix (which is triangular, so its upper triangle is zero). Because the diagonal must be stored for both matrices, an additional row is required, hence the N+1 rows and the final entry of the column named "Column".


Property Description Default Value(s) Restrictions
Attribute Mode
(AttributeMode)

Specify which type of field data the arrays will be drawn from.


0

Valid array names will be chosen from point and cell data.


Input
(Input)

The input to the filter. Arrays from this dataset will be used for computing statistics and/or assessed by a statistical model.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkStructuredGrid, vtkPolyData, vtkUnstructuredGrid, vtkTable, vtkGraph.


Model Input
(ModelInput)

A previously-calculated model with which to assess a separate dataset. This input is optional.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkTable, vtkMultiBlockDataSet.


Variables of Interest
(SelectArrays)

Choose arrays whose entries will be used to form observations for statistical analysis.


An array of scalars is required.


Task
(Task)

Specify the task to be performed: modeling and/or assessment.

  1. "Statistics of all the data," creates an output table (or tables) summarizing the entire input dataset;
  1. "Model a subset of the data," creates an output table (or tables) summarizing a randomly-chosen subset of the input dataset;
  1. "Assess the data with a model," adds attributes to the first input dataset using a model provided on the second input port; and
  1. "Model and assess the same data," is really just operations 2 and 3 above applied to the same input dataset. The model is first trained using a fraction of the input data and then the entire dataset is assessed using that model.


When the task includes creating a model (i.e., tasks 2 and 4), you may adjust the fraction of the input dataset used for training. You should avoid using a large fraction of the input data for training as you will then not be able to detect overfitting. The Training fraction setting will be ignored for tasks 1 and 3.


3

The value must be one of the following: Statistics of all the data (0), Model a subset of the data (1), Assess the data with a model (2), Model and assess the same data (3).


Training Fraction
(TrainingFraction)

Specify the fraction of values from the input dataset to be used for model fitting. The exact set of values is chosen at random from the dataset.


0.1

The value must be greater than or equal to 0 and less than or equal to 1.



Normal Glyphs

Filter computing surface normals.


Filter computing surface normals.


Property Description Default Value(s) Restrictions
Maximum Number of Points
(Glyph Max. Points)

The value of this property specifies the maximum number of glyphs that should appear in the output dataset if the value of the UseMaskPoints property is 1. (See the UseMaskPoints property.)


5000

The value must be greater than or equal to 0.


Random Mode
(Glyph Random Mode)

If the value of this property is 1, then the points to glyph are chosen randomly. Otherwise the point ids chosen are evenly spaced.


1

Only the values 0 and 1 are accepted.


Set Scale Factor
(Glyph Scale Factor)

The value of this property will be used as a multiplier for scaling the glyphs before adding them to the output.


1

The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.1.


The value must lie within the range of the selected data array.


The value must lie within the range of the selected data array.


Input
(Input)

This property specifies the input to the Normal Glyphs filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Invert
(InvertArrow)

Inverts the arrow direction.


0

Only the values 0 and 1 are accepted.



Octree Depth Limit

This filter takes in an octree and produces a new octree which is no deeper than the maximum specified depth level.


The Octree Depth Limit filter takes in an octree and produces a new octree that is nowhere deeper than the maximum specified depth level. The attribute data of pruned leaf cells are integrated into their ancestors at the cut level.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Octree Depth Limit filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkHyperOctree.


Maximum Level
(MaximumLevel)

The value of this property specifies the maximum depth of the output octree.


4

The value must be greater than or equal to 3 and less than or equal to 255.



Octree Depth Scalars

This filter adds a scalar to each leaf of the octree that represents the leaf's depth within the tree.


The vtkHyperOctreeDepth filter adds a scalar to each leaf of the octree that represents the leaf's depth within the tree.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Octree Depth Scalars filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkHyperOctree.



Outline

This filter generates a bounding box representation of the input.


The Outline filter generates an axis-aligned bounding box for the input dataset. This filter operates on any type of dataset and produces polygonal output.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Outline filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Outline Corners

This filter generates a bounding box representation of the input. It only displays the corners of the bounding box.


The Outline Corners filter generates the corners of a bounding box for the input dataset. This filter operates on any type of dataset and produces polygonal output.


Property Description Default Value(s) Restrictions
Corner Factor
(CornerFactor)

The value of this property sets the size of the corners as a percentage of the length of the corresponding bounding box edge.


0.2

The value must be greater than or equal to 0.001 and less than or equal to 0.5.


Input
(Input)

This property specifies the input to the Outline Corners filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Outline Curvilinear DataSet

This filter generates an outline representation of the input.


The Outline filter generates an outline of the outside edges of the input dataset, rather than the dataset's bounding box. This filter operates on structured grid datasets and produces polygonal output.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the outline (curvilinear) filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkStructuredGrid.



Particle Pathlines

Creates polylines representing pathlines of animating particles


Particle Pathlines takes any dataset as input; it extracts the point locations of all cells over time to build up a polyline trail. The point number (index) is used as the 'key'. If the points are randomly changing their respective order in the points list, then you should specify a scalar that represents the unique ID. This is intended to handle the output of a filter such as the TemporalStreamTracer.


Property Description Default Value(s) Restrictions
Id Channel Array
(IdChannelArray)

Specify the name of a scalar array which will be used to fetch

the index of each point. This is necessary only if the particles

change position (Id order) on each time step. The Id can be used

to identify particles at each step and hence track them properly.

If this array is set to "Global or Local IDs", the global point ids are used if they exist; otherwise the point index is used.


Global or Local IDs

An array of scalars is required.


Input
(Input)

The input cells to create pathlines for.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.


Mask Points
(MaskPoints)

Set the number of particles to track as a ratio of the input.

Example: setting MaskPoints to 10 will track every 10th point.


100
Max Step Distance
(MaxStepDistance)

If a particle disappears from one end of a simulation and

reappears on the other side, the track left will be

unrepresentative. Set a MaxStepDistance{x,y,z} which acts as a

threshold above which if a step occurs larger than the value (for

the dimension), the track will be dropped and restarted after the

step. (i.e., the part before the wrap-around will be dropped and the

newer part kept).


1 1 1
Max Track Length
(MaxTrackLength)

If the Particles being traced animate for a long time, the trails

or traces will become long and stringy. Setting the

MaxTraceTimeLength will limit how much of the trace is

displayed. Tracks longer than the Max will disappear and the trace will appear like a snake of fixed length which progresses

as the particle moves. This length is given with respect to

timesteps.


25
Selection
(Selection)

Set a second input, which is a selection. Particles with the same

Id in the selection as the primary input will be chosen for

pathlines. Note that you must have the same IdChannelArray in the selection as in the input.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



ParticleTracer

Trace Particles through time in a vector field.


The Particle Trace filter generates pathlines in a vector field from a collection of seed points. The vector field used is selected from the Vectors menu, so the input data set is required to have point-centered vectors. The Seed portion of the interface allows you to select whether the seed points for this integration lie in a point cloud or along a line. Depending on which is selected, the appropriate 3D widget (point or line widget) is displayed along with traditional user interface controls for positioning the point cloud or line within the data set. Instructions for using the 3D widgets and the corresponding manual controls can be found in section 7.4.

This filter operates on any type of data set, provided it has point-centered vectors. The output is polygonal data containing polylines. This filter is available on the Toolbar.


Property Description Default Value(s) Restrictions
Compute Vorticity
(ComputeVorticity)

Compute vorticity and angular rotation of particles as they progress


1

Only the values 0 and 1 are accepted.


Enable Particle Writing
(EnableParticleWriting)

Turn On/Off particle writing


0

Only the values 0 and 1 are accepted.


Force Reinjection Every NSteps
(ForceReinjectionEveryNSteps)
1
Ignore Pipeline Time
(IgnorePipelineTime)

Ignore the TIME_ requests made by the pipeline and only use the TimeStep set manually


0

Only the values 0 and 1 are accepted.


Initial Integration Step
(InitialIntegrationStep)
0.25
Input
(Input)

The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point array with 3 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.


Particle File Name
(ParticleFileName)

Provide a name for the particle file generated if writing is enabled


/project/csvis/biddisco/ptracer/run-1
Select Input Vectors
(SelectInputVectors)

An array of vectors is required.


Source
(Source)

The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Static Mesh
(StaticMesh)

Force the use of static mesh optimizations


0

Only the values 0 and 1 are accepted.


Static Seeds
(StaticSeeds)

Force the use of static seed optimizations


1

Only the values 0 and 1 are accepted.


Term. Speed
(TerminalSpeed)

If at any point the speed is below the value of this property, the integration is terminated.


1e-12
Termination Time
(TerminationTime)
0
Termination Time Unit
(TerminationTimeUnit)

The termination time may be specified as TimeSteps or Simulation time


1

The value must be one of the following: Simulation Time (0), TimeSteps (1).


Time Step
(TimeStep)
0


Plot Data


This filter prepares arbitrary data to be plotted in any of the plots. By default the data is shown in an XY line plot.


Property Description Default Value(s) Restrictions
Input
(Input)

The input.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.



Plot Global Variables Over Time

Extracts and plots data in field data over time.


This filter extracts the variables that reside in a dataset's field data and are

defined over time. The output is a 1D rectilinear grid where the x coordinates

correspond to time (the same array is also copied to a point array named Time or

TimeData (if Time exists in the input)).


Property Description Default Value(s) Restrictions
Input
(Input)

The input from which the selection is extracted.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Plot On Intersection Curves

Extracts the edges in a 2D plane and plots them


Extracts the surface, intersects it with a 2D plane, and plots the resulting polylines.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Extract Surface filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Slice Type
(Slice Type)

This property sets the parameters of the slice function.


The value must be set to one of the following: Plane, Box, Sphere.



Plot On Sorted Lines

Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Plot Edges filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.



Plot Over Line

Sample data attributes at the points along a line. Probed lines will be displayed in a graph of the attributes.


The Plot Over Line filter samples the data set attributes of the current

data set at the points along a line. The values of the point-centered variables

along that line will be displayed in an XY Plot. This filter uses interpolation

to determine the values at the selected point, whether or not it lies at an

input point. The Probe filter operates on any type of data and produces

polygonal output (a line).
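
For scripted use, a minimal pvpython sketch along these lines should work. It assumes the paraview.simple module and a Wavelet source; the Source.Point1/Point2 attribute names for the line end points are assumptions and may differ slightly between ParaView versions.

<source lang="python">
# Minimal sketch: probe a Wavelet volume along a line (attribute names are assumptions).
from paraview.simple import Wavelet, PlotOverLine

volume = Wavelet()                           # any source with point data
plot = PlotOverLine(Input=volume)            # probe along a line
plot.Source.Point1 = [-10.0, -10.0, -10.0]   # line end points (attribute name assumed)
plot.Source.Point2 = [10.0, 10.0, 10.0]
plot.UpdatePipeline()                        # output is a polyline carrying the sampled arrays
</source>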


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the dataset from which to obtain probe values.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet, vtkCompositeDataSet.


Pass Partial Arrays
(PassPartialArrays)

When dealing with composite datasets, partial arrays are common, i.e.,

data arrays that are not available in all of the blocks. By default,

this filter only passes those point and cell data arrays that are

available in all the blocks, i.e., partial arrays are removed. When

PassPartialArrays is turned on, this behavior is changed to take the

union of all arrays present, so partial arrays are passed as well.

However, for composite dataset input, this filter still produces a

non-composite output. For all those locations in a block where a

particular data array is missing, this filter uses vtkMath::Nan() for

double and float arrays, and 0 for all other types of arrays (e.g.,

int, char, etc.).


1

Only the values 0 and 1 are accepted.


Probe Type
(Source)

This property specifies the dataset whose geometry will be used in determining positions to probe.


The selected object must be the result of the following: sources (includes readers).


The value must be set to one of the following: HighResLineSource.



Plot Selection Over Time

Extracts selection over time and then plots it.


This filter extracts the selection over time, i.e., cell and/or point

variables at the selected cells/points are extracted over time.

The output multi-block consists of 1D rectilinear grids where the x coordinate

corresponds to time (the same array is also copied to a point array named

Time, or TimeData if an array named Time already exists in the input).

If the selection input is a Location-based selection, then the point values are

interpolated from the nearby cells, i.e., those of the cell in which the

location lies.


Property Description Default Value(s) Restrictions
Input
(Input)

The input from which the selection is extracted.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet, vtkTable, vtkCompositeDataSet.


Selection
(Selection)

The input that provides the selection object.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkSelection.



Point Data to Cell Data

Create cell attributes by averaging point attributes.


The Point Data to Cell Data filter averages the values of the point attributes of the points of a cell to compute cell attributes. This filter operates on any type of dataset, and the output dataset is the same type as the input.
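
A minimal pvpython sketch of this operation might look as follows. It assumes paraview.simple exposes the filter as PointDatatoCellData; that name is an assumption and may differ by version.

<source lang="python">
# Minimal sketch: average point data onto cells (function name is an assumption).
from paraview.simple import Wavelet, PointDatatoCellData

source = Wavelet()                    # produces point-centered data
p2c = PointDatatoCellData(Input=source)
p2c.PassPointData = 1                 # also keep the original point arrays
p2c.UpdatePipeline()
</source>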


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Point Data to Cell Data filter.


Once set, the input dataset type cannot be changed.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Pass Point Data
(PassPointData)

The value of this property controls whether the input point data will be passed to the output. If set to 1, then the input point data is passed through to the output; otherwise, only generated cell data is placed into the output.


0

Only the values 0 and 1 are accepted.



Principal Component Analysis

Compute a statistical model of a dataset and/or assess the dataset with a statistical model.


This filter either computes a statistical model of a dataset or takes such a model as its second input. Then, the model (however it is obtained) may optionally be used to assess the input dataset.


This filter performs additional analysis above and beyond the multicorrelative filter. It computes the eigenvalues and eigenvectors of the covariance matrix from the multicorrelative filter. Data is then assessed by projecting the original tuples into a possibly lower-dimensional space.



Since the PCA filter uses the multicorrelative filter's analysis, it shares the same raw covariance table specified in the multicorrelative documentation. The second table in the multiblock dataset comprising the model output is an expanded version of the multicorrelative version.



As with the multicorrelative filter, the second model table contains the mean values, the upper-triangular portion of the symmetric covariance matrix, and the non-zero lower-triangular portion of the Cholesky decomposition of the covariance matrix. Below these entries are the eigenvalues of the covariance matrix (in the column labeled "Mean") and the eigenvectors (as row vectors) in an additional NxN matrix.


Property Description Default Value(s) Restrictions
Attribute Mode
(AttributeMode)

Specify which type of field data the arrays will be drawn from.


0

Valid array names will be chosen from point and cell data.


Basis Energy
(BasisEnergy)

The minimum energy to use when determining the dimensionality of the new space into which the assessment will project tuples.


0.1

The value must be greater than or equal to 0 and less than or equal to 1.


Basis Scheme
(BasisScheme)

When reporting assessments, should the full eigenvector decomposition be used to project the original vector into the new space (Full basis), or should a fixed subset of the decomposition be used (Fixed-size basis), or should the projection be clipped to preserve at least some fixed "energy" (Fixed-energy basis)?


As an example, suppose the variables of interest were {A,B,C,D,E} and that the eigenvalues of the covariance matrix for these were {5,2,1.5,1,.5}. If the "Full basis" scheme is used, then all 5 components of the eigenvectors will be used to project each {A,B,C,D,E}-tuple in the original data into a new 5-component space.



If the "Fixed-size" scheme is used and the "Basis Size" property is set to 4, then only the first 4 eigenvector components will be used to project each {A,B,C,D,E}-tuple into the new space and that space will be of dimension 4, not 5.



If the "Fixed-energy basis" scheme is used and the "Basis Energy" property is set to 0.8, then only the first 3 eigenvector components will be used to project each {A,B,C,D,E}-tuple into the new space, which will be of dimension 3. The number 3 is chosen because 3 is the lowest N for which the sum of the first N eigenvalues divided by the sum of all eigenvalues is larger than the specified "Basis Energy" (i.e., (5+2+1.5)/10 = 0.85 > 0.8).


0

The value must be one of the following: Full basis (0), Fixed-size basis (1), Fixed-energy basis (2).


Basis Size
(BasisSize)

The maximum number of eigenvector components to use when projecting into the new space.


2

The value must be greater than or equal to 1.


Input
(Input)

The input to the filter. Arrays from this dataset will be used for computing statistics and/or assessed by a statistical model.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkImageData, vtkStructuredGrid, vtkPolyData, vtkUnstructuredGrid, vtkTable, vtkGraph.


Model Input
(ModelInput)

A previously-calculated model with which to assess a separate dataset. This input is optional.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkTable, vtkMultiBlockDataSet.


Normalization Scheme
(NormalizationScheme)

Before the eigenvector decomposition of the covariance matrix takes place, you may normalize each (i,j) entry by sqrt( cov(i,i) * cov(j,j) ). This implies that the variance of each variable of interest should be of equal importance.


2

The value must be one of the following: No normalization (0), Normalize using covariances (3).


Variables of Interest
(SelectArrays)

Choose arrays whose entries will be used to form observations for statistical analysis.


An array of scalars is required.


Task
(Task)

Specify the task to be performed: modeling and/or assessment.

  1. "Statistics of all the data," creates an output table (or tables) summarizing the entire input dataset;
  1. "Model a subset of the data," creates an output table (or tables) summarizing a randomly-chosen subset of the input dataset;
  1. "Assess the data with a model," adds attributes to the first input dataset using a model provided on the second input port; and
  1. "Model and assess the same data," is really just operations 2 and 3 above applied to the same input dataset. The model is first trained using a fraction of the input data and then the entire dataset is assessed using that model.


When the task includes creating a model (i.e., tasks 2 and 4), you may adjust the fraction of the input dataset used for training. You should avoid using a large fraction of the input data for training as you will then not be able to detect overfitting. The Training fraction setting will be ignored for tasks 1 and 3.


3

The value must be one of the following: Statistics of all the data (0), Model a subset of the data (1), Assess the data with a model (2), Model and assess the same data (3).


Training Fraction
(TrainingFraction)

Specify the fraction of values from the input dataset to be used for model fitting. The exact set of values is chosen at random from the dataset.


0.1

The value must be greater than or equal to 0 and less than or equal to 1.



Probe Location

Sample data attributes at the points in a point cloud.


The Probe filter samples the data set attributes of the current data set at the points in a point cloud. The Probe filter uses interpolation to determine the values at the selected point, whether or not it lies at an input point. The Probe filter operates on any type of data and produces polygonal output (a point cloud).


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the dataset from which to obtain probe values.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet, vtkCompositeDataSet.


Probe Type
(Source)

This property specifies the dataset whose geometry will be used in determining positions to probe.


The selected object must be the result of the following: sources (includes readers).


The value must be set to one of the following: FixedRadiusPointSource.



Process Id Scalars

This filter uses colors to show how data is partitioned across processes.


The Process Id Scalars filter assigns a unique scalar value to each piece of the input according to which processor it resides on. This filter operates on any type of data when ParaView is run in parallel. It is useful for determining whether your data is load-balanced across the processors being used. The output data set type is the same as that of the input.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Process Id Scalars filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Random Mode
(RandomMode)

The value of this property determines whether to use random id values for the various pieces. If set to 1, the unique value per piece will be chosen at random; otherwise the unique value will match the id of the process.


0

Only the values 0 and 1 are accepted.



Programmable Filter

Executes a user supplied python script on its input dataset to produce an output dataset.


This filter will execute a python script to produce an output dataset.

The filter keeps a copy of the python script in Script, and creates

Interpretor, a python interpreter to run the script upon the first

execution.
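
As an illustration only, a Script body of the following form could pass the input through and add a point array. This sketch is not taken from this page; it assumes the default "Same as Input" output type and a vtkDataSet input, and the array name is purely illustrative.

<source lang="python">
# Sketch of a Script body for the Programmable Filter (assumptions noted above).
in_data = self.GetInputDataObject(0, 0)    # first input of the filter
out_data = self.GetOutputDataObject(0)     # output allocated per OutputDataSetType
out_data.ShallowCopy(in_data)              # pass geometry and existing arrays through

from paraview import vtk                   # VTK wrapping shipped with ParaView
index_array = vtk.vtkDoubleArray()
index_array.SetName("PointIndex")          # illustrative array name
index_array.SetNumberOfTuples(in_data.GetNumberOfPoints())
for i in range(in_data.GetNumberOfPoints()):
    index_array.SetValue(i, float(i))      # store each point's index as a value
out_data.GetPointData().AddArray(index_array)
</source>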


Property Description Default Value(s) Restrictions
Copy Arrays
(CopyArrays)

If this property is set to true, all the cell and point arrays from

first input are copied to the output.


0

Only the values 0 and 1 are accepted.


RequestInformation Script
(InformationScript)

This property is a python script that is executed during the RequestInformation pipeline pass. Use this to provide information such as WHOLE_EXTENT to the pipeline downstream.


Input
(Input)

This property specifies the input(s) to the programmable filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Output Data Set Type
(OutputDataSetType)

The value of this property determines the dataset type for the output of the programmable filter.


8

The value must be one of the following: Same as Input (8), vtkPolyData (0), vtkStructuredGrid (2), vtkRectilinearGrid (3), vtkUnstructuredGrid (4), vtkImageData (6), vtkUniformGrid (10), vtkMultiblockDataSet (13), vtkHierarchicalBoxDataSet (15), vtkTable (19).


Python Path
(PythonPath)

A semi-colon (;) separated list of directories to add to the python library

search path.


Script
(Script)

This property contains the text of a python program that the programmable filter runs.


RequestUpdateExtent Script
(UpdateExtentScript)

This property is a python script that is executed during the RequestUpdateExtent pipeline pass. Use this to modify the update extent that your filter asks upstream for.



Python Calculator

This filter evaluates a Python expression


This filter uses Python to calculate an expression.

It depends heavily on the numpy and paraview.vtk modules.

To use the parallel functions, mpi4py is also necessary. The expression

is evaluated and the resulting scalar value or numpy array is added

to the output as an array. See numpy and paraview.vtk documentation

for the list of available functions.


This filter tries to make it easy for the user to write expressions

by defining certain variables. The filter tries to assign each array

to a variable of the same name. If the name of the array is not a

valid Python variable, it has to be accessed through a dictionary called

arrays (i.e. arrays['array_name']). The points can be accessed using the

points variable.
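
For example, a minimal pvpython sketch along the following lines (assuming paraview.simple and the Wavelet source, whose point array is named RTData) adds a new point array computed from an expression:

<source lang="python">
# Minimal sketch: evaluate an expression over point arrays (assumptions noted above).
from paraview.simple import Wavelet, PythonCalculator

source = Wavelet()
calc = PythonCalculator(Input=source)
calc.ArrayAssociation = 'Point Data'   # evaluate over point arrays
calc.ArrayName = 'scaled'              # name of the output array
calc.Expression = 'RTData * 2.0'       # arrays are exposed as variables
calc.UpdatePipeline()
</source>

An array whose name is not a valid Python identifier would instead be written as arrays['array name'], and point coordinates are available through the points variable, as described above.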


Property Description Default Value(s) Restrictions
Array Association
(ArrayAssociation)

This property controls the association of the output array as well as

which arrays are defined as variables.


0

The value must be one of the following: Point Data (0), Cell Data (1).


Array Name
(ArrayName)

The name of the output array.


result
Copy Arrays
(CopyArrays)

If this property is set to true, all the cell and point arrays from

first input are copied to the output.


1

Only the values 0 and 1 are accepted.


Expression
(Expression)

The Python expression evaluated during execution.


Input
(Input)

Set the input of the filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Quadric Clustering

This filter is the same filter used to generate level of detail for ParaView. It uses a structured grid of bins and merges all points contained in each bin.


The Quadric Clustering filter produces a reduced-resolution polygonal approximation of the input polygonal dataset. This filter is the one used by ParaView for computing LODs. It uses spatial binning to reduce the number of points in the data set; points that lie within the same spatial bin are collapsed into one representative point.


Property Description Default Value(s) Restrictions
Copy Cell Data
(CopyCellData)

If this property is set to 1, the cell data from the input will be copied to the output.


1

Only the values 0 and 1 are accepted.


Input
(Input)

This property specifies the input to the Quadric Clustering filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Number of Dimensions
(NumberOfDivisions)

This property specifies the number of bins along the X, Y, and Z axes of the data set.


50 50 50
Use Feature Edges
(UseFeatureEdges)

If this property is set to 1, feature edge quadrics will be used to maintain the boundary edges along processor divisions.


0

Only the values 0 and 1 are accepted.


Use Feature Points
(UseFeaturePoints)

If this property is set to 1, feature point quadrics will be used to maintain the boundary points along processor divisions.


0

Only the values 0 and 1 are accepted.


Use Input Points
(UseInputPoints)

If the value of this property is set to 1, the representative point for each bin is selected from one of the input points that lies in that bin; the input point that produces the least error is chosen. If the value of this property is 0, the location of the representative point is calculated to produce the least error possible for that bin, but the point will most likely not be one of the input points.


1

Only the values 0 and 1 are accepted.


Use Internal Triangles
(UseInternalTriangles)

If this property is set to 1, triangles completely contained in a spatial bin will be included in the computation of the bin's quadrics. When this property is set to 0, the filter operates faster, but the resulting surface may not be as well-behaved.


0

Only the values 0 and 1 are accepted.



Random Vectors

This filter creates a new 3-component point data array and sets it as the default vector array. It uses a random number generator to create values.


The Random Vectors filter generates a point-centered array of random vectors. It uses a random number generator to determine the components of the vectors. This filter operates on any type of data set, and the output data set will be of the same type as the input.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Random Vectors filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Maximum Speed
(MaximumSpeed)

This property specifies the maximum length of the random point vectors generated.


1

The value must be greater than or equal to 0.


Minimum Speed
(MinimumSpeed)

This property specifies the minimum length of the random point vectors generated.


0

The value must be greater than or equal to 0.



Rectilinear Grid Connectivity

Parallel fragment extraction and attribute integration on rectilinear grids.


Extracts material fragments from multi-block vtkRectilinearGrid datasets

based on the selected volume fraction array(s) and a fraction isovalue and

integrates the associated attributes.


Property Description Default Value(s) Restrictions
Double Volume Arrays
(AddDoubleVolumeArrayName)

This property specifies the name(s) of the volume fraction array(s) for generating parts.


An array of scalars is required.


Float Volume Arrays
(AddFloatVolumeArrayName)

This property specifies the name(s) of the volume fraction array(s) for generating parts.


An array of scalars is required.


Unsigned Character Volume Arrays
(AddUnsignedCharVolumeArrayName)

This property specifies the name(s) of the volume fraction array(s) for generating parts.


An array of scalars is required.


Input
(Input)

The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a cell array with 1 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkRectilinearGrid, vtkCompositeDataSet.


Volume Fraction Value
(VolumeFractionSurfaceValue)

The value of this property is the volume fraction value for the surface.


0.1

The value must be greater than or equal to 0 and less than or equal to 1.



Reflect

This filter takes the union of the input and its reflection over an axis-aligned plane.


The Reflect filter reflects the input dataset across the specified plane. This filter operates on any type of data set and produces an unstructured grid output.
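
A minimal pvpython sketch (assuming paraview.simple; the Cone source stands in for any dataset):

<source lang="python">
# Minimal sketch: reflect a dataset across its minimum-X bounding-box face.
from paraview.simple import Cone, Reflect

cone = Cone()
mirrored = Reflect(Input=cone)
mirrored.Plane = 'X Min'     # reflection plane placed at the X-minimum face of the bounds
mirrored.CopyInput = 1       # keep the union of the input and its reflection
mirrored.UpdatePipeline()
</source>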


Property Description Default Value(s) Restrictions
Center
(Center)

If the value of the Plane property is X, Y, or Z, then the value of this property specifies the center of the reflection plane.


0
Copy Input
(CopyInput)

If this property is set to 1, the output will contain the union of the input dataset and its reflection. Otherwise the output will contain only the reflection of the input data.


1

Only the values 0 and 1 are accepted.


Input
(Input)

This property specifies the input to the Reflect filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Plane
(Plane)

The value of this property determines which plane to reflect across. If the value is X, Y, or Z, the value of the Center property determines where the plane is placed along the specified axis. The other six options (X Min, X Max, etc.) place the reflection plane at the specified face of the bounding box of the input dataset.


0

The value must be one of the following: X Min (0), Y Min (1), Z Min (2), X Max (3), Y Max (4), Z Max (5), X (6), Y (7), Z (8).



Resample With Dataset

Sample data attributes at the points of a dataset.


Probe is a filter that computes point attributes at specified point positions. The filter has two inputs: the Input and Source. The Input geometric structure is passed through the filter. The point attributes are computed at the Input point positions by interpolating into the source data. For example, we can compute data values on a plane (plane specified as Input) from a volume (Source). The cell data of the source data is copied to the output based on in which source cell each input point is. If an array of the same name exists both in source's point and cell data, only the one from the point data is probed.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the dataset from which to obtain probe values.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet, vtkCompositeDataSet.


Source
(Source)

This property specifies the dataset whose geometry will be used in determining positions to probe.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Ribbon

This filter generates ribbon surface from lines. It is useful for displaying streamlines.


The Ribbon filter creates ribbons from the lines in the input data set. This filter is useful for visualizing streamlines. Both the input and output of this filter are polygonal data. The input data set must also have at least one point-centered vector array.


Property Description Default Value(s) Restrictions
Angle
(Angle)

The value of this property specifies the offset angle (in degrees) of the ribbon from the line normal.


0

The value must be greater than or equal to 0 and less than or equal to 360.


Default Normal
(DefaultNormal)

The value of this property specifies the normal to use when the UseDefaultNormal property is set to 1 or the input contains no vector array (SelectInputVectors property).


0 0 1
Input
(Input)

This property specifies the input to the Ribbon filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Scalars
(SelectInputScalars)

The value of this property indicates the name of the input scalar array used by this filter. The width of the ribbons will be varied based on the values in the specified array if the value of the Width property is 1.


An array of scalars is required.


Vectors
(SelectInputVectors)

The value of this property indicates the name of the input vector array used by this filter. If the UseDefaultNormal property is set to 0, the normal vectors for the ribbons come from the specified vector array.


1

An array of vectors is required.


Use Default Normal
(UseDefaultNormal)

If this property is set to 0, and the input contains no vector array, then default ribbon normals will be generated (DefaultNormal property); if a vector array has been set (SelectInputVectors property), the ribbon normals will be set from the specified array. If this property is set to 1, the default normal (DefaultNormal property) will be used, regardless of whether the SelectInputVectors property has been set.


0

Only the values 0 and 1 are accepted.


Vary Width
(VaryWidth)

If this property is set to 1, the ribbon width will be scaled according to the scalar array specified in the SelectInputScalars property.

Toggle the variation of ribbon width with scalar value.


0

Only the values 0 and 1 are accepted.


Width
(Width)

If the VaryWidth property is set to 1, the value of this property is the minimum ribbon width. If the VaryWidth property is set to 0, the value of this property is half the width of the ribbon.


1

The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.01.



Rotational Extrusion

This filter generates a swept surface while translating the input along a circular path.


The Rotational Extrusion filter forms a surface by rotating the input about the Z axis. This filter is intended to operate on 2D polygonal data. It produces polygonal output.


Property Description Default Value(s) Restrictions
Angle
(Angle)

This property specifies the angle of rotation in degrees. The surface is swept from 0 to the value of this property.


360
Capping
(Capping)

If this property is set to 1, the open ends of the swept surface will be capped with a copy of the input dataset. This works properly if the input is a 2D surface composed of filled polygons. If the input dataset is a closed solid (e.g., a sphere), then either two copies of the dataset will be drawn or no surface will be drawn. No surface is drawn if either this property is set to 0 or if the two surfaces would occupy exactly the same 3D space (i.e., the Angle property's value is a multiple of 360, and the values of the Translation and DeltaRadius properties are 0).


1

Only the values 0 and 1 are accepted.


Delta Radius
(DeltaRadius)

The value of this property specifies the change in radius during the sweep process.


0
Input
(Input)

This property specifies the input to the Rotational Extrusion filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Resolution
(Resolution)

The value of this property controls the number of intermediate node points used in performing the sweep (rotating from 0 degrees to the value specified by the Angle property).


12

The value must be greater than or equal to 1.


Translation
(Translation)

The value of this property specifies the total amount of translation along the Z axis during the sweep process. Specifying a non-zero value for this property allows you to create a corkscrew (value of DeltaRadius > 0) or spring effect.


0


Scatter Plot

Creates a scatter plot from a dataset.


This filter creates a scatter plot from a dataset. In point data mode,

it uses the X point coordinates as the default X array. All other arrays

are passed to the output and can be used in the scatter plot. In cell

data mode, the first single component array is used as the default X

array.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Shrink

This filter shrinks each input cell so they pull away from their neighbors.


The Shrink filter causes the individual cells of a dataset to break apart from each other by moving each cell's points toward the centroid of the cell. (The centroid of a cell is the average position of its points.) This filter operates on any type of dataset and produces unstructured grid output.
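
A minimal pvpython sketch (assuming paraview.simple):

<source lang="python">
# Minimal sketch: pull each cell toward its centroid to expose cell boundaries.
from paraview.simple import Sphere, Shrink

sphere = Sphere()
exploded = Shrink(Input=sphere)
exploded.ShrinkFactor = 0.25   # 0 collapses cells to their centroids, 1 leaves them in place
exploded.UpdatePipeline()
</source>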


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Shrink filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Shrink Factor
(ShrinkFactor)

The value of this property determines how far the points will move. A value of 0 positions the points at the centroid of the cell; a value of 1 leaves them at their original positions.


0.5

The value must be greater than or equal to 0 and less than or equal to 1.



Slice

This filter slices a data set with a plane. Slicing is similar to a contour. It creates surfaces from volumes and lines from surfaces.


This filter extracts the portion of the input dataset that lies along the specified plane. The Slice filter takes any type of dataset as input. The output of this filter is polygonal data.
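
A minimal pvpython sketch (assuming paraview.simple and the Wavelet source):

<source lang="python">
# Minimal sketch: cut a volume with an X-normal plane through the origin.
from paraview.simple import Wavelet, Slice

volume = Wavelet()
cut = Slice(Input=volume, SliceType='Plane')  # Plane, Box or Sphere
cut.SliceType.Origin = [0.0, 0.0, 0.0]        # plane through the origin
cut.SliceType.Normal = [1.0, 0.0, 0.0]        # normal along X
cut.SliceOffsetValues = [0.0]                 # a single, un-offset slice
cut.UpdatePipeline()
</source>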


Property Description Default Value(s) Restrictions
Slice Offset Values
(ContourValues)

The values in this property specify a list of current offset values. This can be used to create multiple slices with different centers. Each entry represents a new slice with its center shifted by the offset value.


Determine the length of the dataset's diagonal. The value must lie within -diagonal length to +diagonal length.


Slice Type
(CutFunction)

This property sets the parameters of the slice function.


The value must be set to one of the following: Plane, Box, Sphere.


Input
(Input)

This property specifies the input to the Slice filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Smooth

This filter smooths a polygonal surface by iteratively moving points toward their neighbors.


The Smooth filter operates on a polygonal data set by iteratively adjusting the position of the points using Laplacian smoothing. (Because this filter only adjusts point positions, the output data set is also polygonal.) This results in better-shaped cells and more evenly distributed points.


The Convergence slider limits the maximum motion of any point. It is expressed as a fraction of the length of the diagonal of the bounding box of the data set. If the maximum point motion during a smoothing iteration is less than the Convergence value, the smoothing operation terminates.


Property Description Default Value(s) Restrictions
Convergence
(Convergence)

The value of this property limits the maximum motion of any point. It is expressed as a fraction of the length of the diagonal of the bounding box of the input dataset. If the maximum point motion during a smoothing iteration is less than the value of this property, the smoothing operation terminates.


0

The value must be greater than or equal to 0 and less than or equal to 1.


Input
(Input)

This property specifies the input to the Smooth filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Number of Iterations
(NumberOfIterations)

This property sets the maximum number of smoothing iterations to perform. More iterations produce better smoothing.


20

The value must be greater than or equal to 0.



Stream Tracer

Integrate streamlines in a vector field.


The Stream Tracer filter generates streamlines in a vector field from a collection of seed points. Production of streamlines terminates if a streamline crosses the exterior boundary of the input dataset. Other reasons for termination are listed for the MaximumNumberOfSteps, TerminalSpeed, and MaximumPropagation properties. This filter operates on any type of dataset, provided it has point-centered vectors. The output is polygonal data containing polylines.
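
A minimal pvpython sketch (assuming paraview.simple; the Wavelet source has no vector array, so a Gradient filter is used here only to provide a 3-component point array, and the seed-type string, the default vector-array selection, and the point-source property are assumptions):

<source lang="python">
# Minimal sketch: trace streamlines through a gradient field seeded from a point cloud.
from paraview.simple import Wavelet, Gradient, StreamTracer

volume = Wavelet()                         # image data with the point scalar RTData
vectors = Gradient(Input=volume)           # adds a 3-component point array to trace
tracer = StreamTracer(Input=vectors, SeedType='Point Source')
tracer.MaximumStreamlineLength = 20.0      # one of the termination criteria
tracer.SeedType.NumberOfPoints = 50        # seeds in a point cloud (property name assumed)
tracer.UpdatePipeline()                    # output is polygonal data containing polylines
</source>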


Property Description Default Value(s) Restrictions
Initial Step Length
(InitialIntegrationStep)

This property specifies the initial integration step size. For non-adaptive integrators (Runge-Kutta 2 and Runge-Kutta 4), it is fixed (always equal to this initial value) throughout the integration. For an adaptive integrator (Runge-Kutta 4-5), the actual step size varies such that the numerical error is less than a specified threshold.


0.2
Input
(Input)

This property specifies the input to the Stream Tracer filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point array with 3 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Integration Direction
(IntegrationDirection)

This property determines in which direction(s) a streamline is generated.


2

The value must be one of the following: FORWARD (0), BACKWARD (1), BOTH (2).


Integration Step Unit
(IntegrationStepUnit)

This property specifies the unit for Minimum/Initial/Maximum integration step size. The Length unit refers to the arc length that a particle travels/advects within a single step. The Cell Length unit represents the step size as a number of cells.


2

The value must be one of the following: Length (1), Cell Length (2).


Integrator Type
(IntegratorType)

This property determines which integrator (with increasing accuracy) to use for creating streamlines.


2

The value must be one of the following: Runge-Kutta 2 (0), Runge-Kutta 4 (1), Runge-Kutta 4-5 (2).


Interpolator Type
(InterpolatorType)

This property determines which interpolator to use for evaluating the velocity vector field. The first is faster though the second is more robust in locating cells during streamline integration.


0

The value must be one of the following: Interpolator with Point Locator (0), Interpolator with Cell Locator (1).


Maximum Error
(MaximumError)

This property specifies the maximum error (for Runge-Kutta 4-5) tolerated throughout streamline integration. The Runge-Kutta 4-5 integrator tries to adjust the step size such that the estimated error is less than this threshold.


1e-06
Maximum Step Length
(MaximumIntegrationStep)

When using the Runge-Kutta 4-5 integrator, this property specifies the maximum integration step size.


0.5
Maximum Steps
(MaximumNumberOfSteps)

This property specifies the maximum number of steps, beyond which streamline integration is terminated.


2000
Maximum Streamline Length
(MaximumPropagation)

This property specifies the maximum streamline length (i.e., physical arc length), beyond which line integration is terminated.


1

The value must be less than the largest dimension of the dataset multiplied by a scale factor of 1.


Minimum Step Length
(MinimumIntegrationStep)

When using the Runge-Kutta 4-5 integrator, this property specifies the minimum integration step size.


0.01
Vectors
(SelectInputVectors)

This property contains the name of the vector array from which to generate streamlines.


An array of vectors is required.


Seed Type
(Source)

The value of this property determines how the seeds for the streamlines will be generated.


The selected object must be the result of the following: sources (includes readers).


The value must be set to one of the following: PointSource, HighResLineSource.


Terminal Speed
(TerminalSpeed)

This property specifies the terminal speed, below which particle advection/integration is terminated.


1e-12


Stream Tracer With Custom Source

Integrate streamlines in a vector field.


The Stream Tracer filter generates streamlines in a vector field from a collection of seed points. Production of streamlines terminates if a streamline crosses the exterior boundary of the input dataset. Other reasons for termination are listed for the MaximumNumberOfSteps, TerminalSpeed, and MaximumPropagation properties. This filter operates on any type of dataset, provided it has point-centered vectors. The output is polygonal data containing polylines. This filter takes a Source input that provides the seed points.


Property Description Default Value(s) Restrictions
Initial Step Length
(InitialIntegrationStep)

This property specifies the initial integration step size. For non-adaptive integrators (Runge-Kutta 2 and Runge-Kutta 4), it is fixed (always equal to this initial value) throughout the integration. For an adaptive integrator (Runge-Kutta 4-5), the actual step size varies such that the numerical error is less than a specified threshold.


0.2
Input
(Input)

This property specifies the input to the Stream Tracer filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point array with 3 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Integration Direction
(IntegrationDirection)

This property determines in which direction(s) a streamline is generated.


2

The value must be one of the following: FORWARD (0), BACKWARD (1), BOTH (2).


Integration Step Unit
(IntegrationStepUnit)

This property specifies the unit for Minimum/Initial/Maximum integration step size. The Length unit refers to the arc length that a particle travels/advects within a single step. The Cell Length unit represents the step size as a number of cells.


2

The value must be one of the following: Length (1), Cell Length (2).


Integrator Type
(IntegratorType)

This property determines which integrator (with increasing accuracy) to use for creating streamlines.


2

The value must be one of the following: Runge-Kutta 2 (0), Runge-Kutta 4 (1), Runge-Kutta 4-5 (2).


Maximum Error
(MaximumError)

This property specifies the maximum error (for Runge-Kutta 4-5) tolerated throughout streamline integration. The Runge-Kutta 4-5 integrator tries to adjust the step size such that the estimated error is less than this threshold.


1e-06
Maximum Step Length
(MaximumIntegrationStep)

When using the Runge-Kutta 4-5 integrator, this property specifies the maximum integration step size.


0.5
Maximum Steps
(MaximumNumberOfSteps)

This property specifies the maximum number of steps, beyond which streamline integration is terminated.


2000
Maximum Streamline Length
(MaximumPropagation)

This property specifies the maximum streamline length (i.e., physical arc length), beyond which line integration is terminated.


1

The value must be less than the largest dimension of the dataset multiplied by a scale factor of 1.


Minimum Step Length
(MinimumIntegrationStep)

When using the Runge-Kutta 4-5 integrator, this property specifies the minimum integration step size.


0.01
Vectors
(SelectInputVectors)

This property contains the name of the vector array from which to generate streamlines.


An array of vectors is required.


Source
(Source)

This property specifies the input used to obtain the seed points.


The selected object must be the result of the following: sources (includes readers).


Terminal Speed
(TerminalSpeed)

This property specifies the terminal speed, below which particle advection/integration is terminated.


1e-12


Subdivide

This filter iteratively divides triangles into four smaller triangles. New points are placed linearly so the output surface matches the input surface.


The Subdivide filter iteratively divides each triangle in the input dataset into 4 new triangles. Three new points are added per triangle -- one at the midpoint of each edge. This filter operates only on polygonal data containing triangles, so run your polygonal data through the Triangulate filter first if it is not composed of triangles. The output of this filter is also polygonal.


Property Description Default Value(s) Restrictions
Input
(Input)

This parameter specifies the input to the Subdivide filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Number of Subdivisions
(NumberOfSubdivisions)

The value of this property specifies the number of subdivision iterations to perform.


1

The value must be greater than or equal to 1 and less than or equal to 4.



Surface Flow

This filter integrates flow through a surface.


The flow integration filter integrates the dot product of a point flow vector field and surface normal. It computes the net flow across the 2D surface. It operates on any type of dataset and produces an unstructured grid output.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Surface Flow filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point array with 3 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Select Input Vectors
(SelectInputVectors)

The value of this property specifies the name of the input vector array containing the flow vector field.


An array of vectors is required.



Surface Vectors

This filter constrains vectors to lie on a surface.


The Surface Vectors filter is used for 2D data sets. It constrains vectors to lie in a surface by removing components of the vectors normal to the local surface.


Property Description Default Value(s) Restrictions
Constraint Mode
(ConstraintMode)

This property specifies whether the vectors will be parallel or perpendicular to the surface. If the value is set to PerpendicularScale (2), then the output will contain a scalar array with the dot product of the surface normal and the vector at each point.


0

The value must be one of the following: Parallel (0), Perpendicular (1), PerpendicularScale (2).


Input
(Input)

This property specifies the input to the Surface Vectors filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point array with 3 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Select Input Vectors
(SelectInputVectors)

This property specifies the name of the input vector array to process.


An array of vectors is required.



Table To Points

Converts table to set of points.


The TableToPolyData filter converts a vtkTable to a set of points in a

vtkPolyData. One must specify the columns in the input table to use as

the X, Y and Z coordinates for the points in the output.
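
A minimal pvpython sketch (assuming paraview.simple, a CSV reader, and a file with columns named x, y and z; the file name is only a placeholder):

<source lang="python">
# Minimal sketch: turn table columns into point coordinates.
from paraview.simple import CSVReader, TableToPoints

table = CSVReader(FileName=['points.csv'])  # placeholder path
points = TableToPoints(Input=table)
points.XColumn = 'x'                        # column names from the input table
points.YColumn = 'y'
points.ZColumn = 'z'
points.UpdatePipeline()
</source>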


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain an array with 1 component.


The selected dataset must be one of the following types (or a subclass of one of them): vtkTable.


X Column
(XColumn)

An array of scalars is required.


Y Column
(YColumn)

An array of scalars is required.


Z Column
(ZColumn)

An array of scalars is required.



Table To Structured Grid

Converts a table to a structured grid.


The TableToStructuredGrid filter converts a vtkTable to a

vtkStructuredGrid. One must specify the columns in the input table to

use as the X, Y and Z coordinates for the points in the output, and the

whole extent.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain an array with 1 component.


The selected dataset must be one of the following types (or a subclass of one of them): vtkTable.


Whole Extent
(WholeExtent)
0 0 0 0 0 0
X Column
(XColumn)

An array of scalars is required.


Y Column
(YColumn)

An array of scalars is required.


Z Column
(ZColumn)

An array of scalars is required.



Temporal Cache

Saves a copy of the data set for a fixed number of time steps.


The Temporal Cache can be used to save multiple copies of a data set at different time steps to prevent thrashing in the pipeline caused by downstream filters that adjust the requested time step. For example, assume that there is a downstream Temporal Interpolator filter. This filter will (usually) request two time steps from the upstream filters, which in turn (usually) causes the upstream filters to run twice, once for each time step. The next time the interpolator requests the same two time steps, they might force the upstream filters to re-evaluate the same two time steps. The Temporal Cache can keep copies of both of these time steps and provide the requested data without having to run upstream filters.


Property Description Default Value(s) Restrictions
Cache Size
(CacheSize)

The cache size determines the number of time steps that can be cached at one time. The maximum number is 10. The minimum is 2 (since it makes little sense to cache less than that).


2

The value must be greater than or equal to 2 and less than or equal to 10.


Input
(Input)

This property specifies the input of the Temporal Cache filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.



Temporal Interpolator

Interpolate between time steps.


The Temporal Interpolator converts data that is defined at discrete time steps to one that is defined over a continuum of time by linearly interpolating the data's field data between two adjacent time steps. The interpolated values are a simple approximation and should not be interpreted as anything more. The Temporal Interpolator assumes that the topology between adjacent time steps does not change.


Property Description Default Value(s) Restrictions
Discrete Time Step Interval
(DiscreteTimeStepInterval)

If Discrete Time Step Interval is set to 0, then the Temporal Interpolator will provide a continuous region of time on its output. If set to anything else, then the output will define a finite set of time points on its output, each spaced by the Discrete Time Step Interval. The output will have (time range)/(discrete time step interval) time steps. (Note that the time range is defined by the time range of the data of the input filter, which may be different from other pipeline objects or the range defined in the animation inspector.) This is a useful option to use if you have a dataset with one missing time step and wish to 'fill in' the missing data with an interpolated value from the steps on either side.


0
Input
(Input)

This property specifies the input of the Temporal Interpolator.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.



Temporal Shift Scale

Shift and scale time values.


The Temporal Shift Scale filter linearly transforms the time values of a pipeline object by applying a scale and then a shift. Given data at time t on the input, it will be transformed to time t*Scale + Shift on the output. Inversely, if this filter has a request for time t, it will request time (t-Shift)/Scale on its input.
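
As a concrete illustration of the mapping above (using the generic Shift and Scale names from this description; how they relate to the Pre Shift and Post Shift properties below is not spelled out here, so this is only a sketch of the arithmetic):

<source lang="python">
# Worked example of the time mapping described above, with Shift = 2 and Scale = 0.5.
shift, scale = 2.0, 0.5

def output_time(t):               # input time -> output time
    return t * scale + shift

def requested_input_time(t):      # output-time request -> input time
    return (t - shift) / scale

print(output_time(10.0))              # 10*0.5 + 2 = 7.0
print(requested_input_time(7.0))      # (7 - 2)/0.5 = 10.0
</source>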


Property Description Default Value(s) Restrictions
Input
(Input)

The input to the Temporal Shift Scale filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.


Maximum Number Of Periods
(MaximumNumberOfPeriods)
1

The value must be greater than or equal to 0 and less than or equal to 100.


Periodic
(Periodic)
0

Only the values 0 and 1 are accepted.


Periodic End Correction
(PeriodicEndCorrection)
1

Only the values 0 and 1 are accepted.


Post Shift
(PostShift)

The amount of time the input is shifted.


0
Pre Shift
(PreShift)
0
Scale
(Scale)

The factor by which the input time is scaled.


1


Temporal Snap-to-Time-Step

Modifies the time range/steps of temporal data.


This filter modifies the time range or time steps of

the data without changing the data itself. The data is not resampled

by this filter, only the information accompanying the data is modified.


Property Description Default Value(s) Restrictions
Input
(Input)

The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataObject.


Snap Mode
(SnapMode)

Determine which time step to snap to.


0

The value must be one of the following: Nearest (0), NextBelowOrEqual (1), NextAboveOrEqual (2).



Temporal Statistics

Loads in all time steps of a data set and computes some statistics about how each point and cell variable changes over time.


Given an input that changes over time, vtkTemporalStatistics looks

at the data for each time step and computes some statistical

information of how a point or cell variable changes over time. For

example, vtkTemporalStatistics can compute the average value of

"pressure" over time of each point.


Note that this filter will require the upstream filter to be run on

every time step that it reports that it can compute. This may be a

time consuming operation.


vtkTemporalStatistics ignores the temporal spacing. Each timestep

will be weighted the same regardless of the length of the interval to

the next timestep. Thus, the average statistic may be quite

different from an integration of the variable if the time spacing

varies.
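
A minimal pvpython sketch (assuming paraview.simple, an ExodusII reader, and the standard can.ex2 example dataset as a placeholder for any time-varying input):

<source lang="python">
# Minimal sketch: summarize how variables change over all time steps.
from paraview.simple import ExodusIIReader, TemporalStatistics

reader = ExodusIIReader(FileName=['can.ex2'])   # placeholder time-varying input
stats = TemporalStatistics(Input=reader)
stats.ComputeAverage = 1
stats.ComputeMinimum = 1
stats.ComputeMaximum = 1
stats.ComputeStandardDeviation = 1
stats.UpdatePipeline()      # runs the upstream pipeline once per time step
</source>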


Property Description Default Value(s) Restrictions
Compute Average
(ComputeAverage)

Compute the average of each point and cell variable over time.


1

Only the values 0 and 1 are accepted.


Compute Maximum
(ComputeMaximum)

Compute the maximum of each point and cell variable over time.


1

Only the values 0 and 1 are accepted.


Compute Minimum
(ComputeMinimum)

Compute the minimum of each point and cell variable over time.


1

Only the values 0 and 1 are accepted.


Compute Standard Deviation
(ComputeStandardDeviation)

Compute the standard deviation of each point and cell variable over time.


1

Only the values 0 and 1 are accepted.


Input
(Input)

Set the input to the Temporal Statistics filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Tessellate

Tessellate nonlinear curves, surfaces, and volumes with lines, triangles, and tetrahedra.


The Tessellate filter tessellates cells with nonlinear geometry and/or scalar fields into a simplicial complex with linearly interpolated field values that more closely approximate the original field. This is useful for datasets containing quadratic cells.


Property Description Default Value(s) Restrictions
Chord Error
(ChordError)

This property controls the maximum chord error allowed at any edge midpoint in the output tessellation. The chord error is measured as the distance between the midpoint of any output edge and the original nonlinear geometry.


0.001
Field Error
(FieldError2)

This property controls the maximum field error allowed at any edge midpoint in the output tessellation. The field error is measured as the difference between a field value at the midpoint of an output edge and the value of the corresponding field in the original nonlinear geometry.


Input
(Input)

This property specifies the input to the Tessellate filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData, vtkDataSet, vtkUnstructuredGrid.


Maximum Number of Subdivisions
(MaximumNumberOfSubdivisions)

This property specifies the maximum number of times an edge may be subdivided. Increasing this number allows further refinement but can drastically increase the computational and storage requirements, especially when the value of the OutputDimension property is 3.


3

The value must be greater than or equal to 0 and less than or equal to 8.


Merge Points
(MergePoints)

If the value of this property is set to 1, coincident vertices will be merged after tessellation has occurred. Only geometry is considered during the merge and the first vertex encountered is the one whose point attributes will be used. Any discontinuities in point fields will be lost. On the other hand, many operations, such as streamline generation, require coincident vertices to be merged.

Toggle whether to merge coincident vertices.


1

Only the values 0 and 1 are accepted.


Output Dimension
(OutputDimension)

The value of this property sets the maximum dimensionality of the output tessellation. When the value of this property is 3, 3D cells produce tetrahedra, 2D cells produce triangles, and 1D cells produce line segments. When the value is 2, 3D cells will have their boundaries tessellated with triangles. When the value is 1, all cells except points produce line segments.


3

The value must be greater than or equal to 1 and less than or equal to 3.
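
A hedged pvpython sketch of the Tessellate filter applied to a file containing quadratic cells. The reader and file name are placeholders; the property names are taken from the labels in the table above and may vary by ParaView version.

<pre>
from paraview.simple import *

# Placeholder file: an unstructured grid with quadratic (nonlinear) cells.
reader = XMLUnstructuredGridReader(FileName=['quadratic.vtu'])

tess = Tessellate(Input=reader)
tess.OutputDimension = 3     # 3D cells become tetrahedra, 2D cells become triangles
tess.ChordError = 0.001      # max distance between an output edge midpoint and the true geometry
# The 'Maximum Number of Subdivisions' property bounds how many times an edge may be split;
# its exact Python attribute name is version dependent, so it is left at the default (3) here.

tess.UpdatePipeline()
</pre>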



Tetrahedralize

This filter converts 3-d cells to tetrahedrons and polygons to triangles. The output is always of type unstructured grid.


The Tetrahedralize filter converts the 3D cells of any type of dataset to tetrahedrons and the 2D ones to triangles. This filter always produces unstructured grid output.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Tetrahedralize filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Texture Map to Cylinder

Generate texture coordinates by mapping points to cylinder.


This filter generates 2D texture coordinates by mapping input dataset points onto a cylinder. The cylinder is generated automatically by computing its axis. Note that the generated s-coordinate ranges from 0 to 1 (corresponding to an angle of 0 to 360 degrees around the axis), while the t-coordinate is controlled by the projection of points along the axis.


Property Description Default Value(s) Restrictions
Input
(Input)

Set the input to the Texture Map to Cylinder filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Prevent Seam
(PreventSeam)

Control how the texture coordinates are generated. If Prevent Seam is set, the s-coordinate ranges from 0->1 and 1->0, corresponding to the theta angle variation between 0->180 and 180->0 degrees. Otherwise, the s-coordinate ranges from 0->1 between 0->360 degrees.


1

Only the values 0 and 1 are accepted.
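
For example, a small pvpython sketch that wraps texture coordinates around a cylinder source. The function name follows paraview.simple's label-derived naming convention and is an assumption; check your ParaView version if it does not resolve.

<pre>
from paraview.simple import *

cyl = Cylinder(Resolution=32)

tcoords = TextureMaptoCylinder(Input=cyl)
tcoords.PreventSeam = 1   # s runs 0->1->0 over 360 degrees, so the texture does not show a seam

Show(tcoords)
Render()
</pre>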



Texture Map to Plane

Generate texture coordinates by mapping points to plane.


TextureMapToPlane is a filter that generates 2D texture coordinates by mapping input dataset points onto a plane. The plane is generated automatically using a least-squares fit to the points.


Property Description Default Value(s) Restrictions
Input
(Input)

Set the input to the Texture Map to Plane filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.



Texture Map to Sphere

Generate texture coordinates by mapping points to sphere.


This filter generates 2D texture coordinates by mapping input dataset points onto a sphere. The sphere is generated automatically by computing its center (the average of the point coordinates). Note that the generated texture coordinates range between 0 and 1. The s-coordinate lies in the angular direction around the z-axis, measured counter-clockwise from the x-axis. The t-coordinate lies in the angular direction measured down from the north pole towards the south pole.


Property Description Default Value(s) Restrictions
Input
(Input)

Set the input to the Texture Map to Sphere filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Prevent Seam
(PreventSeam)

Control how the texture coordinates are generated. If Prevent Seam is set, the s-coordinate ranges from 0->1 and 1->0, corresponding to the theta angle variation between 0->180 and 180->0 degrees. Otherwise, the s-coordinate ranges from 0->1 between 0->360 degrees.


1

Only the values 0 and 1 are accepted.



Threshold

This filter extracts cells that have point or cell scalars in the specified range.


The Threshold filter extracts the portions of the input dataset whose scalars lie within the specified range. This filter operates on either point-centered or cell-centered data. This filter operates on any type of dataset and produces unstructured grid output.


To select between these two options, select either Point Data or Cell Data from the Attribute Mode menu. Once the Attribute Mode has been selected, choose the scalar array from which to threshold the data from the Scalars menu. The Lower Threshold and Upper Threshold sliders determine the range of the scalars to retain in the output. The All Scalars check box only takes effect when the Attribute Mode is set to Point Data. If the All Scalars option is checked, then a cell will only be passed to the output if the scalar values of all of its points lie within the range indicated by the Lower Threshold and Upper Threshold sliders. If unchecked, then a cell will be added to the output if the specified scalar value for any of its points is within the chosen range.


Property Description Default Value(s) Restrictions
All Scalars
(AllScalars)

If the value of this property is 1, then a cell is only included in the output if the value of the selected array for all its points is within the threshold. This is only relevant when thresholding by a point-centered array.


1

Only the values 0 and 1 are accepted.


Input
(Input)

This property specifies the input to the Threshold filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point or cell array with 1 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkDataSet.


Scalars
(SelectInputScalars)

The value of this property contains the name of the scalar array from which to perform thresholding.


An array of scalars is required.


Valid array names will be chosen from point and cell data.


Threshold Range
(ThresholdBetween)

The values of this property specify the upper and lower bounds of the thresholding operation.


0 0

The value must lie within the range of the selected data array.
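
A minimal pvpython sketch of the thresholding workflow described above, using the Wavelet source (whose 'RTData' point scalars span roughly 37 to 276). The ThresholdRange property name matches this generation of ParaView; newer releases expose separate lower/upper properties instead.

<pre>
from paraview.simple import *

wavelet = Wavelet()                       # provides the 'RTData' point-centered scalar array

thresh = Threshold(Input=wavelet)
thresh.Scalars = ['POINTS', 'RTData']     # threshold by a point-centered array
thresh.ThresholdRange = [100.0, 200.0]    # lower and upper bounds
thresh.AllScalars = 1                     # keep a cell only if all of its points are in range

Show(thresh)
Render()
</pre>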



Transform

This filter applies a transformation (translation, rotation, and scaling) to the points of the input dataset.


The Transform filter allows you to specify the position, size, and orientation of polygonal, unstructured grid, and curvilinear data sets.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Transform filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.


Transform
(Transform)

The values in this property allow you to specify the transform (translation, rotation, and scaling) to apply to the input dataset.


The selected object must be the result of the following: transforms.


The value must be set to one of the following: Transform3.
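
A short pvpython sketch of the Transform filter; the nested Transform proxy carries the Translate, Rotate, and Scale values.

<pre>
from paraview.simple import *

cone = Cone()

xform = Transform(Input=cone)
xform.Transform.Translate = [1.0, 0.0, 0.0]   # move one unit along x
xform.Transform.Rotate = [0.0, 0.0, 45.0]     # rotate 45 degrees about z
xform.Transform.Scale = [1.0, 1.0, 2.0]       # stretch by a factor of 2 along z

Show(xform)
Render()
</pre>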



Triangle Strips

This filter uses a greedy algorithm to convert triangles into triangle strips.


The Triangle Strips filter converts triangles into triangle strips and lines into polylines. This filter operates on polygonal data sets and produces polygonal output.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Triangle Strips filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Maximum Length
(MaximumLength)

This property specifies the maximum number of triangles/lines to include in a triangle strip or polyline.


1000

The value must be greater than or equal to 4 and less than or equal to 100000.



Triangulate

This filter converts polygons and triangle strips to basic triangles.


The Triangulate filter decomposes polygonal data into only triangles, points, and lines. It separates triangle strips and polylines into individual triangles and lines, respectively. The output is polygonal data. Some filters that take polygonal data as input require that the data be composed of triangles rather than other polygons, so passing your data through this filter first is useful in such situations. In these cases, use this filter rather than the Tetrahedralize filter, because the two produce different output dataset types: the filters in question require polygonal input, while the Tetrahedralize filter produces unstructured grid output.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Triangulate filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.
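
The contrast with Tetrahedralize is easiest to see from pvpython: both can be applied to the same polygonal source, but only Triangulate keeps the output polygonal.

<pre>
from paraview.simple import *

plane = Plane()                       # polygonal source whose cells are quadrilaterals

tri = Triangulate(Input=plane)        # output stays vtkPolyData, now made only of triangles
tet = Tetrahedralize(Input=plane)     # by contrast, this produces a vtkUnstructuredGrid

tri.UpdatePipeline()
tet.UpdatePipeline()
</pre>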



Tube

Convert lines into tubes. Normals are used to avoid cracks between tube segments.


The Tube filter creates tubes around the lines in the input polygonal dataset. The output is also polygonal.


Property Description Default Value(s) Restrictions
Capping
(Capping)

If this property is set to 1, endcaps will be drawn on the tube. Otherwise the ends of the tube will be open.


1

Only the values 0 and 1 are accepted.


Default Normal
(DefaultNormal)

The value of this property specifies the normal to use when the UseDefaultNormal property is set to 1 or the input contains no vector array (SelectInputVectors property).


0 0 1
Input
(Input)

This property specifies the input to the Tube filter.


The selected object must be the result of the following: sources (includes readers), filters.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPolyData.


Number of Sides
(NumberOfSides)

The value of this property indicates the number of faces around the circumference of the tube.


6

The value must be greater than or equal to 3.


Radius
(Radius)

The value of this property sets the radius of the tube. If the radius is varying (VaryRadius property), then this value is the minimum radius.


1

The value must be less than the largest dimension of the dataset multiplied by a scale factor of 0.01.


Radius Factor
(RadiusFactor)

If varying the radius (VaryRadius property), the property sets the maximum tube radius in terms of a multiple of the minimum radius. If not varying the radius, this value has no effect.


10
Scalars
(SelectInputScalars)

This property indicates the name of the scalar array on which to operate. The indicated array may be used for scaling the tubes. (See the VaryRadius property.)


An array of scalars is required.


Vectors
(SelectInputVectors)

This property indicates the name of the vector array on which to operate. The indicated array may be used for scaling and/or orienting the tubes. (See the VaryRadius property.)


1

An array of vectors is required.


Use Default Normal
(UseDefaultNormal)

If this property is set to 0, and the input contains no vector array, then default ribbon normals will be generated (DefaultNormal property); if a vector array has been set (SelectInputVectors property), the ribbon normals will be set from the specified array. If this property is set to 1, the default normal (DefaultNormal property) will be used, regardless of whether the SelectInputVectors property has been set.


0

Only the values 0 and 1 are accepted.


Vary Radius
(VaryRadius)

The property determines whether/how to vary the radius of the tube. If varying by scalar (1), the tube radius is based on the point-based scalar values in the dataset. If it is varied by vector, the vector magnitude is used in varying the radius.


0

The value must be one of the following: Off (0), By Scalar (1), By Vector (2), By Absolute Scalar (3).
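
A hedged pvpython sketch of tubing a line with a radius that varies by scalar. The Elevation filter is used only to provide a point scalar to drive the radius; property names follow the labels above and may differ between versions.

<pre>
from paraview.simple import *

line = Line(Point1=[0, 0, 0], Point2=[0, 0, 5], Resolution=50)

elev = Elevation(Input=line)          # adds an 'Elevation' point scalar along z
elev.LowPoint = [0, 0, 0]
elev.HighPoint = [0, 0, 5]

tube = Tube(Input=elev)
tube.Radius = 0.05                    # minimum radius, since the radius varies
tube.VaryRadius = 'By Scalar'         # enumeration: Off, By Scalar, By Vector, By Absolute Scalar
tube.RadiusFactor = 10                # maximum radius = RadiusFactor * Radius
tube.Scalars = ['POINTS', 'Elevation']
tube.Capping = 1                      # close the tube ends

Show(tube)
Render()
</pre>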



Warp By Scalar

This filter moves point coordinates along a vector scaled by a point attribute. It can be used to produce carpet plots.


The Warp (scalar) filter translates the points of the input data set along a vector by a distance determined by the specified scalars. This filter operates on polygonal, curvilinear, and unstructured grid data sets containing single-component scalar arrays. Because it only changes the positions of the points, the output data set type is the same as that of the input. Any scalars in the input dataset are copied to the output, so the data can be colored by them.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Warp (scalar) filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point array with 1 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.


Normal
(Normal)

The values of this property specify the direction along which to warp the dataset if any normals contained in the input dataset are not being used for this purpose. (See the UseNormal property.)


0 0 1
Scale Factor
(ScaleFactor)

The scalar value at a given point is multiplied by the value of this property to determine the magnitude of the change vector for that point.


1
Scalars
(SelectInputScalars)

This property contains the name of the scalar array by which to warp the dataset.


An array of scalars is required.


Use Normal
(UseNormal)

If point normals are present in the dataset, the value of this property toggles whether to use a single normal value (value = 1) or the normals from the dataset (value = 0).


0

Only the values 0 and 1 are accepted.


XY Plane
(XYPlane)

If the value of this property is 1, then the Z-coordinates from the input are considered to be the scalar values, and the displacement is along the Z axis. This is useful for creating carpet plots.


0

Only the values 0 and 1 are accepted.
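
As a sketch, a simple carpet-plot-style pipeline in pvpython: a slice through the Wavelet source gives a vtkPointSet, which is then displaced along z by the 'RTData' scalars. Property names follow the labels above.

<pre>
from paraview.simple import *

wavelet = Wavelet()                      # image data with the 'RTData' point scalar

slc = Slice(Input=wavelet)               # slicing yields a point set the warp filter accepts
slc.SliceType.Normal = [0.0, 0.0, 1.0]

warp = WarpByScalar(Input=slc)
warp.Scalars = ['POINTS', 'RTData']
warp.ScaleFactor = 0.05                  # scalar value * factor = displacement distance
warp.Normal = [0.0, 0.0, 1.0]            # warp direction when dataset normals are not used

Show(warp)
Render()
</pre>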



Warp By Vector

This filter displaces point coordinates along a vector attribute. It is useful for showing mechanical deformation.


The Warp (vector) filter translates the points of the input dataset using a specified vector array. The vector array chosen specifies a vector per point in the input. Each point is translated along its vector by a given scale factor. This filter operates on polygonal, curvilinear, and unstructured grid datasets. Because this filter only changes the positions of the points, the output dataset type is the same as that of the input.


Property Description Default Value(s) Restrictions
Input
(Input)

This property specifies the input to the Warp (vector) filter.


The selected object must be the result of the following: sources (includes readers), filters.


The dataset must contain a point array with 3 components.


The selected dataset must be one of the following types (or a subclass of one of them): vtkPointSet.


Scale Factor
(ScaleFactor)

Each component of the selected vector array will be multiplied by the value of this property before being used to compute new point coordinates.


1
Vectors
(SelectInputVectors)

The value of this property contains the name of the vector array by which to warp the dataset's point coordinates.


An array of vectors is required.
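
A minimal pvpython sketch of warping by a vector array. Here a Calculator builds a hypothetical displacement field from the sphere's point normals, purely for illustration.

<pre>
from paraview.simple import *

sphere = Sphere(ThetaResolution=32, PhiResolution=32)

# Hypothetical displacement vectors: 0.2 times each point normal.
calc = Calculator(Input=sphere)
calc.ResultArrayName = 'disp'
calc.Function = 'Normals * 0.2'

warp = WarpByVector(Input=calc)
warp.Vectors = ['POINTS', 'disp']
warp.ScaleFactor = 1.0    # each displacement component is multiplied by this factor

Show(warp)
Render()
</pre>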