ParaView/Users Guide/Parallel Rendering


Parallel Rendering / Compositing

ParaView’s server performs all data-processing tasks. This includes generating the polygonal representation of the full data set as well as decimated level-of-detail (LOD) models. Rendering, however, can occur either on the server or on the client, depending on which is more efficient.

In many cases, the polygonal representation of the data set is much smaller than the original data set. (In an extreme case, a simple outline may be used to represent a very large structured mesh.) In these cases, it may be better to transmit the polygonal representation from the server to the client and let the client render it. The client can then render the data repeatedly, for instance whenever the viewpoint changes, without causing additional network traffic; traffic occurs only when the data itself changes. If the client workstation has high-performance rendering hardware, it can sometimes render even large data sets interactively in this way.

The second option is to have each node of the server render its portion of the geometry and send the resulting images to the client for display. Compositing the images and sending the final image across the network adds a cost to every rendered frame. However, ParaView’s image compositing and delivery are very fast, and there are many options to keep rendering interactive in this mode. Therefore, although small models may be collected and rendered interactively on the client, ParaView’s distributed rendering makes interactive rendering possible for models of all sizes.

ParaView automatically chooses a rendering strategy to achieve the best rendering performance. You can also control the strategy explicitly (for example, forcing rendering to occur entirely on the server or entirely on the client) by choosing Settings… from ParaView’s Edit menu. In the Options dialog, double-click Render View in the list on the left-hand side, and then click Server. The rendering-strategy parameters shown in Figure 10 will now be visible.

ParaView UsersGuide ParallelRenderParameters.png
Figure 10. Parallel rendering parameters

Remote Render Threshold: This slider determines how large the data set must be in order for parallel rendering with image compositing and delivery to be used (as opposed to collecting the geometry to the client). The value of this slider is measured in megabytes. Only when the entire data set consumes more memory than this value will compositing of images occur. If the check box beside the Remote Render Threshold slider is unmarked, then compositing will not happen; the geometry will always be collected. This is only a reasonable option when you can be sure the data set you are using is very small. In general, it is safer to move the slider to the right than to uncheck the box.
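
To make this decision concrete, the following minimal Python sketch mimics the choice the threshold controls. It is an illustration only, not ParaView’s implementation; the function and parameter names, and the 20 MB default, are hypothetical.

def choose_rendering_mode(geometry_size_mb, threshold_mb=20.0,
                          remote_render_enabled=True):
    """Illustrative sketch of the Remote Render Threshold decision.

    geometry_size_mb      -- size of the geometry to render, in megabytes
    threshold_mb          -- value of the Remote Render Threshold slider
    remote_render_enabled -- state of the check box beside the slider
    """
    if not remote_render_enabled:
        # Check box unmarked: geometry is always collected to the client.
        return "collect geometry and render on the client"
    if geometry_size_mb > threshold_mb:
        # Larger than the threshold: render on the server nodes and
        # composite/deliver images to the client.
        return "render remotely and composite images"
    return "collect geometry and render on the client"

print(choose_rendering_mode(500.0))   # large data: remote rendering
print(choose_rendering_mode(2.0))     # small data: collect to the client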

ParaView uses IceT to perform image compositing. IceT is a parallel rendering library that takes multiple images rendered from different portions of the geometry and combines them into a single image. IceT employs several image-compositing algorithms, all of which are designed to work well on distributed memory machines. Examples of two such image-compositing algorithms are depicted in Figure 11 and Figure 12. IceT will automatically choose a compositing algorithm based on the current workload and available computing resources.

ParaView UsersGuide TreeComposite.png
Figure 11. Tree compositing on four processes.
ParaView UsersGuide BinarySwap.png
Figure 12. Binary swap on four processes.
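
To give a flavor of how binary swap divides the work, the following Python sketch prints which portion of the image each process owns after every exchange round: in each round a process pairs with the partner whose rank differs in one bit, keeps half of its current region, and hands the other half to the partner. This models only the communication pattern, not IceT’s actual image operations.

def binary_swap_schedule(num_procs):
    """Print the image region each process owns after each binary-swap round.

    Regions are fractions [start, end) of the full image. Assumes the
    number of processes is a power of two. This sketches the exchange
    pattern only; no pixels are composited.
    """
    assert num_procs & (num_procs - 1) == 0, "power of two expected"
    # Every process starts with a full-size image of its own geometry.
    regions = {rank: (0.0, 1.0) for rank in range(num_procs)}
    rounds = num_procs.bit_length() - 1
    for r in range(rounds):
        for rank in range(num_procs):
            partner = rank ^ (1 << r)       # differs from rank in bit r
            start, end = regions[rank]
            mid = (start + end) / 2.0
            # The lower rank of each pair keeps (and composites) the first
            # half of the region; the higher rank keeps the second half.
            regions[rank] = (start, mid) if rank < partner else (mid, end)
        print("after round", r, regions)
    return regions

binary_swap_schedule(4)   # after two rounds each process owns 1/4 of the image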

Disable Ordered Compositing: By default, depth information is used to composite images together. As part of its normal operation, graphics hardware keeps a depth buffer containing the relative depth of each pixel from the camera. For compositing, this depth buffer is retrieved and used to choose which version of the pixel is closest to the camera.
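
At each compositing step the per-pixel operation is simple: keep the color whose depth value is nearer to the camera. The following NumPy sketch shows that operation for two partial renderings; it illustrates the idea rather than IceT’s implementation.

import numpy as np

def depth_composite(color_a, depth_a, color_b, depth_b):
    """Combine two partial renderings using their depth buffers.

    color_a, color_b -- (H, W, 3) images rendered by two different processes
    depth_a, depth_b -- (H, W) depth buffers; smaller values are closer
    Returns the composited color and depth, keeping the nearer fragment
    at every pixel.
    """
    a_is_closer = depth_a < depth_b                       # (H, W) boolean mask
    color = np.where(a_is_closer[..., None], color_a, color_b)
    depth = np.where(a_is_closer, depth_a, depth_b)
    return color, depth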

Choosing the closest pixel color is fine when the original geometry is opaque, but when the geometry contains transparent polygons or volumes, this compositing operation produces incorrect results. For proper compositing of translucent geometry, the colors must be blended in front-to-back order. When the Disable Ordered Compositing flag is off, IceT composites the images in this order.
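
The blending itself is the standard front-to-back "over" operation. The sketch below applies it to a list of translucent image layers that are already sorted nearest to farthest; it assumes premultiplied-alpha RGBA images with values in [0, 1] and is an illustration of the math, not ParaView’s code.

import numpy as np

def blend_front_to_back(layers):
    """Blend translucent layers sorted from front (nearest) to back.

    Each layer is an (H, W, 4) RGBA image with premultiplied alpha.
    Each new layer contributes only through the transparency that
    remains after the layers in front of it have been accumulated.
    """
    out = np.zeros_like(layers[0], dtype=float)
    for layer in layers:
        remaining = 1.0 - out[..., 3:4]   # how much light still gets through
        out = out + remaining * layer
    return np.clip(out, 0.0, 1.0)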

In general, a collection of polygons or polyhedra has no inherent front-to-back order. When Disable Ordered Compositing is off, ParaView redistributes the data across the processes to ensure a proper visibility order. The distribution remains fixed while you manipulate the viewpoint, but it must be recomputed whenever a filter parameter changes and the data changes as a result. Because redistribution is a potentially lengthy operation, you may want to turn Disable Ordered Compositing on when you are not rendering any translucent objects; doing so can sometimes speed up parallel rendering.

Subsample Rate: The time it takes to composite and deliver images is directly proportional to the size of the images, so the overhead of parallel rendering can be reduced by simply reducing the image size. ParaView can subsample images before they are composited and inflate them again after they have been composited. The Subsample Rate slider specifies the amount by which images are subsampled. It is measured in pixels, and the subsampling is the same in the horizontal and vertical directions; thus a subsample rate of 2 results in an image that is ¼ the size of the original. The image is scaled back to full size before it is displayed in the user interface, so the higher the subsample rate, the more obviously pixelated the image will be during interaction, as demonstrated in Figure 13. When you are not interacting with the data, no subsampling is used. If you want subsampling to always be off, unmark the check box beside the Subsample Rate slider.


ParaView UsersGuide NoSubsampling.png (No Subsampling)
ParaView UsersGuide TwoPixelSubsampling.png (Subsample Rate: 2 pixels)
ParaView UsersGuide EightPixelSubsampling.png (Subsample Rate: 8 pixels)

Figure 13. The effect of subsampling on image quality
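
The arithmetic behind the slider is straightforward, as the following NumPy sketch shows: subsampling by a rate of k keeps one pixel out of every k in each direction, so roughly 1/k² of the data has to be composited and delivered, and the image is scaled back up before display. The scaling method used here (pixel replication) is an assumption for illustration; ParaView’s actual resampling may differ.

import numpy as np

def subsample(image, rate):
    """Keep every `rate`-th pixel in both directions.

    A rate of 2 leaves 1/4 as many pixels to composite and deliver.
    """
    return image[::rate, ::rate]

def inflate(image, rate):
    """Scale a subsampled image back to full size by pixel replication,
    which is what makes high subsample rates look blocky."""
    return image.repeat(rate, axis=0).repeat(rate, axis=1)

full = np.zeros((600, 800, 3), dtype=np.uint8)
small = subsample(full, 2)       # shape (300, 400, 3): one quarter the pixels
restored = inflate(small, 2)     # back to (600, 800, 3), but coarser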


Squirt Compression: When ParaView is run in client/server mode, it compresses rendered images to speed up their transfer from the server to the client. The compression uses an encoding algorithm optimized for images called SQUIRT (developed at Sandia National Laboratories).

SQUIRT uses simple run-length encoding for its compression. A run-length image encoder will find sequences of pixels that are all the same color and encode them as a single run length (the count of pixels repeated) and the color value. ParaView represents colors as 24-bit values, but SQUIRT will optionally apply a bit mask to the colors before comparing them. Although information is lost when this mask is applied, the sizes of the run lengths are increased, and the compression gets better. The bit masks used by SQUIRT are carefully chosen to match the color sensitivity of the human visual system. A 19-bit mask employed by SQUIRT greatly improves compression with little or no noticeable image artifacts. Reducing the number of bits further can improve compression even more, but it can lead to more noticeable color-banding artifacts.
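
The following Python sketch shows the idea of run-length encoding combined with a color mask. The encoder, its output format, and the example mask values are illustrative only; they do not reproduce SQUIRT’s actual wire format or its carefully chosen masks.

def rle_encode(pixels, color_mask=0xFFFFFF):
    """Run-length encode a sequence of 24-bit RGB values (0xRRGGBB).

    Masking off low-order bits before comparing pixels makes similar
    colors compare equal, which lengthens runs and improves compression
    at the cost of some color fidelity.
    """
    runs = []
    current, count = None, 0
    for p in pixels:
        masked = p & color_mask
        if masked == current:
            count += 1
        else:
            if current is not None:
                runs.append((count, current))
            current, count = masked, 1
    if current is not None:
        runs.append((count, current))
    return runs

# Three slightly different reds collapse into one run once low bits are masked.
scanline = [0xFF0000, 0xFF0001, 0xFE0000, 0x00FF00]
print(rle_encode(scanline))                        # full 24 bits: 4 runs
print(rle_encode(scanline, color_mask=0xF0F0F0))   # masked: 2 runs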

The Squirt Compression slider determines the bit mask used during interactive rendering (i.e., rendering that occurs while the user is changing the camera position or otherwise interacting with the data). During still rendering (when the user is not interacting with the data), lossless compression is always used. The check box to the left of the Squirt Compression slider toggles whether the SQUIRT compression algorithm is used at all.

The options in the Tile Display Parameters portion of the dialog are discussed in section 1.7.

Offscreen Rendering

When running ParaView in a parallel mode, it may be helpful for the remote rendering processes to do their rendering in offscreen buffers. For example, other windows may be displayed on the node(s) where you are rendering; if these windows cover part of the rendering window, they may be captured as part of the display results from that node. A similar situation could occur if more than one rendering process is assigned to a single machine, and the processes share a display. Also, in some cases the remote rendering nodes are not directly connected to a display.

To use offscreen rendering in ParaView, give pvserver (or pvrenderserver) the --use-offscreen-rendering command-line option. Alternatively, set the PV_OFFSCREEN environment variable on the server to 1. On some systems, depending on the graphics hardware and drivers that are available, you may need to compile ParaView with Mesa support (for software rendering) and with the OSMESA library to enable offscreen rendering.
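
For example, either mechanism could be wired into a launch script along the following lines. This is only a sketch: the mpirun launcher and the process count are assumptions about a typical MPI setup, and either the environment variable or the command-line option alone is sufficient.

import os
import subprocess

# Ask the server processes to render into offscreen buffers.
env = dict(os.environ, PV_OFFSCREEN="1")

# mpirun and the process count are site-specific assumptions; the
# --use-offscreen-rendering flag (or the PV_OFFSCREEN variable above)
# enables offscreen rendering, and either one alone is enough.
subprocess.run(
    ["mpirun", "-np", "4", "pvserver", "--use-offscreen-rendering"],
    env=env,
    check=True,
)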