https://public.kitware.com/Wiki/api.php?action=feedcontributions&user=Lokman&feedformat=atomKitwarePublic - User contributions [en]2024-03-28T23:52:22ZUser contributionsMediaWiki 1.38.6https://public.kitware.com/Wiki/index.php?title=ParaView:_Build_And_Install_on_Grid5000_testbed&diff=60892ParaView: Build And Install on Grid5000 testbed2016-09-26T14:33:00Z<p>Lokman: Removed until content page is updated</p>
<hr />
<div></div>Lokmanhttps://public.kitware.com/Wiki/index.php?title=ParaView/ParaView_And_Mesa_3D&diff=60891ParaView/ParaView And Mesa 3D2016-09-26T14:28:31Z<p>Lokman: Removed until referenced page is updated</p>
<hr />
<div>= Read this first =<br />
ParaView and VTK use OpenGL for rendering. Most operating systems provide a default OpenGL library. If you're running Windows or Mac OS X, this page is not for you: on Windows, download and install the latest drivers for your system's graphics hardware; on Mac OS X, use the default OS-provided drivers. On Linux the default OpenGL drivers are typically provided by Mesa, an open source OpenGL implementation that supports a wide range of graphics hardware, each with its own back-end called a renderer. Mesa also provides several software-based renderers for use on systems without graphics hardware. '''If you're running a Linux OS on a system without graphics hardware then this page is for you.''' <br />
<br />
= Mesa for your GPU =<br />
Before getting into the details of Mesa graphics hardware drivers, a disclaimer: '''if you have graphics hardware, Mesa is ''not'' your best option.''' To get the best performance from your graphics card you'll need to install the vendor-provided driver. Vendor-provided OpenGL drivers are typically much faster and of much higher quality than Mesa's, which can be quite buggy. Many distros now provide third-party closed-source drivers through specialized package repositories. The details of installing vendor-provided drivers are beyond the scope of this document; please consult distro and/or vendor specific documentation.<br />
<br />
If you're running a Linux OS with X11 and graphics hardware and have not installed any other OpenGL libraries, you're probably already using Mesa! Note that the windowing system, in this case X11, is required for on-screen interactive rendering. On Linux systems the easiest way to install Mesa is through your distro's package manager, and most distros also provide packages for Mesa's software-based renderers. Unfortunately, some OpenGL features may be disabled by your distro's package maintainers to avoid patent or other licensing restrictions. If you find that this is the case then you'll likely want to build Mesa from source. Given the depth and breadth of the universe of hardware that Mesa supports, building Mesa for use on systems with graphics hardware is beyond the scope of this article; please consult the Mesa documentation directly.<br />
<br />
Configuring VTK and ParaView with X11 or hardware accelerated Mesa only requires pointing VTK to the desired Mesa install and ensuring that LD_LIBRARY_PATH includes your Mesa install. <br />
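For example, a minimal environment setup for this case might look like the following sketch (the install prefix here is hypothetical; substitute the location of your own Mesa install):

```bash
#!/bin/bash

# Hypothetical Mesa install prefix; point this at your actual install.
MESA_INSTALL_PREFIX=/work/apps/mesa/9.2.2/llvmpipe

# Prepend the Mesa libraries so they are found ahead of any system OpenGL.
export LD_LIBRARY_PATH="${MESA_INSTALL_PREFIX}/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"

echo "${LD_LIBRARY_PATH}"
```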
<br />
= OSMesa, Mesa without graphics hardware =<br />
On Unix/Linux systems lacking graphics hardware, one of Mesa's OSMesa renderers (a collection of off-screen renderers) can provide OpenGL. Here we will discuss two of OSMesa's renderers: ''Gallium llvmpipe'' and ''classic''. Mesa's Gallium llvmpipe renderer is one of the exciting recent developments in Mesa. It is a threaded, software-based OpenGL implementation that uses LLVM and clang for JIT compilation of GLSL shaders. Gallium llvmpipe support for OSMesa was added in the 9.2.0 release in 2013. It's currently the best software-based OpenGL option for ParaView and VTK in terms of both OpenGL features and performance. OSMesa classic, the legacy implementation, may still be useful in the case of a bug in the llvmpipe renderer and provides a stable fallback option.<br />
<br />
== Installing OSMesa Gallium llvmpipe state-tracker ==<br />
The Mesa 9.2.2 OSMesa Gallium llvmpipe state-tracker is the preferred Mesa back-end renderer for ParaView and VTK. The following shows how to configure it with a system-installed LLVM. Our strategy is to configure Mesa with the minimal options needed for OSMesa. This greatly simplifies the build, as many of the other drivers/renderers depend on X11 or other libraries. The following set of options is from the Mesa v9.2.2 release; older or newer releases may require slightly different options, so consult ''./configure --help'' for the details.<br />
<br />
<source lang="bash"><br />
#!/bin/bash<br />
<br />
make -j4 distclean # if in an existing build<br />
<br />
autoreconf -fi<br />
<br />
./configure \<br />
CXXFLAGS="-O2 -g -DDEFAULT_SOFTWARE_DEPTH_BITS=31" \<br />
CFLAGS="-O2 -g -DDEFAULT_SOFTWARE_DEPTH_BITS=31" \<br />
--disable-xvmc \<br />
--disable-glx \<br />
--disable-dri \<br />
--with-dri-drivers="" \<br />
--with-gallium-drivers="swrast" \<br />
--enable-texture-float \<br />
--disable-shared-glapi \<br />
--disable-egl \<br />
--with-egl-platforms="" \<br />
--enable-gallium-osmesa \<br />
--enable-gallium-llvm=yes \<br />
--with-llvm-shared-libs \<br />
--prefix=/work/apps/mesa/9.2.2/llvmpipe<br />
<br />
make -j2 <br />
make -j4 install<br />
</source><br />
<br />
Some explanation of these options:<br />
<br />
; DEFAULT_SOFTWARE_DEPTH_BITS=31<br />
: This sets the internal depth buffer precision for the OSMesa rendering context. In our experience this is necessary to avoid z-fighting during parallel rendering. Note that we've used this in place of ''--with-osmesa-bits=32'', which sets both the depth and color buffers to 32-bit precision; because of a bug in Mesa, that option introduces over 80 ctest regression failures in VTK related to line drawing. <br />
; --enable-texture-float<br />
: Floating point textures are disabled by default due to patent restrictions. This must be enabled for many advanced VTK algorithms.<br />
<br />
== Installing Classic OSMesa ==<br />
Classic OSMesa is the legacy back-end renderer. It may be useful if a bug in the Gallium llvmpipe renderer prevents a rendering feature that you really need. However, note that it is much slower and does not support all of the OpenGL features that Gallium llvmpipe does.<br />
<br />
<source lang="bash"><br />
#!/bin/bash<br />
<br />
make -j4 distclean # if in an existing build<br />
<br />
autoreconf -fi<br />
<br />
./configure \<br />
CXXFLAGS="-O2 -g -DDEFAULT_SOFTWARE_DEPTH_BITS=31" \<br />
CFLAGS="-O2 -g -DDEFAULT_SOFTWARE_DEPTH_BITS=31" \<br />
--disable-xvmc \<br />
--disable-glx \<br />
--disable-dri \<br />
--with-dri-drivers="" \<br />
--with-gallium-drivers="" \<br />
--enable-texture-float \<br />
--disable-shared-glapi \<br />
--disable-egl \<br />
--with-egl-platforms="" \<br />
--enable-osmesa \<br />
--enable-gallium-llvm=no \<br />
--prefix=/work/apps/mesa/9.2.2/classic<br />
<br />
make -j2<br />
make -j4 install<br />
</source><br />
<br />
Note that these are the configure options from Mesa v9.2.2, the current release as of this writing. They may be slightly different in older or newer releases; consult ''./configure --help'' for the specific details.<br />
<br />
= A comparison of OSMesa Gallium llvmpipe, OSMesa classic, and GPU Accelerated Rendering Performance =<br />
The following chart shows the run time of VTK's Rendering ctests with OSMesa classic, the Gallium llvmpipe OSMesa state-tracker, and an ATI Radeon HD 7870. The CPU on the test system is a 4-core (8-thread) Intel(R) Core(TM) i7-4771 CPU @ 3.50GHz. The ATI driver used is AMD Catalyst 13.11-beta6. Run times were obtained by running each ctest twice consecutively and discarding the first time. Tests that took less than 1 second were discarded, as were tests that failed on any one of the renderers. Note that results are reported in cases where classic OSMesa doesn't provide all the extensions but Gallium llvmpipe does; for example, classic OSMesa doesn't provide render buffer float. In these cases the run time reported for classic OSMesa is ~0.<br />
<br />
The results show that the Gallium llvmpipe OSMesa state-tracker is quite a bit faster across the board and supports more rendering algorithms than OSMesa classic. This is especially so for GPU accelerated volume rendering.<br />
<br />
[[File:Osmesa-rendering-sm.png]]<br />
<br />
= Configuring ParaView for use with OSMesa =<br />
Configure ParaView as described on [[ParaView:Build And Install]]. The only cmake variables that need to be updated are:<br />
<br />
<source lang="bash"><br />
#!/bin/bash<br />
<br />
cmake \<br />
...<br />
-DPARAVIEW_BUILD_QT_GUI=OFF \<br />
-DVTK_USE_X=OFF \<br />
-DOPENGL_INCLUDE_DIR={MESA_INSTALL_PREFIX}/include \<br />
-DOPENGL_gl_LIBRARY={MESA_INSTALL_PREFIX}/lib/libOSMesa.[so|a] \<br />
-DOPENGL_glu_LIBRARY={MESA_INSTALL_PREFIX}/lib/libGLU.[so|a] \<br />
-DVTK_OPENGL_HAS_OSMESA=ON \<br />
-DOSMESA_INCLUDE_DIR={MESA_INSTALL_PREFIX}/include \<br />
-DOSMESA_LIBRARY={MESA_INSTALL_PREFIX}/lib/libOSMesa.[so|a] \<br />
$*<br />
<br />
make -j32<br />
make -j32 install<br />
</source><br />
<br />
The rest of the configure and build process for ParaView remains as described on [[ParaView:Build And Install]]. Note that all of these build options, with the exception of PARAVIEW_BUILD_QT_GUI, are VTK options, so this also describes how to configure VTK without ParaView.<br />
<br />
libGLU is no longer packaged as part of Mesa (since early 2012, approximately Mesa-8; see this [http://lists.freedesktop.org/archives/mesa-dev/2012-January/017894.html Mesa-dev post]). As such, if it does not exist on your system, it can be separately downloaded and installed from the [ftp://ftp.freedesktop.org/pub/mesa/glu/ freedesktop.org libGLU ftp].<br />
<br />
<br />
== MPI-Parallel rendering with OSMesa Gallium llvmpipe state-tracker ==<br />
When running ParaView in parallel with MPI, one should consider how best to set the number of rendering threads used by the llvmpipe renderer. The number of rendering threads may be set using the ''LP_NUM_THREADS'' environment variable. Our [http://www.hpcvis.com/vis/images/xsede13/mesa-os-mesa-llvmpipe-edison.pdf parallel benchmarks of the surface LIC painter] on one NERSC Edison compute node showed that the best performance was achieved when the total number of rendering threads per node was equal to the number of available hyper-threads. As of Mesa 9.2.2 only the fragment pipeline is threaded. This has some important implications for parallel rendering of large datasets. First, it's important to use a mix of MPI parallelism and rendering threads, because MPI parallelism is the only option for speeding up vertex operations. Keep in mind that, generally speaking, with large datasets the vertex pipeline will have more work to do than the fragment pipeline. Also, note that only algorithms that make heavy use of fragment shaders, such as GPU accelerated volume rendering, surface LIC, and depth peeling, will see substantial benefits when using large numbers of threads; fixed-function algorithms will not generally see the same benefit. Finally, note that Mesa has a hard-coded compile-time limit on the number of threads, set by the ''LP_MAX_THREADS'' macro, which in v9.2.2 is 16 threads.<br />
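As a concrete illustration, one might divide a node's hyper-threads evenly among the MPI ranks; the node size and rank count below are assumptions chosen to match the 16-hyper-thread node described above:

```bash
#!/bin/bash

# Hypothetical node: 16 hyper-threads shared by 2 MPI ranks per node.
HYPERTHREADS_PER_NODE=16
RANKS_PER_NODE=2

# Give each rank an equal share of the node's hyper-threads.
export LP_NUM_THREADS=$(( HYPERTHREADS_PER_NODE / RANKS_PER_NODE ))
echo "LP_NUM_THREADS=${LP_NUM_THREADS}"

# The server would then be launched along the lines of (illustrative only):
#   mpiexec -n ${RANKS_PER_NODE} ./pvserver
```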
<br />
{|<br />
|[[File:Xsede-fig-llvmpipe-edison-1node-1rank-threads-sm.png|frame|'''Fig 1a''': Surface LIC benchmark 1 MPI process with between 1 and 16 rendering threads. The plot shows that vertex operations, shown in teal, aren't threaded, but account for roughly 1/2 of the serial rendering time. The other colors represent fragment operations that benefit from additional rendering threads.]]<br />
|[[File:Xsede-fig-llvmpipe-edison-1node-nrank-16threads-sm.png|frame|'''Fig 1b''': Surface LIC benchmark with between 1 and 16 MPI processes and 16 rendering threads. The plot shows that MPI parallelism can be used to speedup vertex operations. Note that using more rendering threads than the number of hyperthreads available on the node slows down the fragment operations.]]<br />
|}<br />
<br />
== Can ParaView be built with X11/GPU accelerated OpenGL and OSMesa in the same build? ==<br />
No, this is not currently possible due to library symbol conflicts; you need to choose one or the other for each build. Note that it is possible to use OSMesa rendering on the server and hardware accelerated OpenGL in the client by having two builds: the client is configured for X11/GPU accelerated OpenGL, and the server is configured to use OSMesa OpenGL. You'll need to run in client-server mode.<br />
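A sketch of this two-build scenario follows; all install paths and the host name are hypothetical, and the connection syntax shown is the usual ''--server-url'' form accepted by the ParaView client:

```bash
#!/bin/bash

# Hypothetical locations of the two separate builds.
OSMESA_PV=/work/apps/paraview/osmesa   # server build: VTK_OPENGL_HAS_OSMESA=ON
GPU_PV=/work/apps/paraview/gpu         # client build: X11/GPU OpenGL

# On the headless server (illustrative, not executed here):
#   ${OSMESA_PV}/bin/pvserver --server-port=11111
# On the workstation, connect the GUI client to it:
#   ${GPU_PV}/bin/paraview --server-url=cs://server.example.com:11111

echo "server build: ${OSMESA_PV}"
echo "client build: ${GPU_PV}"
```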
<br />
= Link to the old page =<br />
I'd like to remove this link. [[ParaView/ParaView_And_Mesa_3D_tmp]]</div>Lokmanhttps://public.kitware.com/Wiki/index.php?title=ParaView:_Build_And_Install_on_Grid5000_testbed&diff=60889ParaView: Build And Install on Grid5000 testbed2016-09-24T08:44:39Z<p>Lokman: Correct cookies</p>
<hr />
<div>=Introduction=<br />
This wiki page describes the steps to build and install ParaView with Catalyst on the Grid'5000 testbed [http://www.grid5000.fr]. OSMesa is used as the OpenGL implementation.<br />
<br />
=Build Configuration=<br />
The build configuration is similar to [http://www.paraview.org/Wiki/ParaView:Build_And_Install#Download_And_Install_MESA_3D_libraries] and [http://www.paraview.org/Wiki/ParaView_And_Mesa_3D].<br />
<br />
In an effort to ensure reproducibility, Kameleon [http://kameleon.imag.fr/], Grid'5000's community tool, is used.<br />
Levels of reproducibility:<br />
* Base image<br />
** all the software is built and installed on a Debian 8 (jessie) image<br />
** Grid'5000 offers reproducible base images (configuration script available here [http://tobe.set])<br />
** Grid'5000 allows the deployment of complete file systems as images<br />
* Recipe<br />
** the Kameleon recipe is available here [https://github.com/lrahmani/paraview-catalyst-g5k/tree/develop]<br />
** software installed using apt-get is reproducible as long as the apt-get source list is unchanged<br />
* Usage<br />
** the newly built image (with ParaView installed) can be deployed to run experiments on, or used as a base image for further Kameleon recipes<br />
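A minimal sketch of the build flow, assuming Kameleon is already installed (the recipe file name below is hypothetical; use the actual recipe from the repository linked above):

```bash
#!/bin/bash

# Fetch the recipe repository.
git clone https://github.com/lrahmani/paraview-catalyst-g5k.git
cd paraview-catalyst-g5k

# Build the image; the recipe file name is an assumption, check the repo.
kameleon build my-recipe.yaml
```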
<br />
=Build And Install=<br />
To build the image follow the steps described in kameleon's official documentation [http://kameleon.imag.fr/installation.html] and [http://kameleon.imag.fr/getting_started.html].</div>Lokmanhttps://public.kitware.com/Wiki/index.php?title=ParaView/ParaView_And_Mesa_3D&diff=60888ParaView/ParaView And Mesa 3D2016-09-20T09:27:26Z<p>Lokman: cookies</p>
<hr />
<div>= Read this first =<br />
ParaView and VTK make use of OpenGL for rendering. Most operating systems provide a default OpenGL library. If you're running a Windows or Mac OSX operating system, this page is not for you. On Windows download and install the latest drivers for your system's graphics hardware. On Apple Mac OSX just use the default OS provided drivers. On Linux the default OpenGL drivers are typically provided by Mesa. Mesa is an open source OpenGL implementation that supports a wide range of graphics hardware each with it's own back-end called a renderer. Mesa also provides several software based renderers for use on systems without graphics hardware. '''If you're running a Linux OS on a system without graphics hardware then this page is for you.''' <br />
<br />
= Mesa for your GPU =<br />
Before getting into details of Mesa graphics hardware drivers, a disclaimer. '''If you have graphics hardware Mesa is ''not'' your best option.''' To get the best performance from your graphics card you'll need to install the vendor provided driver. Vendor provided OpenGL drivers are typically much faster of a much higher quality than Mesa's which can be quite buggy. Many distros now provide thridparty non-opensource source drivers through specialized package repositories. The details of installing vendor provided drivers in beyond the scope of this document. Please consult distro and/or vendor specific documentation.<br />
<br />
If you're running a Linux OS with X11 and graphics hardware and have not installed any OpenGL libraries you're probably already using Mesa! Note that the windowing system, in this case X11, is required for on-screen interactive rendering. On Linux systems the easiest way to install Mesa is through your distro's package manager. Most distros also provide packages for Mesa's software based renderers as well. Unfortunately, some OpenGL features may be disabled by your distro's package maintainers to avoid patent or other licensing restrictions. If you find that this is the case then you'll likely want to build Mesa from source. Given the depth and breadth of the universe of hardware that Mesa supports, building Mesa for use on systems with graphics hardware is beyond the scope of this article. Please consult the Mesa documentation directly.<br />
<br />
Configuring VTK and ParaView with X11 or hardware accelerated Mesa only requires pointing VTK to the desired Mesa install and ensuring that LD_LIBRARY_PATH includes your Mesa install. <br />
<br />
= OSMesa, Mesa without graphics hardware =<br />
On Unix/Linux systems lacking graphics hardware one of Mesa's OSMesa(a collection of off screen renderers) renderer can provide OpenGL. Here we will discus two of OSMesa's renderers: ''Gallium llvmpipe'' and ''classic''. Mesa's Gallium llvmpipe renderer is one of the exciting recent developments in Mesa. It is a threaded software based OpenGL which uses LLVM and clang for JIT compilation of GLSL shaders. Gallium llvmpipe support for OSMesa was added in the 2013 9.2.0 release. It's currently the best software based OpenGL option for ParaView and VTK in terms of both OpenGL features and performance. OSMesa classic, the legacy implementation, may still be useful in the case of a bug in the llvmpipe renderer and provides a stable fallback option.<br />
<br />
== Installing OSMesa Gallium llvmpipe state-tracker ==<br />
The Mesa 9.2.2 OSMesa Gallium llvmpipe state-tracker is the preferred Mesa back-end renderer for ParaView and VTK. The following shows how to configure it with system installed LLVM. Our strategy is to configure Mesa with the minimal options needed for OSMesa. This greatly simplifies the build as many of the other drivers/renderers depend on X11 or other libraries. The following set of options are from Mesa v9.2.2 release, older or newer releases may require slightly different options, consult ''./configure --help'' for the details.<br />
<br />
<source lang="bash"><br />
#!/bin/bash<br />
<br />
make -j4 distclean # if in an existing build<br />
<br />
autoreconf -fi<br />
<br />
./configure \<br />
CXXFLAGS="-O2 -g -DDEFAULT_SOFTWARE_DEPTH_BITS=31" \<br />
CFLAGS="-O2 -g -DDEFAULT_SOFTWARE_DEPTH_BITS=31" \<br />
--disable-xvmc \<br />
--disable-glx \<br />
--disable-dri \<br />
--with-dri-drivers="" \<br />
--with-gallium-drivers="swrast" \<br />
--enable-texture-float \<br />
--disable-shared-glapi \<br />
--disable-egl \<br />
--with-egl-platforms="" \<br />
--enable-gallium-osmesa \<br />
--enable-gallium-llvm=yes \<br />
--with-llvm-shared-libs \<br />
--prefix=/work/apps/mesa/9.2.2/llvmpipe<br />
<br />
make -j2 <br />
make -j4 install<br />
</source><br />
<br />
Some explanation of these options:<br />
<br />
; DEFAULT_SOFTWARE_DEPTH_BITS=31<br />
: This sets the internal depth buffer precision for the OSMesa rendering context. In our experrience this is necessary to avoid z-buffer fighting during parallel rendering. Note that we've used this in-place of ''--with-osmesa-bits=32'', which sets both depthbuffer and colorbuffers to 32 bit precision. Because of a bug in Mesa this introduces over 80 ctest regression failures in VTK related to line drawing. <br />
; --enable-texture-float<br />
: Floating point textures are disabled by default due to patent restrictions. This must be enabled for many advanced VTK algorithms.<br />
<br />
== Installing Classic OSMesa ==<br />
Classic OSMesa is the legacy back-end renderer. It may be useful if there's a bug in the Gallium llvmpipe renderer prevent a rendering feature that you really need. However note that it's much slower and does not support all of the OpenGL features that llvmpipe Gallium does.<br />
<br />
<source lang="bash"><br />
#!/bin/bash<br />
<br />
make -j4 distclean # if in an existing build<br />
<br />
autoreconf -fi<br />
<br />
./configure \<br />
CXXFLAGS="-O2 -g -DDEFAULT_SOFTWARE_DEPTH_BITS=31" \<br />
CFLAGS="-O2 -g -DDEFAULT_SOFTWARE_DEPTH_BITS=31" \<br />
--disable-xvmc \<br />
--disable-glx \<br />
--disable-dri \<br />
--with-dri-drivers="" \<br />
--with-gallium-drivers="" \<br />
--enable-texture-float \<br />
--disable-shared-glapi \<br />
--disable-egl \<br />
--with-egl-platforms="" \<br />
--enable-osmesa \<br />
--enable-gallium-llvm=no \<br />
--prefix=/work/apps/mesa/9.2.2/classic<br />
<br />
make -j2<br />
make -j4 install<br />
</source><br />
<br />
Note that these are the configure options from Mesa v9.2.2, the current release as of this writing. These may be slightly different in older or newer releases, consult ''./configure --help'' for the specific details.<br />
<br />
= A comparison of OSMesa Gallium llvmpipe, OSMesa classic and, GPU Accelerated Rendering Performance =<br />
The following chart shows the run time of VTK's Rendering ctests with OSMesa classic, Gallium llvmpipe OSMesa state-tracker, and an ATI Radeon HD 7870. The CPU on the test sytsem is a 4 core(8 thread) Intel(R) Core(TM) i7-4771 CPU @ 3.50GHz. The ATI driver used is AMD Catalyst 13.11-beta6. Run times were obtained by running each ctest twice consecutively discarding the first time. Tests that took less than 1 second were discarded, as were tests that failed on any one of the renderers. Note results are reported in cases where Classic OSMesa doesn't provide all the extensions, but Gallium llvmpipe does. For example classic OSMesa doesn't provide render buffer float. In these cases the run time reported for the classic OSMesa is ~0.<br />
<br />
The results show that Gallium llvmpipe OSMesa state tracker is quite a bit better across the board and supports more rendering algorithms than OSMesa classic. This especially so for GPU accelerated volume rendering.<br />
<br />
[[File:Osmesa-rendering-sm.png]]<br />
<br />
= Configuring ParaView for use with OSMesa =<br />
Configure ParaView as described on [[ParaView:Build And Install]]. The only cmake variables that need to be updated are:<br />
<br />
<source lang="bash"><br />
#!/bin/bash<br />
<br />
cmake \<br />
...<br />
-DPARAVIEW_BUILD_QT_GUI=OFF \<br />
-DVTK_USE_X=OFF \<br />
-DOPENGL_INCLUDE_DIR={MESA_INSTALL_PREFIX}/include \<br />
-DOPENGL_gl_LIBRARY={MESA_INSTALL_PREFIX}/lib/libOSMesa.[so|a] \<br />
-DOPENGL_glu_LIBRARY={MESA_INSTALL_PREFIX}/lib/libGLU.[so|a] \<br />
-DVTK_OPENGL_HAS_OSMESA=ON \<br />
-DOSMESA_INCLUDE_DIR={MESA_INSTALL_PREFIX}/include \<br />
-DOSMESA_LIBRARY={MESA_INSTALL_PREFIX}/lib/libOSMesa.[so|a] \<br />
$*<br />
<br />
make -j32<br />
make -j32 install<br />
</source><br />
<br />
The rest on the configure and build process for ParaView remains as described on [[ParaView:Build And Install]]. Note that all of these build options with the exception of PARAVIEW_BUILD_QT_GUI are VTK options, thus this also describes how to configure VTK without ParaView.<br />
<br />
libGLU is no longer packaged as part of Mesa (since early 2012, approximately Mesa-8 [[http://lists.freedesktop.org/archives/mesa-dev/2012-January/017894.html (Mesa-dev post)]]). As such, if it does not exist on your system, it can be separately downloaded and installed from the [[ftp://ftp.freedesktop.org/pub/mesa/glu/ libGLU freedesktop.org ftp]].<br />
<br />
A complete example of building ParaView (and Catalyst) with OSMesa can be found in [[ParaView:_Build_And_Install_on_Grid5000_testbed]]<br />
<br />
== MPI-Parallel rendering with OSMesa Gallium llvmpipe state-tracker ==<br />
When running ParaView in parallel with MPI one should consider how best to set the number of rendering threads used by the llvmpipe renderer. The number of rendering threads may be set using the ''LP_NUM_THREADS'' environment variable. Our [http://www.hpcvis.com/vis/images/xsede13/mesa-os-mesa-llvmpipe-edison.pdf parallel benchmarks of the surface LIC painter] on one NERSC Edison compute node showed that the best performance was achieved when the total number of rendering threads per node was equal to the number of available hyper threads. As of Mesa 9.2.2 only the fragment pipeline is threaded. This has some important implications for parallel rendering of large datasets. First, it's important to use a mix of MPI parallelism as well as rendering threads because MPI parallelism is the only option for speeding up vertex operations. Keep in mind that generally speaking with large datasets the vertex pipeline will have more work to do than the fragment pipeline. Also, note that only algorithms that have heavy fragment shader use, such as GPU accelerated volume rendering, surface LIC, depth peeling, etc, will see a substantial benefits when using large numbers of threads. Fixed-function algorithms will not generally see the same benefit. Finally note that Mesa has a hard coded compile time limit on the number of threads set by the ''LP_MAX_THREADS'' macro, which in v9.2.2 is 16 threads.<br />
<br />
{|<br />
|[[File:Xsede-fig-llvmpipe-edison-1node-1rank-threads-sm.png|frame|'''Fig 1a''': Surface LIC benchmark 1 MPI process with between 1 and 16 rendering threads. The plot shows that vertex operations, shown in teal, aren't threaded, but account for roughly 1/2 of the serial rendering time. The other colors represent fragment operations that benefit from additional rendering threads.]]<br />
|[[File:Xsede-fig-llvmpipe-edison-1node-nrank-16threads-sm.png|frame|'''Fig 1b''': Surface LIC benchmark with between 1 and 16 MPI processes and 16 rendering threads. The plot shows that MPI parallelism can be used to speedup vertex operations. Note that using more rendering threads than the number of hyperthreads available on the node slows down the fragment operations.]]<br />
|}<br />
<br />
== Can ParaView be built with X11/GPU accelerated OpenGL and OSMesa in the same build? ==<br />
No, this is not currently possible due to library symbol conflicts. You need to choose one or the other for each build. Note that it is possible to use OSMesa rendering in the server and hardware accelerated OpenGL in the client, by having two builds. In this scenario, the client configured for X11/GPU accelerated OpenGL, and the second for server configured to use OSMesa OpenGL. You'll need to run in client-server mode.<br />
<br />
= Link to the old page =<br />
I'd like to remove this link. [[ParaView/ParaView_And_Mesa_3D_tmp]]</div>Lokmanhttps://public.kitware.com/Wiki/index.php?title=ParaView/ParaView_And_Mesa_3D&diff=60867ParaView/ParaView And Mesa 3D2016-09-17T00:04:29Z<p>Lokman: Add link to wiki providing a recipe for building ParaView with OSMesa</p>
<hr />
<div>= Read this first =<br />
ParaView and VTK make use of OpenGL for rendering. Most operating systems provide a default OpenGL library. If you're running a Windows or Mac OSX operating system, this page is not for you. On Windows download and install the latest drivers for your system's graphics hardware. On Apple Mac OSX just use the default OS provided drivers. On Linux the default OpenGL drivers are typically provided by Mesa. Mesa is an open source OpenGL implementation that supports a wide range of graphics hardware each with it's own back-end called a renderer. Mesa also provides several software based renderers for use on systems without graphics hardware. '''If you're running a Linux OS on a system without graphics hardware then this page is for you.''' <br />
<br />
= Mesa for your GPU =<br />
Before getting into details of Mesa graphics hardware drivers, a disclaimer. '''If you have graphics hardware Mesa is ''not'' your best option.''' To get the best performance from your graphics card you'll need to install the vendor provided driver. Vendor provided OpenGL drivers are typically much faster of a much higher quality than Mesa's which can be quite buggy. Many distros now provide thridparty non-opensource source drivers through specialized package repositories. The details of installing vendor provided drivers in beyond the scope of this document. Please consult distro and/or vendor specific documentation.<br />
<br />
If you're running a Linux OS with X11 and graphics hardware and have not installed any OpenGL libraries you're probably already using Mesa! Note that the windowing system, in this case X11, is required for on-screen interactive rendering. On Linux systems the easiest way to install Mesa is through your distro's package manager. Most distros also provide packages for Mesa's software based renderers as well. Unfortunately, some OpenGL features may be disabled by your distro's package maintainers to avoid patent or other licensing restrictions. If you find that this is the case then you'll likely want to build Mesa from source. Given the depth and breadth of the universe of hardware that Mesa supports, building Mesa for use on systems with graphics hardware is beyond the scope of this article. Please consult the Mesa documentation directly.<br />
<br />
Configuring VTK and ParaView with X11 or hardware accelerated Mesa only requires pointing VTK to the desired Mesa install and ensuring that LD_LIBRARY_PATH includes your Mesa install. <br />
<br />
= OSMesa, Mesa without graphics hardware =<br />
On Unix/Linux systems lacking graphics hardware one of Mesa's OSMesa(a collection of off screen renderers) renderer can provide OpenGL. Here we will discus two of OSMesa's renderers: ''Gallium llvmpipe'' and ''classic''. Mesa's Gallium llvmpipe renderer is one of the exciting recent developments in Mesa. It is a threaded software based OpenGL which uses LLVM and clang for JIT compilation of GLSL shaders. Gallium llvmpipe support for OSMesa was added in the 2013 9.2.0 release. It's currently the best software based OpenGL option for ParaView and VTK in terms of both OpenGL features and performance. OSMesa classic, the legacy implementation, may still be useful in the case of a bug in the llvmpipe renderer and provides a stable fallback option.<br />
<br />
== Installing OSMesa Gallium llvmpipe state-tracker ==<br />
The Mesa 9.2.2 OSMesa Gallium llvmpipe state-tracker is the preferred Mesa back-end renderer for ParaView and VTK. The following shows how to configure it with system installed LLVM. Our strategy is to configure Mesa with the minimal options needed for OSMesa. This greatly simplifies the build as many of the other drivers/renderers depend on X11 or other libraries. The following set of options are from Mesa v9.2.2 release, older or newer releases may require slightly different options, consult ''./configure --help'' for the details.<br />
<br />
<source lang="bash"><br />
#!/bin/bash<br />
<br />
make -j4 distclean # if in an existing build<br />
<br />
autoreconf -fi<br />
<br />
./configure \<br />
CXXFLAGS="-O2 -g -DDEFAULT_SOFTWARE_DEPTH_BITS=31" \<br />
CFLAGS="-O2 -g -DDEFAULT_SOFTWARE_DEPTH_BITS=31" \<br />
--disable-xvmc \<br />
--disable-glx \<br />
--disable-dri \<br />
--with-dri-drivers="" \<br />
--with-gallium-drivers="swrast" \<br />
--enable-texture-float \<br />
--disable-shared-glapi \<br />
--disable-egl \<br />
--with-egl-platforms="" \<br />
--enable-gallium-osmesa \<br />
--enable-gallium-llvm=yes \<br />
--with-llvm-shared-libs \<br />
--prefix=/work/apps/mesa/9.2.2/llvmpipe<br />
<br />
make -j2 <br />
make -j4 install<br />
</source><br />
<br />
Some explanation of these options:<br />
<br />
; DEFAULT_SOFTWARE_DEPTH_BITS=31<br />
: This sets the internal depth buffer precision for the OSMesa rendering context. In our experrience this is necessary to avoid z-buffer fighting during parallel rendering. Note that we've used this in-place of ''--with-osmesa-bits=32'', which sets both depthbuffer and colorbuffers to 32 bit precision. Because of a bug in Mesa this introduces over 80 ctest regression failures in VTK related to line drawing. <br />
; --enable-texture-float<br />
: Floating point textures are disabled by default due to patent restrictions. This must be enabled for many advanced VTK algorithms.<br />
<br />
== Installing Classic OSMesa ==<br />
Classic OSMesa is the legacy back-end renderer. It may be useful if a bug in the Gallium llvmpipe renderer prevents a rendering feature that you really need. Note, however, that it is much slower and does not support all of the OpenGL features that Gallium llvmpipe does.<br />
<br />
<source lang="bash"><br />
#!/bin/bash<br />
<br />
make -j4 distclean # if in an existing build<br />
<br />
autoreconf -fi<br />
<br />
./configure \<br />
CXXFLAGS="-O2 -g -DDEFAULT_SOFTWARE_DEPTH_BITS=31" \<br />
CFLAGS="-O2 -g -DDEFAULT_SOFTWARE_DEPTH_BITS=31" \<br />
--disable-xvmc \<br />
--disable-glx \<br />
--disable-dri \<br />
--with-dri-drivers="" \<br />
--with-gallium-drivers="" \<br />
--enable-texture-float \<br />
--disable-shared-glapi \<br />
--disable-egl \<br />
--with-egl-platforms="" \<br />
--enable-osmesa \<br />
--enable-gallium-llvm=no \<br />
--prefix=/work/apps/mesa/9.2.2/classic<br />
<br />
make -j2<br />
make -j4 install<br />
</source><br />
<br />
Note that these are the configure options from Mesa v9.2.2, the current release as of this writing. They may be slightly different in older or newer releases; consult ''./configure --help'' for the specific details.<br />
<br />
= A comparison of OSMesa Gallium llvmpipe, OSMesa classic, and GPU Accelerated Rendering Performance =<br />
The following chart shows the run time of VTK's Rendering ctests with OSMesa classic, the Gallium llvmpipe OSMesa state-tracker, and an ATI Radeon HD 7870. The CPU on the test system is a 4-core (8-thread) Intel(R) Core(TM) i7-4771 CPU @ 3.50GHz. The ATI driver used is AMD Catalyst 13.11-beta6. Run times were obtained by running each ctest twice consecutively and discarding the first time. Tests that took less than 1 second were discarded, as were tests that failed on any one of the renderers. Results are still reported in cases where classic OSMesa doesn't provide all the extensions but Gallium llvmpipe does; for example, classic OSMesa doesn't provide render-buffer float. In these cases the run time reported for classic OSMesa is ~0.<br />
<br />
The results show that the Gallium llvmpipe OSMesa state-tracker is quite a bit faster across the board and supports more rendering algorithms than OSMesa classic. This is especially so for GPU-accelerated volume rendering.<br />
<br />
[[File:Osmesa-rendering-sm.png]]<br />
<br />
= Configuring ParaView for use with OSMesa =<br />
Configure ParaView as described on [[ParaView:Build And Install]]. The only cmake variables that need to be updated are:<br />
<br />
<source lang="bash"><br />
#!/bin/bash<br />
<br />
cmake \<br />
...<br />
-DPARAVIEW_BUILD_QT_GUI=OFF \<br />
-DVTK_USE_X=OFF \<br />
-DOPENGL_INCLUDE_DIR={MESA_INSTALL_PREFIX}/include \<br />
-DOPENGL_gl_LIBRARY={MESA_INSTALL_PREFIX}/lib/libOSMesa.[so|a] \<br />
-DOPENGL_glu_LIBRARY={MESA_INSTALL_PREFIX}/lib/libGLU.[so|a] \<br />
-DVTK_OPENGL_HAS_OSMESA=ON \<br />
-DOSMESA_INCLUDE_DIR={MESA_INSTALL_PREFIX}/include \<br />
-DOSMESA_LIBRARY={MESA_INSTALL_PREFIX}/lib/libOSMesa.[so|a] \<br />
$*<br />
<br />
make -j32<br />
make -j32 install<br />
</source><br />
<br />
The rest of the configure and build process for ParaView remains as described on [[ParaView:Build And Install]]. Note that all of these build options, with the exception of PARAVIEW_BUILD_QT_GUI, are VTK options; thus this also describes how to configure VTK without ParaView.<br />
<br />
libGLU is no longer packaged as part of Mesa (since early 2012, approximately Mesa 8 [http://lists.freedesktop.org/archives/mesa-dev/2012-January/017894.html (Mesa-dev post)]). If it does not exist on your system, it can be downloaded and installed separately from the [ftp://ftp.freedesktop.org/pub/mesa/glu/ libGLU freedesktop.org ftp].<br />
<br />
An example of building ParaView (and Catalyst) with OSMesa can be found in [[ParaView:_Build_And_Install_on_Grid5000_testbed]].<br />
<br />
== MPI-Parallel rendering with OSMesa Gallium llvmpipe state-tracker ==<br />
When running ParaView in parallel with MPI, one should consider how best to set the number of rendering threads used by the llvmpipe renderer. The number of rendering threads may be set using the ''LP_NUM_THREADS'' environment variable. Our [http://www.hpcvis.com/vis/images/xsede13/mesa-os-mesa-llvmpipe-edison.pdf parallel benchmarks of the surface LIC painter] on one NERSC Edison compute node showed that the best performance was achieved when the total number of rendering threads per node was equal to the number of available hyper-threads. As of Mesa 9.2.2 only the fragment pipeline is threaded. This has some important implications for parallel rendering of large datasets. First, it's important to use a mix of MPI parallelism and rendering threads, because MPI parallelism is the only option for speeding up vertex operations; generally speaking, with large datasets the vertex pipeline has more work to do than the fragment pipeline. Also, note that only algorithms with heavy fragment-shader use, such as GPU-accelerated volume rendering, surface LIC, and depth peeling, will see a substantial benefit from large numbers of threads; fixed-function algorithms generally will not. Finally, note that Mesa has a hard-coded compile-time limit on the number of threads, set by the ''LP_MAX_THREADS'' macro, which in v9.2.2 is 16 threads.<br />
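As a minimal sketch of setting the thread count before an MPI launch (the pvserver path and the rank/thread counts below are illustrative; tune them so that ranks times threads roughly matches the node's hyper-thread count):<br />
<br />
```shell
# Set the llvmpipe rendering thread count for an MPI run of pvserver.
# 2 ranks x 8 threads targets a node with 16 hyper-threads (illustrative).
export LP_NUM_THREADS=8
echo "LP_NUM_THREADS=$LP_NUM_THREADS"
# mpiexec -n 2 /path/to/paraview/bin/pvserver   # uncomment on a real install
```
<br />
Remember that values above ''LP_MAX_THREADS'' (16 in v9.2.2) are clamped at compile time, so requesting more buys nothing.<br />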
<br />
{|<br />
|[[File:Xsede-fig-llvmpipe-edison-1node-1rank-threads-sm.png|frame|'''Fig 1a''': Surface LIC benchmark 1 MPI process with between 1 and 16 rendering threads. The plot shows that vertex operations, shown in teal, aren't threaded, but account for roughly 1/2 of the serial rendering time. The other colors represent fragment operations that benefit from additional rendering threads.]]<br />
|[[File:Xsede-fig-llvmpipe-edison-1node-nrank-16threads-sm.png|frame|'''Fig 1b''': Surface LIC benchmark with between 1 and 16 MPI processes and 16 rendering threads. The plot shows that MPI parallelism can be used to speed up vertex operations. Note that using more rendering threads than the number of hyper-threads available on the node slows down the fragment operations.]]<br />
|}<br />
<br />
== Can ParaView be built with X11/GPU accelerated OpenGL and OSMesa in the same build? ==<br />
No, this is not currently possible due to library symbol conflicts; you need to choose one or the other for each build. It is, however, possible to use OSMesa rendering on the server and hardware-accelerated OpenGL in the client by making two builds: one client build configured for X11/GPU-accelerated OpenGL, and one server build configured to use OSMesa. You'll need to run in client-server mode.<br />
<br />
= Link to the old page =<br />
I'd like to remove this link. [[ParaView/ParaView_And_Mesa_3D_tmp]]</div>Lokmanhttps://public.kitware.com/Wiki/index.php?title=ParaView:Build_And_Install_Grid5000_testbed&diff=60866ParaView:Build And Install Grid5000 testbed2016-09-16T23:58:47Z<p>Lokman: Lokman moved page ParaView:Build And Install Grid5000 testbed to ParaView: Build And Install on Grid5000 testbed</p>
<hr />
<div>#REDIRECT [[ParaView: Build And Install on Grid5000 testbed]]</div>Lokmanhttps://public.kitware.com/Wiki/index.php?title=ParaView:_Build_And_Install_on_Grid5000_testbed&diff=60865ParaView: Build And Install on Grid5000 testbed2016-09-16T23:58:47Z<p>Lokman: Lokman moved page ParaView:Build And Install Grid5000 testbed to ParaView: Build And Install on Grid5000 testbed</p>
<hr />
<div>=Introduction=<br />
This wiki page describes the steps to build and install ParaView with Catalyst on the Grid'5000 testbed [http://www.grid5000.fr]. OSMesa is used as the OpenGL implementation.<br />
<br />
=Build Configuration=<br />
The build configuration is similar to [http://www.paraview.org/Wiki/ParaView:Build_And_Install#Download_And_Install_MESA_3D_libraries] and [http://www.paraview.org/Wiki/ParaView_And_Mesa_3D].<br />
<br />
In an effort to ensure reproducibility, Kameleon [http://kameleon.imag.fr/], a Grid'5000 community tool, is used.<br />
Level of reproducibility:<br />
- Base image<br />
- all software is built and installed on a Debian 8 (jessie) image<br />
- Grid'5000 offers reproducible base images (configuration script available here [http://tobe.set])<br />
- Grid'5000 allows the deployment of complete file systems as images <br />
- Recipe<br />
- kameleon recipe available here [https://github.com/lrahmani/paraview-catalyst-g5k/tree/develop]<br />
- Software installed using apt-get is reproducible as long as the apt-get source list is not changed<br />
- Usage<br />
- the newly built image (with ParaView installed) can be deployed to run experiments on, or used as a base image for further Kameleon recipes<br />
<br />
=Build And Install=<br />
To build the image, follow the steps described in Kameleon's official documentation [http://kameleon.imag.fr/installation.html] and [http://kameleon.imag.fr/getting_started.html].</div>Lokmanhttps://public.kitware.com/Wiki/index.php?title=User:Lokman&diff=60471User:Lokman2016-07-03T06:00:49Z<p>Lokman: </p>
<hr />
<div>PhD student at ENS Rennes, working in the Kerdata Irisa/Inria team.<br />
I am working on in situ processing for scientific simulations.</div>Lokmanhttps://public.kitware.com/Wiki/index.php?title=ParaView/Data_formats&diff=59321ParaView/Data formats2016-04-07T08:05:08Z<p>Lokman: The first sentence had no meaning</p>
<hr />
<div>=Introduction=<br />
This page describes the different data formats that ParaView can read and gives guidance on how to use them.<br />
<br />
=The full list of file formats=<br />
<br />
The file formats that ParaView understands include at least the ones described in the in-tool help (Help->ParaView->Readers). See http://paraview.org/Wiki/ParaView/Users_Guide/List_of_readers for a listing.<br />
<br />
Note that many plugins add new file formats at runtime. Once loaded, new file formats are displayed in the "Files of type" dropdown in the File->Open dialog. <br />
<br />
File formats that are readable via plugins distributed with the ParaView source code as of version 3.8 include:<br />
* NIH Analyze/Nifti <br />
* H5part HDF5 Particle files <br />
* .tlp tulip graphs <br />
* .xml tree structures <br />
* netdmf <br />
* sql table interface <br />
* prism sesame files<br />
<br />
=CSV (Comma Separated Values) files=<br />
CSV files can be read by ParaView and are a good quick-and-dirty format. The data can be converted into points or structured grids. A CSV file is just a number of rows, each row representing a point in space. The columns should include X, Y, Z, and any other data. An example follows; cut and paste this block of data into a file named test.csv.<br />
<br />
::x coord, y coord, z coord, scalar<br />
::0, 0, 0, 0<br />
::1, 0, 0, 1<br />
::0, 1, 0, 2<br />
::1, 1, 0, 3<br />
::-0.5, -0.5, 1, 4<br />
::0.5, -0.5, 1, 5<br />
::-0.5, 0.5, 1, 6<br />
::0.5, 0.5, 1, 7<br />
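If you prefer the command line, a here-document writes the sample file exactly as shown above. This is just a sketch; the file name test.csv matches the text, and the here-document keeps the lines free of trailing spaces, which some versions of the delimited text reader cannot handle.<br />
<br />
```shell
# Write the sample data set above to test.csv: a header row plus 8 points.
cat > test.csv <<'EOF'
x coord, y coord, z coord, scalar
0, 0, 0, 0
1, 0, 0, 1
0, 1, 0, 2
1, 1, 0, 3
-0.5, -0.5, 1, 4
0.5, -0.5, 1, 5
-0.5, 0.5, 1, 6
0.5, 0.5, 1, 7
EOF
wc -l < test.csv   # 9 lines: 1 header + 8 points
```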
<br />
==Read a CSV file into Paraview==<br />
Start ParaView, and read in this data. Note that the default settings should be used: <br />
**Detect Numeric Columns ON<br />
**Use String Delimiter ON<br />
**Have Headers ON<br />
**Field Delimiter Characters should be a comma - ',' <br />
The data should show up as a table.<br />
<br />
Next, we need to tell ParaView what this data means. There are two ways to do this - as a structured grid or as points.<br />
<br />
<< NOTE - As of 3.98.1, there is a bug in the delimited text reader. Make sure there are no spaces after numbers and before commas or carriage returns in your data file (such as may occur if you cut and paste the sample files below) >><br />
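If a pasted file does contain such stray spaces, a quick sed pass can clean it up. This is a sketch using GNU/POSIX sed; dirty.csv and clean.csv are hypothetical file names, so adapt them to your data.<br />
<br />
```shell
# Remove spaces before commas and any trailing whitespace at line ends.
printf '1 , 0, 0, 1 \n0, 1, 0, 2\n' > dirty.csv   # deliberately dirty sample
sed -e 's/[[:space:]]*,/,/g' -e 's/[[:space:]]*$//' dirty.csv > clean.csv
cat clean.csv
```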
<br />
==Displaying data as points==<br />
*Run the filter Filters/ Alphabetical/ Table To Points.<br />
*Tell ParaView what columns are the X, Y and Z coordinate. Be sure to not skip this step. Apply.<br />
*ParaView probably didn't open up a 3d window (this is a bug).<br />
**Split screen Horizontal (Icon, top right). <br />
**3D View<br />
**Turn visibility on for the Table to Points filter (click on the eyeball in the Pipeline Browser)<br />
**If desired, color by your variable.<br />
**If desired, run the glyph filter on these points. Be sure to change Glyph Type to Sphere.<br />
<br />
==Displaying data as structured grid==<br />
*You may want to delete the Table to Points filter listed above. <br />
*Run the filter Filters/ Alphabetical/ Table To Structured Grid.<br />
*Tell ParaView what extent, or array sizes, your data is in. For instance, the data above has 8 points, forming a leaning cube: the point arrays are of size 2 in X, size 2 in Y, and size 2 in Z. In this example we will use C indexing for the arrays, so they go from 0 to 1 (2 entries).<br />
** Whole extent is as follows:<br />
** 0 1<br />
** 0 1<br />
** 0 1<br />
*Tell ParaView what columns are the X, Y and Z coordinate. Be sure to not skip this step. Apply.<br />
*ParaView probably didn't open up a 3d window (this is a bug).<br />
**Split screen Horizontal (Icon, top right). <br />
**3D View<br />
**Turn visibility on for the Table To Structured Grid filter (click on the eyeball in the Pipeline Browser)<br />
**If desired, change representation to solid, and color by your variable.<br />
<br />
==CSV time series==<br />
*You can also hold multiple time steps as CSV files. Put each time step into its own file, and label the files as someName.csv.[0-n]<br />
*Here is an example of three timesteps. Enter the following data into three files in the same directory, named as follows.<br />
<br />
** test.csv.0:<br />
::x coord, y coord, z coord, scalar<br />
::0, 0, 0, 0<br />
::1, 0, 0, 1<br />
::0, 1, 0, 2<br />
::1, 1, 0, 3<br />
::-0.5, -0.5, 1, 4<br />
::0.5, -0.5, 1, 5<br />
::-0.5, 0.5, 1, 6<br />
::0.5, 0.5, 1, 7<br />
<br />
** test.csv.1:<br />
::x coord, y coord, z coord, scalar<br />
::0, 0, 0, 0<br />
::1, 0, 0, 1<br />
::0, 1, 0, 2<br />
::1, 1, 0, 3<br />
::0.5, 0.5, 1, 4<br />
::1.5, 0.5, 1, 5<br />
::0.5, 1.5, 1, 6<br />
::1.5, 1.5, 1, 7<br />
<br />
** test.csv.2:<br />
::x coord, y coord, z coord, scalar<br />
::0, 0, 0, 0<br />
::1, 0, 0, 1<br />
::0, 1, 0, 2<br />
::1, 1, 0, 3<br />
::1.5, 1.5, 1, 4<br />
::2.5, 1.5, 1, 5<br />
::1.5, 2.5, 1, 6<br />
::2.5, 2.5, 1, 7<br />
<br />
**Use the directions above to read in the CSV files and display them as points.<br />
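The naming convention can also be scripted. A minimal sketch that writes one moving point per time step into series.csv.0 through series.csv.2 (the base name "series.csv" and the single-point data are illustrative):<br />
<br />
```shell
# ParaView groups files named base.csv.0, base.csv.1, ... into one time
# series. Each step here holds the header plus one point moving along x.
for t in 0 1 2; do
  printf 'x coord, y coord, z coord, scalar\n%s, 0, 0, %s\n' "$t" "$t" > "series.csv.$t"
done
ls series.csv.*
```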
<br />
=Raw files=<br />
Raw data files are binary files of one or more data variables, in an X by Y by Z layout. The spatial locations of the data points are implicit. Raw data files are a good format for voxel data or datasets that are huge. The ParaView raw data reader will automatically spread your file among all of the ParaView servers that are running.<br />
<br />
An example 2X2X2 file would look like this (obviously, with the data written as binary data):<br />
<br />
15 16 17 18 19 20 21 22<br />
<br />
It would be represented as follows:<br />
<br />
15 16<br />
<br />
17 18<br />
<br />
<br />
19 20<br />
<br />
21 22<br />
<br />
There are numerous raw files located here: http://www.gris.uni-tuebingen.de/edu/areas/scivis/volren/datasets/datasets.html<br />
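As a minimal sketch, the 2x2x2 example above can be written as an 8-bit unsigned raw file from the shell: each value 15..22 becomes a single byte (octal escapes), with X varying fastest, then Y, then Z. Reading it back would use an 8-bit unsigned Data Scalar Type and a Data Extent of 0-1, 0-1, 0-1. The file name test_2x2x2.raw is illustrative.<br />
<br />
```shell
# Write the values 15..22 as single unsigned bytes (octal 017..026).
printf '\017\020\021\022\023\024\025\026' > test_2x2x2.raw
od -An -tu1 test_2x2x2.raw   # dump the bytes back as unsigned decimals
```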
<br />
==Read a Raw file into Paraview==<br />
Start ParaView, and read in your raw data. You will need to know the layout of your data. Items that you will need to input:<br />
*Data Scalar Type - is your data 8 bit, 16 bit, unsigned, etc?<br />
*Data Byte Order - probably dependent on the machine that wrote it. Probably BigEndian.<br />
*File Dimensionality. Probably 2 or 3.<br />
*Data Extent. This needs to be 0 based. Thus, for a cube of data 256X256X256, use a Data Extent of 0-255, 0-255, 0-255. For 2d data, set the last item to 0-0.<br />
<br />
Note - for byte sized data, ParaView uses the data itself as a gray scale component. To use the user selected color table, select Display/ Map Scalars.<br />
<br />
==Reading a time varying Raw file into Paraview==<br />
<br />
One way to read time varying raw files is to leverage the [http://vis.computer.org/vis2004contest/data.html XDMF] meta file format. To do so, put a set of grids that use the binary data storage backend inside a temporal collection. An example file that loads part of the [http://vis.computer.org/vis2004contest/data.html 2004 IEEE Visualization] data set is:<br />
<br />
<syntaxhighlight lang="xml"> <br />
<?xml version="1.0" ?><br />
<!DOCTYPE Xdmf SYSTEM "Xdmf.dtd" []><br />
<Xdmf xmlns:xi="http://www.w3.org/2001/XInclude" Version="2.0"><br />
<Domain><br />
<Topology name="topo" TopologyType="3DCoRectMesh"<br />
Dimensions="100 500 500"><br />
</Topology><br />
<Geometry name="geo" Type="ORIGIN_DXDYDZ"><br />
<!-- Origin --><br />
<DataItem Format="XML" Dimensions="3"><br />
0.0 0.0 0.0<br />
</DataItem><br />
<!-- DxDyDz --><br />
<DataItem Format="XML" Dimensions="3"><br />
1.0 1.0 1.0<br />
</DataItem><br />
</Geometry><br />
<br />
<Grid Name="TimeSeries" GridType="Collection" CollectionType="Temporal"><br />
<Time TimeType="HyperSlab"><br />
<DataItem Format="XML" NumberType="Float" Dimensions="3"><br />
0.0 1.0 2<br />
</DataItem><br />
</Time><br />
<br />
<Grid Name="T1" GridType="Uniform"><br />
<Topology Reference="/Xdmf/Domain/Topology[1]"/><br />
<Geometry Reference="/Xdmf/Domain/Geometry[1]"/><br />
<Attribute Name="CLOUDf" Center="Node"><br />
<DataItem Format="Binary" <br />
DataType="Float" Precision="4" Endian="Big"<br />
Dimensions="100 500 500"><br />
CLOUDf01.bin<br />
</DataItem><br />
</Attribute><br />
<Attribute Name="Pf" Center="Node"><br />
<DataItem Format="Binary" <br />
DataType="Float" Precision="4" Endian="Big"<br />
Dimensions="100 500 500"><br />
Pf01.bin<br />
</DataItem><br />
</Attribute><br />
<Attribute Name="TCf" Center="Node"><br />
<DataItem Format="Binary" <br />
DataType="Float" Precision="4" Endian="Big"<br />
Dimensions="100 500 500"><br />
TCf01.bin<br />
</DataItem><br />
</Attribute><br />
<Attribute Name="Uf" Center="Node"><br />
<DataItem Format="Binary" <br />
DataType="Float" Precision="4" Endian="Big"<br />
Dimensions="100 500 500"><br />
Uf01.bin<br />
</DataItem><br />
</Attribute><br />
<Attribute Name="Vf" Center="Node"><br />
<DataItem Format="Binary" <br />
DataType="Float" Precision="4" Endian="Big"<br />
Dimensions="100 500 500"><br />
Vf01.bin<br />
</DataItem><br />
</Attribute><br />
<Attribute Name="Wf" Center="Node"><br />
<DataItem Format="Binary" <br />
DataType="Float" Precision="4" Endian="Big"<br />
Dimensions="100 500 500"><br />
Wf01.bin<br />
</DataItem><br />
</Attribute><br />
</Grid><br />
<br />
<Grid Name="T2" GridType="Uniform"><br />
<Topology Reference="/Xdmf/Domain/Topology[1]"/><br />
<Geometry Reference="/Xdmf/Domain/Geometry[1]"/><br />
<Attribute Name="CLOUDf" Center="Node"><br />
<DataItem Format="Binary" <br />
DataType="Float" Precision="4" Endian="Big"<br />
Dimensions="100 500 500"><br />
CLOUDf02.bin<br />
</DataItem><br />
</Attribute><br />
<Attribute Name="Pf" Center="Node"><br />
<DataItem Format="Binary" <br />
DataType="Float" Precision="4" Endian="Big"<br />
Dimensions="100 500 500"><br />
Pf02.bin<br />
</DataItem><br />
</Attribute><br />
<Attribute Name="TCf" Center="Node"><br />
<DataItem Format="Binary" <br />
DataType="Float" Precision="4" Endian="Big"<br />
Dimensions="100 500 500"><br />
TCf02.bin<br />
</DataItem><br />
</Attribute><br />
<Attribute Name="Uf" Center="Node"><br />
<DataItem Format="Binary" <br />
DataType="Float" Precision="4" Endian="Big"<br />
Dimensions="100 500 500"><br />
Uf02.bin<br />
</DataItem><br />
</Attribute><br />
<Attribute Name="Vf" Center="Node"><br />
<DataItem Format="Binary" <br />
DataType="Float" Precision="4" Endian="Big"<br />
Dimensions="100 500 500"><br />
Vf02.bin<br />
</DataItem><br />
</Attribute><br />
<Attribute Name="Wf" Center="Node"><br />
<DataItem Format="Binary" <br />
DataType="Float" Precision="4" Endian="Big"<br />
Dimensions="100 500 500"><br />
Wf02.bin<br />
</DataItem><br />
</Attribute><br />
</Grid><br />
<br />
</Grid><br />
</Domain><br />
</Xdmf><br />
<br />
</syntaxhighlight><br />
<br />
=VTK (Visualization Toolkit) files=<br />
The VTK file format, along with all of its relatives, is a preferred format for ParaView. These file formats are fairly complex but also very powerful. The format specification can be found here: [http://www.vtk.org/VTK/img/file-formats.pdf standard VTK file formats]. Example files are available upon request.<br />
<br />
<br />
= PVD File Format =<br />
<br />
ParaView’s native data file format (PVD) supports any type of data set that can be loaded or created in ParaView (polygonal, uniform rectilinear, nonuniform rectilinear, curvilinear, or unstructured), including spatially partitioned, multi-block, and time series data. This file format is XML-based.<br />
The PVD file provides pointers to the collection of data files required to store the various components of the current data set. Each of the data files in the collection uses the XML-based VTK file format (either the serial or parallel version, but NOT the legacy ".vtk" format).<br />
The first line in the PVD file specifies the XML version (currently "1.0"). Following that is the VTKFile element. The attributes of this element are as follows.<br />
* type: This attribute is set to "Collection", indicating that loading this data file requires loading a group of data files.<br />
* version: The version attribute lists the version of the vtkXMLWriter used in writing this file. The current version is "0.1". This attribute is for informational purposes only; it is not required.<br />
* byte_order: Because this is an ASCII file, this attribute is not required. If present, set to "BigEndian" or "LittleEndian", depending on the byte order of the platform where this file is being written. Intel CPUs (most commodity laptops and desktops) use little endian byte order; PowerPC CPUs (older Macintosh machines and IBM clusters and supercomputers) use big endian. <br />
* compressor: This attribute should be set to "vtkZLibDataCompressor". This attribute is not required.<br />
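If you are unsure which value of byte_order applies to the machine writing your files, Python can report the native byte order directly (a quick check, independent of ParaView):<br />

```python
import sys

# "little" on Intel/AMD hardware      -> byte_order="LittleEndian"
# "big" on big-endian systems such as
# older PowerPC machines              -> byte_order="BigEndian"
print(sys.byteorder)
```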
The immediate XML sub-element of VTKFile is Collection. The <Collection> </Collection> tags surround the DataSet elements listing the individual data files in this collection. The DataSet elements (one per sub-file) support the following XML attributes.<br />
* timestep: This attribute is required only when storing time-varying data sets. It takes a floating-point value.<br />
* group: The group attribute lists the unique ParaView-assigned identifier of the source, reader, or filter that created this data set. This attribute is not required; it is only for informational purposes.<br />
* part: This attribute’s value is an identification number for this part of the current data set. It is an integer value greater than or equal to 0.<br />
* file: This attribute contains the file name of one of the sub-files in this data set. If the sub-file is not in the same directory as the .pvd file, this attribute will contain a relative path to the sub-file from the location of the .pvd file.<br />
An example .pvd file is shown below. It contains five time steps of a time-varying data set.<br />
<br />
<syntaxhighlight lang="xml"> <br />
<?xml version="1.0"?><br />
<VTKFile type="Collection" version="0.1"<br />
byte_order="LittleEndian"<br />
compressor="vtkZLibDataCompressor"><br />
<Collection><br />
<DataSet timestep="0" group="" part="0"<br />
file="examplePVD/examplePVD_T0000.vtp"/><br />
<DataSet timestep="1" group="" part="0"<br />
file="examplePVD/examplePVD_T0001.vtp"/><br />
<DataSet timestep="2" group="" part="0"<br />
file="examplePVD/examplePVD_T0002.vtp"/><br />
<DataSet timestep="3" group="" part="0"<br />
file="examplePVD/examplePVD_T0003.vtp"/><br />
<DataSet timestep="4" group="" part="0"<br />
file="examplePVD/examplePVD_T0004.vtp"/><br />
</Collection><br />
</VTKFile><br />
</syntaxhighlight><br />
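For long time series it can be tedious to write the collection file by hand, so a short script can generate it instead. This is a minimal sketch; the make_pvd helper and the file-name pattern are hypothetical illustrations, not part of ParaView:<br />

```python
# Generate the XML text of a .pvd collection file for a time series.
# The file-name pattern below is a placeholder; adapt it to your data.
def make_pvd(num_steps, pattern="examplePVD/examplePVD_T%04d.vtp"):
    lines = ['<?xml version="1.0"?>',
             '<VTKFile type="Collection" version="0.1" byte_order="LittleEndian">',
             '  <Collection>']
    for t in range(num_steps):
        lines.append('    <DataSet timestep="%d" group="" part="0" file="%s"/>'
                     % (t, pattern % t))
    lines += ['  </Collection>', '</VTKFile>']
    return '\n'.join(lines)

print(make_pvd(5))
```

Writing the returned string to a file named something like example.pvd produces a collection equivalent to the hand-written example above.<br />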
<br />
=Acknowledgements=<br />
Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.</div>Lokmanhttps://public.kitware.com/Wiki/index.php?title=ParaView/Catalyst/Overview&diff=59260ParaView/Catalyst/Overview2016-03-24T14:54:40Z<p>Lokman: smaller in size not in time to save</p>
<hr />
<div><center>[[File:CatalystLogo.png|500px]]</center><br />
== Background ==<br />
Several factors are driving the growth of simulations. Computational power of supercomputers and computer clusters is growing, while the price of individual computers is decreasing. Distributed computing techniques allow hundreds, or even thousands, of computer nodes to participate in a single simulation. The benefit of this computational power is that simulations are becoming more accurate and useful for predicting complex phenomena. The downside to this growth is the enormous amounts of data that need to be saved and analyzed to determine the results of the simulation. Unfortunately, the growth of IO capabilities has not kept up with the growth of processing power in these machines. Thus, the ability to generate data has outpaced our ability to save and analyze the data. This bottleneck is throttling our ability to benefit from our improved computing resources. For example, simulations often save their states infrequently to minimize storage requirements. <br />
<br />
Such coarse temporal sampling makes it difficult to notice some complex behavior. To address this issue, ParaView can now be easily used to integrate concurrent analysis and visualization directly into simulation codes. This functionality is often referred to as co-processing, ''in situ'' processing or co-visualization.<br />
This feature is available through ParaView Catalyst (previously called ParaView Co-Processing). The figures below compare the traditional simulation workflow with the new workflow using ParaView Catalyst.<br />
{|<br />
|[[Image:FullWorkFlow.png|thumb|800px|Full Workflow]]<br />
|}<br />
{|<br />
|[[Image:CatalystWorkFlow.png|thumb|800px|Workflow With Co-Processing]]<br />
|}<br />
<br />
== Technical Objectives ==<br />
<br />
The main objective of the co-processing toolset is to facilitate integration of easy to use, core data processing into the simulation to enable scalable data analysis. The toolset has two main parts:<br />
<br />
* '''An extensible and flexible library''': ParaView Catalyst was designed to be flexible enough to be embedded in various simulation codes with relative ease and a minimal footprint. This flexibility is critical, as a library that requires significant effort to embed cannot be successfully deployed in a large number of simulations. The co-processing library is also easily extended so that users can deploy new analysis and visualization techniques to existing co-processing installations. The minimal footprint is achieved by using the Catalyst configuration tools (see directions for [[Generating_Catalyst_Source_Tree|generating source]] and [[Build_Directions| building]]) to reduce the overall number of ParaView and VTK libraries that a simulation code needs to link to.<br />
<br />
* '''Configuration tools for ParaView Catalyst output''': It is important for users to be able to configure the Catalyst output using graphical user interfaces that are part of their daily work-flow. <br />
<br />
Note: All of this can be done for large data. The Catalyst library will often be used on a distributed system. For the largest simulations, the visualization of extracts may also require a distributed system (i.e. a visualization cluster).<br />
<br />
== Details ==<br />
<br />
Using ParaView Catalyst fundamentally changes the way that simulation results are obtained. The entire<br />
goal is to reduce the time to gaining insight into the problem being simulated. Figure 1 shows<br />
the computational time to perform a full workflow using Sandia's CTH simulation code for various problem sizes and process counts.<br />
This time includes both the simulation and the post-processing of its results. Figure 2 shows the execution time for obtaining the same results with CTH while using Catalyst for ''in situ'' analysis and visualization.<br />
<br />
{|<br />
|[[Image:CTHFullWorkflow.png|thumb|400px|Figure 1: Classical workflow.]] || [[Image:CTHCatalystWorkflow.png|thumb|400px|Figure 2: Catalyst workflow.]]<br />
|}<br />
<br />
Note that as the problem size and the number of processes increase, the benefits of using Catalyst<br />
become more apparent. This is largely because the computing system's resources are being stretched to<br />
their limit, where inefficiencies show up most clearly. This is detailed in Sandia's SAND2010-6118 technical report, which is referenced below. One possible workflow that ParaView's co-processing tools<br />
enables is demonstrated more fully in Figure 3.<br />
<br />
{|<br />
|[[Image:CatalystFullWorkFlow.png|650px|thumb|left|Figure 3: Full workflow.]]<br />
|}<br />
<br />
In this workflow the user creates a Python script using ParaView's plugin for creating Catalyst co-processing scripts. Here the user can choose a variety of outputs: extracted data such as polygonal output with field data, rendered images, plot information and/or statistics. The Python scripts are then used by Catalyst during the simulation run to output the simulation user's desired information. Typically, the extracted data is orders of magnitude smaller than the full raw data set. This is shown in Figure 4 for a relatively small problem for several VTK filters.<br />
Often the reduced file IO also results in faster simulation runs since in certain cases it is faster<br />
for Catalyst to compute a desired extract and save that to disk compared to just saving the full raw data<br />
to disk. Figure 5 shows the compute time for certain VTK filters compared to saving the full raw data for a small 6 process run.<br />
<br />
{|<br />
|[[Image:CatalystReduceOutputSize.png|450px|thumb|Figure 4: Extract file size compared to full raw data.]] ||<br />
[[Image:CatalystReduceRunTime.png|450px|thumb|Figure 5: Time to compute extracts compared to file IO.]]<br />
|}<br />
<br />
== Important Links ==<br />
<br />
* The [http://catalyst.paraview.org main page] for ParaView Catalyst.<br />
* The most complete information is available in the [http://www.paraview.org/files/catalyst/docs/ParaViewCatalystUsersGuide_v2.pdf ParaView Catalyst User's Guide].<br />
* [https://github.com/Kitware/ParaViewCatalystExampleCode Example code] with samples from Python, C, C++ and Fortran for creating adaptors as well as examples of hard-coded C++ Catalyst pipelines.<br />
* A [http://vimeo.com/75793492 video webinar] on ParaView Catalyst from September 26, 2013.<br />
* A [[Media:ParaViewCatalystV1Tutorial.pdf|tutorial]] on ParaView Catalyst along with [[Media:ParaViewCatalystV1TutorialFiles.tgz|sample files]].<br />
* Directions on how to [[ParaView/Catalyst/BuildCatalyst|build ParaView Catalyst]].<br />
* Sandia National Laboratories SAND2013-1122 technical report on [http://www.sandia.gov/~kmorel/documents/MilestoneFY13.pdf Data Co-Processing for Extreme Scale Analysis Level II ASC Milestone].<br />
* Sandia National Laboratories SAND2010-6118 technical report on [http://www.sandia.gov/~kmorel/documents/MilestoneFY10Sandia.pdf Visualization on Supercomputing Platform Level II ASC Milestone].<br />
<br />
Information for ParaView's original co-processing tools are still [[CoProcessing|available]] but are for versions of ParaView before 4.0.<br />
<br />
== Acknowledgements ==<br />
<br />
{|<br />
| [[Image:SandiaLogo.png]] || || Ken Moreland is the project lead for Sandia. Sandia has contributed significantly to the project both in development and vision. Sandia developers included Nathan Fabian and Ken Moreland. <br />
|-<br />
| [[Image:LANLLogo.png]] || || Jim Ahrens is the project lead at LANL. The LANL team has been integrating Catalyst with various LANL<br />
simulation codes and has contributed to the development of the library.<br />
|-<br />
| [[Image:ArmySBIRLogo.png ]] || || Mark Potsdam, from Aeroflightdynamics Directorate, was the main technical point of contact for Army SBIRs and along with Andy Wissink has contributed significantly to the vision of Catalyst.<br />
|}</div>Lokmanhttps://public.kitware.com/Wiki/index.php?title=Setting_up_a_ParaView_Server&diff=59241Setting up a ParaView Server2016-03-22T14:54:47Z<p>Lokman: Added some missing words</p>
<hr />
<div>ParaView is designed to work well in client/server mode. In this way, users can have the full advantage of using a shared remote high-performance rendering cluster without leaving their offices. This document is designed to help get you started with building and setting up your own ParaView server. It also serves as a collection point for the "tribal knowledge" acquired to make parallel rendering and other aspects of parallel and client/server processing most efficient. You may also want to look at [[Media:Cluster09_PV_Tut_Setup.pdf|Configuring ParaView for Vis Clusters]].<br />
<br />
== Compiling ==<br />
<br />
Ideally, we would like to provide precompiled binaries of ParaView for all of our users to make installing it more convenient. Unfortunately, the large variety of hardware, operating systems, and MPI implementations makes this task impossible. Thus, if you wish to use ParaView on a parallel server, you will have to compile ParaView from source.<br />
<br />
After [http://www.paraview.org/New/download.html downloading] ParaView, follow the [[ParaView:Build And Install|Building and Installation instructions]]. When following these instructions, be sure to compile in MPI support by setting the PARAVIEW_USE_MPI CMake flag to ON and setting the appropriate paths to the MPI include directory and libraries.<br />
<br />
One problem many people face when compiling with MPI is that their MPI implementation provides multiple libraries, many of which are required when compiling ParaView. If there are only two such libraries, you can add them separately in the MPI_LIBRARY and MPI_EXTRA_LIBRARY CMake variables. If you need to link in more than two libraries, you can specify multiple libraries in the MPI_LIBRARY variable by separating them with semicolons (<tt>;</tt>). You can apply the same trick to the MPI_INCLUDE_PATH to specify several include directories.<br />
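For example, the semicolon trick looks like this on the CMake command line (the MPICH-style paths here are placeholders for your own installation):<br />

```
cmake \
  -DMPI_INCLUDE_PATH="/opt/mpich/include;/opt/mpich/include/mpich" \
  -DMPI_LIBRARY="/opt/mpich/lib/libmpich.a;/opt/mpich/lib/libpmpich.a" \
  ../ParaView
```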
<br />
Another problem sometimes encountered is the lack of graphics libraries. There are many circumstances where you would want to compile the ParaView server on a parallel computer with no graphics hardware and thus no OpenGL implementation. In this case, most people use the [http://mesa3d.org Mesa 3D Graphics Library], which is a portable, software-only implementation of the OpenGL API. A cluster built using a Linux operating system probably already has a version of Mesa installed, but otherwise you can always download the source code from http://mesa3d.org.<br />
<br />
=== OSMesa support ===<br />
<br />
One of the most difficult problems people face when installing a ParaView server is establishing [[#X Connections|X connections]]. This whole problem can be circumvented by using the OSMesa library. However, Mesa is strictly a CPU rendering library, so '''use the OSMesa solution if and only if your server hardware does not have rendering hardware'''. If your cluster does not have graphics hardware, then compile ParaView with OSMesa support and use the --use-offscreen-rendering flag when launching the server.<br />
<br />
The first step to compiling OSMesa support is to make sure that you are compiling with the [http://mesa3d.org Mesa 3D Graphics Library]. It is difficult to tell an installation of Mesa from any other OpenGL implementation (although the existence of an osmesa.h header and a libOSMesa library is a good clue). If you are not sure, you can always download your own copy from http://mesa3d.org.<br />
<br />
Now set the CMake variable OPENGL_INCLUDE_DIR to point to the Mesa include directory (the one containing the GL subdirectory), and set the OPENGL_gl_LIBRARY and OPENGL_glu_LIBRARY to the libGL and libGLU library files, respectively. Next, change the VTK_OPENGL_HAS_OSMESA variable to ON. After you configure again you will see a new CMake variable called OSMESA_LIBRARY. Set this to the libOSMesa library file. After you configure and generate your makefiles, you should be ready to build with OSMesa support.<br />
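Collected together, an OSMesa configuration might look like the following CMake invocation (the /opt/mesa paths are placeholders for wherever your Mesa installation lives):<br />

```
cmake \
  -DVTK_OPENGL_HAS_OSMESA=ON \
  -DOPENGL_INCLUDE_DIR=/opt/mesa/include \
  -DOPENGL_gl_LIBRARY=/opt/mesa/lib/libGL.so \
  -DOPENGL_glu_LIBRARY=/opt/mesa/lib/libGLU.so \
  -DOSMESA_LIBRARY=/opt/mesa/lib/libOSMesa.so \
  ../ParaView
```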
<br />
Once again, once you build with OSMesa support, it will not take effect unless you launch the server with the --use-offscreen-rendering flag.<br />
<br />
Please be aware that OSMesa support is not the same thing as mangled Mesa (although they are often used for the same thing). Mangled Mesa is not supported with ParaView. Mangled Mesa provides a mechanism to use either hardware acceleration or CPU-only rendering. Some organizations use this to provide a single build for multiple servers, some with and some without hardware rendering. We find it easier to simply provide a separate build for each server.<br />
<br />
For more elaborate discussion on building with OSMesa support with different versions of Mesa, refer to [[ParaView And Mesa 3D]].<br />
<br />
== Running the Server ==<br />
<br />
The ParaView client is a serial application and is always run with the <tt>paraview</tt> command. The server is a parallel MPI program that must be launched as a parallel job. Different implementations of MPI may have different ways to launch parallel programs, but the most common way is to use the <tt>mpirun</tt> command. Ask your system administrator if you are not sure how to launch your MPI programs. This document will assume you are using <tt>mpirun</tt>.<br />
<br />
The ParaView server is almost always enabled with the <tt>pvserver</tt> command. Thus, the most simple configuration would have it launched as something like the following.<br />
<br />
mpirun -np 4 ./pvserver<br />
<br />
An integral part of configuring the ParaView server is setting up the client for [[starting the server]]. However, when initially configuring your server, it is best to do it in stages to better identify problems as they occur. Thus, as you are first trying to set up your server, set up your client for [[starting the server#Manual Startup|manual startup]]. That way, you can launch the server with <tt>mpirun</tt> at the command prompt. You will be able to immediately see any output on the stdout and stderr streams and retry when something goes wrong.<br />
<br />
Note that ParaView is designed to work well when the server and client are run remotely from each other. The idea is that the client runs locally on the user's desktop or laptop while the server runs remotely on the parallel machine.<br />
<br />
=== pvserver vs. pvrenderserver and pvdataserver ===<br />
<br />
There are two modes in which you can launch the ParaView server. In the first mode, all data processing and rendering are handled in the same parallel job. This server is launched with the <tt>pvserver</tt> command. In the second mode, data processing is handled in one parallel job and the rendering is handled in another parallel job launched with the <tt>pvdataserver</tt> and <tt>pvrenderserver</tt> programs, respectively.<br />
<br />
The point of having a separate data server and render server is the ability to use two different parallel computers, one with high-performance CPUs and the other with GPU hardware. However, splitting the server functionality in two necessitates repartitioning and transferring the data from one to the other. This overhead is seldom much smaller than the cost of simply performing both data processing and rendering in the same job.<br />
<br />
Thus, in almost all instances we recommend simply using the single <tt>pvserver</tt>. This document does not describe how to launch data server / render server jobs. Even if you feel this mode is right for you, it is best to first configure your server in single-server mode. From there, establishing the data server / render server configuration should be easier.<br />
<br />
=== Connecting Through a Firewall ===<br />
<br />
Often, security policies require either the ParaView server or client to be behind a firewall or some other network-limiting technology. Such a configuration adds challenges to connecting your server with a client. The types of networking safeguards, as well as the policies that govern them, vary greatly. We cannot possibly provide solutions for all of them, but we can give suggestions that might help you on your way. The main goal is to establish a socket connection between the client and (the first node of) the server. This socket is on port 11111 by default.<br />
<br />
Many firewalls will deny incoming connection requests but will allow outgoing connection requests. If only one side of the connection is behind such a firewall, then establishing the connection is easy. By default, the client connects to the server, so if the client is the one behind a firewall, nothing needs to be done. If the server is behind the firewall, you can reverse the connection direction: the server will connect back to the client. The server is instructed to perform a reverse connection by simply adding the <tt>-rc</tt> flag to its command line.<br />
<br />
mpirun -np 4 ./pvserver -rc --client-host=myhost.mydomain.com<br />
<br />
It is similarly straightforward to specify a reverse connection [[Starting the server|using the ParaView GUI]] or [[ParaView:Server Configuration|server configuration using XML configuration files]].<br />
<br />
If your firewall does not allow outbound connections or if client and server are each behind their own firewall, then no direct connection is possible. Now is probably a good time to talk to your system administrator about options.<br />
<br />
One option that has proven to be effective when available is to use a VPN (virtual private network) connection on the client. A proper VPN connection can make the network of the client computer behave as if it is connected behind the firewall of the server, and thus the two can connect directly. Be aware that establishing a VPN connection will make the hostname and IP address for the client machine look different to the server, which may complicate specifying the connection.<br />
<br />
If VPN is not available, you may be able to achieve a connection using the [[Reverse connection and port forwarding| port forwarding]] feature of ssh. The ssh protocol allows for forwarding a socket request from one side of the connection to the other, assuming that the configuration settings allow for this. You might be able to use this feature to "punch through" the firewall. For more information, see the [[reverse connection and port forwarding]] page.<br />
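As a sketch, a forward from the client machine through a login node might look like this (the hostnames are placeholders; 11111 is ParaView's default server port):<br />

```
# Run on the client machine; forwards local port 11111 to the server node.
ssh -L 11111:cluster-node01:11111 user@login.example.com
# Then connect the ParaView client to localhost:11111.
```

Whether this works depends on the ssh server's configuration and your site's security policy.<br />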
<br />
Even though X11 forwarding might be available, '''you should not run the client remotely and forward its X calls'''. ParaView will run much more efficiently if you run the client locally and you let ParaView directly handle the data transfer between local and remote machines.<br />
<br />
=== X Connections ===<br />
<br />
One of the most common problems people have with setting up the ParaView server is allowing the server processes to open windows on the graphics card on each process's node. When ParaView needs to do parallel rendering, each process will create a window that it will use to render. This window is necessary because you need the X window before you can create an OpenGL context on the graphics hardware.<br />
<br />
There is a way around this. If you are using Mesa as your OpenGL implementation, then you can also use the supplemental OSMesa library to create an OpenGL context without an X window. However, Mesa is strictly a CPU rendering library, so '''use the OSMesa solution if and only if your server hardware does not have rendering hardware'''. If your cluster does not have graphics hardware, then compile ParaView with [[#OSMesa support|OSMesa support]] and use the --use-offscreen-rendering flag when launching the server.<br />
<br />
Assuming that your cluster '''does''' have graphics hardware, you will need to establish the following things.<br />
# If the cluster nodes are not ''headless'', i.e., if they are running full-blown desktop environments (e.g., if you are running pvserver on one node that has many graphics cards), then you don't have to make any changes. Just spawn the processes using mpirun; since the desktop manager is already running, the X windows should not have any trouble getting created. If ParaView shows errors when the client connects to the pvserver(s), then follow the instructions below and create '''one''' additional display where all the rendering will be done by the pvservers.<br />
# Have xdm run on each cluster node at startup. Although xdm is almost always run at startup on workstation installations, it is not as commonplace to be run on cluster nodes. Talk to your system administrators for help in setting this up.<br />
# Disable all security on the X server. That is, allow any process to open a window on the x server without having to log in. Again, talk to your system administrators for help. On some systems, the command "<code>xhost +</code>" will do this.<br />
# Use the -display flag for pvserver to make sure that each process is connecting to the display <tt>localhost:0</tt> (or just <tt>:0</tt>).<br />
<br />
To enable the last condition, you would run something like<br />
<br />
mpirun -np 4 ./pvserver -display localhost:0<br />
<br />
An easy way to test your setup is to use the <tt>glxgears</tt> program. Unlike pvserver, it will quickly tell you (or, rather, fail to start) if it cannot connect to the local X server.<br />
<br />
mpirun -np 4 /usr/X11R6/bin/glxgears -display localhost:0<br />
<br />
=== Multiple GPUs Per Node ===<br />
<br />
It is becoming commonplace to put multiple GPUs on each node in a cluster. Taking advantage of these multiple GPUs can be tricky.<br />
<br />
Typically, each of these GPUs will have its own display. For example, if you have two GPUs on a node, they are probably referenced by the displays <tt>localhost:0.0</tt> and <tt>localhost:0.1</tt>. When you run an X program with the display parameter or flag set to one of these, all X windows will open on that respective GPU and any graphics acceleration will also happen on that GPU. Thus, you can take advantage of both GPUs by launching different pvserver processes with different arguments to point to different displays.<br />
<br />
Unfortunately, the method used to invoke an MPI job (usually through mpirun) is not part of the MPI specification and varies between implementations. In particular, the syntax used to declare different command lines for different processes can vary quite a bit.<br />
<br />
OpenMPI (not to be confused with OpenMP, which is totally different) has a particularly easy way to specify multiple command lines. Simply separate the different command lines, along with the -np flag, with a colon (<tt>:</tt>). In our case, the command lines should be identical except with different display flags. We also need to use the -bynode flag, which assigns processes in a round robin fashion. Basically, this makes sure that each node is assigned a pair (or more) of processes that use the different displays. As an example, the following command line when run on an 8 node cluster launches a 16 process job with each node having two processes, each using a different display.<br />
<br />
mpirun -bynode -np 8 ./pvserver -display :0.0 : -np 8 ./pvserver -display :0.1<br />
# On ParaView versions 3.98-4.1 the display option was missing due to a bug; the workaround is to start pvserver with <tt>/bin/env DISPLAY=localhost:0.0</tt>.<br />
# With ParaView 4.2, the option is available once again.<br />
<font color="red">If you have set up a parallel job with multiple GPUs per node using a different MPI implementation, please contribute back by documenting it here.</font><br />
<br />
If you are trying to verify which displays each pvserver node is using, you can use the Programmable Source to identify which processes are using which displays. After connecting to your pvserver, create a Programmable Source and set the following script.<br />
<br />
<pre><br />
import os<br />
import subprocess<br />
<br />
# Report which X display this server process is using.<br />
# (Python 2 syntax, as embedded in ParaView of this era.)<br />
display = os.getenv('DISPLAY')<br />
hostname = subprocess.Popen(['hostname'], stdout=subprocess.PIPE).communicate()[0].strip()<br />
print hostname, display<br />
</pre><br />
<br />
After you apply the filter, this script will run on each process and output like the following will be printed to the pvserver terminal.<br />
<br />
<pre><br />
Process id: 3 >> vs8 :0.0<br />
Process id: 4 >> vs14 :0.1<br />
Process id: 7 >> vs8 :0.1<br />
Process id: 0 >> vs14 :0.0<br />
Process id: 5 >> vs30 :0.1<br />
Process id: 1 >> vs30 :0.0<br />
Process id: 2 >> vs2 :0.0<br />
Process id: 6 >> vs2 :0.1<br />
</pre><br />
<br />
=== Sharing GPUs Amongst Processes ===<br />
<br />
Visualization clusters that have GPUs are not always built with a one-to-one correspondence between CPUs and GPUs. In fact, industry trends at the time of this writing often lead to having more CPUs than GPUs on each node. For example, our current visualization cluster contains 4 CPUs per node but only 2 GPUs.<br />
<br />
The pvserver is rather dumb about the number of GPUs. It assumes that each process has equal access to local rendering. This means that there is no special mechanism to, for example, coordinate the rendering between pairs of processes.<br />
<br />
This leaves you with two options. Either you can launch one process per GPU and let CPUs go idle or you can launch one process per CPU and let multiple processes send rendering requests to the same GPU. The first option maximizes rendering speed but performs most other operations more slowly. The second option will maximize the speed of filter processing, but will throttle the rendering speed as GPU processors and buses must be shared.<br />
<br />
There was once a time when rendering speed was the bottleneck for visualization. That, however, is no longer the case. The time spent in rendering is minimal, especially when compared to the time spent processing filters. The rendering speed can be throttled quite a bit before making a serious impact on visualization performance, even when running interactively. We thus recommend the second option, sharing GPUs.<br />
<br />
GPUs can be implicitly shared by pointing multiple processes to the same display on the same host. One problem is that many GPUs will not correctly handle two windows on top of each other. The two windows share memory space and clobber each other's memory. To get around this problem, use the --use-offscreen-rendering flag. This creates each rendering context in its own offscreen buffer and guarantees that the memory will not overrun that of another rendering context. As an example, here is the mpirun command you might use on a cluster with 8 nodes, each containing 4 cores (for a total of 32) and 1 GPU.<br />
<br />
mpirun -np 32 ./pvserver -display :0.0 --use-offscreen-rendering<br />
<br />
You can still share GPUs when you have [[#Multiple GPUs Per Node|multiple GPUs per node]]. You use the same techniques as in the section above, except that you allow processes to share a display and add the --use-offscreen-rendering flag to each command. So, for example, if your cluster has 8 nodes, each containing 4 cores and 2 GPUs, the OpenMPI mpirun command could look like the following.<br />
<br />
mpirun -bynode -np 16 ./pvserver -display :0.0 --use-offscreen-rendering : \<br />
-np 16 ./pvserver -display :0.1 --use-offscreen-rendering<br />
<br />
=== Using a Tiled Display ===<br />
<br />
ParaView has the ability to render directly to a tiled display. Furthermore, when rendering to a tiled display ParaView uses a built in library, [http://icet.sandia.gov/ IceT], to perform the rendering in a parallel and efficient manner.<br />
<br />
To put ParaView in a tiled display mode, give pvserver (or pvrenderserver) the X and Y dimensions of the 2D grid of displays that make up the tiled display with the --tile-dimensions-x (or -tdx) and --tile-dimensions-y (or -tdy) arguments. For example, to drive a 3 X 2 tiled display, you launch the server with a command like the following.<br />
<br />
mpirun -np 16 ./pvserver -display localhost:0 -tdx=3 -tdy=2<br />
<br />
There must be at least as many processes in the MPI job as there are tiles in the display; however, adding more processes than tiles is recommended as they will all be used to perform the parallel rendering. In the above example, I arbitrarily picked 16 processes. As few as 6 processes would work, but 32 would be even better if you had that many GPUs.<br />
<br />
ParaView assumes that the first ''T'' processes have their displays connected directly to one of the tiles in a ''T'' tile display. The processes are assigned in row major order from left to right and top to bottom. For example, in a 3 X 2 display the processes are assigned as follows.<br />
<br />
{| align="center" border="1" cellpadding="25" style="width:300px; align=center"<br />
|- align="center"<br />
| 0 || 1 || 2<br />
|- align="center"<br />
| 3 || 4 || 5<br />
|}<br />
<br />
The only way to adjust which processes drive which tiles is to change the machine (host) configuration of your MPI job.<br />
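The row-major assignment can be computed directly from the MPI rank. The following is a minimal sketch (plain POSIX shell, not part of ParaView) for the 3 X 2 example above:<br />

```shell
#!/bin/sh
# Row-major tile assignment: rank r drives the tile in row r / tdx,
# column r % tdx (integer arithmetic).
tdx=3   # tiles across (--tile-dimensions-x)
tdy=2   # tiles down   (--tile-dimensions-y)
rank=0
while [ "$rank" -lt $((tdx * tdy)) ]; do
    row=$((rank / tdx))
    col=$((rank % tdx))
    echo "process $rank -> tile (row $row, column $col)"
    rank=$((rank + 1))
done
```

For example, process 4 lands in row 1, column 1, matching the table above.<br />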
<br />
The tiled display will not be driven correctly if the server is run with the --use-offscreen-rendering flag, since offscreen rendering prevents the server processes from drawing to the X displays that make up the tiles.<br />
<br />
== Pitfalls ==<br />
<br />
Here we capture the most common problems people run into with setting up client/server.<br />
<br />
=== Specifying multiple MPI include directories ===<br />
<br />
You can add multiple directories to the MPI_INCLUDE_PATH CMake variable by separating them with semicolons (<tt>;</tt>). See the [[#Compiling]] section for more details.<br />
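For example, a configure command might look like the following. The directories are hypothetical placeholders; substitute those of your MPI installation. The value must be quoted so the shell does not treat the semicolon as a command separator.<br />

```shell
# Hypothetical MPI include directories, joined with a semicolon.
cmake -DMPI_INCLUDE_PATH="/opt/mpich/include;/opt/mpich/include/mpi2" \
      /path/to/ParaView/source
```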
<br />
=== Specifying multiple MPI libraries ===<br />
<br />
You can use both the MPI_LIBRARY and MPI_EXTRA_LIBRARY CMake variables for specifying MPI libraries. You can also add multiple libraries to MPI_LIBRARY by separating the files with semicolons (<tt>;</tt>). See the [[#Compiling]] section for more details.<br />
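Similarly, a configure command specifying several MPI libraries might look like this sketch (again, all file paths are hypothetical placeholders):<br />

```shell
# Hypothetical MPI library files; MPI_LIBRARY holds a semicolon-separated list.
cmake -DMPI_LIBRARY="/opt/mpich/lib/libmpich.a;/opt/mpich/lib/libfmpich.a" \
      -DMPI_EXTRA_LIBRARY="/usr/lib/libpthread.so" \
      /path/to/ParaView/source
```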
<br />
=== Mangled Mesa ===<br />
<br />
Do not bother to use Mangled Mesa. Compiling a mangled version of Mesa is typically more trouble than it is worth and is incompatible with the [[#OSMesa support|OSMesa support]] instructions given on this page.<br />
<br />
=== ParaView does not scale ===<br />
<br />
Many users have reported to the mailing list that ParaView failed as they tried to scale up the data size on their server. First, let me assure you that ParaView’s parallel visualization and rendering are efficient and scalable. We (at Sandia National Laboratories) have been able to use ParaView to visualize 6 billion cell grids and have clocked rendering speeds of over 8 billion polygons per second.<br />
<br />
When a user reports that ParaView is not scaling to large data sets, it is almost always because the server is misconfigured for parallel rendering. The problem is often misinterpreted as a scaling problem because ParaView will use serial rendering for small data and parallel rendering for large data. So when the server is misconfigured and cannot perform parallel rendering, it sometimes misbehaves when the data gets big enough to use parallel rendering.<br />
<br />
Parallel rendering is built right into ParaView. There is nothing special you have to compile to set this up. However, to perform parallel rendering (or any rendering, for that matter), the ParaView server needs to have an OpenGL context. This is usually done through [[#X Connections|X Connections]]. However, most parallel programs have no need to open an X window, so most clusters are not configured to allow X connections. For help on how to configure your cluster, see the [[#X Connections|X Connections]] section.<br />
<br />
Before reporting scalability problems with ParaView, please verify that parallel rendering is working correctly. You can do that with the following procedure.<br />
<br />
# Open the Settings dialog box (Edit -> Settings) and go to the Server tab.<br />
# Make sure the checkbox next to Remote Render Threshold is checked, and move the associated slider all the way to the left (0 MBytes).<br />
# Make sure the checkbox next to Subsample Rate is checked and move the slider to the right (4 Pixels or more).<br />
# Create or load any data (the cone source works fine) and rotate the data with the mouse. While rotating, the image should look pixelated (blocky). When you let go of the mouse, the full resolution picture is restored.<br />
<br />
The Remote Render Threshold option tells ParaView to always use parallel rendering. The Subsample Rate tells ParaView to render smaller images during interaction to make the GUI more responsive. This latter feature is very noticeable when the subsample rate is high, and it is only used during parallel rendering. So if you are not seeing the subsample effect (or if something clearly went wrong before that), then your parallel rendering is not working.<br />
<br />
=== Reverse connection does not work ===<br />
<br />
A "standard" connection has the server (<tt>pvserver</tt>) listen for a connection from the client (<tt>paraview</tt>). ParaView also has the ability to perform a "reverse connection" where the client waits for the server. Creating a reverse connection is straightforward. Simply use the <tt>--reverse-connection</tt> (<tt>-rc</tt>) command line option on the server and specify a reverse connection in the ParaView GUI (you will have to add a new server in the Choose Server dialog box; see [[Starting the server]]). If you can get the standard connection to work but not the reverse connection, one of the following may be occurring.<br />
<br />
# A firewall or some other network configuration may be preventing you from connecting from server to client. To test this, try swapping the location of the server and client and test the forward connection again.<br />
# Make sure that both the client and the server are set up to do a reverse connection. Make sure that the server is being launched with the reverse connection flag and that the GUI is configured to connect with a reverse connection.<br />
# Make sure that the client is started first and ready to receive a connection before starting the server. When doing a reverse connection, the client must already be started and waiting for a connection before starting the server. If you try to start the server before the client is ready, it may fail to connect and then give up before the client starts waiting for the connection. If you are starting the server from the client GUI, this should not be an issue.<br />
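Putting these steps together, a typical reverse-connection launch looks like the following sketch. The host name is a placeholder for the machine running the <tt>paraview</tt> client, which must already be started and waiting for the connection.<br />

```shell
# Launch the server only after the client is waiting; --client-host (-ch)
# names the machine where the paraview client is listening.
mpirun -np 4 ./pvserver --reverse-connection --client-host=myworkstation.example.com
```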
<br />
=== Cannot launch paraview with mpirun ===<br />
<br />
Occasionally users report problems with trying to run the ParaView client (<tt>paraview</tt>) with mpirun like this:<br />
<br />
mpirun paraview [args]<br />
<br />
'''Don't do this!''' The ParaView client is a serial application. It is not meant to be run under mpirun. Only run the server (<tt>pvserver</tt>) with mpirun.<br />
<br />
=== The client only connects to one node on the server ===<br />
<br />
Users sometimes ask how to get the client to make a socket connection to every process on the server. You don't. ParaView is not supposed to run like that. When running in client/server mode, ParaView connects to process 0 of the server, and nothing else. All communication with the client goes through process 0, and process 0 of the server uses the MPI interconnect to pass data to and from the other nodes in the server.<br />
<br />
ParaView is implemented in this way for convenience and scalability. It is not scalable to have every process in the server connect to the client because all communication would eventually have to go through the same network interface on the client side. Also, the MPI interconnect on the server is almost always faster than the socket communication between client and server.<br />
<br />
=== Server processes always have 100% CPU usage ===<br />
<br />
It has often been observed that when running <tt>pvserver</tt> under mpirun, many of the processes that are launched for the job use 100% of a CPU, even when they should be sitting idle. The most common pattern is for one process (the root process) to actually be idle while the rest are constantly running.<br />
<br />
This observed behavior is due to the implementation of the MPI layer. OpenMPI and MPICH, the two most common implementations we encounter, both exhibit this behavior. In these implementations when a process is waiting for a message (which is the case when <tt>pvserver</tt> is supposed to be sitting idle waiting for a message), the process actually sits in a busy wait loop. (The root process is the single exception as it is waiting for a message on a socket, not an MPI message.)<br />
<br />
This behavior is intentionally added by the MPI library developers (not so much the ParaView developers) for efficiency. The general idea is to keep each process "attached" to the core it is running on. Once a process goes idle, it is likely to be scheduled off that core by the OS process scheduler and then scheduled back on to a different core. This can have detrimental effects on things like memory access as each core can have its own cache hierarchy.<br />
<br />
The parameters for controlling this behavior vary based on the MPI implementation. The following links provide some documentation on turning off this behavior for OpenMPI.<br />
<br />
* http://www.open-mpi.org/faq/?category=running#oversubscribing<br />
* http://www.open-mpi.org/faq/?category=running#force-aggressive-degraded<br />
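As a concrete example, OpenMPI can be switched from aggressive busy-waiting to its yielding "degraded" mode with an MCA parameter. This syntax applies to OpenMPI 1.x; consult your version's documentation to confirm.<br />

```shell
# Ask OpenMPI to yield the CPU when a process is idle instead of spinning.
mpirun --mca mpi_yield_when_idle 1 -np 8 ./pvserver
```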
<br />
<font color="red">If you have information on disabling the busy wait using a different MPI implementation, please contribute back by documenting it here.</font><br />
<br />
=== Server reports: Failed to set up server socket ===<br />
<br />
You might get an error like the following<br />
<br />
<pre><br />
Listen on port: 11111<br />
ERROR: In /Users/kmorel/src/ParaView/Servers/Common/vtkProcessModuleConnectionManager.cxx, line 191<br />
vtkProcessModuleConnectionManager (0xb380540): Failed to set up server socket.<br />
</pre><br />
<br />
The error is meant to tell you that there is already some process using port 11111. If you are on Linux or a system that has similar command line tools (like Mac OS X), you can confirm this with<br />
<br />
$ netstat -na | grep 11111<br />
<br />
or alternatively<br />
<br />
$ lsof -i:11111<br />
<br />
The latter will tell you the name of the process that is blocking the port (the first won't). If that name is cropped and not unique, try adding "+c15" to the lsof command line. Chances are there is still an old pvserver process running and waiting on that socket. Either kill the process that blocks the port or use a different port yourself with<br />
<br />
$ pvserver --server-port=11112<br />
<br />
You can get the same problem on the client side when doing a reverse connection. The solution is the same: free the port or choose a different one.<br />
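The port check can also be scripted before launching the server. The following is a sketch assuming a POSIX shell with <tt>lsof</tt> installed; it is not a ParaView feature.<br />

```shell
#!/bin/sh
# Find the first port at or above 11111 with no listener and report it.
# If lsof is unavailable, the loop exits immediately and 11111 is reported.
port=11111
while lsof -i:"$port" >/dev/null 2>&1; do
    port=$((port + 1))
done
echo "launch with: pvserver --server-port=$port"
```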
<br />
== Acknowledgements ==<br />
<br />
Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.<br />
<br />
{{ParaView/Template/Footer}}</div>Lokman