Overview

VolumeViz rendering is done on the GPU. This means that some fairly complex algorithms are implemented in the OpenGL Shading Language (GLSL), compiled and linked by the graphics driver, and executed on the GPU in response to commands sent from the CPU. VolumeViz is effectively using the GPU as a very high-performance, highly parallel computer. Programs written in GLSL are commonly called “shaders”, and their components “shader functions”, because the first programmable GPUs only allowed simple modifications of the final pixel color (technically the fragment color, in OpenGL terminology). Today SoVolumeRender runs a full ray-casting program on the GPU and uses the result of that computation to assign colors to pixels. The ray-caster takes care of generating rays, sampling along each ray, classifying voxels and combining the final color and opacity values. However, the various steps in classification, color lookup, lighting, edge enhancement, etc., can still be usefully thought of as shader functions, and VolumeViz allows these functions to be replaced, as needed, to implement specific application requirements. Similarly, for the slice-type primitives, rendering is initiated by sending some geometry to the GPU, but the various steps in classifying each intersected voxel can also be replaced, as needed, by the application. As much as possible, the classification shader functions are the same for slice rendering and volume rendering.
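
Conceptually, the ray-casting performed by SoVolumeRender can be pictured as the following GLSL fragment shader. This is only a schematic sketch of the general technique (sampling along the ray, classification, front-to-back compositing with early ray termination), not the actual VolumeViz implementation; the names volumeData, transferFunction, rayStart, rayDir, stepSize and numSteps, and the use of plain OpenGL textures, are placeholders for what VolumeViz manages internally.

#version 330 core

uniform sampler3D volumeData;        // Volume data as a plain 3D texture (placeholder)
uniform sampler2D transferFunction;  // Color map: data value -> RGBA (placeholder)
uniform float stepSize;              // Distance between samples along the ray
uniform int   numSteps;              // Maximum number of samples per ray

in  vec3 rayStart;                   // Ray entry point, in texture coordinates
in  vec3 rayDir;                     // Normalized ray direction
out vec4 fragColor;

void main()
{
  vec4 result = vec4( 0.0 );         // Accumulated color and opacity
  for ( int i = 0; i < numSteps; ++i )
  {
    // Sample along the ray
    vec3  pos   = rayStart + float( i ) * stepSize * rayDir;
    float value = texture( volumeData, pos ).r;
    // Classify: map the data value to a color and opacity
    vec4  color = texture( transferFunction, vec2( value, 0.5 ) );
    // Combine: front-to-back compositing
    result.rgb += ( 1.0 - result.a ) * color.a * color.rgb;
    result.a   += ( 1.0 - result.a ) * color.a;
    if ( result.a > 0.99 )           // Early ray termination
      break;
  }
  fragColor = result;
}

In the actual VolumeViz ray-caster, steps inside this loop such as classification and color lookup are exposed as shader functions that the application can replace.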

Why write custom application shader functions? One of the most important uses of application-written shader functions is to combine values or colors from multiple volumes. In seismic applications this is often called multi-attribute co-blending. Applications can also use shader functions to do computation on the GPU. Although this is not as convenient as using CUDA or OpenCL, it can be very effective for tasks directly related to visualization. For example, applying a filter to the volume data or computing a seismic attribute can be done “on the fly”, without needing to use limited memory to store the entire transformed volume. Applications can also replace standard VolumeViz shader functions to implement custom color lookup or additional rendering effects. For example, medical applications may wish to implement a multi-dimensional transfer function.

VolumeViz shader support is built on the core Open Inventor support for GPU shader programs. If you are not already familiar with these nodes, you may want to read that chapter of the Users Guide before proceeding. In this section we will briefly review how to use shaders in Open Inventor, but mainly focus on the aspects that are specific to VolumeViz.

It's possible to completely replace the VolumeViz shaders, but that would effectively mean implementing volume rendering “from scratch”, a large project. Instead, VolumeViz provides:

  • A shader framework

The VolumeViz shader framework defines the steps in the rendering pipeline and has standard "slots" that can be replaced by application shader functions. This allows the application to change one step in the pipeline, for example blending two (or more) volumes together, and still take advantage of all the other built-in features such as lighting, clipping and rendering effects.

  • A shader API

The VolumeViz shader API is a library of GLSL shader utility functions that application shaders can use to get information about the volume, fetch data, look up colors, apply rendering effects and perform other common operations. This API is documented in the Open Inventor Reference Manual.

  • A virtual address space for voxels

The VolumeViz memory manager allows a shader to access data anywhere in the volume, not just the voxel currently being rendered. In other words, shaders can ignore tile “boundaries” and the fact that volume data is managed as tiles. This makes it easy to implement algorithms that require “neighbor” voxels (see the filtering sketch later in this section).

  • A unified data access model

This allows, in many cases, a shader function to be implemented once and then used for both slice primitives and volume rendering. (However, note that the shader must still be loaded twice, once for slice primitives and once for volume rendering.)

  • Support for "header" files in GLSL

GLSL provides some preprocessor directives, but does not support "#include". However, when you use Open Inventor to load a shader function, you can use this notation:

//!oiv_include <VolumeViz/vvizCombine_frag.h>

This notation works like #include in C and C++. It makes it straightforward to manage the type and function declarations needed to use the VolumeViz shader API, as well as application functions.

To illustrate the power and convenience of the VolumeViz shader framework and API, consider the example of multi-volume co-blending. By this we mean rendering two or more volumes simultaneously by blending the color values defined by each volume's transfer function. The co-blending algorithm can be defined in a relatively simple GLSL fragment shader function. It is relatively simple because the VolumeViz shader framework automatically synchronizes loading of the volumes into data textures on the GPU and loads all the color maps into a single 2D texture. Using the VolumeViz shader API, the custom shader function can fetch a value from each of the volumes, look up the corresponding colors and blend the resulting colors in just a few lines of code. A basic co-blending shader function looks like this:

//!oiv_include <VolumeViz/vvizGetData_frag.h>          // The shader API lets you include
//!oiv_include <VolumeViz/vvizTransferFunction_frag.h> // declarations of shader functions

uniform VVizDataSetId data1;   // Data texture of 1st volume
uniform VVizDataSetId data2;   // Data texture of 2nd volume
uniform float blendFactor;     // Application specified blend factor

// Method in VolumeViz shader framework to override for custom color computation
vec4
VVizComputeFragmentColor( VVIZ_DATATYPE vox, vec3 coord )
{
  VVIZ_DATATYPE value1 = VVizGetData( data1, coord );      // Value from 1st volume
  vec4 color1 = VVizTransferFunction( value1, 0 );         // Color for 1st volume from TF 0
  VVIZ_DATATYPE value2 = VVizGetData( data2, coord );      // Value from 2nd volume
  vec4 color2 = VVizTransferFunction( value2, 1 );         // Color for 2nd volume from TF 1
  color2.rgb = mix( color1.rgb, color2.rgb, blendFactor ); // Linear blend of RGB colors
  color2.a = max( color1.a, color2.a );                    // Use the larger opacity value
  return color2;
}
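
The virtual address space described earlier makes it just as easy to access neighboring voxels. As an illustration only, the sketch below applies a simple box filter “on the fly” by averaging each voxel with its six axis-aligned neighbors before looking up the color. It assumes a scalar volume (so VVIZ_DATATYPE is a float) and a hypothetical application-supplied uniform, voxelStep, holding the size of one voxel in normalized texture coordinates; this is a sketch, not part of the standard VolumeViz shaders.

//!oiv_include <VolumeViz/vvizGetData_frag.h>
//!oiv_include <VolumeViz/vvizTransferFunction_frag.h>

uniform VVizDataSetId data1;   // Data texture of the volume
uniform vec3 voxelStep;        // Hypothetical: size of one voxel in texture coordinates

// Override the same framework slot to average each voxel with its 6 neighbors
vec4
VVizComputeFragmentColor( VVIZ_DATATYPE vox, vec3 coord )
{
  // VVizGetData can fetch any voxel, even when it lies in a different tile
  float sum = float( VVizGetData( data1, coord ) );
  sum += float( VVizGetData( data1, coord + vec3(  voxelStep.x, 0.0, 0.0 ) ) );
  sum += float( VVizGetData( data1, coord + vec3( -voxelStep.x, 0.0, 0.0 ) ) );
  sum += float( VVizGetData( data1, coord + vec3( 0.0,  voxelStep.y, 0.0 ) ) );
  sum += float( VVizGetData( data1, coord + vec3( 0.0, -voxelStep.y, 0.0 ) ) );
  sum += float( VVizGetData( data1, coord + vec3( 0.0, 0.0,  voxelStep.z ) ) );
  sum += float( VVizGetData( data1, coord + vec3( 0.0, 0.0, -voxelStep.z ) ) );
  return VVizTransferFunction( sum / 7.0, 0 );  // Color for the filtered value from TF 0
}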

The rest of this section will explain the shader framework and shader API that allow these shader functions to work. For now, notice that the application did not need to implement a complete volume rendering fragment shader program. It was only necessary to override the VVizComputeFragmentColor function in the VolumeViz shader framework. As a result, all the VolumeViz rendering features, such as lighting and clipping, are still available.