Overview of the different analysis routines
For Volumetric Velocimetry, two primary analysis strategies exist: Particle Tracking Velocimetry (PTV) based routines and Reconstruction based routines. In addition, a hybrid approach can be used which combines the two.
The PTV based technique detects each particle, determines its centroid in 3D space, and follows these 3D centroid positions over time; the result is a set of particle trajectories. The particle centroids are typically calculated by triangulation from the individual cameras involved in the measurement.
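To illustrate the tracking part of this approach, the sketch below links triangulated 3D centroids of two consecutive time steps by a simple nearest-neighbour search. It is a minimal example in Python (NumPy/SciPy); the function name, units and the search-radius criterion are illustrative assumptions, not DynamicStudio's implementation.

    import numpy as np
    from scipy.spatial import cKDTree

    def link_centroids(centroids_t0, centroids_t1, max_displacement):
        """Link triangulated 3D particle centroids of two consecutive time steps.

        centroids_t0, centroids_t1 : (N, 3) arrays of 3D positions (e.g. in mm)
        max_displacement           : search radius limiting plausible matches
        Returns a list of (index_t0, index_t1) pairs forming trajectory segments.
        """
        tree = cKDTree(centroids_t1)
        links = []
        for i, p in enumerate(centroids_t0):
            dist, j = tree.query(p, k=1)       # nearest candidate in the next time step
            if dist <= max_displacement:       # reject implausibly large jumps
                links.append((i, j))
        return links

    # Example with two synthetic particles
    t0 = np.array([[0.0, 0.0, 0.0], [5.0, 1.0, 2.0]])
    t1 = np.array([[0.2, 0.1, 0.0], [5.1, 1.2, 2.1]])
    print(link_centroids(t0, t1, max_displacement=1.0))   # [(0, 0), (1, 1)]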
Reconstruction
Reconstruction means reproducing the particles' gray values in a 3D voxel space without any prior knowledge of the individual particles.
The Reconstruction based algorithms first need to reconstruct the gray value distribution of the particles in 3D for each time step of the measurement. The reconstruction algorithm creates a 3D image consisting of gray values throughout the extent of the measured volume. This 3D image is known as a voxel space, where a voxel can be thought of as a 3-dimensional version of a pixel. Similar to 2D PIV, individual particle locations are not known or tracked in this voxel space; reconstruction methods simply redistribute the gray value information of the particles as it was recorded during acquisition.
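As a mental model, a voxel space can be thought of as a 3D array of gray values plus a mapping from voxel indices to physical coordinates. The minimal sketch below makes this explicit; all names, dimensions and the voxel size are purely illustrative.

    import numpy as np

    # Illustrative measurement volume of 50 x 50 x 20 mm sampled with 0.2 mm voxels
    voxel_size = 0.2                               # mm per voxel
    shape = (250, 250, 100)                        # number of voxels in x, y, z
    origin = np.array([-25.0, -25.0, -10.0])       # corner of voxel (0, 0, 0) in mm

    voxel_space = np.zeros(shape, dtype=np.float32)  # one gray value per voxel

    def voxel_to_physical(ix, iy, iz):
        """Return the physical (x, y, z) position of a voxel centre in mm."""
        return origin + (np.array([ix, iy, iz]) + 0.5) * voxel_size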
In order to reconstruct a voxel space, DynamicStudio offers various techniques, like the Minimum Line of Sight (MinLos) or the Simultaneous Multiplicative Algebraic Reconstruction Technique (SMART).
Analysis
For all reconstructed voxel fields, the velocity vectors must be calculated in a separate analysis step. Typically 3D Least Squares Matching is used, but 3D Particle Tracking Velocimetry is possible as well.

3D Least Squares Matching (3D LSM) calculates an average velocity for an interrogation volume (cuboid) of voxels; this interrogation volume is analogous to an interrogation area in 2D PIV. In contrast to cross-correlation, DynamicStudio uses iterative steps in which the cuboid is translated, rotated, sheared, and scaled until the first and the second time step match one another. As a result, 3D LSM yields not only the translation but also the full velocity gradient matrix as a direct result of the velocity analysis. The result of 3D LSM is a set of vectors and gradients for each cuboid on a regular Eulerian grid.
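The relation between the matched affine cuboid transformation and the resulting velocity and velocity gradient can be illustrated as follows. This is a simplified sketch under the assumption that the affine parameters A and t have already been estimated; it is not the actual 3D LSM solver.

    import numpy as np

    def velocity_and_gradient(A, t, dt):
        """Derive velocity and velocity gradient from a matched affine mapping.

        The cuboid content of the first time step is mapped onto the second one
        by x' = A @ x + t (x measured from the cuboid centre). The displacement
        field inside the cuboid is then d(x) = (A - I) @ x + t, so the centre
        velocity is t / dt and the velocity gradient tensor is (A - I) / dt."""
        velocity = t / dt
        gradient = (A - np.eye(3)) / dt
        return velocity, gradient

    # Example: translation of 0.3 mm in x plus a small shear du/dy, dt = 1 ms
    A = np.array([[1.00, 0.02, 0.00],
                  [0.00, 1.00, 0.00],
                  [0.00, 0.00, 1.00]])
    t = np.array([0.3, 0.0, 0.0])
    v, dudx = velocity_and_gradient(A, t, dt=1e-3)
    print(v)      # [300.   0.   0.]  -> 300 mm/s in x
    print(dudx)   # dudx[0, 1] = 20.0 -> shear du/dy of 20 1/s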
3D TOMO Particle Tracking Velocimetry (3D TOMO PTV) is a hybrid method that uses a reconstruction technique as well as a tracking algorithm to calculate Lagrangian trajectories. The algorithm therefore needs to identify individual particles in the voxel space in a separate step after the reconstruction. The detected particle centroid positions are then tracked over time to generate the particles' trajectories.
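A minimal illustration of the particle detection step in the voxel space could look like the sketch below; the threshold, helper names and the connected-component approach are assumptions for illustration, not DynamicStudio's internals.

    import numpy as np
    from scipy import ndimage

    def detect_particles(voxel_space, threshold):
        """Detect particle candidates in a reconstructed voxel space and return
        their intensity-weighted centroids in voxel coordinates."""
        mask = voxel_space > threshold                     # keep bright voxels only
        labels, n = ndimage.label(mask)                    # group connected voxels into blobs
        centroids = ndimage.center_of_mass(voxel_space, labels, range(1, n + 1))
        return np.array(centroids)                         # (n, 3) array of (x, y, z) indices

    # The centroids detected in consecutive time steps can then be linked into
    # trajectories, for example with the nearest-neighbour sketch shown further above.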
Reconstruction methods

Minimum Line of Sight reconstruction (MinLos) is the fastest method in DynamicStudio to reconstruct a voxel space. It is suited for low to medium seeding densities (max. 0.025 ppp for a 4-camera set-up). With this technique, the gray value of each voxel is determined by the minimum gray value observed by any of the individual cameras. In the example shown here, it can be seen that a gray value is reconstructed at every point in the voxel space where two lines of sight cross one another. As the example also shows, gray values can be reconstructed at locations where no real, physical particles were present during image acquisition. The resulting artificial gray values are so-called ghost intensities or ghost particles.
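The core idea can be sketched in a few lines: each voxel receives the minimum gray value that the cameras observe along their lines of sight through it. In the sketch below, project() is a placeholder for the calibrated camera model, and the naive looping is kept only for clarity.

    import numpy as np

    def minlos_reconstruction(images, project, voxel_centres):
        """Simplified Minimum Line of Sight reconstruction.

        images        : list of 2D camera images (gray values)
        project       : project(cam, xyz) -> (row, col), the pixel hit by the line
                        of sight from camera cam through the 3D point xyz
                        (placeholder for the calibrated camera model)
        voxel_centres : (N, 3) array of physical voxel-centre positions
        Returns the reconstructed gray value of every voxel."""
        gray = np.full(len(voxel_centres), np.inf)
        for cam, img in enumerate(images):
            for i, xyz in enumerate(voxel_centres):
                r, c = project(cam, xyz)
                if 0 <= r < img.shape[0] and 0 <= c < img.shape[1]:
                    # keep the minimum gray value observed by any camera
                    gray[i] = min(gray[i], img[int(r), int(c)])
                else:
                    gray[i] = 0.0        # voxel not seen by this camera
        return gray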

The Simultaneous Multiplicative Algebraic Reconstruction Technique (SMART) is a reconstruction technique based on the MART presented by G. Elsinga et al. and further developed by C. Atkinson and J. Soria. Here, computational cost and accuracy are well balanced even for high seeding densities (max. 0.05 ppp for a 4-camera set-up). It is an iterative technique, and typically MinLos is used to calculate a first guess for the voxel space. Afterwards, the gray values of each pixel from every camera are projected into the voxel space via a weighting matrix, where the different values are multiplied with one another. After this process, the gray value information is projected back onto a synthetic camera sensor. The synthetic images from the back projection are compared with the original images, and adjustments for the next iteration of projections can be made.
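The multiplicative character of the update can be illustrated with the simplified sketch below, a generic MART/SMART-style iteration on a stacked weighting matrix; the exact update formula and relaxation used in DynamicStudio may differ.

    import numpy as np

    def smart_iteration(E, W, I, mu=1.0):
        """One simplified multiplicative update in the spirit of SMART.

        E  : (Nvox,) current voxel intensities (e.g. the MinLos first guess)
        W  : (Npix, Nvox) weighting matrix relating voxels to camera pixels
        I  : (Npix,) recorded gray values of all camera pixels stacked together
        mu : relaxation exponent
        """
        P = W @ E                                    # back-projection onto synthetic sensors
        # Compare the synthetic images with the recorded ones pixel by pixel
        ratio = np.divide(I, P, out=np.ones_like(I, dtype=float), where=P > 0)
        ratio = np.clip(ratio, 1e-6, None)           # avoid log(0)
        # Multiplicative correction per voxel: weighted geometric mean of the ratios
        weight_sum = np.maximum(W.T @ np.ones_like(ratio), 1e-12)
        log_corr = (W.T @ np.log(ratio)) / weight_sum
        return E * np.exp(mu * log_corr)

    # Typical use: start from a MinLos first guess and iterate a few times
    # E = minlos_first_guess
    # for _ in range(5):
    #     E = smart_iteration(E, W, I)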
Volumetric Calibration Refinement
How it works
The refinement follows a simple correlation based approach (the disparity estimation is illustrated by the sketch after this list):
- Perform a standard reconstruction of the voxel space.
- Divide the voxel space into sub-volumes.
- Project each sub-volume back onto the camera sensors.
- Cross correlate the back-projected image with the corresponding initial (recorded) image.
- Calculate the resulting disparity vector.
- Correct the camera calibrations accordingly.
Because the method is correlation based it is very robust, and because it works on the sensor plane it is fast.
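The disparity estimation per sub-volume can be illustrated with a standard FFT-based cross correlation, as in the sketch below. This is a generic example, not DynamicStudio's implementation; backprojected and recorded stand for the back-projected and the originally recorded image patch of the same sub-volume on one camera.

    import numpy as np

    def disparity_vector(backprojected, recorded):
        """Estimate the (row, col) disparity between a back-projected sub-volume
        image and the originally recorded image via FFT-based cross correlation."""
        a = backprojected - backprojected.mean()
        b = recorded - recorded.mean()
        corr = np.fft.irfft2(np.fft.rfft2(a).conj() * np.fft.rfft2(b), s=a.shape)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Interpret the correlation peak as a signed shift (FFT wrap-around)
        shift = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
        return np.array(shift)    # this disparity is then used to correct the calibration

    # Example: the recorded patch is shifted by (2, -3) pixels w.r.t. the back-projection
    rng = np.random.default_rng(0)
    backprojected = rng.random((64, 64))
    recorded = np.roll(backprojected, shift=(2, -3), axis=(0, 1))
    print(disparity_vector(backprojected, recorded))    # [ 2 -3]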
Comparing results
The effect of the refinement can be judged by running a VV reconstruction and 3D-LSM analysis of the same (simulated) data with different calibrations and comparing the resulting velocities:
- Green = 3D-LSM from the simulated data (reference)
- Red = 3D-LSM from the initial calibration; these vectors are typically noisier and less consistent
- Yellow = 3D-LSM from the refined calibration
With the refined calibration there are fewer outliers, and the vectors coincide very well with those obtained from the simulated volume.