Digital Image Correlation (DIC) is an optical, non-contact measurement technique used to determine the shape (contour), displacement and strain of test objects in experimental solid mechanics applications in materials testing. The technique can also be used to determine kinematic quantities of discrete points, such as rotations, velocities and accelerations.
As the name implies, DIC works on the principle of correlation, whereby a series of digital images is acquired of a deforming object surface throughout a materials test. A reference image, which represents the original state of the specimen (surface) shape, is required for a measurement. The deformation or displacement of the specimen (surface) shape is then determined by comparing a series of measurement images (the measurement series) against the reference image.
With two (or more) cameras viewing the test object from different (2D) perspectives, each object point on the test object surface can be reconstructed into 3D using a projection calibration. This effectively recovers the same (one) object point in both 2D image measurement series.
This process is known as stereo-triangulation, also referred to as stereo-correlation. The lines (rays) from the two cameras to a point imaged on both camera imaging planes are intersected, and the intersection is resolved as an object point with an absolute 3D-space position using a projection calibration model, which is obtained in an independent process that calibrates the measurement test volume. This general principle is akin to human vision in the sense that two imaging sensors, or eyes, are required to resolve and perceive a 3D environment.
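The stereo-triangulation step described above can be sketched with a standard linear (DLT-style) triangulation: given the projection matrices of two calibrated cameras and the same point's pixel coordinates in both images, the intersecting rays are resolved into one 3D object point. The matrices and coordinates below are hypothetical, noise-free values chosen purely for illustration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation: intersect the rays from two calibrated
    cameras to recover the 3D position of one object point.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: last right singular vector of A
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical cameras: identity pose, and a 1-unit baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both cameras, then recover it
X_true = np.array([0.2, -0.1, 5.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # → [ 0.2 -0.1  5. ]
```

With noise-free synthetic data the intersection is exact; with real images, lens distortion and pixel noise make this a least-squares (best-fit) intersection instead.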
Using a stereoscopic camera setup, the surface contour (shape) is measured absolutely. The displacement is then resolved by calculating the relative difference between the contour states over the measurement series.
Finally, the surface strains can be retrieved from these two quantities (shape & displacement).
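The final step, recovering strain from the displacement field, can be illustrated with the small-strain relations (strain components as spatial gradients of displacement). The synthetic displacement field and grid spacing below are assumptions chosen so the expected strains are known in advance.

```python
import numpy as np

# Synthetic in-plane displacement field on a uniform grid
h = 0.1  # grid spacing in mm (hypothetical)
y, x = np.mgrid[0:50, 0:50] * h
u = 0.002 * x      # uniform 0.2 % stretch in x
v = -0.0006 * y    # Poisson-like contraction in y

# Small-strain tensor components from the displacement gradients:
# eps_xx = du/dx, eps_yy = dv/dy, eps_xy = (du/dy + dv/dx) / 2
du_dy, du_dx = np.gradient(u, h)   # np.gradient returns axis-0, axis-1 derivatives
dv_dy, dv_dx = np.gradient(v, h)
eps_xx = du_dx
eps_yy = dv_dy
eps_xy = 0.5 * (du_dy + dv_dx)

print(eps_xx.mean(), eps_yy.mean(), eps_xy.mean())  # → 0.002 -0.0006 0.0
```

Commercial DIC codes typically smooth the displacement field before differentiation, since numerical gradients amplify measurement noise.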
Loading conditions for test objects can be mechanical, thermal, vibratory or pressure-based, allowing an array of tests to be completed, such as:
› quasi-static mechanical testing (e.g. tension, compression, bending, torsion, shear)
› fatigue & fracture mechanics testing
› high-strain rate, impact & ballistic testing
› vibration analysis, and
› thermo-mechanical testing
The correlation process can only function through the recognition of a distinct (unique) pattern on the test object surface. The camera digitizes the perceived (stochastic) speckle pattern at each pixel as discrete grey-level intensity values. Through the definition of a local window group of pixels (called a subset or facet), a pattern feature (like a fingerprint) can be identified in the reference image.
Each reference pattern feature is then searched for in all subsequent images of the measurement series; when found, it is tracked and resolved as a relevant change in feature shape (the correlation evaluation).
As the test object deforms, the pattern on its surface changes. To find and track the same reference subset/facet pattern throughout a measurement series, a search (minimization) function based on grey-level intensity values is used.
Effectively, a search area around the reference (undeformed) subset is scanned over the range of possible locations where the center of the deformed (matching) subset could lie. A subset is matched when the minimization-function value is lowest among all candidate locations in the search area.
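The search-and-minimize step can be sketched as a brute-force scan that compares grey-level intensities via a sum-of-squared-differences (SSD) cost; the subset size, search radius and synthetic speckle images below are illustrative assumptions, and real DIC codes use normalized criteria (e.g. ZNSSD) that are robust to lighting changes.

```python
import numpy as np

def find_subset(ref, cur, top, left, size, search):
    """Locate a reference subset in the current image by scanning a
    search area and minimizing the SSD of grey-level intensities.
    Returns the best integer (dy, dx) shift of the subset center."""
    template = ref[top:top + size, left:left + size].astype(float)
    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r0, c0 = top + dy, left + dx
            if r0 < 0 or c0 < 0 or r0 + size > cur.shape[0] or c0 + size > cur.shape[1]:
                continue  # candidate subset falls outside the image
            cand = cur[r0:r0 + size, c0:c0 + size].astype(float)
            cost = np.sum((template - cand) ** 2)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# Synthetic stochastic speckle pattern; the "deformed" image is the
# reference shifted rigidly by (3, -2) pixels.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
cur = np.roll(ref, (3, -2), axis=(0, 1))
print(find_subset(ref, cur, 20, 20, 15, 5))  # → (3, -2)
```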
Once the center pixel of the matched subset has been identified through the search (minimization) function, the movement and deformation of the matched (deformed) subset relative to the reference (undeformed) subset is described using a (subset) shape function, and the grey-level intensity values within the subset/facet are interpolated to allow sub-pixel measurements to be resolved.
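The grey-level interpolation that enables sub-pixel resolution can be illustrated with the simplest scheme, bilinear interpolation, which evaluates intensity at a non-integer pixel location; production DIC codes typically use higher-order (e.g. bicubic spline) interpolants for accuracy, so this is only a minimal sketch.

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinearly interpolate the grey level at a non-integer (y, x)
    location, enabling sub-pixel displacement resolution.
    Assumes (y, x) lies strictly inside the image interior."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    fy, fx = y - y0, x - x0  # fractional offsets within the pixel cell
    return ((1 - fy) * (1 - fx) * img[y0, x0]
            + (1 - fy) * fx * img[y0, x0 + 1]
            + fy * (1 - fx) * img[y0 + 1, x0]
            + fy * fx * img[y0 + 1, x0 + 1])

img = np.array([[10.0, 20.0],
                [30.0, 40.0]])
print(bilinear(img, 0.5, 0.5))  # → 25.0 (average of the four neighbours)
```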
In order to convert a 2D pixel model into a 3D physical model, a process known as "calibrating the projection" must be completed. This involves acquiring multiple images of a calibration object, more commonly called a calibration target, in various perspective views (i.e. poses and orientations).
A calibration target is a reference object consisting of a defined grid pattern manufactured onto a rigid surface. The target must be of a similar size to the required measurement FoV. The projection calibration uses a contrast-search algorithm on both camera perspectives (a pair of images) to identify the grid intersection points (between the white and black squares). Each point identified during the process is incorporated into a volumetric model of the calibrated virtual system.
Since the calibration target has a set of absolute properties (i.e. the grid spacing), the identified points within the point cloud are fitted to a minimized (best-fit) projection model.
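The best-fit principle can be reduced to a toy example: with the target's absolute grid spacing known, a single projection parameter (here a focal length in pixels, under an idealized pinhole model with the target parallel to the image plane) is estimated by least squares from the identified grid points. A real projection calibration fits a full model (intrinsics, distortion and per-pose extrinsics) over many views; the camera distance and focal length below are hypothetical.

```python
import numpy as np

# Known calibration-target geometry: a 4x4 grid with 10 mm spacing
spacing = 10.0
grid = np.array([[i * spacing, j * spacing, 0.0]
                 for i in range(4) for j in range(4)])

# Hypothetical pinhole camera, target plane at Z = 500 mm, f = 1200 px
Z, f_true = 500.0, 1200.0
pix = f_true * grid[:, :2] / Z  # idealized perspective projection

# Least-squares (best-fit) estimate of f from the identified points:
# minimize sum || pix - f * grid_xy / Z ||^2 over the scalar f
A = (grid[:, :2] / Z).ravel()
b = pix.ravel()
f_est = (A @ b) / (A @ A)
print(f_est)  # → 1200.0
```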