
Improved least-squares method for phase-to-height relationship in fringe projection profilometry



The height-estimating function is very important for three-dimensional (3-D) measurement systems based on a digital light processing (DLP) projector and a camera. Sinusoidal fringe patterns from the projector are projected onto an object, and the phase at each measuring point is calculated from the camera image. The least-squares method (LSM) and the look-up table (LUT) method are typically used to obtain the phase-to-height relationship.


The merit of the LSM is that a single equation is obtained from geometric analysis, but it has difficulty incorporating lens distortion. The LUT method can obtain an exact model that includes lens distortion, but the amount of memory needed grows with the number of pixels because a separate model is obtained for each pixel. This paper compares these two methods and proposes an improved LSM based on the LUT.


The proposed method uses a single equation, as in the LSM, but its modeling result is better than that of the LSM because lens distortion is fully considered.


Three-dimensional (3-D) measurement using optical sensors has been studied extensively because of its intrinsically noncontact nature and high speed [1]. A typical 3-D measurement system based on a fringe pattern projection (FPP) system [2–14] is constructed using a white light projector and a charge-coupled device (CCD) camera. The projector projects sinusoidal fringe patterns onto an object, and the CCD camera acquires the patterns that are deformed by the object's shape. The height information of the object is encoded into the deformed fringe pattern recorded by the CCD camera. In this measurement system, it is very important to obtain the phase-to-height relationship, for which there are three principal calibration methods [4, 10].

The first method is based on measuring geometric parameters. This method relies on a particular measurement setup in which the geometric parameters and the intrinsic parameters must be precisely determined in advance. Approaches for deriving the phase-to-height model usually employ various simplifying assumptions, such as simple optical arrangements, small angles, or large distances between the projector and object compared to the illuminated area [3–5]. Although such approaches are simple, they are impractical for complex optical arrangements because it is very hard to measure the position of the focus and the direction of the central axis of the camera. In particular, even if the related parameters can be measured precisely, the error from lens distortion cannot be taken into account. Therefore, this method is excluded from this paper because the effect of lens distortion must be considered in the phase-to-height model.

The second method is based on the least-squares method (LSM) to determine the phase-to-height relationship presented as a mathematical description [6–10]. This method is more flexible in terms of implementation since it allows the system to be arranged arbitrarily and avoids the problems of accurate geometric parameter determination. Du et al. proposed an equation for the phase-to-height relationship using the coordinate transformation matrix with an arbitrarily arranged fringe projection profilometry system [8]. The equation is a fractional expression with 11 parameters and 3 variables: the phase value, the horizontal coordinate, and the vertical coordinate of the camera. This method uses precise height information, such as standard gauge blocks, to obtain the related parameters. However, it is hard to measure an object precisely because the equation does not consider lens distortion. Lens distortion consists of radial and tangential components. Huang et al. [10] considered the radial lens distortion to improve the model by Du et al. [8], and the number of parameters in the equation was increased to 27 for the same variables. The modeling errors were greatly improved even though only the radial lens distortion was considered. However, the modeling errors cannot be improved further using one modeling equation for the entire working volume.

The third method uses a simple look-up table (LUT) containing the relationship between phase values and heights for each camera pixel [11–14]. The LUT method obtains the phase-to-height relationship for each pixel without any geometric analysis and then stores the information in the LUT, which it uses to measure the height. It is possible to obtain an exact model including all lens distortion of the camera or the projector with this method, but there are as many modeling functions as there are camera pixels. Liu et al. [11] expressed the height as a fifth-order polynomial function of the phase at each pixel to account for all lens distortion and measured the related phases for 21 different heights. When the parameters were determined by the LSM, the modeling results were very good. Guo et al. [12] presented a rational function instead of a polynomial function at each pixel. Because the rational function is derived from the geometry of the measurement system, it achieved higher accuracy than the polynomial function in the presence of the measurement noise produced by the inherently divergent illumination of a projector. Jia et al. [13] formulated linear and nonlinear equations for the mapping relationship between the phase and height of the object surface. They showed that the nonlinear calibration model expressed as the rational function was superior to the linear one in terms of measurement accuracy. Most of the LUT methods use a precise z stage for the calibration of the phase-to-height model. However, Li et al. [14] proposed a checkerboard (a plane board with calibration squares) instead of a precise linear z stage, which makes it possible to estimate the height from the size of the squares.

Most research on the phase-to-height relationship focuses on how the modeling error can be reduced in spite of lens distortion. The LSM model uses one modeling equation for all the pixels, while the LUT method uses pixel-based models to reduce the error caused by lens distortion. This paper compares the modeling errors of the LSM and LUT methods and proposes one equation for the phase-to-height relationship with the accuracy of the LUT method. The proposed method is a fusion method that obtains the phase-to-height relationship for each pixel using the LUT method and merges the relationships into one equation that applies to all of the pixel points.

The rest of the paper is organized as follows. In section Phase-to-height relationship by LSM, two representative LSMs are introduced and compared with each other. One method neglects lens distortion, and the other partly considers it. Section Phase-to-height relationship by LUT introduces the conventional LUT method, and its modeling results are compared with those of the LSM. Section Improved LSM using LUT presents the improved LSM based on the LUT, which is expressed as one fractional equation like the LSM. Conclusions are given in section Conclusions.


Phase-to-height relationship by LSM

Geometric analysis

Figure 1 illustrates a typical setup of a generalized FPP system [8]. The reference plane Oxy, the camera imaging plane O′x′y′, and the projection plane O″x″y″ are arbitrarily arranged. P represents an arbitrary point on the object, B is the imaging point of P, D is the original fringe point projected at P, and A and C are the lens centers of the camera and the projector, respectively. For convenience and clarity, the coordinates of a point in a coordinate system are denoted by the corresponding coordinate symbols, with the symbol of the point as the subscript. For example, point P is denoted as \((x_P, y_P, z_P)\), \((x_P^{\prime}, y_P^{\prime}, z_P^{\prime})\), and \((x_P^{\prime\prime}, y_P^{\prime\prime}, z_P^{\prime\prime})\) in the coordinate systems Oxyz, O′x′y′z′, and O″x″y″z″, respectively. Based on the coordinate relations among points P, A, and B in the system Oxyz, we obtain:

Fig. 1

Generalized FPP system based on the least squares method

$$ \frac{x_P-{x}_A}{x_B-{x}_A}=\frac{y_P-{y}_A}{y_B-{y}_A}=\frac{z_P-{z}_A}{z_B-{z}_A}. $$

A typical coordinate transformation of point B from system O’x’y’z’ to system Oxyz is as follows:

$$ \begin{bmatrix} x_B \\ y_B \\ z_B \end{bmatrix}=\begin{bmatrix} x_{O^{\prime}} \\ y_{O^{\prime}} \\ z_{O^{\prime}} \end{bmatrix}+Rot\left(z,\gamma \right) Rot\left(y,\beta \right) Rot\left(x,\alpha \right)\begin{bmatrix} x_B^{\prime} \\ y_B^{\prime} \\ z_B^{\prime} \end{bmatrix}, $$

where Rot(z, γ), Rot(y, β), and Rot(x, α) are the coordinate transformation matrices, and α, β, and γ are the rotation angles of the x’ , y’ , and z’ axes based on the reference coordinate system Oxyz, respectively. Finally, the height of the object based on the least squares method is as follows [8]:

$$ {z}_p=\frac{1+{c}_1\phi +\left({c}_2+{c}_3\phi \right){x}_B^{\prime }+\left({c}_4+{c}_5\phi \right){y}_B^{\prime }}{d_0+{d}_1\phi +\left({d}_2+{d}_3\phi \right){x}_B^{\prime }+\left({d}_4+{d}_5\phi \right){y}_B^{\prime }}, $$

where \(z_P\) is the out-of-reference-plane height at point (x, y, z) on the object, \(c_1\) to \(c_5\) and \(d_0\) to \(d_5\) are constant coefficients determined by geometric information such as the positions and directions of the camera and the projector, and \(\phi\) is the fringe phase at the same point. Eq. (3) cannot account for lens distortion because it is derived from a purely geometric analysis. If lens distortion is not considered, it is difficult to reduce the modeling error of the phase-to-height relationship. Because lens distortion consists of a radial component and a tangential component, the new normalized point coordinate \((x_d, y_d)\) is defined as follows [10]:

$$ {x}_d={x}_B^{\prime }+d{x}_r+d{x}_t, $$
$$ {y}_d={y}_B^{\prime }+d{y}_r+d{y}_t, $$

where \(dx_r\) and \(dy_r\) are the position errors caused by the radial lens distortion, while \(dx_t\) and \(dy_t\) are the position errors from the tangential lens distortion, defined as:

$$ d{x}_r=\left[{k}_{c1}{R}^2+{k}_{c2}{R}^4\right]{x}_B^{\prime }, $$
$$ d{y}_r=\left[{k}_{c1}{R}^2+{k}_{c2}{R}^4\right]{y}_B^{\prime }, $$
$$ d{x}_t=2{k}_{c3}{x}_B^{\prime }{y}_B^{\prime }+{k}_{c4}\left({R}^2+2{x^{\prime}}_B^2\right), $$
$$ d{y}_t={k}_{c3}\left({R}^2+2{y^{\prime}}_B^2\right)+2{k}_{c4}{x}_B^{\prime }{y}_B^{\prime }. $$

where \(k_{c1}\), \(k_{c2}\), \(k_{c3}\), and \(k_{c4}\) are the constant coefficients of camera lens distortion, and R is defined by \( {R}^2={x^{\prime}}_B^2+{y^{\prime}}_B^2 \). Since the radial distortion is usually much larger than the tangential distortion in modern optics, \(k_{c3}\) and \(k_{c4}\) can reasonably be discarded. Moreover, it is very difficult to invert the model for the variables \(x_B^{\prime}\) and \(y_B^{\prime}\) because the tangential distortion includes the product of \(x_B^{\prime}\) and \(y_B^{\prime}\). Therefore, when the tangential distortion (\(dx_t\) and \(dy_t\)) is ignored, the point coordinates simplify to:

$$ {x}_B^{\prime }=\frac{x_d}{1+{k}_{c1}{R}^2+{k}_{c2}{R}^4}\approx \left[1-{k}_{c1}{R}^2-{k}_{c2}{R}^4\right]{x}_d, $$
$$ {y}_B^{\prime }=\frac{y_d}{1+{k}_{c1}{R}^2+{k}_{c2}{R}^4}\approx \left[1-{k}_{c1}{R}^2-{k}_{c2}{R}^4\right]{y}_d. $$
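As a quick numerical sanity check of this first-order inversion, the sketch below (Python, with purely hypothetical distortion coefficients standing in for \(k_{c1}\) and \(k_{c2}\)) distorts a point and then applies the approximate inverse; the residual is second order in the distortion.

```python
import numpy as np

# Hypothetical radial distortion coefficients (for illustration only).
k1, k2 = -0.12, 0.03

# Forward model: distorted coordinates from ideal normalized coordinates.
def distort(x, y):
    R2 = x**2 + y**2
    f = 1.0 + k1 * R2 + k2 * R2**2
    return f * x, f * y

# Approximate inverse used in the text: x' ~ (1 - k1*R^2 - k2*R^4) * x_d,
# with R evaluated at the distorted point.
def undistort_approx(xd, yd):
    R2 = xd**2 + yd**2
    g = 1.0 - k1 * R2 - k2 * R2**2
    return g * xd, g * yd

x, y = 0.3, -0.2
xd, yd = distort(x, y)
xr, yr = undistort_approx(xd, yd)
print(abs(xr - x), abs(yr - y))  # small residual, second order in the distortion
```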

The height of the object considering the radial distortion is rewritten as follows [10]:

$$ {z}_p=\frac{\begin{array}{l}1+{c}_1\phi +\left({c}_2+{c}_3\phi \right){x}_d+\left({c}_4+{c}_5\phi \right){y}_d+\left({c}_6+{c}_7\phi \right){R}^2{x}_d\\ {}+\left({c}_8+{c}_9\phi \right){R}^2{y}_d+\left({c}_{10}+{c}_{11}\phi \right){R}^4{x}_d+\left({c}_{12}+{c}_{13}\phi \right){R}^4{y}_d\end{array}}{\begin{array}{l}{d}_0+{d}_1\phi +\left({d}_2+{d}_3\phi \right){x}_d+\left({d}_4+{d}_5\phi \right){y}_d+\left({d}_6+{d}_7\phi \right){R}^2{x}_d\\ {}+\left({d}_8+{d}_9\phi \right){R}^2{y}_d+\left({d}_{10}+{d}_{11}\phi \right){R}^4{x}_d+\left({d}_{12}+{d}_{13}\phi \right){R}^4{y}_d\end{array}}, $$

where the coefficients \(c_1\) to \(c_{13}\) and \(d_0\) to \(d_{13}\) are constants. They should be determined in advance by the least-squares method using phase data for several reference planes of known height.

Experimental results

Figure 2 shows the 3-D measurement equipment using a camera and a beam projector [15]. The equipment consists of a black-and-white CCD camera (AOS MPX1350, 1280 × 1024 pixels, 8-bit data depth), a digital light processing (DLP) projector (LG HS200G, 800 × 600 pixels), a personal computer for image processing, and a three-axis stage for camera calibration. The stage has a repeatability of 0.001 mm, and the z-axis is used only to set the focus of the projector and camera during the calibration for 3-D measurement. In the experiments, the basic period of the fringe pattern was set to eight pixels, and the eight-bucket algorithm with eight different phases was used in the phase shift method. Because the horizontal resolution of the projector is 800 pixels, gray code patterns of seven bits were necessary to distinguish 100 different periods. The measuring range was set to 100 × 100 × 50 mm along the x-y-z axes because the depth of focus limits the z-axis range more than the x- and y-axis ranges.

Fig. 2

3-D measurement equipment using the camera and projection moiré

A training set is necessary to determine the coefficients in the LSM and is formed by training pair vectors. In the phase-to-height relationship, 3-D points are used as the training pair vectors. To obtain the related coefficients, the LSM needs more training pair vectors than the number of coefficients: Eqs. (3) and (12) need at least 11 and 27 training pair vectors, respectively. If the training set uses 5 different heights along the z-axis (every 12.5 mm from −25 to +25 mm) and 5 different positions at 200-pixel intervals along each axis of the camera, the number of training members N is 125 (5 × 5 × 5). Using the training set, each coefficient can be adjusted to minimize the modeling error by the gradient descent method. However, it is much easier to obtain the coefficients by applying the pseudo-inverse matrix in Matlab to the 125 points.
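Because multiplying Eq. (3) through by its denominator makes it linear in the 11 unknowns, the pseudo-inverse fit can be sketched as follows (Python with NumPy; the coefficient values and sample ranges are hypothetical, used only to generate noise-free synthetic training data):

```python
import numpy as np

# Hypothetical "true" coefficients for Eq. (3): p = [c1..c5, d0..d5].
rng = np.random.default_rng(0)
c = np.array([0.5, 0.2, -0.1, 0.3, 0.05])
d = np.array([0.8, 0.1, 0.02, -0.03, 0.04, 0.01])

def z_model(phi, x, y, c, d):
    num = 1 + c[0]*phi + (c[1] + c[2]*phi)*x + (c[3] + c[4]*phi)*y
    den = d[0] + d[1]*phi + (d[2] + d[3]*phi)*x + (d[4] + d[5]*phi)*y
    return num / den

# Synthetic training set of N = 125 points, as in the text.
phi = rng.uniform(0, 2*np.pi, 125)
x = rng.uniform(-1, 1, 125)
y = rng.uniform(-1, 1, 125)
z = z_model(phi, x, y, c, d)

# Multiplying Eq. (3) through by the denominator gives, per point:
#   c-terms - z * d-terms = -1,  which is linear in the 11 unknowns.
A = np.column_stack([phi, x, phi*x, y, phi*y,
                     -z, -z*phi, -z*x, -z*phi*x, -z*y, -z*phi*y])
b = -np.ones(125)
p = np.linalg.pinv(A) @ b   # pseudo-inverse solve, as done in Matlab
print(np.max(np.abs(p - np.concatenate([c, d]))))  # near zero for noise-free data
```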

The camera coordinates \((x_d, y_d)\) are obtained by normalizing the camera pixel coordinates (u, v) about the camera center. Figure 3 shows the modeling errors along the horizontal line (v = 500) in the phase-to-height relationship obtained using Eq. (3). The modeling error is the difference between the real height and the modeled height. Because lens distortion is neglected in Eq. (3), the errors change greatly with the u-coordinate. Eq. (3) is not suitable as a phase-to-height model because the maximum errors are as large as ±2 mm. Eq. (12) considers the radial lens distortion. Figure 4 shows the modeling errors when the 27 coefficients of Eq. (12) are obtained using 125 reference points and Matlab functions; the errors are considerably reduced, to ±0.15 mm. However, even when the number of reference points is increased from 125 to 6,237 (63 × 9 × 11), the modeling errors do not improve any further. To reduce the error, it is necessary to derive an equation that includes the tangential lens distortion, which is very difficult to achieve by geometric analysis.

Fig. 3

Modeling errors by LSM without considering lens distortion

Fig. 4

Modeling errors by LSM considering radial lens distortion

Phase-to-height relationship by LUT

Improvements in the modeling errors of the LSM are limited because it is difficult to obtain a modeling equation that includes tangential lens distortion. However, a method that uses LUTs can account for lens distortion and reduce the errors. An equation for the LUT method can be derived simply from Eq. (12) because the normalized point coordinate \((x_d, y_d)\) is constant at each camera pixel. The LUT contains the coefficients of the phase-to-height relationship for each pixel. In this method, the height at each pixel is obtained as a fractional or polynomial function of the phase value [11–14]:

$$ {z}_p=\frac{1+{c}_1\phi }{d_0+{d}_1\phi }={e}_0+{e}_1\phi +{e}_2{\phi}^2+\dots +{e}_n{\phi}^n. $$

The height can be modeled as a first-order polynomial function of the phase for each pixel, and Fig. 5 shows the resulting modeling errors as functions of the height and the u-coordinate of the camera. The modeling errors have a parabolic form in the height because lens distortion is not considered at all in the equation. The maximum errors are somewhat reduced, to ±1 mm, compared with the errors in Fig. 3 for the LSM excluding lens distortion. Figure 6 shows the modeling errors when the height is modeled as a second-order function. The maximum errors are greatly reduced, to ±0.1 mm, slightly better than those in Fig. 4 for the LSM including lens distortion. Figure 6 shows that the second-order height model in the LUT method can partly reflect lens distortion. However, residual errors that vary as a cubic function of the height still remain.
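A minimal sketch of this per-pixel polynomial LUT, assuming a smooth hypothetical phase-height curve in place of real calibration data:

```python
import numpy as np

# At each pixel, phases are recorded for several known reference heights,
# and z is fitted as a low-order polynomial in the phase.
heights = np.linspace(-25.0, 25.0, 21)        # known z-stage positions (mm)

def synthetic_phase(z, u):
    # Hypothetical monotonic phase-height curve that varies with the pixel.
    return 0.05*z + 1e-4*u + 2e-5*z**2

order = 2
u_pixels = np.arange(0, 1280, 200)
lut = {}                                       # pixel -> polynomial coefficients
for u in u_pixels:
    phi = synthetic_phase(heights, u)
    lut[u] = np.polyfit(phi, heights, order)   # z as a polynomial in phi

# Height lookup at pixel u = 400 for a measured phase:
phi_meas = synthetic_phase(10.0, 400)
z_est = np.polyval(lut[400], phi_meas)
print(z_est)   # close to 10.0 for this smooth synthetic curve
```

Each pixel stores only its own `order + 1` coefficients, which is where the memory cost discussed below comes from.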

Fig. 5

Modeling errors using first-order function for each pixel in LUT method

Fig. 6

Modeling errors using second-order function for each pixel in LUT method

To reduce the errors, a third-order equation can be applied to the height model, and Fig. 7 shows that the modeling errors are reduced to ±0.05 mm. However, there is no further improvement of the modeling errors for higher-order polynomial functions. The figure shows that the third-order height model can resolve the tangential lens distortion as well as the radial lens distortion. However, if the third-order height model is used in the LUT method, about 5.2 million (1280 × 1024 × 4) array elements of memory are necessary for a camera with 1280 × 1024 pixels because each pixel needs 4 coefficients for the modeling equation. Thus, the LUT method uses too much memory. Figure 8 shows the modeling errors when the fractional Eq. (13) is used. Although the fractional equation has only 3 coefficients, the errors are closer to those of Fig. 7 with 4 coefficients than to those of Fig. 6 with 3 coefficients. Because the fractional equation is more effective for the height model, the memory needed in the LUT method can be reduced by about 1.3 million array elements.
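The 3-coefficient fractional fit at a single pixel can also be posed as a linear least-squares problem by multiplying Eq. (13) through by its denominator. A sketch with hypothetical coefficients and noise-free synthetic data:

```python
import numpy as np

# Fractional model at one pixel: z = (1 + c1*phi) / (d0 + d1*phi).
# Multiplying through by the denominator gives a system linear in (c1, d0, d1):
#   c1*phi - z*d0 - z*phi*d1 = -1.
c1_true, d0_true, d1_true = 0.4, 0.9, 0.12     # hypothetical values
phi = np.linspace(0.5, 5.0, 21)
z = (1 + c1_true*phi) / (d0_true + d1_true*phi)

A = np.column_stack([phi, -z, -z*phi])
p = np.linalg.lstsq(A, -np.ones_like(phi), rcond=None)[0]
c1, d0, d1 = p
print(c1, d0, d1)   # recovers the true coefficients for noise-free data
```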

Fig. 7

Modeling errors using third-order function for each pixel in LUT method

Fig. 8

Modeling errors using fractional function for each pixel in LUT method

Results and discussion

Improved LSM using LUT

As shown in Figs. 7 and 8, the height model using the LUT method is superior to the LSM. However, the LSM model can be expressed as one equation, while the LUT model consists of a huge number of equations. Therefore, it is necessary to combine the height-to-phase models obtained from the LUT method into one equation that includes the camera coordinates. To extend Eq. (13) from a single pixel to an equation covering all pixels, like Eq. (12), all the coefficients in the LUT must be represented as functions of the u and v coordinates. If the y′-axis of the camera imaging plane O′x′y′ is adjusted to be parallel to the y″-axis of the projection plane O″x″y″, the camera image can be captured as shown in Fig. 9. The image shows that the fringe pattern is generally parallel to the vertical axis of the camera but is slightly bent at the edges by lens distortion. Because each fringe period corresponds to a phase difference of 2π, the phase value changes greatly along the horizontal axis while it hardly changes along the vertical axis. To be combined, the coefficients of the LUT must monotonically increase or decrease. Figure 10 shows the variation of the coefficients along the horizontal center axis when the model is obtained by the third-order equation with four coefficients. The constant and first-order terms among the four coefficients monotonically increase or decrease, while the second- and third-order terms do not. Because the coefficients include rapidly changing terms, it is nearly impossible to express them as polynomial functions of u.

Fig. 9

Camera image of projected fringe pattern distorted by lens

Fig. 10

Four coefficients in polynomial model

To remove the rapidly changing terms, the number of coefficients in the phase-to-height model must be reduced. However, because the modeling errors of the second-order polynomial function are similar to those of the LSM, there is no benefit to using the LUT method in that case. On the other hand, when a fractional equation such as Eq. (14) is used, the modeling errors are similar to those of the third-order model, as shown in Figs. 7 and 8, even though the number of coefficients is reduced.

$$ {z}_p=\frac{1+{c}_1\phi }{d_0+{d}_1\phi }=\frac{1/{d}_1+\left({c}_1/{d}_1\right)\phi }{\left({d}_0/{d}_1\right)+\phi }=H+\frac{b}{a+\phi }, $$

where \(H = c_1/d_1\), \(a = d_0/d_1\), and \(b = 1/d_1 - (c_1/d_1)(d_0/d_1)\). Figure 11 shows the variation of the three coefficients along the horizontal center axis when the LUT is obtained with the fractional model of Eq. (14). Because a, b, and H show no rapid changes, unlike the cases in Fig. 10, the LUT values at v = 500 can be combined into a polynomial function of u.
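The algebraic identity in Eq. (14) can be checked directly; the following sketch (with hypothetical coefficient values) converts \((c_1, d_0, d_1)\) to \((H, a, b)\) and verifies that the two forms of the model agree:

```python
# Conversion from the fractional-model coefficients (c1, d0, d1) of Eq. (13)
# to the (H, a, b) form of Eq. (14), using the relations given in the text.
def to_Hab(c1, d0, d1):
    H = c1 / d1
    a = d0 / d1
    b = 1.0/d1 - (c1/d1)*(d0/d1)
    return H, a, b

# Consistency check with hypothetical coefficients:
c1, d0, d1 = 0.4, 0.9, 0.12
H, a, b = to_Hab(c1, d0, d1)
phi = 2.0
z_frac = (1 + c1*phi) / (d0 + d1*phi)
z_Hab = H + b / (a + phi)
print(abs(z_frac - z_Hab))  # ~0: the two forms are algebraically identical
```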

Fig. 11

Three coefficients in fractional model

Figure 12 shows the modeling errors obtained using the combined functions instead of the LUT for a, b, and H. Even though each coefficient (a, b, and H) is expressed by a fifth-order polynomial function, the modeling errors are very large, worse than those of the LSM shown in Fig. 4. To find the cause of these errors, we investigate the sensitivity of the height to the parameters a, b, and H. The sensitivities are \(\partial z_p/\partial H = 1\), \(\partial z_p/\partial a = -b/(a+\phi)^2\), and \(\partial z_p/\partial b = 1/(a+\phi)\). The sensitivity \(\partial z_p/\partial H = 1\) means that the error of H directly affects the height \(z_p\), while the errors of a and b are attenuated by the factors \(b/(a+\phi)^2\) and \(1/(a+\phi)\), respectively. Therefore, it is very important to reduce the modeling error of H to measure the height precisely. The coefficient H shown in Fig. 11 is nearly constant, but closer examination reveals many small fluctuations, which are very difficult to represent with a fifth-order polynomial. In the geometric analysis, H is the height from the reference plane to the lens focus of the projector [15]. If lens distortion is ignored, H is constant regardless of the camera coordinates, as in Eq. (3), but it may change when lens distortion is considered. Because H is nearly constant and varies only slightly with lens distortion, Eq. (14) can be modified into an equation with a constant H. Using the average value of H, the LUT for the remaining coefficients a and b is recalculated at each pixel. The coefficients a and b are then expressed by the following polynomial functions of the horizontal coordinate u:
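The sensitivity argument can be checked numerically; the sketch below compares central finite differences of \(z_p = H + b/(a+\phi)\) with the analytic expressions (the parameter values are hypothetical):

```python
# Numerical check of the sensitivities of z_p = H + b/(a + phi).
def z_p(H, a, b, phi):
    return H + b / (a + phi)

H, a, b, phi = 3.3, 7.5, -16.7, 2.0   # hypothetical values
eps = 1e-6

# Central finite differences with respect to H, a, and b.
dH = (z_p(H + eps, a, b, phi) - z_p(H - eps, a, b, phi)) / (2 * eps)
da = (z_p(H, a + eps, b, phi) - z_p(H, a - eps, b, phi)) / (2 * eps)
db = (z_p(H, a, b + eps, phi) - z_p(H, a, b - eps, phi)) / (2 * eps)

# Analytic values: dz/dH = 1, dz/da = -b/(a+phi)^2, dz/db = 1/(a+phi).
print(dH, da, db)
```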

Fig. 12

Modeling errors using combined functions for a, b, and H in LUT method

$$ a={a}_0+{a}_1u+{a}_2{u}^2+\cdots +{a}_n{u}^n, $$
$$ b={b}_0+{b}_1u+{b}_2{u}^2+\cdots +{b}_n{u}^n, $$

where \(a_0\) to \(a_n\) and \(b_0\) to \(b_n\) are the coefficients of the polynomial functions of a and b, respectively. Figure 13 shows the modeling errors when fifth-order functions are used for the coefficients a and b; the errors are very similar to those of the LUT in Fig. 8. Compared with the LSM, the errors are reduced by more than half. Thus, the 2,560 (2 × 1280) array elements of memory in the LUT are reduced to 12 coefficients, \(a_0\) to \(a_5\) and \(b_0\) to \(b_5\), for the horizontal line v = 500.
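This compression step can be sketched as follows, with smooth hypothetical stand-ins for the per-pixel LUT values (NumPy's `Polynomial.fit` is used because it rescales u internally, which keeps the quintic fit well conditioned):

```python
import numpy as np

# Hypothetical smooth per-pixel values of a and b along one horizontal line,
# standing in for the real LUT at v = 500.
u = np.arange(1280, dtype=float)
a_lut = 7.5 + 1e-3 * u - 2e-7 * u**2      # assumed per-pixel a values
b_lut = -16.7 + 5e-4 * u + 1e-7 * u**2    # assumed per-pixel b values

# 2 x 1280 LUT entries are replaced by 12 polynomial coefficients (a0..a5, b0..b5).
a_poly = np.polynomial.Polynomial.fit(u, a_lut, 5)
b_poly = np.polynomial.Polynomial.fit(u, b_lut, 5)

a_err = np.max(np.abs(a_poly(u) - a_lut))
b_err = np.max(np.abs(b_poly(u) - b_lut))
print(a_err, b_err)   # near zero for these smooth synthetic values
```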

Fig. 13

Modeling errors using fixed H in LUT method

Next, it is necessary to obtain the quintic models of the coefficients a and b for the other horizontal lines. We can expect the models to differ slightly because the phase values along the vertical axis are influenced by lens distortion, as shown in Fig. 9. Table 1 shows the coefficients of the quintic modeling functions for five different vertical values (v = 100, 300, 500, 700, 900). As expected, most of the coefficients are similar to each other; in particular, the coefficients of the constant and first-order terms are very similar regardless of the vertical value. Therefore, the 12 coefficients \(a_0\) to \(a_5\) and \(b_0\) to \(b_5\) can also be represented as polynomial functions of the vertical coordinate v. When each coefficient in Table 1 is represented as a third-order polynomial function of v, the 12 equations are as follows:

Table 1 Modeling coefficients for sampled x-z planes
$$ a\left(u,v\right)={a}_0(v)+{a}_1(v)u+\cdots +{a}_5(v){u}^5, $$
$$ b\left(u,v\right)={b}_0(v)+{b}_1(v)u+\cdots +{b}_5(v){u}^5, $$


$$ {a}_0(v)={d}_{0,0}+{d}_{1,0}v+{d}_{2,0}{v}^2+{d}_{3,0}{v}^3, $$
$$ \vdots $$
$$ {a}_5(v)={d}_{0,5}+{d}_{1,5}v+{d}_{2,5}{v}^2+{d}_{3,5}{v}^3, $$
$$ {b}_0(v)={e}_{0,0}+{e}_{1,0}v+{e}_{2,0}{v}^2+{e}_{3,0}{v}^3, $$
$$ \vdots $$
$$ {b}_5(v)={e}_{0,5}+{e}_{1,5}v+{e}_{2,5}{v}^2+{e}_{3,5}{v}^3. $$

The coefficients \(d_{0,i}\) to \(d_{3,i}\) and \(e_{0,i}\) to \(e_{3,i}\) are calculated by the LSM using the values in Table 1. Table 2 shows the coefficients of Eqs. (17) and (18) when the values of \(a_0\) to \(a_5\) and \(b_0\) to \(b_5\) are represented as third-order polynomial functions of v. Finally, the phase-to-height relationship including the camera coordinates is as follows:

Table 2 Coefficients of a and b by LSM
$$ {z}_p\left(u,v,\phi \right)=H+\frac{b\left(u,v\right)}{a\left(u,v\right)+\phi }. $$

Eq. (19) is based on the LUT method and includes the camera coordinates in the phase-to-height relationship, as Eq. (12) of the LSM does. Because the equation is obtained using only five different horizontal lines, the modeling errors are not guaranteed for the other horizontal lines. To check this, the line v = 400 is examined. Figure 14 shows the modeling results obtained using Eq. (19) for the line v = 400. Although this line was not used to derive Eq. (19), the modeling errors are similar to the results for v = 500 used in the training, as shown in Fig. 13.

As another example, Fig. 15 shows the modeling results for the line v = 800. The modeling errors are again similar to the results of Figs. 13 and 14. Most of the modeling errors from Eq. (19) are within ±0.05 mm except in some of the boundary area. Moreover, the amount of memory for the LUT is dramatically reduced from about 2.6 million (2 × 1280 × 1024) array elements to 48 (2 × 4 × 6), as shown in Table 2. Compared with the LSM errors of Fig. 4, the errors are still reduced by more than half, even though the number of coefficients is increased slightly. Therefore, Eq. (19) accounts for both the tangential and the radial lens distortion, like the LUT method.
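Evaluating the final model of Eq. (19) is then just a pair of small polynomial evaluations per pixel. The sketch below uses hypothetical stand-ins for the Table 2 coefficient arrays:

```python
import numpy as np

# a(u, v) and b(u, v) are fifth-order polynomials of u whose coefficients are
# themselves third-order polynomials of v. D and E stand in for the 4 x 6
# coefficient tables of Eqs. (17) and (18); the values here are hypothetical.
D = np.zeros((4, 6)); D[0, 0] = 7.5;   D[1, 0] = 1e-4;  D[0, 1] = 1e-3
E = np.zeros((4, 6)); E[0, 0] = -16.7; E[1, 0] = -5e-5; E[0, 1] = 5e-4
H = 3.33   # fixed average H (hypothetical)

def z_p(u, v, phi):
    v_pow = np.array([1.0, v, v**2, v**3])
    a_coef = v_pow @ D                      # a0(v) .. a5(v)
    b_coef = v_pow @ E                      # b0(v) .. b5(v)
    u_pow = np.array([1.0, u, u**2, u**3, u**4, u**5])
    a = a_coef @ u_pow
    b = b_coef @ u_pow
    return H + b / (a + phi)

print(z_p(640, 400, 2.0))   # height at pixel (640, 400) for phase 2.0
```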

Fig. 14

Modeling errors for plane of v = 400 in proposed method

Fig. 15

Modeling errors for plane of v = 800 in proposed method

In summary, the proposed method obtains a phase-to-height relationship that includes the image coordinates. First, the phase-to-height relationship for each pixel is obtained using the LUT method. Next, the relationships are combined into one equation for the pixels on the same horizontal line, so that the equation includes the u-coordinate and the phase for every horizontal line. Finally, the equations for the horizontal lines are combined into Eq. (19), which includes both the u- and v-coordinates. The proposed method is a fusion of the LSM and LUT methods because it has one mapping function but the same accuracy as the LUT method for 3-D measurement.


Conclusions

The phase-to-height relationship in a 3-D measurement system is generally obtained by the LSM or the LUT method; in this paper, the two were compared and combined into an improved LSM based on the LUT method. The LSM can express the phase-to-height relationship as one equation, but it is difficult to fully consider lens distortion. Two LSM models were examined: one that neglects lens distortion and another that considers radial distortion. The latter has roughly ten times smaller modeling error.

The LUT method eliminates the effect of the distortion because a separate model is obtained at each pixel, but too much memory is needed. The LUT method shows better performance than the LSM. This study combined the two methods into one based on the LUT method but expressed as one equation for the phase-to-height relationship. The proposed equation is a single fractional equation, as in the LSM, but the modeling result is better because it considers tangential as well as radial lens distortion.





Abbreviations

CCD: Charge-coupled device

DLP: Digital light processing

FPP: Fringe pattern projection

LSM: Least-squares method

LUT: Look-up table


References

  1. Chen, F, Brown, GM, Song, M: Overview of three-dimensional shape measurement using optical methods. Opt. Eng. 39, 10–22 (2000)

  2. Hu, Y, Xi, J, Li, E, Chicharo, J, Yang, Z: Three-dimensional profilometry based on shift estimation of projected fringe patterns. Appl. Optics 45, 678–687 (2006)

  3. Takeda, M, Mutoh, K: Fourier transform profilometry for the automatic measurement of 3-D object shapes. Appl. Optics 22, 3977–3982 (1983)

  4. Rajoub, B, Lalor, M, Burtom, D, Karout, S: A new model for measuring object shape using non-collimated fringe-pattern projections. J. Optics A: Pure Appl. Optics 9, S66–S75 (2007)

  5. Maurel, A, Cobelli, P, Pagneux, V, Petitjeans, P: Experimental and theoretical inspection of the phase-to-height relation in Fourier transform profilometry. Appl. Optics 48, 380–392 (2009)

  6. Sansoni, G, Carocci, M, Rodella, R: 3D vision based on the combination of gray code and phase shift light projection. Appl. Optics 38, 6565–6573 (1999)

  7. Hu, Q, Huang, P, Fu, Q, Chiang, F: Calibration of a three-dimensional shape measurement system. Opt. Eng. 42, 487–493 (2003)

  8. Du, H, Wang, Z: Three-dimensional shape measurement with an arbitrarily arranged fringe projection profilometry system. Opt. Letters 32, 2438–2440 (2007)

  9. Da, F, Gai, S: Flexible three-dimensional technique based on a digital light processing projector. Appl. Optics 47, 377–385 (2008)

  10. Huang, L, Chua, P, Asundi, A: Least-squares calibration method for fringe projection profilometry considering camera lens distortion. Appl. Optics 49, 1539–1548 (2010)

  11. Liu, H, Su, W, Reichard, K, Yin, S: Calibration-based phase-shifting projected fringe profilometry for accurate absolute 3D surface profile measurement. Optics Commun. 216, 65–80 (2003)

  12. Guo, H, He, H, Yu, Y, Chen, M: Least-squares calibration method for fringe projection profilometry. Opt. Eng. 44, 033603 (2005)

  13. Jia, P, Kofman, J, English, C: Comparison of linear and nonlinear calibration methods for phase-measuring profilometry. Opt. Eng. 46, 043601 (2007)

  14. Li, W, Fang, S, Duan, S: 3D shape measurement based on structured light projection applying polynomial interpolation technique. Optik 124, 20–27 (2013)

  15. Chung, B, Park, Y: Hybrid method for phase-to-height relationship in 3D shape measurement using fringe pattern projection. Int. J. Precis. Eng. Manuf. 15, 407–413 (2014)



Acknowledgements

This work was supported by the 2015 Yeungnam University Research Grant.

Competing interests

The author declares that he has no competing interests.

Ethics approval and consent to participate

This is not applicable.

Author information



Corresponding author

Correspondence to Byeong-Mook Chung.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Chung, BM. Improved least-squares method for phase-to-height relationship in fringe projection profilometry. J. Eur. Opt. Soc.-Rapid Publ. 12, 11 (2016).
