 Research
 Open Access
Improved least-squares method for phase-to-height relationship in fringe projection profilometry
Journal of the European Optical Society - Rapid Publications, volume 12, Article number: 11 (2016)
Abstract
Background
The height estimating function is very important for three-dimensional (3D) measurement systems based on a digital light processing (DLP) projector and a camera. Sinusoidal fringe patterns from the projector are projected onto an object, and the phase of the measuring point is calculated from the camera image. The least-squares method (LSM) and the look-up table (LUT) method are typically used to obtain the phase-to-height relationship.
Results
The merit of the LSM is that one equation is obtained from geometric analysis, but it has difficulty incorporating lens distortion. The LUT method can obtain an exact model that includes lens distortion, but the amount of memory needed increases with the number of pixels because a separate model is obtained at each pixel. This paper compares these two methods and proposes an improved LSM using a LUT.
Conclusions
The proposed method has one equation like the LSM, but the modeling result is better than that of the LSM because lens distortion is fully considered.
Background
Three-dimensional (3D) measurement using optical sensors has been studied extensively for applications because of the method's intrinsic non-contact nature and high speed [1]. A typical 3D measurement system based on fringe pattern projection (FPP) [2–14] is constructed using a white light projector and a charge-coupled device (CCD) camera. The projector projects sinusoidal fringe patterns onto an object, and the CCD camera acquires the patterns that are deformed by the object's shape. The height information of the object is encoded into the deformed fringe pattern recorded by the CCD camera. In this measurement system, it is very important to obtain the phase-to-height relationship, for which there are three principal calibration methods [4, 10].
The first method is based on measuring geometric parameters. This method relies on a particular measurement setup in which the geometric parameters and the intrinsic parameters must be precisely determined in advance. Approaches for deriving the phase-to-height model usually employ various simplifying assumptions, such as simple optical arrangements, small angles, or distances between the projector and object that are large compared to the illuminated area [3–5]. Although such approaches are simple, they are impractical for complex optical arrangements because it is very hard to measure the position of the focus and the direction of the center line of the camera. In particular, even if the related parameters can be precisely measured, the error from lens distortion cannot be accounted for. Therefore, this method is excluded from this paper because the effect of lens distortion has to be considered in the phase-to-height model.
The second method is based on the least-squares method (LSM) to determine the phase-to-height relationship, presented by a mathematical description [6–10]. This method is more flexible in terms of implementation since it allows the system to be arranged arbitrarily and avoids the problems of accurate geometric parameter determination. Du et al. proposed an equation for the phase-to-height relationship using the coordinate transformation matrix for an arbitrarily arranged fringe projection profilometry system [8]. The equation is a fractional expression with 11 parameters and 3 variables: the phase value, the horizontal coordinate, and the vertical coordinate of the camera. This method uses precise height information, such as standard gauge blocks, to obtain the related parameters. However, it is hard to measure an object precisely because the equation does not consider lens distortion. Lens distortion consists of radial and tangential components. Huang et al. [10] considered the radial lens distortion to improve the model of Du et al. [8], and the number of parameters in the equation increased to 27 for the same variables. The modeling errors were greatly improved even though only the radial lens distortion was considered. However, no further improvement of the modeling errors is possible with one modeling equation for the entire working volume.
The third method uses a simple look-up table (LUT) containing the relationship between phase values and heights for each camera pixel [11–14]. The LUT method obtains the phase-to-height relationship for each pixel without any geometric analysis and then stores the information in the LUT, which it uses to measure the height. It is possible to obtain an exact model including all lens distortion of the camera or the projector with this method, but there are as many modeling functions as there are camera pixels. Liu et al. [11] expressed the height as a fifth-order polynomial function of the phase at each pixel to account for all lens distortion and measured the related phases for 21 different heights. When the parameters were determined by the LSM, the modeling results were very good. Guo et al. [12] presented a rational function instead of a polynomial function at each pixel. Because the rational function is derived from the geometry of the measurement system, it achieved higher accuracy than the polynomial function in the presence of the measurement noise produced by the inherently divergent illumination of a projector. Jia et al. [13] formulated linear and nonlinear equations for the mapping relationship between the phase and height of the object surface. They showed that the nonlinear calibration model expressed as a rational function was superior to the linear one in terms of measurement accuracy. Most LUT methods use a precise z stage for the calibration of the phase-to-height model. However, Li et al. [14] proposed a checkerboard (a plane board with calibration squares) instead of a precise linear z stage, which made it possible to estimate the height from the size of the squares.
Most research on the phase-to-height relationship focuses on how the modeling error can be reduced in spite of lens distortion. The LSM model uses one modeling equation for all the pixels, while the LUT method uses pixel-based models to reduce the error caused by lens distortion. This paper compares the modeling errors of the LSM and LUT methods and proposes one equation for the phase-to-height relationship with the accuracy of the LUT method. The proposed method is a fusion method that obtains the phase-to-height relationship for each pixel using the LUT method and merges the results into one equation that applies to all of the pixel points.
The rest of the paper is organized as follows. In section Phase-to-height relationship by LSM, two representative LSMs are introduced and compared with each other. One method neglects lens distortion, and the other partly considers it. Section Phase-to-height relationship by LUT introduces the conventional LUT method, and its modeling results are compared with those of the LSM. Section Improved LSM using LUT presents the improved LSM based on the LUT, which is expressed as one fractional equation like the LSM. Conclusions are given in section Conclusions.
Methods
Phase-to-height relationship by LSM
Geometric analysis
Figure 1 illustrates a typical setup of a generalized FPP system [8]. The reference plane Oxy, camera imaging plane O'x'y', and projection plane O"x"y" are arbitrarily arranged. P represents an arbitrary point on the object, B is the imaging point of P, D is the original fringe point projected at P, and A and C are the lens centers of the camera and the projector, respectively. For convenience and clarity, the coordinates of a point in a coordinate system are denoted by the corresponding coordinate symbols, with the symbol of the point as the subscript. For example, point P is denoted as (x_p, y_p, z_p), (x_p', y_p', z_p'), and (x_p", y_p", z_p") in coordinate systems Oxyz, O'x'y'z', and O"x"y"z", respectively. Based on the coordinate relations among points P, A, and B in the system Oxyz, we obtain:
A typical coordinate transformation of point B from system O’x’y’z’ to system Oxyz is as follows:
where Rot(z, γ), Rot(y, β), and Rot(x, α) are the coordinate transformation matrices, and α, β, and γ are the rotation angles of the x', y', and z' axes based on the reference coordinate system Oxyz, respectively. Finally, the height of the object based on the least-squares method is as follows [8]:
where z_p is the out-of-reference-plane height at point (x, y, z) on the object, coefficients c_1 to c_5 and d_0 to d_5 are constants that need to be determined from geometric information such as the position and direction of the camera and the projector, and ϕ is the fringe phase at the same point. Eq. (3) cannot account for lens distortion because it is derived from geometric analysis. Generally, if lens distortion is not considered, it is difficult to reduce the modeling error of the phase-to-height relationship. Because lens distortion consists of a radial component and a tangential component, the new normalized point coordinate (x_d, y_d) is defined as follows [10]:

x_d = x'_B + dx_r + dx_t, y_d = y'_B + dy_r + dy_t
where dx_r and dy_r are the position errors caused by the radial lens distortion, and dx_t and dy_t are the position errors from the tangential lens distortion, which are defined as:

dx_r = x'_B (k_c1 R^2 + k_c2 R^4), dy_r = y'_B (k_c1 R^2 + k_c2 R^4)
dx_t = 2 k_c3 x'_B y'_B + k_c4 (R^2 + 2 x'_B^2), dy_t = k_c3 (R^2 + 2 y'_B^2) + 2 k_c4 x'_B y'_B
where k_c1, k_c2, k_c3, and k_c4 are the constant coefficients of camera lens distortion, and R is defined as \( {R}^2={x^{\prime}}_B^2+{y^{\prime}}_B^2 \). Since the radial distortion is usually much larger than the tangential distortion in modern optics, k_c3 and k_c4 can reasonably be discarded. It is very difficult to calculate the inverse function for the variables x'_B and y'_B because the tangential distortion includes a term with the product of x'_B and y'_B. Therefore, if the tangential distortion (dx_t and dy_t) is ignored, the point coordinate simplifies to:

x_d = x'_B (1 + k_c1 R^2 + k_c2 R^4), y_d = y'_B (1 + k_c1 R^2 + k_c2 R^4)
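In code, this radial-only distortion model (with the tangential terms dropped) can be sketched as a small function; the coefficient values used in the example are hypothetical:

```python
def radial_distort(xb, yb, kc1, kc2):
    """Apply the radial-only distortion model: each normalized camera
    coordinate is scaled by (1 + kc1*R^2 + kc2*R^4), where R^2 = xb^2 + yb^2."""
    r2 = xb * xb + yb * yb
    scale = 1.0 + kc1 * r2 + kc2 * r2 * r2
    return xb * scale, yb * scale
```

With kc1 = kc2 = 0 the mapping is the identity; points far from the optical center are displaced the most, which is why the errors in the undistorted model grow toward the image edges.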
The height of the object considering the radial distortion is rewritten as follows [10]:
where the coefficients c_0 to c_13 and d_0 to d_13 are constants. The coefficients should be determined in advance using the least-squares method based on phase data for several reference planes whose heights are known.
Experimental results
Figure 2 shows the 3D measurement equipment using cameras and a beam projector [15]. The equipment consists of black-and-white CCD cameras (AOS MPX1350, 1280 × 1024 pixels, 8-bit data depth), a digital light processing (DLP) projector (LG HS200G, 800 × 600 pixels), a personal computer for image processing, and a three-axis stage for camera calibration. The stage has a repeatability of 0.001 mm, and the z-axis is used only to obtain the focuses of the projector and camera during the calibration for 3D measurement. In the experiments, the basic period of the fringe pattern was set to eight pixels, and the eight-bucket algorithm with eight different phases was used in the phase shift method. Because the horizontal resolution of the projector is 800 pixels, gray code patterns of seven bits were necessary to distinguish the 100 different periods. The measuring range was set to 100 × 100 × 50 mm for the x-, y-, and z-axes because the focal depth along the z-axis is relatively limited compared with the x- and y-axes.
A training set is necessary for the training process of the coefficients in the LSM and is formed by training pair vectors. In the phase-to-height relationship, 3D points are used as the training pair vectors. To obtain the related coefficients, the LSM needs more training pair vectors than the number of coefficients. Eqs. (3) and (12) need at least 11 and 27 training pair vectors, respectively. If the training pair vectors use 5 different heights for the z-axis (every 12.5 mm from −25 to +25 mm) and 5 different positions at 200-pixel intervals for each axis of the camera, the number of training members N is 125 (5 × 5 × 5). Using the training set, each coefficient can be adjusted to minimize the modeling error by the gradient descent method. However, it is very easy to obtain the coefficients if the pseudoinverse matrix in Matlab is used for the 125 points.
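The pseudoinverse step can be sketched in Python (NumPy standing in for Matlab's pseudoinverse). The basis terms below are an assumption for illustration, a generic 11-parameter fractional model in ϕ, x, and y rather than the exact terms of Eq. (3), but the linearization trick is the same: multiplying through by the denominator makes the problem linear in all coefficients.

```python
import numpy as np

def fit_fractional_model(phi, x, y, z):
    """Fit z = (c . f) / (1 + d . g) by linear least squares.
    Multiplying by the denominator gives  c.f - z*(d.g) = z,
    which is linear in the stacked unknowns (c, d)."""
    f = np.column_stack([np.ones_like(phi), phi, x, y, x * phi, y * phi])
    g = np.column_stack([phi, x, y, x * phi, y * phi])   # leading 1 fixed
    A = np.hstack([f, -z[:, None] * g])                  # 11 unknowns total
    w, *_ = np.linalg.lstsq(A, z, rcond=None)
    return w[:6], w[6:]                                  # (c, d)

def predict(c, d, phi, x, y):
    """Evaluate the fitted fractional model at the given points."""
    f = np.column_stack([np.ones_like(phi), phi, x, y, x * phi, y * phi])
    g = np.column_stack([phi, x, y, x * phi, y * phi])
    return (f @ c) / (1.0 + g @ d)
```

Fixing the leading denominator coefficient to 1 removes the overall scale ambiguity of a fractional model, which is why 11 (not 12) parameters remain free.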
The camera coordinates (x_d, y_d) are obtained by normalizing the camera pixel coordinates (u, v) about the camera center. Figure 3 shows the modeling errors along the horizontal line (v = 500) in the phase-to-height relationship obtained using Eq. (3). The modeling error is the difference between the real height and the modeled height. Because lens distortion is neglected in Eq. (3), the errors change greatly with the u-coordinate. Eq. (3) is not suitable as a phase-to-height model because the maximum errors are as large as ±2 mm. Eq. (12) considers the radial lens distortion. Figure 4 shows the modeling errors when the 27 coefficients of Eq. (12) are obtained using the 125 reference points and Matlab functions; the errors are considerably reduced, to ±0.15 mm. However, even with 6,237 (63 × 9 × 11) reference points instead of 125, the modeling errors do not improve any further. For lower error, it is necessary to derive an equation that includes the tangential lens distortion, which is very difficult to achieve using geometric analysis.
Phase-to-height relationship by LUT
Improvements in the modeling errors of the LSM are limited because it is difficult to obtain a modeling equation that includes tangential lens distortion. However, a method that uses LUTs to consider lens distortion can reduce the errors. An equation for the LUT method can be simply derived from Eq. (12) because the normalized point coordinate (x_d, y_d) is constant at each camera pixel. The LUT contains the related coefficients that appear in the phase-to-height relationship for each pixel. In this method, the height for each pixel is obtained as a fractional equation or a polynomial equation of the phase value [11–14]:
The height can be modeled as a first-order polynomial function of the phase for each pixel, and Fig. 5 shows the resulting modeling errors according to the height and the u-coordinate of the camera. The modeling errors have a parabolic form with respect to the height because lens distortion is not considered at all in the equation. The maximum errors, ±1 mm, are slightly reduced compared with the errors of Fig. 3 for the LSM excluding lens distortion. Figure 6 shows the modeling errors when the height is modeled as a second-order function. The maximum errors are greatly reduced, to ±0.1 mm, and the errors are slightly improved compared with Fig. 4 for the LSM including lens distortion. Figure 6 shows that the second-order height model in the LUT method can partly reflect lens distortion. However, errors that are a cubic function of the height still remain.
To reduce the errors, a third-order equation can be applied to the height model, and Fig. 7 shows that the modeling errors are reduced to ±0.05 mm. However, there is no further improvement in the modeling errors for higher-order polynomial functions. The figure shows that the third-order height model can resolve the tangential lens distortion as well as the radial lens distortion. However, if the third-order height model is used in the LUT method, about 5.2 million array elements of memory are necessary for a camera with 1280 × 1024 pixels because each pixel needs 4 coefficients for the modeling equation. Thus, too much memory is used in the LUT method. Figure 8 shows the modeling errors when the fractional Eq. (13) is used. Although the fractional equation has 3 coefficients, the errors are closer to those of Fig. 7 with 4 coefficients than to those of Fig. 6 with 3 coefficients. Because the fractional equation is more effective for the height model, the memory needed in the LUT method can be reduced by 1.3 million array elements.
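A minimal sketch of the per-pixel LUT construction described above: for each pixel, a third-order polynomial z(ϕ) is fitted from the K reference heights, giving 4 coefficients per pixel. The array shapes and function names are illustrative assumptions; a real system would feed in the calibrated phase maps recorded with the z stage.

```python
import numpy as np

def build_lut(phase_maps, heights, order=3):
    """phase_maps: (K, H, W) unwrapped phase at K known reference heights.
    heights: (K,) the known z values.
    Returns an (order+1, H, W) array of per-pixel polynomial coefficients,
    highest power first (np.polyval convention)."""
    K, H, W = phase_maps.shape
    flat = phase_maps.reshape(K, H * W)
    coeffs = np.empty((order + 1, H * W))
    for i in range(H * W):                      # one small fit per pixel
        coeffs[:, i] = np.polyfit(flat[:, i], heights, order)
    return coeffs.reshape(order + 1, H, W)

def lut_height(coeffs, u, v, phi):
    """Evaluate the stored per-pixel model at pixel (u, v)."""
    return np.polyval(coeffs[:, v, u], phi)
```

For a 1280 × 1024 camera this stores 4 × 1280 × 1024 ≈ 5.2 million coefficients, which is exactly the memory cost discussed above.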
Results and discussion
Improved LSM using LUT
As shown in Figs. 7 and 8, the height model using the LUT method is superior to the LSM. However, the LSM model can be expressed as one equation, while the LUT model consists of a huge number of equations. Therefore, it is necessary to combine the phase-to-height models obtained from the LUT method into one equation including the camera coordinates. To extend Eq. (13) for each pixel to an equation covering all pixels, like Eq. (12), it is necessary to represent all the coefficients in the LUT as functions of the u and v coordinates. If the y'-axis of camera imaging plane O'x'y' is adjusted to be parallel to the y"-axis of projection plane O"x"y", the camera image can be captured as shown in Fig. 9. The image shows that the fringe pattern is generally parallel to the vertical axis of the camera but is slightly bent at the edges by lens distortion. Because each period of the fringe pattern corresponds to a phase difference of 2π, the phase value changes greatly along the horizontal axis while it hardly changes along the vertical axis. To combine the coefficients of the LUT, they must increase or decrease monotonically. Figure 10 shows the variation of the coefficients along the horizontal center axis when the model is obtained by the third-order equation with four coefficients. The constant and first-order terms among the four coefficients increase or decrease monotonically, while the second- and third-order terms do not. Because the coefficients include rapidly changing terms, it is nearly impossible to express them as polynomial functions of the u value.
To remove the rapidly changing terms, the number of coefficients in the phase-to-height model must be reduced. However, because the modeling errors for the second-order polynomial function are similar to those of the LSM, there is no benefit to using the LUT method. On the other hand, when a fractional equation such as Eq. (14) is used, the modeling errors are similar to those of the third-order model, as shown in Figs. 7 and 8, even though the number of coefficients is reduced:

z_p = H + b/(a + ϕ)    (14)
where H = c_1/d_1, a = d_0/d_1, and b = c_0/d_1 − (c_1/d_1)(d_0/d_1). Figure 11 shows the variation of the three coefficients along the horizontal center axis when the LUT is obtained with the fractional model of Eq. (14). Because a, b, and H show no rapid changes, unlike the cases of Fig. 10, the LUT values at v = 500 can be combined into a polynomial function of the u value.
Figure 12 shows the modeling errors obtained using the combined functions instead of the LUT for a, b, and H. Even though each coefficient (a, b, and H) is expressed by a fifth-order polynomial function, the modeling errors are very large, worse than those of the LSM shown in Fig. 4. To find the cause of the modeling errors, we investigate the sensitivity of the parameters a, b, and H with respect to the height. The sensitivity is characterized by ∂z_p/∂H = 1, ∂z_p/∂a = −b/(a + ϕ)^2, and ∂z_p/∂b = 1/(a + ϕ). The sensitivity ∂z_p/∂H = 1 means that the error of H directly affects the height z_p, while the errors of a and b are attenuated by factors of b/(a + ϕ)^2 and 1/(a + ϕ), respectively. Therefore, it is very important to reduce the modeling error of H to measure the height precisely. The coefficient H shown in Fig. 11 is nearly constant, but there are many fluctuations upon closer examination. It is very difficult to represent these fluctuations as a fifth-order polynomial function. In geometric analysis, H is the height from the reference plane to the lens focus of the projector [15]. If lens distortion is ignored, H is constant regardless of the camera coordinates, as shown in Eq. (3), but it may change if lens distortion is considered. Because H is nearly constant and varies only slightly with lens distortion, Eq. (14) can be modified into an equation with a constant H. Using the average value of H, the LUT for the remaining coefficients a and b is calculated at each pixel. The coefficients a and b are then expressed by the following polynomial functions of the horizontal value u:

a = a_0 + a_1 u + a_2 u^2 + ⋯ + a_n u^n    (15)
b = b_0 + b_1 u + b_2 u^2 + ⋯ + b_n u^n    (16)
where a_0 to a_n and b_0 to b_n are the coefficients of the polynomial functions of a and b, respectively. Figure 13 shows the modeling errors when the fifth-order function is used for the coefficients a and b, and the errors are very similar to those of the LUT in Fig. 8. Compared with the LSM, the error improvement is more than twofold. Thus, 2,560 (2 × 1280) array elements of memory in the LUT are reduced to 12 coefficients, a_0 to a_5 and b_0 to b_5, at the horizontal line of v = 500.
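The per-pixel fit of a and b with H held at its average can also be linearized; this sketch assumes the reference samples (ϕ_k, z_k) for one pixel are given as arrays.

```python
import numpy as np

def fit_ab(phi, z, H):
    """Fit z = H + b / (a + phi) for one pixel, with H fixed.
    Rearranging (z - H)(a + phi) = b gives (z - H)*a - b = -(z - H)*phi,
    which is linear in the unknowns (a, b)."""
    w = z - H
    A = np.column_stack([w, -np.ones_like(w)])
    rhs = -w * phi
    (a, b), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b
```

Repeating this fit at every pixel of one horizontal line yields the a(u) and b(u) samples that the quintic polynomials of Eqs. (15) and (16) are then fitted to.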
Next, it is necessary to obtain the quintic models of the coefficients a and b for the different horizontal lines. However, we can expect the models to be slightly different because the phase values along the vertical axis are influenced by lens distortion, as shown in Fig. 9. Table 1 shows the coefficients of the quintic modeling functions for five different vertical values (v = 100, 300, 500, 700, 900). As expected, most of the coefficients are similar to each other, as shown in the table. The coefficients of the constant and first-order terms are very similar regardless of the vertical values. Therefore, the 12 coefficients a_0 to a_5 and b_0 to b_5 can also be presented as polynomial functions of the vertical value v. When each coefficient in Table 1 is presented as a third-order polynomial function of v, the 12 equations are as follows:
a_i = d_0 + d_1 v + d_2 v^2 + d_3 v^3    (17)
b_i = e_0 + e_1 v + e_2 v^2 + e_3 v^3, for i = 0 to 5    (18)

where the coefficients d_0 to d_3 and e_0 to e_3 differ for each of a_0 to a_5 and b_0 to b_5.
The coefficients d_0 to d_3 and e_0 to e_3 are calculated by the LSM using the values of Table 1. Table 2 shows the coefficients presented in Eqs. (17) and (18) when the values for a_0 to a_5 and b_0 to b_5 are presented as third-order polynomial functions of v. Finally, the phase-to-height relationship including the camera coordinates is as follows:

z_p = H + b(u, v)/(a(u, v) + ϕ)    (19)

where a(u, v) and b(u, v) are the quintic polynomials of u in Eqs. (15) and (16) with coefficients given by Eqs. (17) and (18).
Eq. (19) is based on the LUT method and includes the camera coordinates in the phase-to-height relationship, like Eq. (12) of the LSM. Because the equation is obtained using only five different horizontal lines, the modeling errors are not guaranteed for the other horizontal lines. To check them, the modeling errors are examined for the line of v = 400. Figure 14 shows the modeling results obtained using Eq. (19) for the line of v = 400. Although this line was not used to derive Eq. (19), the modeling errors are similar to the results for v = 500 used in the training, as shown in Fig. 13.
As another example, Fig. 15 shows the modeling results for the line of v = 800. The modeling errors are also similar to the results of Figs. 13 and 14. Most of the modeling errors from Eq. (19) are within ±0.05 mm, except for some of the boundary area. Moreover, the amount of memory for the LUT is dramatically reduced from 2.6 million (2 × 1280 × 1024) elements to 48 (2 × 4 × 6), as shown in Table 2. Compared with the errors of Fig. 4 obtained by the LSM, the modeling error improvements are still more than twofold, even though the number of coefficients is slightly increased. Therefore, Eq. (19) accounts for both the tangential and the radial lens distortion, like the LUT method.
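Assuming the nested-polynomial structure described above (a quintic in u whose coefficients are cubics in v), evaluating the combined model needs only the 48 stored numbers. The names Da and Db are hypothetical stand-ins for the two coefficient tables of Table 2.

```python
import numpy as np

def eval_height(u, v, phi, H, Da, Db):
    """Da, Db: (6, 4) tables; row i holds the cubic-in-v coefficients
    (highest power first) of the i-th quintic-in-u coefficient of a and b."""
    ai = np.array([np.polyval(Da[i], v) for i in range(6)])  # a_0..a_5 at this v
    bi = np.array([np.polyval(Db[i], v) for i in range(6)])
    a = np.polyval(ai[::-1], u)   # a(u) = a_0 + a_1*u + ... + a_5*u^5
    b = np.polyval(bi[::-1], u)
    return H + b / (a + phi)
```

Because the 48 coefficients replace a per-pixel table, the same function serves every pixel; only (u, v, ϕ) change from point to point.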
In summary, the proposed method obtains a phase-to-height relationship that includes the image coordinates. First, the phase-to-height relationship for each pixel is obtained using the LUT method. Next, the relationships are combined into one equation for the pixels on the same horizontal line. The equation then includes the u-coordinate and the phase for every horizontal line. Finally, the equations for the horizontal lines are combined into Eq. (19), which includes the u- and v-coordinates. The proposed method is a fusion of the LSM and LUT methods because it has one mapping function but the same accuracy as the LUT method for 3D measurement.
Conclusions
The phase-to-height relationship in a 3D measurement system is generally obtained by the LSM or the LUT method; these were compared and combined into an improved LSM based on the LUT method. The LSM can express the phase-to-height relationship as one equation, but it is difficult to fully consider lens distortion. The LSM has two different models, one that neglects lens distortion and another that considers radial distortion. The latter has about ten times lower modeling error.
The LUT method eliminates the effect of the distortion because a separate model is obtained at each pixel, but too much memory is needed. The LUT method shows better performance than the LSM. This study combined the methods into one that is based on the LUT method but expressed as one equation for the phase-to-height relationship. The proposed equation is expressed as one fractional equation like the LSM, but the modeling result is better because it considers tangential lens distortion as well as radial lens distortion.
Abbreviations
3D: Three-dimensional
CCD: Charge-coupled device
DLP: Digital light processing
FPP: Fringe pattern projection
LSM: Least-squares method
LUT: Look-up table
References
1. Chen, F, Brown, GM, Song, M: Overview of three-dimensional shape measurement using optical methods. Opt. Eng. 39, 10–22 (2000)
2. Hu, Y, Xi, J, Li, E, Chicharo, J, Yang, Z: Three-dimensional profilometry based on shift estimation of projected fringe patterns. Appl. Optics 45, 678–687 (2006)
3. Takeda, M, Mutoh, K: Fourier transform profilometry for the automatic measurement of 3D object shapes. Appl. Optics 22, 3977–3982 (1983)
4. Rajoub, B, Lalor, M, Burtom, D, Karout, S: A new model for measuring object shape using non-collimated fringe-pattern projections. J. Optics A: Pure Appl. Optics 9, S66–S75 (2007)
5. Maurel, A, Cobelli, P, Pagneux, V, Petitjeans, P: Experimental and theoretical inspection of the phase-to-height relation in Fourier transform profilometry. Appl. Optics 48, 380–392 (2009)
6. Sansoni, G, Carocci, M, Rodella, R: 3D vision based on the combination of gray code and phase shift light projection. Appl. Optics 38, 6565–6573 (1999)
7. Hu, Q, Huang, P, Fu, Q, Chiang, F: Calibration of a three-dimensional shape measurement system. Opt. Eng. 42, 487–493 (2003)
8. Du, H, Wang, Z: Three-dimensional shape measurement with an arbitrarily arranged fringe projection profilometry system. Opt. Letters 32, 2438–2440 (2007)
9. Da, F, Gai, S: Flexible three-dimensional technique based on a digital light processing projector. Appl. Optics 47, 377–385 (2008)
10. Huang, L, Chua, P, Asundi, A: Least-squares calibration method for fringe projection profilometry considering camera lens distortion. Appl. Optics 49, 1539–1548 (2010)
11. Liu, H, Su, W, Reichard, K, Yin, S: Calibration-based phase-shifting projected fringe profilometry for accurate absolute 3D surface profile measurement. Optics Commun. 216, 65–80 (2003)
12. Guo, H, He, H, Yu, Y, Chen, M: Least-squares calibration method for fringe projection profilometry. Opt. Eng. 44, 033603 (2005)
13. Jia, P, Kofman, J, English, C: Comparison of linear and nonlinear calibration methods for phase-measuring profilometry. Opt. Eng. 46, 043601 (2007)
14. Li, W, Fang, S, Duan, S: 3D shape measurement based on structured light projection applying polynomial interpolation technique. Optik 124, 20–27 (2013)
15. Chung, B, Park, Y: Hybrid method for phase-to-height relationship in 3D shape measurement using fringe pattern projection. Int. J. Precision Eng. Manuf. 15, 407–413 (2014)
Acknowledgements
This work was supported by the 2015 Yeungnam University Research Grant.
Competing interests
The author declares that he has no competing interests.
Ethics approval and consent to participate
Not applicable.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Keywords
 Shape measurement
 Fringe pattern projection
 Phase-height relationship
 Lens distortion