Simple calibration method for dual-camera structured light system
Journal of the European Optical Society-Rapid Publications, volume 14, Article number: 23 (2018)
Abstract
A dual-camera structured light system consisting of two cameras and a projector has been widely researched for three-dimensional (3D) profilometry. A vital step in these systems is 3D calibration. Existing calibration methods are time-consuming and complicated because each camera-projector pair is calibrated separately. In this paper, an improved calibration method is proposed to decrease the calibration effort by simplifying the extrinsic calibration of one camera-projector pair. It needs only two texture images to acquire the extrinsic parameters of the right camera-projector pair instead of 25 images (a texture image, 12 vertical, and 12 horizontal sinusoidal fringe patterns) and more complicated computations. A variant iterative closest point (ICP) algorithm was studied to match the 3D cloud data sets of the two camera-projector pairs and to reject outliers and invisible data automatically at each iterative step using five proposed criteria. Experimental results demonstrate that the proposed method is simple to operate and achieves higher measurement accuracy of the shape data than the existing state-of-the-art method.
Introduction
Three-dimensional (3D) shape measurement is an active topic in various applications such as reverse engineering, industrial inspection, augmented reality, and cultural heritage recording. Existing 3D measurement techniques include stereo vision [1, 2], laser scanning [3], interferometry [4], and the structured light method. Among them, structured light is a very popular 3D shape measurement technique because of its advantages of full-field acquisition, fast data processing, low cost, and high resolution [5, 6].
The simplest structured light system based on fringe projection [7] is usually composed of a camera and a projector. The projector projects a series of fringe pattern images onto the measured object surface. Viewed from a different direction, the fringe patterns are deformed by the object surface and are captured by the camera. The 3D shape of the measured object is obtained by a triangulation technique. However, the measuring range is limited to the intersection of the camera and projector fields of view: points not illuminated by the projector and/or not observed by the camera cannot be measured. Moreover, the camera resolution has its limits. In contrast, a dual-camera structured light system [8] consisting of two cameras and a projector can increase the sensing range and spatial resolution by using each camera-projector pair to measure a partial area of an object.
Before performing measurements, the dual-camera structured light system must be calibrated in 3D, which establishes the relationship between the phase map and the shape data. System calibration can be divided into the calibration of each camera-projector pair. Next, the 3D data obtained from different viewpoints must be transformed into the same coordinate system. Calibration is very important because it determines the optical and geometrical parameters of the projector and cameras. Existing calibration methods can be broadly classified into four categories [9]: geometric triangulation, polynomial, inverse-camera, and pseudo-camera methods.
A triangle is formed by the imaging center of the camera, the projection center of the projector, and the measured object surface. Triangulation methods [10,11,12] attempt to establish a mathematical description of height in terms of phase and the parameters of the system. These parameters include the distance between the imaging center of the camera and the projection center of the projector, the distance between the imaging center and a reference plane, the angle between the optical axes of the projector and the camera, and the period of the fringes. These methods are simpler than other calibration methods. However, it is difficult to keep the line joining the optical centers parallel to the reference plane. Moreover, projection angle deflection and/or projection lens distortion affects the measured results.
Polynomial calibration methods [13,14,15,16] build the relationship between phase and depth by fitting a polynomial through N pairs of known phases and depths for every pixel. Usually, a plate with discrete markers of known separation is positioned successively at different distances from the camera. A marked point on the first calibration plate is used as the origin of the world reference system; the following calibration plates are then chosen parallel to the first one. In this method, their displacements with respect to the first plane must be known. To obtain high accuracy, more plate positions are used to cover the measurement volume, and the order of the polynomial should be more than five, which means that there are many coefficients to calculate. The main drawback of this calibration method comes from its practical limitations, such as the plate position restriction, the difficulty of calibrating a large measurement volume, and the long running time of processing and capturing the fringe pattern image data.
The goal of the inverse camera method [17, 18] is to obtain the intrinsic and extrinsic parameters of the camera and projector of a 3D structured light system. Both captured and projected images are generally described by a standard pinhole camera model with intrinsic parameters and extrinsic parameters from a world coordinate system to a camera or projector coordinate system. The key point of this calibration method is to treat the projector as an inverse camera (mapping intensities of a 2D image into 3D rays), so that the calibration of the projector is the same as that of a camera. Usually, this calibration method consists of the following five steps: (1) calibrating the intrinsic parameters of the camera, usually using Zhang’s method [19]; (2) recovering the calibration plane in the camera coordinate system; (3) projecting a checkerboard image onto a calibration board and detecting the corners of the captured checkerboard image; (4) applying ray-plane intersection to recover the 3D position of each projected corner; and (5) calibrating the projector using the correspondences between the 2D points of the projected image and the 3D projected points. The advantages of this method are that it is simple and fast. However, its disadvantage is the coupling of calibration errors between the camera and projector: the projector calibration results depend on the camera calibration results.
To overcome the error coupling, a pseudo-camera method [20,21,22,23] was proposed that treats the projector as the pseudo camera of a digital micromirror device (DMD) image. This method needs to establish the correspondence between camera pixels and projector pixels using the absolute phase. The advantage of this method is that the camera and projector are calibrated simultaneously, so the coupling of calibration errors is avoided. The accuracy of the correspondence is one of the key factors that influence the calibration accuracy. In Zhang and Yau’s method [23], a checkerboard is used to calibrate the dual-camera structured light system. However, corner detection on a checkerboard is sensitive to illumination conditions, leading to low accuracy and reliability.
To strike a balance between calibration accuracy and time complexity, an improved calibration method is proposed to decrease the complexity of the calibration procedure by simplifying the extrinsic calibration of the structured light system. A white plate with a matrix of hollow black ring markers is used to calibrate the dual-camera structured light system. The system calibration process can be divided into three steps. 1) Calibrating the right camera and the structured light system with the left camera by establishing corresponding point pairs between the projector pixel coordinates and the left camera pixel coordinates of discrete markers on a plate surface. The corresponding projector pixel coordinate of each marker is determined by measuring the absolute phase from vertical and horizontal sinusoidal fringe patterns projected onto the plate surface. 2) Computing the transformation between the left camera and the right camera using the intrinsic parameters of the two cameras, the coordinates of each marker center in the two camera images, and the world coordinates of each marker center. 3) Calculating the extrinsic parameters of the structured light system with the right camera using the previously obtained parameters. The 3D cloud data sets for the two camera-projector pairs obtained by the calibrated system are matched based on a variant ICP (iterative closest point) algorithm [24]. We simplified the system calibration and achieved high measurement accuracy by using the variant ICP algorithm. The rest of this paper is organized as follows. The principle and details of the proposed calibration method are described in Section “Theories”. Experimental results are presented in Section “Experiments and Results”. Section “Discussions” presents the conclusion and remarks about future work.
Theories
A dual-camera structured light system includes right and left camera-projector pairs, as illustrated in Fig. 1. A classical structured light system is composed of a single projector and a single camera. The main drawback of this kind of system is the occlusion caused by the camera, as illustrated in Fig. 1. In areas A and B of the measured object, the projector is able to project the structured pattern onto the surface of the measured object. However, the right camera and the left camera cannot observe areas A and B, respectively, because of the crossed optical axes of the projector and the cameras. Therefore, a structured light system with a single projector and dual cameras has been developed to measure object surfaces.
This system contains two subsystems: one with the left camera and another with the right camera, called the left pair and right pair, respectively. Figure 1 illustrates how the system with the right camera cannot obtain area A of the object and the system with the left camera cannot obtain area B of the object. A projector projects the generated fringe pattern images onto the measured object’s surface. The fringe patterns are deformed with respect to the object surface and captured by two CCD cameras from different views. The absolute phase of each pixel can be calculated from the captured fringe patterns. Two point cloud images of the measured objects are obtained from the absolute phase data after the system is calibrated. The two point cloud images from the different views can then be transformed into the same coordinate system.
Processing procedure of the proposed system calibration method
To calibrate the proposed system, the following three steps are applied.

1) Calibrating the intrinsic parameters of the proposed system. This step calibrates the intrinsic parameters of the two cameras and the projector. The projector is calibrated by establishing corresponding point pairs between the projector pixel coordinates and the left camera pixel coordinates of the discrete markers on a plate surface. The same calibration plate is used to calibrate both cameras.

2) Computing the relationship between the two cameras using the intrinsic parameters of the two cameras, the coordinates of each marker center in the two camera images, and the world coordinates of each marker center. The transformation between the left camera and the projector is computed by establishing the relationships between the world coordinate system and the projector coordinate system as well as between the same world coordinate system and the left camera coordinate system.

3) Calculating the relationship between the right camera and the projector using the obtained parameters. Three-dimensional cloud data sets for the two camera-projector pairs obtained by the calibrated system are matched based on the variant ICP algorithm.
Intrinsic parameter calibration
The intrinsic parameters of the projector and the two cameras must be calibrated before calculating the extrinsic parameters of the system. The intrinsic parameter calibration method [20] has been applied because the coupling of calibration errors can be avoided. A projector is treated as a pseudo-camera to be calibrated. To calibrate the projector, a vital step is to establish the correspondence between the projector pixels and camera pixels and convert a CCD image to its corresponding DMD image.
A calibration plate with discrete markers on its surface is placed randomly in the measuring volume. At each position, 12 vertical and 12 horizontal sinusoidal fringe patterns, along with white illumination, are projected onto the plate surface. A CCD camera captures the fringe pattern images and a texture image under white illumination. After an absolute phase map is obtained by a standard four-step phase-shifting algorithm in conjunction with the optimum three-fringe selection method [25], a unique point-to-line mapping between the CCD and DMD pixels is established. Let φ_{v} and φ_{h} denote the vertical and horizontal absolute phases of pixel P on the CCD image, as illustrated in Fig. 2. If the vertical fringe patterns are applied, the line that corresponds to φ_{v} is a vertical line; if the horizontal fringe patterns are applied, the line that corresponds to φ_{h} is a horizontal line. If both are applied, the intersection of the two lines on the DMD image is the pixel P′ corresponding to the point on the CCD image. A one-to-one mapping between a CCD image and DMD image can thus be determined.
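For illustration, the four-step phase-shifting step referenced above can be sketched as follows (a minimal numpy sketch; the function name and the 0, π/2, π, 3π/2 shift convention are assumptions rather than details taken from [25]):

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Four-step phase shifting: for intensities
    I_k = A + B*cos(phi + k*pi/2), k = 0..3, the wrapped phase is
    recovered with a four-quadrant arctangent."""
    return np.arctan2(I4 - I2, I1 - I3)
```

The wrapped phase is then unwrapped into an absolute phase map, e.g. with the optimum three-fringe number selection method cited in the text.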
Figure 3 illustrates how the circle centers of the DMD image are generated. The subpixel coordinates of each circle center on the calibration plate are accurately positioned according to an ellipse-fitting algorithm [26, 27] after extracting the inner and outer edges of each circle from the captured texture image. The absolute phase of the extracted circle center in the CCD image can be calculated by linear interpolation along the vertical and horizontal directions, denoted φ_{m} and φ_{n}, respectively. The optimum three-fringe number selection method is used to obtain the absolute phase data φ_{m} and φ_{n}, whose range is related to the captured fringe number. The pixel coordinates of the corresponding point (m, n) in the DMD image can be computed as follows:
where M and N are the width and height of the projected fringe patterns, and T is the number of projected fringes used for phase calculation. All pixel coordinates of the corresponding points of all circle centers can be estimated for all plate positions. Then, with the DMD pixel coordinates of the circle centers and the corresponding world coordinates known, the intrinsic parameters of the projector are determined using the MATLAB Camera Calibration Toolbox [28]. In addition, nine texture images of the calibration plate are used to extract circle centers to calibrate the intrinsic parameters of the two cameras using Heikkila’s calibration method [29].
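Since Eqs. (1) and (2) are not reproduced here, the following sketch assumes the common convention that T fringes of absolute phase span the full pattern width (and height), i.e. φ ∈ [0, 2πT]; the paper’s exact equations may differ by an offset or sign:

```python
import numpy as np

def dmd_pixel(phi_m, phi_n, M=1024, N=768, T=100):
    """Map the interpolated absolute phases (phi_m, phi_n) of a circle
    center to DMD pixel coordinates (m, n), assuming T fringes span the
    full M x N pattern so that phase grows linearly across it."""
    m = phi_m * M / (2 * np.pi * T)
    n = phi_n * N / (2 * np.pi * T)
    return m, n
```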
Proposed system calibration method
Separately calibrating each camera-projector pair is usually time-consuming and complicated. Therefore, a novel calibration method is proposed to simplify the extrinsic calibration of the right camera-projector pair. Figure 4 shows the coordinate systems of the dual-camera structured light system. The world coordinate system (O_{W}; X_{W}, Y_{W}, Z_{W}) is established with the x and y axes on the plane and the z axis perpendicular to the plane and pointing toward the system. The projector coordinate system (O_{P}; X_{P}, Y_{P}, Z_{P}), left camera coordinate system (O_{L}; X_{L}, Y_{L}, Z_{L}), and right camera coordinate system (O_{R}; X_{R}, Y_{R}, Z_{R}) are built with their x and y axes parallel to the image plane and their z axes perpendicular to the image plane, pointing forward from the optical center. The dot-dashed lines denote the left camera axis, the projector axis, and the right camera axis.
Assume a spatial point M is located at X_{l} = (x_{l}, y_{l}, z_{l})^{T} in the left camera coordinate system, X_{r} = (x_{r}, y_{r}, z_{r})^{T} in the right camera coordinate system, and X_{p} = (x_{p}, y_{p}, z_{p})^{T} in the projector coordinate system. Here, []^{T} denotes transposition. The relationship between X_{l}, X_{p}, and X_{r} is described as follows:
where R_{L} represents the 3 × 3 rotation matrix between the left camera coordinate system and projector coordinate system, and T_{L} represents a 3 × 1 translation vector. Further, R_{0} represents the 3 × 3 rotation matrix between the left camera coordinate system and right camera coordinate system, and T_{0} represents the 3 × 1 translation vector. According to Eqs. (3) and (4), R_{R} and T_{R} between the right camera coordinate system and the projector coordinate system can be derived from the following equation:
whereR_{R} = R_{L}R_{0}
The system calibration procedure has three steps: (1) computing the extrinsic parameters (R_{L}, T_{L}) of the left pair; (2) computing the relationship (R_{0}, T_{0}) between the left camera coordinate system and the right camera coordinate system; and (3) determining the extrinsic parameters of the right pair. Parameters R_{L}, T_{L}, R_{0}, and T_{0} are already known from the previous calibration steps; hence, the extrinsic parameters (R_{R}, T_{R}) of the right pair are determined.
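Steps (1)-(3) amount to composing rigid transforms. Assuming Eqs. (3) and (4) have the forms X_{p} = R_{L}X_{l} + T_{L} and X_{l} = R_{0}X_{r} + T_{0}, the right-pair extrinsics follow by substitution; a numpy sketch (not the paper’s code):

```python
import numpy as np

def right_pair_extrinsics(R_L, T_L, R_0, T_0):
    """Chain X_p = R_L X_l + T_L with X_l = R_0 X_r + T_0 to obtain
    X_p = R_R X_r + T_R, i.e. the extrinsic parameters of the right
    camera-projector pair (Eq. (5))."""
    R_R = R_L @ R_0
    T_R = R_L @ T_0 + T_L
    return R_R, T_R
```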
Calibration of the left pair
The extrinsic parameters of the left pair are calibrated using a unique world coordinate system for the two cameras and the projector. The intrinsic parameters of the projector and left camera have already been calibrated using a calibration plate with discrete markers on the surface, as described in Section 2B. The extrinsic parameters are computed using the same coordinate position of the calibration plate to guarantee they are in the same world coordinate system.
The left pair is calibrated by estimating the relationship between the projector coordinate system and world coordinate system as well as between the left camera coordinate system and the same world coordinate system. Here, X_{M} = (x_{w}, y_{w}, z_{w})^{T} denotes the 3D coordinates for point M in the world coordinate system. These relationships can be described as:
where R_{1} represents the 3 × 3 rotation matrix between the world coordinate system and the projector coordinate system, and T_{1} represents the 3 × 1 translation vector. In addition, R_{2} represents the 3 × 3 rotation matrix between the same world coordinate system and the left camera coordinate system, and T_{2} represents the 3 × 1 translation vector. The plate, positioned at different poses, is captured by the two cameras. A total of nine images are used to obtain the intrinsic parameters of the dual-camera structured light system. The extrinsic parameters can be obtained by the same procedure as that used to estimate the intrinsic parameters of the two cameras; the only difference is that only one calibration image is needed to obtain the extrinsic parameters. We choose the plate image whose pose is nearly perpendicular to the DMD imaging plane to calculate R_{1}, T_{1}, R_{2}, and T_{2}. The extrinsic parameters of the left pair, denoted (R_{L}, T_{L}), can be deduced from Eqs. (6) and (7),
where R_{L} = R_{1}R_{2}^{T} and T_{L} = T_{1} − R_{L}T_{2}.
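Assuming Eqs. (6) and (7) have the forms X_{p} = R_{1}X_{M} + T_{1} and X_{l} = R_{2}X_{M} + T_{2}, eliminating the world point X_{M} gives the left-pair extrinsics; a numpy sketch of this composition:

```python
import numpy as np

def left_pair_extrinsics(R_1, T_1, R_2, T_2):
    """Eliminate the world frame from X_p = R_1 X_M + T_1 and
    X_l = R_2 X_M + T_2 to obtain X_p = R_L X_l + T_L."""
    R_L = R_1 @ R_2.T          # rotation matrices: inverse = transpose
    T_L = T_1 - R_L @ T_2
    return R_L, T_L
```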
Calibration of the relationship between the left camera and the right camera
To obtain the relationship between the left camera and right camera, two images of the calibration plate are used. The circle centers of the two images are extracted. When the intrinsic parameters of both cameras, the pixel coordinates of all circle centers in both camera images, and the world coordinates of all circle centers are known, the relationship (R_{0}, T_{0}) between the left camera and right camera can be determined using the above method.
Calibration of the right pair
After obtaining R_{L}, T_{L}, R_{0}, and T_{0} in a previous calibration process, R_{R} and T_{R} can be calculated using Eq. (5). Three-dimensional cloud data sets for the two pairs are obtained using the calibrated intrinsic and extrinsic parameters of the system [20]. Assuming the intrinsic parameters of the system are unchanged, this calibration method can simplify the extrinsic calibration of the right pair. It needs only two texture images instead of 25 images (a texture image, 12 vertical, and 12 horizontal sinusoidal fringe patterns) and more complicated computations.
Data registration
Ideally, the data from each camera-projector pair should be in the same coordinate system, so for the same point the match should be automatic because the system is calibrated in a unique coordinate system. However, the real measured results are not in exactly the same coordinate system and do not match well because of calibration and/or measurement errors. Therefore, it is necessary to register the 3D data sets of the two pairs. Because the initial registration of the two data sets is already known, the registration only needs to refine the match. We improve the registration by minimizing the sum of the squared distances between matching points using a variant of the ICP algorithm. We introduce a method that uses multiple attributes of the sample points to find the true corresponding points. Figure 5 shows all incorrect cases that can occur during the iteration procedure. Each point denotes an iteration point that may lead to a wrong alignment. The points lie in a three-dimensional coordinate system, but to make the description more comprehensible, all incorrect cases are illustrated in a two-dimensional coordinate system. The following criteria must be satisfied, where p_{1,i} is a point in one data set and p_{2,i} is a point in the other data set.

1. Point p_{1,i} and point p_{2,i} should, in theory, be in exactly the same position, i.e., the distance between the two points should be zero. In practice, their positions differ somewhat because of various errors. We use a threshold operation for selecting candidate points in the two range images, with a threshold of twice the scanner’s resolution. Point pairs that are farther apart are discarded from the list of candidates, as shown in Fig. 5 (1).

2. Point p_{1,i} and point p_{2,i} should be visible from each other; points must not be self-occluded or occluded from the other view, as shown in Fig. 5 (2).

3. Point p_{1,i} and point p_{2,i} should not lie on an edge. Boundary points lie on the edge of a triangle mesh, and such points pull the mesh in the wrong direction, as shown in Fig. 5 (3).

4. Point p_{1,i} and point p_{2,i} should, in theory, have the same normal. In practice this is never exact, so only point pairs whose normals differ by less than 45 degrees are accepted, as shown in Fig. 5 (4).

5. When p_{1,i} is a point in the overlapping region of the views, there should be one and only one corresponding point p_{2,i} in the other view, as shown in Fig. 5 (5).
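Criteria 1 and 4 act pointwise on candidate pairs and can be sketched directly; criteria 2, 3, and 5 need mesh and view information and are omitted here. The array names and the numpy formulation are our own assumptions, not the paper’s implementation:

```python
import numpy as np

def filter_pairs(p1, n1, p2, n2, resolution, max_angle_deg=45.0):
    """Reject candidate correspondences (p1[i] <-> p2[i]) using the
    distance threshold (criterion 1: twice the scanner resolution) and
    the normal-consistency test (criterion 4: normals within 45 deg).
    p1, p2 are (k, 3) point arrays; n1, n2 are (k, 3) normal arrays."""
    d = np.linalg.norm(p1 - p2, axis=1)
    keep = d <= 2.0 * resolution
    # normalise the normals, then compare their angle via the dot product
    n1u = n1 / np.linalg.norm(n1, axis=1, keepdims=True)
    n2u = n2 / np.linalg.norm(n2, axis=1, keepdims=True)
    cos_a = np.sum(n1u * n2u, axis=1)
    keep &= cos_a >= np.cos(np.radians(max_angle_deg))
    return keep
```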
The registration procedure is described by the overall flowchart in Fig. 6, where T is a precision component of the criteria and ME denotes the mean square error of the corresponding point pairs in the two data sets.
The variant ICP has two main steps. The first finds the closest points between the two data sets using the point-to-point Euclidean distance. The second computes the transformation between the two data sets. In addition, an initial rotation matrix R_{R} and translation vector T_{R} are given before performing the registration.
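The second step is a least-squares rigid-transform fit. The paper does not state which solver it uses; the SVD-based method of Arun et al. is a standard choice and can be sketched as:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R P + t for
    corresponding (k, 3) point sets P, Q, computed via SVD of the
    cross-covariance; one standard solver for the ICP update step."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t
```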
Experiments and results
Experimental system
The developed measurement system is composed of two CCD cameras and a projector, as illustrated in Fig. 7. The projector (CP270, BenQ) has a one-chip DMD with a native resolution of 1024 × 768 pixels (XGA). The red, green, and blue colors are produced by rapidly spinning a color filter wheel in the projector and synchronously modifying the state of the DMD. The two cameras (ECO655CVGE, SVS-VISTEK, Germany) have a resolution of 2448 × 2050 pixels.
System calibration results
A white plate with 9 × 12 discrete black hollow ring markers was used to calibrate the dual-camera structured light system, as shown in Fig. 3. The calibration plate was randomly placed at nine different orientations in front of the system. At each orientation, vertical and horizontal sinusoidal fringe patterns, along with white illumination, were projected onto the plate surface. The two CCD cameras synchronously captured the fringe pattern images and the texture images under white illumination. The center positions of all markers were determined using the ellipse-fitting algorithm. The absolute phase of each marker center was calculated from the captured fringe pattern images using the four-step phase-shifting algorithm and the optimum three-fringe number selection method. The corresponding point of each marker in the DMD pixel coordinate system was obtained using Eqs. (1) and (2). The nine corresponding DMD images generated by the procedure described in Section “Theories”(B) are shown in Fig. 8. Using a set of calibration images for each camera, the intrinsic parameters of both cameras and the projector were calibrated with the traditional camera calibration method and the MATLAB toolbox, as listed in Table 1. Here f_{u} and f_{v} are the effective focal lengths of the camera along the u and v directions; u_{0} and v_{0} are the coordinates of the principal point along the u and v directions in the image coordinate system; k_{1} and k_{2} are the radial distortion coefficients; and p_{1} and p_{2} are the tangential distortion coefficients. By selecting images at the same calibration position, the extrinsic parameters could also be calibrated.
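The ellipse fit used to locate the marker centers can be sketched with a plain algebraic least-squares conic fit (the cited algorithms [26, 27] are more elaborate; this minimal version only recovers the center):

```python
import numpy as np

def ellipse_center(x, y):
    """Fit a general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to
    edge points (x, y) by least squares (null vector of the design
    matrix) and return the conic's center, where its gradient vanishes."""
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(A)
    a, b, c, d, e, _ = Vt[-1]
    cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    return cx, cy
```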
The extrinsic parameters of the left pair are
In addition, the relationship between the two cameras is
Finally, the extrinsic parameters of the right pair are
Calibration evaluation
To verify the measurement accuracy of the calibrated system, the traditional method [30, 31] is used for comparison to measure a ‘step artifact’ with a set of known geometric steps, because it is representative of binocular vision systems for 3D reconstruction. All the matched points on one step surface are fitted to a plane. The calculated distance between neighboring steps is the average distance of all the obtained points on the other step surface to the fitted plane. The actual and measured distance values between neighboring steps and the absolute error are calculated. A comparison experiment between the traditional method and the proposed method has been carried out, as listed in Table 2. The results show that the accuracy of the proposed calibration method is higher than that of the traditional method. Outliers and invisible data are rejected automatically at each iterative step using the five proposed criteria to improve the measurement accuracy, because the measurement accuracy is related to the alignment accuracy. Furthermore, fewer captured images are needed than with the traditional method, because the proposed method needs only two texture images to obtain the extrinsic calibration of one camera-projector pair instead of 25 images (a texture image, 12 vertical, and 12 horizontal sinusoidal fringe patterns). The maximum absolute error is 0.041 mm. The experimental results show that the proposed calibration method has high accuracy.
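The step-height evaluation described above (fit a plane to one step surface, then average the distances of the neighboring surface’s points to it) can be sketched as follows; the function name and array layout are our own:

```python
import numpy as np

def step_distance(upper_pts, lower_pts):
    """Fit a plane to the (k, 3) points on one step surface via SVD
    (normal = direction of least variance), then return the mean
    absolute distance of the neighboring surface's points to it."""
    c = upper_pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(upper_pts - c)
    n = Vt[-1]                       # unit normal of the fitted plane
    d = np.abs((lower_pts - c) @ n)  # point-to-plane distances
    return d.mean()
```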
Measurement results
A model house with a free-form surface was measured by the calibrated system. Twelve vertical sinusoidal fringe patterns with the optimum fringe numbers of 100, 99, and 90 were projected onto the house’s surface to calculate the absolute phase. Figure 9a and c show the texture images obtained by the left and right cameras, respectively. Three-dimensional cloud data of the model house were obtained using the improved calibration method, as shown in Fig. 9b and d. The profile of the house was not well measured by either camera alone because of camera occlusions, as illustrated in the region marked by a black ellipse. Figure 9e shows the 3D cloud data after applying the proposed method. The geometry measured by the left and right cameras can be mutually compensated to obtain a satisfactory result.
To demonstrate the convergence of the variant ICP algorithm, the root-mean-square (RMS) error at each iteration step is presented. Figure 10 shows the convergence of the variant ICP algorithm on the model house. It is clear that the ICP algorithm converges monotonically. The final registration error is 0.56 mm.
Discussions
There are three advantages of the proposed calibration method for dual-camera structured light systems. (1) Simultaneity: both camera images and the projector image, including the vertical fringe images, horizontal fringe images, and texture images, can be obtained for each calibration plate position; therefore, the intrinsic and extrinsic parameters of the system can be calibrated simultaneously. (2) Simplification: the proposed calibration method decreases the complexity of the calibration procedure by simplifying the extrinsic calibration of the right pair, and the calibration results provide an initial estimate for the ICP algorithm. (3) High accuracy: the camera calibration does not influence the projector calibration, and there is no error coupling issue because the camera and the projector are calibrated simultaneously. Moreover, a modified ICP algorithm computes the rigid transformation and correspondences and rejects outliers and invisible data automatically at each iterative step using the five proposed criteria.
In this paper, the measurement accuracy depends on the accuracy of matching the 3D cloud data sets of the two camera-projector pairs. We have achieved an almost perfect alignment; however, the variant ICP algorithm may converge to an incorrect alignment in some cases, such as an object lacking distinguishing structural features or significant changes of camera view. In future research, a globally optimal ICP algorithm should be developed to prevent the registration from being trapped in a local minimum and to enhance the measurement accuracy of the proposed system.
Conclusions
A dual-camera, one-projector structured light system has been developed to avoid the camera occlusions of a single-camera system. A calibration method for the dual-camera structured illumination profiling system is proposed that decreases the number of captured images. In this method, the left camera-projector pair is calibrated first. Then, the parameters of both cameras are determined. Subsequently, the relationship between the projector and the right camera is calculated from the results of the previous steps. Finally, the 3D results from both views are merged by the variant ICP (iterative closest point) algorithm to enlarge the measuring range, after being given an initial estimate obtained from the extrinsic calibration results. Five criteria have been proposed to reject outliers and invisible data automatically at each iterative step. Experiments show the performance of the proposed calibration method.
Abbreviations

3D: Three-dimensional

DMD: Digital micromirror device

ICP: Iterative closest point
References
 1.
Ren, Z., Cai, L.: Three-dimensional structure measurement of diamond crowns based on stereo vision. Appl. Opt. 48(31), 5917–5932 (2009)
 2.
Wang, J., Wang, X.J., Liu, F., Gong, Y., Wang, H., Qin, Z.: Modeling of binocular stereo vision for remote coordinate measurement and fast calibration. Opt. Lasers Eng. 54(1), 269–274 (2009)
 3.
Son, S., Park, H., Lee, K.H.: Automated laser scanning system for reverse engineering and inspection. Int. J. Mach. Tools Manuf. 42(8), 889–897 (2002)
 4.
Briard, P., Saengkaew, S., Wu, X., Meunier-Guttin-Cluzel, S., Chen, L., Cen, K., Grehan, G.: Droplet characteristic measurement in Fourier interferometry imaging and behavior at the rainbow angle. Appl. Opt. 52(1), A346–A355 (2013)
 5.
Zuo, C., Chen, Q., Gu, G., Feng, S., Feng, F., Li, R., Shen, G.: High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection. Opt. Lasers Eng. 51(8), 953–960 (2013)
 6.
Zuo, C., Huang, L., Zhang, M., Chen, Q., Asundi, A.: Temporal phase unwrapping algorithms for fringe projection profilometry: a comparative review. Opt. Lasers Eng. 85, 84–103 (2016)
 7.
Liu, X.L., Cai, Z.W., Yin, Y.K., Jiang, H., He, D., He, W.Q., Zhang, Z.H., Peng, X.: Calibration of fringe projection profilometry using an inaccurate 2D reference target. Opt. Lasers Eng. 89, 131–137 (2016)
 8.
Zhang, S., Yau, S.T.: Absolute phase-assisted three-dimensional data registration for a dual-camera structured light system. Appl. Opt. 47(17), 3134–3142 (2008)
 9.
Chen, R., Xu, J., Chen, H.P., Su, J.H., Zhang, Z.H., Chen, K.: Accurate calibration method for camera and projector in fringe patterns measurement system. Appl. Opt. 55(16), 4293–4300 (2016)
 10.
Jia, X., Zeng, D.: Model and error analysis for coded structured light measurement system. Opt. Eng. 49(12), 1127–1134 (2010)
 11.
Zhang, Z.H., Zhang, D., Peng, X.: Performance analysis of a 3D full-field sensor based on fringe projection. Opt. Lasers Eng. 42(3), 341–353 (2004)
 12.
Xu, J., Douet, J., Zhao, J., Song, L., Chen, K.: A simple calibration method for structured light-based 3D profile measurement. Opt. Laser Technol. 48(2), 187–193 (2013)
 13.
Leandry, I., Breque, C., Valle, V.: Calibration of a structured-light projection system: development to large dimension objects. Opt. Lasers Eng. 50(3), 373–379 (2012)
 14.
Anchini, R., Leo, G.D., Liguori, C., Paolillo, A.: A new calibration procedure for 3D shape measurement system based on phase-shifting projected fringe profilometry. IEEE Trans. Instrum. Meas. 58(5), 1291–1298 (2009)
 15.
Zhang, Z.H., Ma, H.Y., Guo, T., Zhang, S.X., Chen, J.P.: Simple, flexible calibration of phase calculation-based three-dimensional imaging system. Opt. Lett. 36(7), 1257–1259 (2011)
 16.
Zhang, Z.H., Huang, S.J., Meng, S.S., Gao, F., Jiang, X.Q.: A simple, flexible and automatic 3D calibration method for a phase calculation-based fringe projection imaging system. Opt. Express 21(10), 12218–12227 (2013)
 17.
Gao, W., Wang, L., Hu, Z.: Flexible method for structured light system calibration. Opt. Eng. 47(8), 767–781 (2008)
 18.
Zhang, S., Huang, P.S.: Novel method for structured light system calibration. Opt. Eng. 45(8), 083601 (2006)
 19.
Zhang, Z.Y.: A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000)
 20.
Li, Z.W., Shi, Y.S., Wang, C.J., Wang, Y.Y.: Accurate calibration method for structured light system. Opt. Eng. 47(5), 053604 (2008)
 21.
Chen, X., Xi, J., Jin, Y., Sun, J.: Accurate calibration for camera projector measurement system based on structured light projection. Opt. Lasers Eng. 47(3–4), 310–319 (2009)
 22.
Chen, R., Xu, J., Zhang, S., Chen, H.P., Guan, Y., Chen, K.: A self-recalibration method based on scale-invariant registration for structured light measurement systems. Opt. Lasers Eng. 88, 75–81 (2017)
 23.
Zhang, S., Yau, S.T.: Three-dimensional shape measurement using a structured light system with dual cameras. Opt. Eng. 47(1), 013604 (2008)
 24.
Besl, P.J., McKay, N.D.: A method for registration of 3D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992)
 25.
Zhang, Z.Y., Towers, C.E., Towers, D.P.: Efficient color fringe projection system for 3D shape and color using optimum 3-frequency interferometry. Opt. Express 14(14), 6444–6455 (2006)
 26.
Fitzgibbon, A., Pilu, M., Fisher, R.B.: Direct least square fitting of ellipses. IEEE Trans. Pattern Anal. Mach. Intell. 21(5), 476–480 (1999)
 27.
He, D., Liu, X.L., Peng, X., Ding, Y.B., Gao, B.Z.: Eccentricity error identification and compensation for high-accuracy 3D optical measurement. Meas. Sci. Technol. 24(7), 075402 (2013)
 28.
Bouguet, J.Y.: Camera Calibration Toolbox for Matlab. http://www.vision.caltech.edu/bouguetj/calib_doc/
 29.
Heikkila, J.: Geometric camera calibration using circular control points. IEEE Trans. Pattern Anal. Mach. Intell. 22(10), 1066–1077 (2000)
 30.
Wu, Q., Zhang, B., Huang, L., Wu, Z., Zeng, Z.: Flexible 3D reconstruction method based on phase-matching in multi-sensor system. Opt. Express 24(7), 7299–7318 (2016)
 31.
Cheng, Y., Wang, X., Collins, R., Riseman, E., Hanson, A.: Three-dimensional reconstruction of points and lines with unknown correspondence across images. Int. J. Comput. Vis. 45(2), 129–156 (2001)
Acknowledgements
Not applicable.
Funding
This work was supported by the National Key R&D Program of China (2017YFF0106404); the National Natural Science Foundation of China (51675160); the Talents Project Training Funds of Hebei Province (A201500503); the Innovative and Entrepreneurial Talent Project of Jiangsu Province (2016A377); the Joint Doctoral Training Foundation of HEBUT (2017GN0002); the European Horizon 2020 Marie Sklodowska-Curie Individual Fellowship Scheme (707466-3DRM); and the UK's Engineering and Physical Sciences Research Council (EPSRC) funding of the Future Advanced Metrology Hub (EP/P006930/1).
Availability of data and materials
The data supporting the conclusions of this article are included within the article.
Author information
Affiliations
Contributions
CC and NG conceived and designed the experiment. CC performed the experiments and analyzed the data under the guidance of NG and ZZ. CC and ZZ wrote the paper. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Authors’ information
Chao Chen is a PhD Candidate in the School of Mechanical Engineering, Hebei University of Technology, China. His current research focuses on threedimensional optical measurement and machine vision. Email Address: chenchaohebut@hotmail.com
Nan Gao is an assistant professor in the School of Mechanical Engineering, Hebei University of Technology, China. His current research directions are optical measurement and spectrum detection. Email Address: ngao@hebut.edu.cn
Zonghua Zhang is a full Professor in the School of Mechanical Engineering, Hebei University of Technology, China, and a visiting Professor at the University of Huddersfield, UK. His research interests include three-dimensional measurement, fringe analysis and computer vision. He has published more than 130 papers, 4 book chapters, and 17 patents. Email Address: zhzhang@hebut.edu.cn, zhzhangtju@hotmail.com
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Chen, C., Gao, N. & Zhang, Z. Simple calibration method for dual-camera structured light system. J. Eur. Opt. Soc.-Rapid Publ. 14, 23 (2018). https://doi.org/10.1186/s41476-018-0091-y
Received:
Accepted:
Published:
Keywords
 Calibration
 Structured light system
 Dual-camera
 3D shape measurement