
High-speed 3D shape measurement using Fourier transform and stereo vision

Abstract

Background

In fast 3D shape measurement, it is important to use as few fringe patterns as possible to obtain the wrapped phase, and the wrapped phase normally has to be retrieved to an absolute phase. However, phase unwrapping may degrade the quality of the absolute phase, and absolute phase retrieval is time-consuming, especially in high-speed 3D shape measurement.

Methods

This paper proposes a fast measurement method based on Fourier transform profilometry and stereo vision. Owing to the properties of stereo vision, every point can find its corresponding point directly in the wrapped phase, so the method can reconstruct the 3D surface without phase unwrapping. To further increase the measurement speed and overcome the gamma effect of the projector, a dithering fringe pattern is used. To resist false matching, an original-image matching constraint provides a rough parallax, and phase matching with sub-pixel parallax optimization reduces the matching errors. To determine the phase value of points at the edge of the wrapped phase, an average phase value is calculated.

Results

A white mask is measured with the proposed method, and the results of each step show the effect of the corresponding function. To better demonstrate the method, a white house is measured and the result is compared with LSSM and MFH. To further show its advantages, discontinuous and continuous blocks are measured, and the average height, RMS, average error, and maximum error are compared with those of a method that uses the absolute phase to match corresponding points. The experimental results verify the feasibility of the proposed method and show that it can measure complex objects with high speed and accuracy.

Conclusion

This paper uses the Fourier transform and stereo vision to obtain the wrapped phase and reconstruct the 3D shape without phase unwrapping. The proposed method contains three parts: phase matching, edge-point detection, and sub-pixel parallax optimization. Comparisons of matching precision show that the proposed method yields high-quality surfaces. Because the measurement needs only one frame of deformed fringe pattern to obtain the wrapped phase, and the matching process does not need phase unwrapping, the proposed method has the potential to be used in fast measurement.

Background

High-speed 3D shape measurement has become an important technology in industrial manufacturing for tasks such as quality inspection, reverse engineering, 3D sensing, and object recognition [1]. To obtain accurate 3D shape measurement results, many techniques have been studied, such as the moiré technique (MT) [2], phase-measuring profilometry (PMP) [3,4,5,6], and Fourier transform profilometry (FTP) [7,8,9].

In general, multi-frame fringe patterns give more accurate and reliable results, but using fewer fringe patterns is crucial in high-speed measurement [10]. Among these techniques, Fourier transform profilometry needs only one frame of fringe pattern to obtain the wrapped phase, so it is well suited to high-speed measurement [11].

Despite this advantage, FTP still has shortcomings. Because FTP uses only one frame of fringe pattern, it is sensitive to the quality of that pattern [12,13,14,15]. It is also affected by the nonlinearity of the projector, so noise appears in the wrapped phase and directly degrades the measurement [16]. Moreover, gamma calibration is required, which increases the computational complexity [17], and phase error compensation is needed to remove the phase error from the reconstruction results [18]; yet there is no standard method that works across different applications. Another problem is that the wrapped phase must be unwrapped to obtain the absolute phase, and phase unwrapping both affects the accuracy of the absolute phase and increases the computing time of the measurement [19].

To balance accuracy and speed, this paper proposes a high-speed 3D shape measurement method based on Fourier transform profilometry and stereo vision. Instead of an 8-bit gray sinusoidal fringe pattern, a 1-bit dithering fringe pattern is used. Because it is a binary pattern, it avoids the problems caused by the gamma effect when combined with defocusing technology. In this paper, the fringe pattern is generated following reference [16]. Using the properties of stereo vision, the original-image matching constraint yields a rough parallax. To find the accurate corresponding point, phase matching and sub-pixel parallax optimization are used to preclude false points. Phase matching confirms the candidate points. When points lie at the edge of the wrapped phase, where the phase value is −π or π, there may be missing points or wrong corresponding points, so an average phase value is used to increase the robustness of the high-speed measurement. Sub-pixel parallax optimization then finds the true corresponding point at sub-pixel coordinates.

This paper is organized as follows. Section Methods introduces the flow and the principle of the proposed method. Section Results and discussion verifies the feasibility and accuracy of the proposed method by comparing measurement results and matching precision. Section Conclusion summarizes the paper.

Methods

The process of the proposed method

The whole process of the proposed method is shown in Fig.1. It mainly includes five steps.

Fig. 1

The flowchart of the proposed method

Step 1. Preparation before measurement. This includes the generation of the dithering pattern and stereo vision calibration. The dithering fringe pattern, which uses 1-bit binary values instead of 8-bit gray values to approximate a sinusoidal fringe, is insensitive to the projector gamma, so it supports high-speed measurement without projector calibration. Another merit of the dithering fringe pattern is that it remains suitable when a wide fringe pattern is used. In this paper, the dithering fringe pattern is generated following reference [16].
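As an illustration of Step 1, the sketch below binarizes an ideal sinusoidal fringe with classic Bayer ordered dithering. The paper's pattern is generated by the joint intensity-and-phase optimization of reference [16]; the Bayer matrix, period, and resolution here are generic stand-ins, not the authors' pattern.

```python
import numpy as np

def dither_fringe(width=800, height=600, period=32):
    """Approximate an 8-bit sinusoidal fringe with a 1-bit pattern via
    Bayer ordered dithering (a generic substitute for the optimized
    pattern of Ref. [16])."""
    # 8x8 Bayer threshold matrix, normalized to [0, 1)
    bayer = np.array([[ 0, 32,  8, 40,  2, 34, 10, 42],
                      [48, 16, 56, 24, 50, 18, 58, 26],
                      [12, 44,  4, 36, 14, 46,  6, 38],
                      [60, 28, 52, 20, 62, 30, 54, 22],
                      [ 3, 35, 11, 43,  1, 33,  9, 41],
                      [51, 19, 59, 27, 49, 17, 57, 25],
                      [15, 47,  7, 39, 13, 45,  5, 37],
                      [63, 31, 55, 23, 61, 29, 53, 21]]) / 64.0
    x = np.arange(width)
    sinus = 0.5 + 0.5 * np.cos(2 * np.pi * x / period)   # ideal fringe in [0, 1]
    gray = np.tile(sinus, (height, 1))
    thresh = np.tile(bayer, (height // 8 + 1, width // 8 + 1))[:height, :width]
    return (gray > thresh).astype(np.uint8)              # 1-bit pattern
```

When such a binary pattern is projected slightly defocused, the low-pass effect of defocusing recovers a quasi-sinusoidal intensity, which is why no gamma calibration is needed.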

Step 2. Original-image matching constraint. The original images are captured by the left and right cameras, respectively. Based on the properties of stereo vision, a rough matching can be performed, which provides a rough parallax as a constraint condition.

Step 3. Calculation of the wrapped phase. FTP is applied to the images captured by the left and right cameras, respectively. The fundamental component is extracted with a band-pass filter, and the inverse Fourier transform is used to obtain the wrapped phase.
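Step 3 can be sketched as follows, assuming numpy, vertical fringes (row-wise carrier), and a filter half-width of half the carrier frequency; the actual filter shape and width in the paper may differ.

```python
import numpy as np

def ftp_wrapped_phase(image, carrier_freq):
    """Fourier transform profilometry on one fringe image: FFT each row,
    keep a band around the fundamental (carrier) frequency only, inverse
    FFT, and take the angle to get the wrapped phase in (-pi, pi]."""
    rows, cols = image.shape
    spectrum = np.fft.fft(image, axis=1)
    # Band-pass mask around the positive fundamental component; DC and the
    # negative-frequency lobe are suppressed (half-width is illustrative).
    half_band = max(1, carrier_freq // 2)
    mask = np.zeros(cols)
    mask[carrier_freq - half_band : carrier_freq + half_band + 1] = 1.0
    analytic = np.fft.ifft(spectrum * mask, axis=1)
    return np.angle(analytic)   # wrapped phase
```

For a pure carrier of period 32 pixels over 256 columns (carrier bin 8), the recovered wrapped phase increases by 2π/32 = π/16 per pixel between wraps.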

Step 4. Wrapped phase matching. Based on the original-image matching constraint, wrapped phase matching is performed to find the candidate points. Because FTP uses only one frame of fringe pattern to obtain the phase map, phase errors may occur at the boundary of the wrapped phase, where the phase is −π or π.

Step 5. Sub-pixel parallax optimization. To locate the corresponding points precisely, sub-pixel parallax optimization is used; the optimized parallax corresponds to the target point. Once the stereo vision system is calibrated, the height of the object can be calculated.

Steps 4 and 5 form the core of the proposed method, so their principles are introduced in the following part.

The principle of the proposed method

Traditionally, the wrapped phase cannot be used to find corresponding points because it ranges from −π to π and changes periodically. Since a dithering fringe pattern is used in this system, the gamma effect of the projector can be neglected. The original-image matching constraint narrows the range of candidate points along the epipolar line; without it, false points would be considered and reconstructed, which wastes processing time [21]. Traditionally, the absolute phase is required to find the correct corresponding points. Absolute phase retrieval algorithms can be classified into spatial methods and temporal methods, but both have demerits: a spatial method cannot retrieve the phase of isolated objects, and its phase errors spread along the unwrapping direction, while a temporal method needs multiple frames, which reduces the measurement speed. The process of wrapped phase matching without any absolute phase is shown in Fig. 2. Because the absolute phase is not used, there are false corresponding points with the same phase value; these are defined as candidate points, as shown in Fig. 2a.

Fig. 2

The process of wrapped phase matching without absolute phase. (a): the true point and false points in the wrapped phase, (b): the rough parallax calculation between the images from left camera and right camera

The target of matching is to find the corresponding points in the two cameras. In stereo vision, a rough parallax can be obtained by comparing the two images captured by the left and right cameras; this rough parallax is then applied to the phase matching. Because the structured light increases the texture features, the corresponding points can be confirmed accurately. For the stereo vision system, the point (xL, yL) in the left camera corresponds to the point (xR, yR) in the right camera, as shown in Fig. 2b. The original-image matching constraint provides a rough corresponding parallax Par, which can be expressed as:

$$ Par={x}_R-{x}_L $$
(1)

The parallax is used in the wrapped phase to find the target phase. When the phase of the point (xL, yL) in the left wrapped phase is PhaseL(xL, yL), the corresponding phase in the right wrapped phase is PhaseR(xR, yR), as shown in Eq. (2):

$$ {Phase}_L\left({x}_L,{y}_L\right)={Phase}_R\left({x}_R,{y}_R\right)={Phase}_R\left({x}_L+ Par,{y}_R\right) $$
(2)

In the proposed method, the point (xR, yR) and its adjacent points are set as candidate points. Considering the computational complexity, the adjacent candidate points are shown in Fig. 3.

Fig. 3

The determination of boundary points based on the average phase. (a): the phase value of corresponding point is larger than zero, (b): the phase value of corresponding point is smaller than zero

To find the true point, the phase of the left point PhaseL(xL, yL) is compared with the phase of each candidate point PhaseR(xR + s, yR), and the absolute difference can be expressed as:

$$ \Delta Phase=\left|{Phase}_L\left({x}_L,{y}_L\right)-{Phase}_R\left({x}_R+s,{y}_R\right)\right| $$
(3)

where s is an integer and s ∈ [−2, 2], and ΔPhase is the phase difference between the points. The least difference ΔPhase(xR _ min, yR _ min) is selected, and (xR _ min, yR _ min) is the coordinate of the candidate point with the least phase difference.

As mentioned above, because the two cameras capture the deformed fringe patterns, the wrapped phase can be generated with the deformation information. Based on the principle of stereo vision, the parallax can be applied to the wrapped phase, as shown in Eq. (2). When the parallax is applied to the phase matching, a basic point is obtained. The proposed method chooses this basic matching point and its two left and right neighboring points as candidate matching points. The wrapped phase is monotonic within one period, so these candidate points have different phase values. Theoretically, the basic point and the true point should have the closest phase values, so the point with the least difference is taken as the optimal point, as shown in Eq. (3).
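The candidate search of Eq. (3) can be sketched as below, assuming numpy arrays indexed as phase[y, x]. Note that the raw absolute difference of wrapped values is used, exactly as in Eq. (3); this is why the −π/π boundary needs the separate treatment described next.

```python
import numpy as np

def best_candidate(phase_left, phase_right, x_l, y, par):
    """Phase matching of Eq. (3): among the rough match x_R = x_L + Par and
    its neighbours s in [-2, 2], pick the right-image point whose wrapped
    phase is closest to the left-image phase."""
    x_r = x_l + par
    target = phase_left[y, x_l]
    candidates = [(abs(target - phase_right[y, x_r + s]), x_r + s)
                  for s in range(-2, 3)
                  if 0 <= x_r + s < phase_right.shape[1]]
    diff, x_min = min(candidates)   # least phase difference wins
    return x_min, diff
```

With a linear phase ramp shifted by 12 pixels and a rough parallax that is off by one, the search recovers the true corresponding column.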

Once the optimal point can be obtained, the parallax Parmin can be calculated as:

$$ {Par}_{\min }={x}_{R\_\min }-{x}_L $$
(4)

Phase-value matching operates at the pixel level, so a sub-pixel parallax is still required.

Although phase matching can reject most candidate points, some challenges remain. When points lie at the edge of the wrapped phase, where the phase is −π or π, it is difficult to find the matching points precisely based on ΔPhase. To find the corresponding points accurately, the average phase value is used. As shown in Fig. 3, the middle point (xR, yR) is on the edge of the wrapped phase, and the phase PhaseR(xR, yR) must be determined. The average phase value Phaseave(xR, yR) can be described as:

$$ {Phase}_{ave}\left({x}_R,{y}_R\right)=\frac{1}{5}\sum \limits_{i=-2}^2{Phase}_R\left({x}_R+i,{y}_R\right) $$
(5)

The phase value is monotonic within one period. When the average Phaseave(xR, yR) is greater than zero, the phase is set to π; otherwise it is set to −π, which can be expressed as:

$$ Phase\left({x}_R,{y}_R\right)=\left\{\begin{array}{cc}-\pi & if\kern0.5em {Phase}_{ave}<0\\ {}\pi & if\kern0.5em {Phase}_{ave}>0\end{array}\right. $$
(6)
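Eqs. (5) and (6) amount to a five-point average followed by a sign test; a minimal sketch (assuming numpy and an in-bounds five-pixel window):

```python
import numpy as np

def snap_edge_phase(phase_row, x):
    """Eqs. (5)-(6): for a point near the -pi/pi boundary of the wrapped
    phase, average the five neighbouring phase values (x-2 .. x+2) and
    snap the point to +pi if the average is positive, else to -pi."""
    avg = phase_row[x - 2 : x + 3].mean()   # Eq. (5)
    return np.pi if avg > 0 else -np.pi     # Eq. (6)
```

At a wrap boundary the raw values jump between near +π and near −π, so the raw center value is unreliable, while the sign of the five-point average indicates which side of the boundary the point belongs to.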

In the wrapped phase, the phase value ranges from −π to π and changes periodically. Using this feature, the sub-pixel coordinate can be obtained. If the coordinate of the original-image matching point is (xR _ o, yR), the sub-pixel coordinate (xR _ sub, yR) can be obtained. Because the phase value is monotonic in every period, the sub-pixel coordinate (xR _ sub, yR) is located between the best corresponding point (xR _ min, yR _ min) and the candidate point (xR _ o, yR). The phase value of the sub-pixel point in the right wrapped phase is denoted PhaseR(xR _ sub, yR). Because PhaseR(xR _ sub, yR) represents the best matching point in the wrapped phase, it should equal the phase PhaseL(xL, yL) in the left wrapped phase, which can be expressed as:

$$ {Phase}_R\left({x}_{R\_ sub},{y}_R\right)={Phase}_L\left({x}_L,{y}_L\right) $$
(7)

As shown in Fig. 4a, when the original-image matching point (xR _ o, yR) is on the left of the best point (xR _ min, yR _ min), then xR _ o < xR _ min. Because the phase information is robust to noise from the surface texture, the phase difference and its ratio are used to find the sub-pixel coordinate. The phase difference between (xR _ sub, yR) and (xR _ o, yR) is PhaseR(xR _ sub, yR) − PhaseR(xR _ o, yR); similarly, the phase difference between (xR _ min, yR) and (xR _ o, yR) is PhaseR(xR _ min, yR) − PhaseR(xR _ o, yR). Instead of the ratio of the x-coordinate differences, the ratio of the phase differences is used. Based on this relationship, the sub-pixel coordinate can be expressed as:

$$ {x}_{R\_ sub}=\frac{Phase_R\left({x}_{R\_ sub},{y}_R\right)-{Phase}_R\left({x}_{R\_o},{y}_R\right)}{Phase_R\left({x}_{R\_\min },{y}_R\right)-{Phase}_R\left({x}_{R\_o},{y}_R\right)}\left({x}_{R\_\min }-{x}_{R\_o}\right)+{x}_{R\_o} $$
(8)
Fig. 4

The coordinate calculation of sub-pixel point. (a): the original image matching point is on the left of the best point, (b): the original image matching point and the best point have the same coordinate, (c): the original image matching point is on the right of the best point

When the point (xR _ o, yR) has the same x-coordinate as the best point (xR _ min, yR _ min), then xR _ o = xR _ min. As shown in Fig. 4b, the sub-pixel coordinate can be expressed as:

$$ {x}_{R\_ sub}={x}_{R\_\min } $$
(9)

When the original-image matching point (xR _ o, yR) is on the right of the best point (xR _ min, yR _ min), then xR _ o > xR _ min, as shown in Fig. 4c. Then the sub-pixel coordinate can be expressed as:

$$ {x}_{R\_ sub}=\frac{Phase_R\left({x}_{R\_ sub},{y}_R\right)-{Phase}_R\left({x}_{R\_\min },{y}_R\right)}{Phase_R\left({x}_{R\_o},{y}_R\right)-{Phase}_R\left({x}_{R\_\min },{y}_R\right)}\left({x}_{R\_o}-{x}_{R\_\min}\right)+{x}_{R\_\min } $$
(10)
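The three cases of Eqs. (8)-(10) combine into one small interpolation routine. In Eq. (7), PhaseR(xR _ sub, yR) equals the left-camera phase, so that value is passed in directly as `phase_l`:

```python
def subpixel_x(phase_l, phase_o, phase_min, x_o, x_min):
    """Eqs. (7)-(10): linearly interpolate the sub-pixel x-coordinate from
    the ratio of wrapped-phase differences. phase_l is the left-camera
    phase (the value the sub-pixel point must take, Eq. (7)); (x_o,
    phase_o) is the original-image match and (x_min, phase_min) the best
    phase match."""
    if x_o == x_min:                                          # Eq. (9)
        return float(x_min)
    if x_o < x_min:                                           # Eq. (8)
        ratio = (phase_l - phase_o) / (phase_min - phase_o)
        return ratio * (x_min - x_o) + x_o
    ratio = (phase_l - phase_min) / (phase_o - phase_min)     # Eq. (10)
    return ratio * (x_o - x_min) + x_min
```

For a locally linear phase of slope 0.1 rad/pixel, a left phase of 1.04 between pixel phases 1.0 (at x = 10) and 1.1 (at x = 11) interpolates to x = 10.4.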

The sub-pixel parallax ParPhasesub can be obtained as:

$$ {ParPhase}_{sub}={x}_{R\_ sub}-{x}_L $$
(11)

Based on the baseline and sub-pixel parallax, the height of the object can be calculated.
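The paper computes height from the calibrated stereo model; for a rectified system this reduces to the standard triangulation relation Z = f·B / d. The sketch below uses that simplified form, and the focal length and baseline in the test are hypothetical values, not the paper's calibration.

```python
def depth_from_parallax(par_sub, focal_px, baseline_mm):
    """Rectified-stereo triangulation: depth Z = f * B / d, with the focal
    length f in pixels, baseline B in mm, and sub-pixel parallax d in
    pixels (a simplification of the full calibrated stereo model)."""
    return focal_px * baseline_mm / par_sub
```

Because the parallax enters in the denominator, the sub-pixel refinement of ParPhase_sub directly improves the depth resolution.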

Results and Discussion

To verify the proposed method, a 3D shape measurement system was developed. The system contains a projector (Samsung SP-P310MEMX) and two digital CCD cameras (Daheng MER-500-14U3M/C-L). Each camera is fitted with a 16 mm focal-length lens (Computar M1614-MP) and has a resolution of 1024 × 768. The projector resolution is 800 × 600, and its projection distance is 0.49–2.80 m.

The first experiment measures a white mask. Figure 5a and b show the object without fringe patterns from the left and right cameras, respectively. The deformed fringe patterns from the left and right cameras are shown in Fig. 5c and d. Based on the theory of Ref. [10], the wrapped phases can be calculated, as shown in Fig. 5e and f.

Fig. 5

The calculation of wrapped phase from left and right cameras respectively. (a) and (b): the original images from left and right camera, (c) and (d): the deformed images from left and right camera, (e) and (f): the wrapped phase from left and right camera

Figure 6 shows the measurement results based on the proposed method. Fig.6a shows the reconstructed result when only phase matching is used. Fig.6b shows the reconstructed result when phase matching and the average phase calculation are used. Fig.6c shows the reconstructed result from Fig.6b when the sub-pixel parallax optimization is used.

Fig. 6

The measurement results based on the proposed method. (a): the reconstructed result when only phase matching is used, (b): the reconstructed result when phase matching and the average phase calculation are used, (c): the reconstructed result when the sub-pixel parallax optimization is used

To better show the reconstructed surface, the details of the reconstructed mouths are shown in Fig. 7, where the effect of each processing step can be compared. Because the system contains two digital cameras, the stereo vision system can be calibrated to reconstruct the measured object surface using phase matching alone; the result is shown in Fig. 7a. However, there are wrong corresponding points, so the surface is coarse, and points are missing at the edge of the wrapped phase. After adding edge detection to the phase matching, the missing points and blank vertical lines are removed, as shown in Fig. 7b. Compared with Fig. 7a, Fig. 7b has more accurate matching results, less noise, and a smoother surface, but some noise still remains. When the sub-pixel parallax optimization is applied, the surface becomes smoother still, as shown in Fig. 6c and Fig. 7c.

Fig. 7

The details comparison of reconstructed mouths. (a): the measured object surface by using phase matching, (b): the measured object surface by adding edge detection into the phase matching, (c): the measured object surface by using the sub-pixel parallax optimization

To further illustrate the effect of the proposed method, a white house with a more complicated surface is measured. The measurement result of the proposed method is compared with the results of LSSM [20] and MFH [21]. LSSM does not require a projector to project structured light; it finds the corresponding points based on surface features. MFH is similar to the proposed method; it uses the absolute phase to increase the texture features so that the matching precision can be increased. The measurement results are shown in Fig. 8. Figure 8a shows the deformed fringe patterns, Fig. 8b the result from LSSM, Fig. 8c the result from MFH, and Fig. 8d the result from the proposed method.

Fig. 8

The measurement results comparison among LSSM, MFH and the proposed method. (a): the deformed fringe patterns, (b): the measurement from LSSM, (c): the measurement from MFH, (d): the measurement from the proposed method

From the measurement results, it can be seen that the result of the proposed method has a smoother surface with less noise. To show the measurement results in more detail, close-ups are given in Fig. 9. Because the proposed method is based on phase matching and sub-pixel parallax optimization, it finds more accurate corresponding points. The average RMS errors of LSSM, MFH, and the proposed method are 0.35 rad, 0.21 rad, and 0.08 rad, respectively; the RMS errors of the three methods are shown in Fig. 10. The measurement time consumption is also calculated, using MATLAB 2015a: the processing times of the measurements in Fig. 8b–d are 2.31 s, 1.01 s, and 0.81 s, respectively. These data show that LSSM generates the largest phase errors: without a projector, the accuracy is determined by the stereo vision system alone, which leads to wrong point matching, especially for complex surfaces. When MFH is applied, the projector casts the dithering fringe pattern on the surface and increases the texture features, but because the absolute phase is retrieved for corresponding-point matching, the measurement time increases and the quality of the absolute phase is affected by the phase unwrapping algorithm. From the comparison, the proposed method needs the least measurement time and gives the most accurate measurement results.

Fig. 9

The details of the measurement results of the three methods. (a): the detail from LSSM, (b): the detail from MFH, (c): the detail from the proposed method

Fig. 10

The average RMS comparison among three different methods

In addition, discontinuous blocks and continuous blocks are measured, with step heights of 60 mm, 50 mm, and 30 mm, respectively. Because MFH uses the absolute phase, the experimental results are compared between the proposed method and MFH. The measurement results are shown in Fig. 11; the results from MFH contain more noise. Because the dithering fringe pattern is a binary coding, it suppresses the effect of high-frequency errors, and although only one frame of fringe pattern is used, the wrapped phase does not need to be retrieved to an absolute phase. To better show the difference, the cross sections along the 200th row are compared in Fig. 12, which shows that the proposed method obtains a smoother surface with fewer errors.

Fig. 11

The measurement results comparison from discontinuous blocks and continuous blocks

Fig. 12

The cross-section comparison along the 200th row

Tables 1 and 2 list the ideal height, average height, RMS, average error, and maximum error for the different blocks. These data show that the heights from the proposed method are closer to the ideal heights, and its height errors are smaller than those of MFH, which further verifies the effectiveness of the proposed method. The data also show that the proposed method is not affected by frequency leakage, so it can be employed to measure objects with large slopes and discontinuities.

Table 1 The blocks measurement results comparison from MFH. (Units:mm)
Table 2 The blocks measurement results comparison from the proposed method. (Units:mm)

Conclusion

This paper proposes a high-speed 3D shape measurement algorithm based on FTP and stereo vision. It uses a one-shot fringe pattern to obtain the wrapped phase, and the 3D shape can be reconstructed without phase unwrapping. The original-image matching constraint and sub-pixel phase matching are used to find the correct corresponding points. To increase the robustness and reduce the missing rate at the boundary of the wrapped phase, where the phase value is π or −π, an average of the adjacent phase values is used. Comparisons of matching precision show that the proposed method yields high-quality surfaces. Because the measurement needs only one frame of deformed fringe pattern to obtain the wrapped phase, and the matching process does not need phase unwrapping, the proposed method can be used in fast measurement.

References

1. Lin, H., Gao, J., Mei, Q., et al.: Three-dimensional shape measurement technique for shiny surfaces by adaptive pixel-wise projection intensity adjustment. Opt. Lasers Eng. 91, 206–215 (2017)
2. Kaura, S.K., Chhachhia, D.P., Aggarwal, A.K.: Interferometric moiré pattern encoded security holograms. J. Opt. A Pure Appl. Opt. 8(1), 67 (2006)
3. Peng, J., Liu, X., Deng, D., et al.: Suppression of projector distortion in phase-measuring profilometry by projecting adaptive fringe patterns. Opt. Express 24(19), 21846 (2016)
4. Zhou, P., Liu, X., He, Y., et al.: Phase error analysis and compensation considering ambient light for phase measuring profilometry. Opt. Lasers Eng. 55(7), 99–104 (2014)
5. Zhang, Z., Wang, Y., Huang, S., et al.: Three-dimensional shape measurements of specular objects using phase-measuring deflectometry. Sensors 17(12), 2835 (2017)
6. Lee, H., Min, Y.K., Moon, J.I.: Three-dimensional sensing methodology combining stereo vision and phase-measuring profilometry based on dynamic programming. Opt. Eng. 56(12), 1 (2017)
7. Yun, H., Li, B., Zhang, S.: Pixel-by-pixel absolute three-dimensional shape measurement with modified Fourier transform profilometry. Appl. Opt. 56(5), 1472 (2017)
8. Li, H., Hu, Y., Tao, T., et al.: Optimal wavelength selection strategy in temporal phase unwrapping with projection distance minimization. Appl. Opt. 57(10), 2352 (2018)
9. Sun, W., Wang, T., Zhao, Y., et al.: Advanced method of global phase shift estimation from two linear carrier interferograms. J. Eur. Opt. Soc. Rapid Publ. 14(1), 10 (2018)
10. Hu, Y., Chen, Q., Zhang, Y., et al.: Dynamic microscopic 3D shape measurement based on marker-embedded Fourier transform profilometry. Appl. Opt. 57(4), 772 (2018)
11. Zhou, Y., Tang, Y., Yang, Y., et al.: Topography measurement of large-range microstructures through advanced Fourier-transform method and phase stitching in scanning broadband light interferometry. Micromachines 8(11), 319 (2017)
12. Su, X., Zhang, Q.: Dynamic 3-D shape measurement method: a review. Opt. Lasers Eng. 48(2), 191–204 (2010)
13. Li, B., Zhang, S.: High-resolution, real-time to superfast 3D imaging techniques. In: IEEE International Conference on Advanced Intelligent Mechatronics, pp. 1252–1257 (2016)
14. Jeught, S.V.D., Dirckx, J.J.J.: Real-time structured light profilometry: a review. Opt. Lasers Eng. 87, 18–31 (2016)
15. Zhang, Z.H.: Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques. Opt. Lasers Eng. 50(8), 1097–1106 (2012)
16. Xiao, Y., Li, Y.: High-quality binary fringe generation via joint optimization on intensity and phase. Opt. Lasers Eng. 97, 19–26 (2017)
17. Zhang, S.: Comparative study on passive and active projector nonlinear gamma calibration. Appl. Opt. 54(13), 3834–3841 (2015)
18. Lu, F., Wu, C.: Three-dimensional measurement of object surface by using ellipse binary defocusing projection. J. Eur. Opt. Soc. Rapid Publ. 13(1), 29 (2017)
19. Dai, J., An, Y., Zhang, S.: Absolute three-dimensional shape measurement with a known object. Opt. Express 25(9), 10384 (2017)
20. Geiger, A., Roser, M., Urtasun, R.: Efficient large-scale stereo matching. In: Computer Vision – ACCV 2010. LNCS 6492, 25–38 (2011)
21. Li, Z.W., Shi, Y.S., Wang, C.J., Wang, Y.Y., et al.: Accurate calibration method for a structured light system. Opt. Eng. 47, 053604 (2008)


Acknowledgments

This work was supported by the National Key R&D Program of China (2017YBF1300900), the National Natural Science Foundation of China (U1713216), the Fund of Shenyang (17-87-0-00), and the Fundamental Research Funds for the Central Universities (N172604004).

Funding

The Faculty of Robot Science and Engineering of Northeastern University, Shenyang, China provided the funding for this research.

Availability of data and materials

The data supporting this work are provided within the paper.

Author information


Contributions

All authors contributed equally in all the sections of this work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Feng Lu.

Ethics declarations

Authors’ information

Feng Lu is a doctoral researcher at the Faculty of Robot Science and Engineering, Northeastern University, Shenyang, China. His interests include structured light, robotics, and image processing.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Lu, F., Wu, C. & Yang, J. High-speed 3D shape measurement using Fourier transform and stereo vision. J. Eur. Opt. Soc.-Rapid Publ. 14, 22 (2018). https://doi.org/10.1186/s41476-018-0090-z
