As an extension to the current paper, we propose testing the pipeline on data captured during drone flights. This would allow the GPS-RTK positioning information to be evaluated under different weather and environmental conditions. Additionally, testing on objects of different sizes would provide data on how the method scales and whether the uncertainty depends on the size of the scanned object. Finally, other positioning systems, both indoor and outdoor, would also be tested and modeled, making the pipeline more versatile.
ACKNOWLEDGEMENTS
This work is funded by the LER project no. EUDP
2015-I under the Danish national EUDP programme.
This funding is gratefully acknowledged.
Performance Characterization of Absolute Scale Computation for 3D Structure from Motion Reconstruction