of point correspondences is related to translation estimation. Within a certain range, using more points yields higher translation accuracy, so IMU+3P is more accurate in translation than IMU+2P, which uses only two correspondences. Nevertheless, the translation error of IMU+2P and IMU+3P is usually higher than that of EPnP, LHM, and RPP. For the rotation error, the roll and pitch angles are obtained from the IMU. Since the IMU is usually quite accurate, IMU+2P and IMU+3P achieve higher rotation accuracy than GAO, LHM, EPnP, and RPP. When the number of point correspondences is small, additional correspondences may disturb the rotation estimate, especially when high-accuracy IMU data are available, so IMU+3P has lower rotation accuracy than IMU+2P.
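To make the role of the IMU prior concrete, the following is a minimal sketch (in Python, assuming NumPy and SciPy) of how known roll and pitch reduce pose estimation to four unknowns, the yaw angle and the translation, which two or three point correspondences can constrain. It is not the closed-form IMU+2P/IMU+3P solver used in this paper; the intrinsic matrix K, the point arrays, and the generic nonlinear least-squares fit are illustrative assumptions.

import numpy as np
from scipy.optimize import least_squares

def rot_xyz(roll, pitch, yaw):
    # Rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def imu_aided_pose(pts3d, pts2d, K, roll, pitch):
    # Illustrative solver: estimate yaw and translation from IMU roll/pitch
    # and at least two 3D-2D correspondences (hypothetical interface).
    # pts3d: (N, 3) model points; pts2d: (N, 2) pixel observations; K: 3x3 intrinsics.
    def residuals(x):
        yaw, t = x[0], x[1:]
        R = rot_xyz(roll, pitch, yaw)
        cam = (R @ pts3d.T).T + t            # model points in the camera frame
        proj = (K @ cam.T).T
        proj = proj[:, :2] / proj[:, 2:3]    # perspective division
        return (proj - pts2d).ravel()        # reprojection error

    x0 = np.array([0.0, 0.0, 0.0, 5.0])      # yaw = 0, target started in front of the camera
    sol = least_squares(residuals, x0)
    yaw, t = sol.x[0], sol.x[1:]
    return rot_xyz(roll, pitch, yaw), t

With exactly two correspondences the residual gives four equations in four unknowns, which is why IMU+2P is a minimal configuration; a third correspondence over-determines the system and mainly improves the translation estimate, consistent with the behaviour discussed above.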
4 CONCLUSIONS
In this paper, we present an external-vision-based robust pose estimation system for a quadrotor in outdoor environments, using only the quadrotor's own features. When all four rotors are observed, we apply the proposed EMRPP algorithm, which achieves higher accuracy and lower computation time than the RPP algorithm. When only three or two rotors are observed, we apply the IMU+3P or IMU+2P algorithm, which still produces correct pose estimates. We have carried out real experiments with the system in outdoor environments; it can provide approximate ground-truth poses for a flying quadrotor and performs accurately and robustly in real time.
ACKNOWLEDGEMENTS
This work is supported by the National Sci-
ence and Technology Major Project of the Min-
istry of Science and Technology of China: ITER
(No.2012GB102007).
REFERENCES
Abeywardena, D., Wang, Z., Kodagoda, S., and Dis-
sanayake, G. (2013). Visual-inertial fusion for quadro-
tor micro air vehicles with improved scale observabil-
ity. In ICRA, pages 3148–3153.
Achtelik, M., Zhang, T., Kuhnlenz, K., and Buss, M.
(2009). Visual tracking and control of a quadcopter
using a stereo camera system and inertial sensors. In
ICMA, pages 2863–2869.
Ahrens, S., Levine, D., Andrews, G., and How, J. P. (2009).
Vision-based guidance and control of a hovering ve-
hicle in unknown, GPS-denied environments. In ICRA,
pages 2643–2648.
Altug, E., Ostrowski, J. P., and Taylor, C. J. (2003). Quadro-
tor control using dual camera visual feedback. In
ICRA, pages 4294–4299.
Ansar, A. and Daniilidis, K. (2003). Linear pose estimation
from points or lines. PAMI, 25:578–589.
Breitenmoser, A., Kneip, L., and Siegwart, R. (2011). A
monocular vision-based system for 6d relative robot
localization. In IROS, pages 79–85.
Dementhon, D. F. and Davis, L. S. (1995). Model-based
object pose in 25 lines of code. IJCV, 15:123–141.
Fraundorfer, F., Tanskanen, P., and Pollefeys, M. (2010).
A minimal case solution to the calibrated relative pose
problem for the case of two known orientation angles.
In ECCV, pages 269–282.
Gao, X.-S., Hou, X.-R., Tang, J., and Cheng, H.-F. (2003).
Complete solution classification for the perspective-
three-point problem. PAMI, 25:930–943.
Ha, C. and Lee, D. (2013). Vision-based teleoperation of
unmanned aerial and ground vehicles. In ICRA, pages
1465–1470.
Hartley, R. and Zisserman, A. (2004). Multiple View Geom-
etry in Computer Vision (Second Edition). Cambridge
University Press.
How, J. P., Bethke, B., Frank, A., Dale, D., and Vian, J.
(2008). Real-time indoor autonomous vehicle test en-
vironment. IEEE Control Systems Magazine, 28:51–
64.
Hu, Z. and Wu, F. (2002). A note on the number of solutions of the noncoplanar P4P problem. PAMI, 24:550–555.
Kukelova, Z., Bujnak, M., and Pajdla, T. (2010). Closed-
form solutions to the minimal absolute pose problems
with known vertical direction. In ACCV, pages 216–
229.
Lepetit, V., Moreno-Noguer, F., and Fua, P. (2008). EPnP: Accurate non-iterative O(n) solution to the PnP problem. IJCV, 81:151–166.
Lim, H., Sinha, S. N., Cohen, M. F., and Uyttendaele,
M. (2012). Real-time image-based 6-dof localization
in large-scale environments. In CVPR, pages 1043–
1050.
Lu, C., Hager, G., and Mjolsness, E. (2000). Fast and glob-
ally convergent pose estimation from video images. PAMI, 22:610–622.
Quan, L. and Lan, Z. (1999). Linear n-point camera pose
determination. PAMI, 21:774–780.
Schweighofer, G. and Pinz, A. (2006). Robust pose estima-
tion from a planar target. PAMI, 28:2024–2030.
Wendel, A., Irschara, A., and Bischof, H. (2011). Natural
landmark-based monocular localization for MAVs. In
ICRA, pages 5792–5799.