Table 1: Scenario settings for testing the proposed method.

Scen.#  v_cam^linear ((x,y,z), [m/s])  v_cam^angular ((x,y,z), [rad/s])  v_obj^linear ((x,y,z), [m/s])  Distance [m]
1       (0.072, 0, 0)                  (0, 0, 0)                         (-0.072, 0, 0)                 0.33
2       (0.072, 0, 0)                  (0, 0, 0)                         (-0.069, 0.012, 0)             0.33
3       (0.021, 0.018, 0.015)          (0, 0, 0)                         (-0.033, 0, 0)                 0.36
4       (0, 0, 0)                      (0, 0, 0.5445)                    (0.057, 0, 0)                  0.23
5       (0, 0, 0)                      (1.617, 0, 0)                     (0, -0.057, 0)                 0.24
6       (0, 0, 0)                      (1.617, 0, 0)                     (0, -0.057, 0)                 0.35
Table 2: Results of optical flow ego-motion filtering and moving object state of motion estimation.

Scen.#  Mean of calc. v_obj^linear ((x,y,z), [m/s])  Std of calc. v_obj^linear ((x,y,z), [m/s])  MAE ((x,y,z), [m/s])
1       (-0.071, -0.001, 0)                          (0.009, 0.002, 0.006)                       (0.001, -0.001, 0)
2       (-0.068, 0.002, 0.004)                       (0.013, 0.002, 0.009)                       (0.001, -0.01, 0.004)
3       (-0.016, 0.015, 0.020)                       (0.029, 0.004, 0.06)                        (0.017, 0.015, 0.020)
4       (0.058, 0, 0.027)                            (0.007, 0.002, 0.005)                       (0.001, 0, 0.027)
5       (-0.001, 0.074, -0.02)                       (0.004, 0.016, 0.002)                       (-0.001, 0.131, -0.02)
6       (0, 0.078, -0.006)                           (0.0023, 0.009, 0.011)                      (0, 0.135, -0.006)
Table 3: Results of optical flow ego-motion filtering accuracy.
Scen.# Background filter accuracy [%]
1 98.0
2 98.5
3 99.6
4 89.3
5 93.9
6 90.0
dense optical flow and image depth information. In our approach, the camera's translational and rotational motion with respect to the reference frame is known. The accuracy was tested with a moving test object whose state of motion is also known. The background filter achieved very high accuracy (94.88% on average across the test scenarios). The accuracy of the moving object's state-of-motion estimation was high when the camera's depth did not change, but low when the camera's depth changed or when the camera and the moving object moved in the same direction.
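For illustration only, the following minimal Python sketch outlines one way such an ego-motion compensation step could be realized, assuming dense Farneback optical flow (OpenCV), a pinhole camera with known intrinsics, per-pixel depth, and camera linear and angular velocities expressed in the camera frame. The function names (predicted_ego_flow, filter_background), the intrinsics tuple, and the residual threshold are assumed for this sketch and are not the implementation used in the paper.

# Illustrative sketch only (not the paper's implementation): ego-motion
# compensation of dense optical flow from known camera velocities and depth.
import numpy as np
import cv2

def predicted_ego_flow(depth, v_cam, w_cam, fx, fy, cx, cy, dt):
    # Pixel flow induced purely by camera motion, from the standard
    # point-feature interaction matrix evaluated at every pixel.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx                      # normalized image coordinates
    y = (v - cy) / fy
    Z = np.maximum(depth, 1e-6)            # avoid division by zero
    vx, vy, vz = v_cam                     # camera linear velocity [m/s]
    wx, wy, wz = w_cam                     # camera angular velocity [rad/s]
    x_dot = -vx / Z + x * vz / Z + x * y * wx - (1 + x**2) * wy + y * wz
    y_dot = -vy / Z + y * vz / Z + (1 + y**2) * wx - x * y * wy - x * wz
    return np.stack([fx * x_dot * dt, fy * y_dot * dt], axis=-1)

def filter_background(prev_gray, gray, depth, v_cam, w_cam, intrinsics, dt,
                      thresh_px=1.0):
    # Dense Farneback flow minus the predicted ego-flow; pixels whose residual
    # magnitude exceeds a threshold are treated as belonging to a moving object.
    fx, fy, cx, cy = intrinsics
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    residual = flow - predicted_ego_flow(depth, v_cam, w_cam, fx, fy, cx, cy, dt)
    moving_mask = np.linalg.norm(residual, axis=-1) > thresh_px
    return residual, moving_mask

The object's linear velocity could then be estimated by back-projecting the residual flow of the masked pixels with the depth map, which is roughly the quantity reported in Table 2.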
Our most crucial future work is the optimization of the method and the implementation of outlier filtering. Moreover, our method is planned to be used as a pre-filter for neural network-based optical flow moving object segmentation. It may also be employed independently, not only in mobile robot applications but also in other robotic problems where optical flow is applied, such as Robot-Assisted Minimally Invasive Surgery skill assessment (Nagyné Elek and Haidegger, 2019).
ACKNOWLEDGEMENTS
The authors thankfully acknowledge the financial support of this work by the Hungarian State and the European Union under the EFOP-3.6.1-16-2016-00010 and GINOP-2.2.1-15-2017-00073 projects. T. Haidegger and R. Nagyné Elek are supported through the New National Excellence Program of the Ministry of Human Capacities. T. Haidegger is a Bolyai Fellow of the Hungarian Academy of Sciences. The authors thank Sándor Tarsoly for helping with the UR5 programming.
REFERENCES
Bloesch, M., Omari, S., Fankhauser, P., Sommer, H., Gehring, C., Hwangbo, J., Hoepflinger, M. A., Hutter, M., and Siegwart, R. (2014). Fusion of optical flow and inertial measurements for robust egomotion estimation. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3102–3107.
Bruhn, A., Weickert, J., and Schnörr, C. (2005). Lucas/Kanade Meets Horn/Schunck: Combining Local and Global Optic Flow Methods. International Journal of Computer Vision, 61(3):211–231.
Cheng, J., Tsai, Y.-H., Wang, S., and Yang, M.-H. (2017). SegFlow: Joint Learning for Video Object Segmentation and Optical Flow. In Proceedings of the IEEE International Conference on Computer Vision, pages 686–695.
Farnebäck, G. (2003). Two-Frame Motion Estimation Based on Polynomial Expansion. In Goos, G., Hartmanis, J., van Leeuwen, J., Bigun, J., and Gustavsson, T., editors, Image Analysis, volume 2749, pages 363–370. Springer Berlin Heidelberg, Berlin, Heidelberg.