are thanked for helping the authors better understand
RBOT and OPT.
APPENDIX
Depth Testing
Because we are only interested in the outermost 2D sil-
houette edges, overlap is infrequent. Objects with
rounded edges do not produce overlapping 2D silhou-
ette edges, and objects with straight-angled edges (such
as boxes) avoid significant silhouette overlap due to
perspective projection. To validate this, we imple-
mented a simple CPU-based z-buffer. As expected, it
made little to no difference for rounded-edge objects,
and for straight-angled objects results were slightly
better in scenes where the edges overlapped. To our
surprise, however, depth testing made results much
worse when dealing with rotations. A possible expla-
nation is that by allowing some edge overlap, we can
better segment colors that are not directly visible, but
that can appear after a small rotation. Since omitting
depth testing made our method both faster and more
precise, we opted not to use it.
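A CPU z-buffer of the kind described above can be sketched as follows. This is only an illustrative sketch, not the implementation used in this work; the helper names (build_zbuffer, is_visible) and the depth tolerance eps are our own choices for the example.

```python
import numpy as np

def build_zbuffer(points_px, depths, h, w):
    """Fill a simple CPU z-buffer, keeping the nearest depth seen at each pixel.

    points_px: (N, 2) integer pixel coordinates of projected model points.
    depths:    (N,) camera-space depth of each projected point.
    """
    zbuf = np.full((h, w), np.inf)
    for (x, y), z in zip(points_px, depths):
        # Keep only the closest surface at each pixel.
        if 0 <= x < w and 0 <= y < h and z < zbuf[y, x]:
            zbuf[y, x] = z
    return zbuf

def is_visible(zbuf, x, y, z, eps=1e-3):
    """A projected edge point passes the depth test if it is (close to)
    the nearest surface stored at its pixel."""
    return z <= zbuf[y, x] + eps
```

Silhouette-edge candidates that fail is_visible would then be discarded before pose optimization, which is precisely the filtering step that, in our experiments, hurt accuracy under rotation.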
Saturation Threshold
By definition, HSV space only exhibits hue ambigu-
ity when saturation is zero (i.e., no pigmentation is
present), in which case every hue value maps to the
same color. In practice, however, this does not hold:
illumination, materials, camera sensors, and color-
conversion algorithms all influence the final color.
This concept can
VISAPP 2021 - 16th International Conference on Computer Vision Theory and Applications