
6 CONCLUSION AND OUTLOOK
In this paper, we proposed a method for automatically
improving the visual observability of robotic systems.
The method rests on our two main contributions: the
automatic sampling of the training space via modified
smooth random functions, and the training and evaluation
of lightweight MLPs that represent the occlusion function.
While the measured performance gap between independently
sampled data and smooth trajectories leaves room for
improvement, we showed that the approach works well in
both simulation and real-world experiments and consistently
achieves high validation scores. We further demonstrated
its modular integration in a trajectory-planning example,
where it successfully increased visibility under the given
constraints.
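To make these two contributions concrete, the sketch below illustrates the idea under explicit assumptions: joint trajectories are drawn as truncated Fourier series with Gaussian coefficients (one standard construction of smooth random functions), and a small MLP maps a joint configuration to a visibility probability. Module names, the network size, and the binary visibility label are illustrative assumptions, not the implementation used in this work.

import math
import numpy as np
import torch
import torch.nn as nn

def smooth_random_trajectory(n_joints, n_steps, wavelength=0.2, seed=None):
    # One smooth random function per joint on t in [0, 1]:
    # a truncated Fourier series with i.i.d. Gaussian coefficients.
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n_steps)
    m = int(np.ceil(1.0 / wavelength))  # number of modes; smaller wavelength -> faster variation
    traj = np.zeros((n_steps, n_joints))
    for j in range(n_joints):
        f = rng.standard_normal() * np.ones_like(t)
        for k in range(1, m + 1):
            f += rng.standard_normal() * np.cos(2.0 * math.pi * k * t)
            f += rng.standard_normal() * np.sin(2.0 * math.pi * k * t)
        traj[:, j] = f / math.sqrt(m + 1)  # scale to roughly unit pointwise variance
    return t, traj  # traj has shape (n_steps, n_joints)

class OcclusionMLP(nn.Module):
    # Lightweight MLP: joint configuration -> probability that the observed target is visible.
    def __init__(self, n_joints, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q):
        return torch.sigmoid(self.net(q))  # visibility probability in (0, 1)

# Usage: score a sampled trajectory with an (untrained) occlusion model.
t, q = smooth_random_trajectory(n_joints=6, n_steps=200, seed=0)
model = OcclusionMLP(n_joints=6)
with torch.no_grad():
    visibility = model(torch.as_tensor(q, dtype=torch.float32)).squeeze(-1)
print(visibility.shape)  # torch.Size([200])

In practice, the visibility labels used to train such a model would come from the simulated or real camera setup described earlier, and the trained model would then be queried during trajectory planning.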
There are several directions for further research, the most
promising being a closer coupling of the trajectory generation
with live feedback from the camera system. This could enable
more efficient strategies for sampling the boundary between
visible and hidden states, thereby reducing the effective
dimensionality of the problem. Another promising approach
would be to disentangle camera pose, robot geometry, and
robot kinematics into separate learned components of a
combined system. Such a system could be more resilient to
minor changes in the scene and would require only minor
retraining after repositioning the camera. A further research
direction is the investigation of different robot geometries
and material properties, including semi-translucent and
reflective surfaces, as well as the addition of positional
encodings for handling more complex joint geometries.
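As an illustration of the last point only, the following sketch shows one possible sinusoidal positional encoding of joint coordinates, as popularized for coordinate inputs in neural radiance fields; the frequency count and the choice of input features are assumptions, and whether such an encoding actually helps with more complex joint geometries is precisely the open question stated above.

import math
import torch

def positional_encoding(x, num_freqs=6):
    # Map each coordinate to [x, sin(2^k * pi * x), cos(2^k * pi * x)] for k = 0..num_freqs-1.
    feats = [x]
    for k in range(num_freqs):
        feats.append(torch.sin((2.0 ** k) * math.pi * x))
        feats.append(torch.cos((2.0 ** k) * math.pi * x))
    return torch.cat(feats, dim=-1)  # (..., d) -> (..., d * (2 * num_freqs + 1))

q = torch.randn(200, 6)              # 200 joint configurations of a 6-DoF arm
print(positional_encoding(q).shape)  # torch.Size([200, 78])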
ACKNOWLEDGEMENT
This research is funded through the project “OP der Zukunft”
within the funding program of the Europäischen Fonds für
Regionale Entwicklung (REACT-EU).