on the distance at which the camera is located. In the case of the collimator, which is larger, the camera was situated between 65 cm and 75 cm further away; this distance is imposed by the limited dimensions of the LHC tunnel, Figure 11.
Figure 11: Schematic of a cross section of the LHC tunnel.
During the tests, Figure 12, it was verified that the processing time of the algorithm is between 5 and 10 seconds. This variability depends on the segmentation operation, which must sometimes iterate repeatedly until it finds the correct plane. These times are acceptable, since the algorithm needs to be run only once, after which the global position is known.
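To illustrate the kind of segmentation loop responsible for this variability, the sketch below shows a minimal, hypothetical version of iterative plane fitting on a point cloud. The use of Open3D, the thresholds, and the acceptance test is_target_plane are assumptions made for illustration, not the implementation used in the tests.

```python
# Minimal sketch of iterative RANSAC plane segmentation (hypothetical parameters).
# Open3D is assumed here for brevity; the actual pipeline may differ.
import numpy as np
import open3d as o3d

def find_target_plane(cloud, max_attempts=10, distance_threshold=0.005):
    """Repeatedly segment the dominant plane until one matching the
    expected target is found, as described in the text."""
    remaining = cloud
    for _ in range(max_attempts):
        # Fit a plane with RANSAC; returns (a, b, c, d) and inlier indices.
        plane_model, inliers = remaining.segment_plane(
            distance_threshold=distance_threshold,
            ransac_n=3,
            num_iterations=1000)
        candidate = remaining.select_by_index(inliers)
        if is_target_plane(plane_model, candidate):  # hypothetical acceptance test
            return plane_model, candidate
        # Otherwise discard this plane and keep searching in the remaining points.
        remaining = remaining.select_by_index(inliers, invert=True)
    return None, None

def is_target_plane(plane_model, candidate):
    """Placeholder check: e.g. plane normal direction and inlier count."""
    normal = np.asarray(plane_model[:3])
    return abs(normal[2]) > 0.9 and len(candidate.points) > 5000
```

Each rejected plane is removed from the cloud before the next attempt, which is why the number of iterations, and hence the processing time, varies between runs.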
The test point clouds were taken at distances to the target between 50 cm and 136 cm, the latter being the maximum distance available in the tunnel, Figure 11.
5 FUTURE DEVELOPMENT
As seen in the validation tests on large pieces, such as the collimator, the error of the estimated position increases, approaching levels that would cause problems in tele-operation. One way to reduce this error is to perform a second estimation of the position. Once the first estimation is made, the camera on the robot arm can be moved closer to a predetermined part of the piece; this approach could be automated. By running the algorithm to detect only that part of the large piece, the part is detected with smaller errors. Since the position of that part with respect to the rest of the piece is known a priori, the error on the pose of the global piece is reduced.
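A minimal sketch of this two-step refinement is given below, assuming the poses are represented as 4x4 homogeneous transforms; the variable names and the NumPy representation are illustrative and not taken from the actual system.

```python
# Sketch of the proposed second-pass refinement (illustrative only).
# T_cam_part   : pose of the small, well-detected part in the camera frame,
#                estimated by re-running the detection algorithm up close.
# T_piece_part : pose of that part in the frame of the whole piece,
#                known a priori (e.g. from the CAD model).
import numpy as np

def refine_piece_pose(T_cam_part: np.ndarray, T_piece_part: np.ndarray) -> np.ndarray:
    """Refined pose of the whole piece in the camera frame:
    T_cam_piece = T_cam_part @ inv(T_piece_part)."""
    return T_cam_part @ np.linalg.inv(T_piece_part)
```

Because the close-range estimate T_cam_part is more accurate, composing it with the exactly known part-to-piece transform yields a global pose with a correspondingly smaller error.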
6 CONCLUSION
In this paper, a novel algorithm to estimate the 6D pose of an object was presented. The solution has been shown to be robust enough to be deployed in harsh and unstructured environments, such as the CERN accelerator complexes. The proposed solution is computationally light and allows the three-dimensional reconstruction of an object. These aspects are fundamental for robotic inspection and telemanipulation, in particular for detecting collisions and performing path planning in areas that 2D cameras capture only incompletely.