S(t_0) ≥ ∫_{τ=t_0}^{t_0+T_R+T_S} v_H(τ) dτ + ∫_{τ=t_0}^{t_0+T_R} v_R(τ) dτ + ∫_{τ=t_0+T_R}^{t_0+T_R+T_S} v_S(τ) dτ + (C + Z_S + Z_R)    (2)
In (2), v_H is the "directed speed" of the closest operator travelling toward the robot, v_R is the speed of the robot in the direction of the operator, and v_S is the directed speed of the robot in the course of stopping. The remaining terms represent uncertainties: the intrusion distance C is based on the operator reach, Z_R is the robot position uncertainty, and Z_S is the operator position uncertainty (i.e., the sensor uncertainty). Finally, t_0 denotes the current time.
The main issue of (ISO 13855, 2010) is that
the separation distance was initially intended for
static machinery, not for dynamic and reconfigurable
robotic systems. Therefore, extending what is con-
tained in the standard to the case of industrial robotics
is not trivial. Nevertheless, ISO/TS 15066 tries to
make a contribution to the HRC problem and de-
scribes S using the linear function
S = (v_H T_R + v_H T_S) + (v_R T_R) + (B) + (C + Z_S + Z_R)    (3)
where B is the Euclidean distance travelled by the robot while braking. Note the one-to-one correspondence between eq. (2) and the linear relationship (3). The
first term in parentheses describes the contribution at-
tributable to the operator’s change in location in the
time necessary to bring the robot to a full stop from its
current speed. The second term describes the contri-
bution attributable to the robot system reaction time,
before it initiates the braking sequence. The third
term describes the distance travelled by the robot dur-
ing its braking. Finally, the fourth term describes the
possible distance of intrusion into the robot work vol-
ume as a function of the operator reach and the un-
certainty of the sensory system and robot kinematics.
The values of v_H, T_S, B and C can be found in the safety standards: the values of v_H and C are given in ISO 13855, while guidelines for evaluating T_S and B are given in Annex B of ISO 10218-1, and they result from measurements that directly depend on the robot system under test.
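As a worked example of eq. (3), the sketch below plugs in hypothetical placeholder values; v_H = 1.6 m/s is a commonly cited operator approach speed, while T_R, T_S and B would have to be measured on the actual robot system:

```python
# Hypothetical placeholder values; v_H = 1.6 m/s is a commonly cited
# approach speed, while T_R, T_S and B must be measured on the actual
# robot system (Annex B of ISO 10218-1).
v_H, v_R = 1.6, 0.5   # operator / robot directed speeds [m/s]
T_R, T_S = 0.1, 0.3   # reaction and stopping times [s]
B = 0.075             # measured braking distance [m]
C, Z_S, Z_R = 0.85, 0.05, 0.02  # intrusion distance and uncertainties [m]

# Linear protective distance of eq. (3), term by term.
S = (v_H * T_R + v_H * T_S) + (v_R * T_R) + B + (C + Z_S + Z_R)
print(f"S = {S:.3f} m")
```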
This paper decomposes and assesses the performance of the ISO/TS 15066 SSM minimum protective distance metric and contributes improvements to some of its aspects to enable its applicability in industrial scenarios. The following sections discuss in detail four main areas that are directly pertinent to SSM: human detection and tracking, prediction of human and robot motions, safety separation maintenance, and robot speed monitoring.
3 HUMAN-ROBOT
INTERACTION
The robot control system must be able to adapt the
robot trajectory to the current observed scene and to
perform its task efficiently and safely. This means
that the control system must be able to detect the
presence of human operators inside the collaborative
workspace, to track the human closest to the machine
and, finally, to modulate the robot speed according to
the minimum protective distance S.
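One way to realize this speed modulation, sketched below under the simplifying assumption that the braking distance B can be treated as a constant, is to invert eq. (3) and compute the largest robot speed toward the operator for which the currently measured separation d still satisfies d ≥ S. The function name and all default values are hypothetical:

```python
def max_robot_speed(d, v_H=1.6, T_R=0.1, T_S=0.3,
                    B=0.075, C=0.85, Z_S=0.05, Z_R=0.02):
    """Largest v_R (robot speed toward the operator) for which the
    current separation d still satisfies d >= S in eq. (3), treating
    the braking distance B as constant. All defaults are placeholders."""
    margin = d - v_H * (T_R + T_S) - B - (C + Z_S + Z_R)
    return max(0.0, margin / T_R)

print(max_robot_speed(2.0))  # ample separation: robot may move fast
print(max_robot_speed(1.0))  # separation below S even at v_R = 0: stop
```

In practice B grows with v_R, so a conservative (worst-case) braking distance, or an iterative solution, would be needed instead of the constant assumed here.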
The HRC problem has been addressed by dividing it into two distinct problems: human detection and tracking
(HDT) and intention estimation (IE).
3.1 Perception System
The experimental set-up of this work consists of
two depth cameras, which have been used to monitor
the collaborative workspace: a Microsoft Kinect v1
and an Intel RealSense D435 (see Figure 2a). At least two views are necessary to minimize the occlusions of the observed area, as shown in Figure 2b and
Figure 2c.
An intrinsic calibration is necessary to update the rough default intrinsic parameters; in addition, a sphere-tracking procedure has been developed for the extrinsic calibration. The obtained homogeneous transformation matrices, T^{robot}_{camera1} and T^{robot}_{camera2}, express the poses of the camera frames with respect to the robot base frame.
The goal of the extrinsic calibration is to obtain
an accurate identification of the camera pose, which
guarantees the minimum relative positioning error
when the two camera views are merged.
Therefore, a 3D tracking technique has been de-
veloped by using a polystyrene sphere of 0.12 m di-
ameter. The red sphere has been mounted at the
robot end effector, so as to match the center of the
sphere with the origin of the end-effector frame, as
shown in Figure 2. The calibration procedure uses
the M-estimator SAmple Consensus (MSAC) algo-
rithm (Torr and Murray, 1997) (which is an exten-
sion of the best known RANdom SAmple Consensus
(RANSAC) algorithm (Fischler and Bolles, 1981)), to
find a sphere within a radius constraint, and to provide
its geometric model. The robot has been positioned
at specific configurations, which allow the target to be correctly distinguished within the two camera views.
From the robot joint states, the forward kinematics
computes the pose of the center of the red sphere. At
the same time, the developed procedure acquires the
depth images, converts them into point clouds (Rusu
and Cousins, 2011) and estimates the target model.
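The sphere-fitting step can be sketched as follows. This is a plain RANSAC-style loop with a radius constraint, written only for illustration (MSAC additionally scores inliers by their residuals rather than just counting them); the thresholds and the 0.06 m target radius are assumptions derived from the 0.12 m sphere diameter:

```python
import numpy as np

def fit_sphere(pts):
    """Algebraic sphere fit: solves |p|^2 = 2 c . p + (r^2 - |c|^2)."""
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def ransac_sphere(pts, radius_target=0.06, radius_tol=0.01,
                  dist_thresh=0.005, iters=200, seed=0):
    """RANSAC-style sphere search with a radius constraint (a simplified
    stand-in for the MSAC variant, which scores inliers by residual)."""
    rng = np.random.default_rng(seed)
    best_n, best_model = 0, None
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 4, replace=False)]
        center, radius = fit_sphere(sample)
        # Reject degenerate fits and spheres violating the radius constraint.
        if not np.isfinite(radius) or abs(radius - radius_target) > radius_tol:
            continue
        resid = np.abs(np.linalg.norm(pts - center, axis=1) - radius)
        n_in = int((resid < dist_thresh).sum())
        if n_in > best_n:
            best_n, best_model = n_in, (center, radius)
    return best_model
```

The recovered sphere center, expressed in each camera frame and compared with the forward-kinematics pose of the end-effector frame, then yields the extrinsic transformation for that camera.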
ICINCO 2019 - 16th International Conference on Informatics in Control, Automation and Robotics