GEOMETRIC ADVANCED TECHNIQUES FOR ROBOT GRASPING
USING STEREOSCOPIC VISION
Julio Zamora-Esquivel and Eduardo Bayro-Corrochano
Department of Electrical Engineering and Computer Science
CINVESTAV, Unidad Guadalajara, Jalisco, Mexico
Keywords:
Conformal Geometry, Kinematics, Grasping, Tracking.
Abstract:
In this paper the authors propose geometric techniques to deal with the problem of grasping objects based on their mathematical models. For that purpose we use the geometric algebra framework to formulate the kinematics of a three-fingered robotic hand. Our main objective is to close the loop between perception and action by formulating a kinematic control law. This allows us to perform a smooth visually guided object grasping action.
1 INTRODUCTION
In this work the authors show how to obtain a feasible grasping strategy based on the mathematical models of the object and the manipulator. In order to close the loop between perception and action we estimate the pose of the object and the robot hand. A control law is also proposed using the mechanical Jacobian matrix computed from the lines of the axes of the Barrett hand. Conformal geometric algebra has been used within this work instead of the projective approach (Ruf, 2000) due to the advantages which this mathematical framework provides in the process of modeling mechanical structures like that of the Barrett Hand.

In our approach we first formulate the inverse kinematics of the robot hand and analyze the object models in order to identify the grasping constraints. This takes into account suitable contact points between object and robot hand. Finally, a control law to close the perception and action loop is proposed. In the experimental analysis we present a variety of real grasping situations.
2 GEOMETRIC ALGEBRA
Let $\mathcal{G}_n$ denote the geometric algebra of n dimensions; this is a graded linear space. As well as vector addition and scalar multiplication we have a non-commutative product which is associative and distributive over addition: the geometric or Clifford product.

The inner product of two vectors is the standard scalar product and produces a scalar. The outer or wedge product of two vectors is a new quantity which we call a bivector. Thus, $b \wedge a$ will have the opposite orientation of $a \wedge b$, making the wedge product anti-commutative. The outer product is immediately generalizable to higher dimensions: for example, $(a \wedge b) \wedge c$, a trivector, is interpreted as the oriented volume formed by sweeping the area $a \wedge b$ along the vector c. The outer product of k vectors is a k-vector or k-blade, and such a quantity is said to have grade k. A multivector (a linear combination of objects of different grade) is homogeneous if it contains terms of only a single grade.
We will specify a geometric algebra $\mathcal{G}_n$ of the n-dimensional space by $\mathcal{G}_{p,q,r}$, where p, q and r stand for the number of basis vectors which square to 1, -1 and 0 respectively, and fulfill $n = p + q + r$.

We will use $e_i$ to denote the i-th basis vector. In a geometric algebra $\mathcal{G}_{p,q,r}$, the geometric product of two basis vectors is defined as

$e_i e_j = \begin{cases} 1 & \text{for } i = j \in \{1, \dots, p\} \\ -1 & \text{for } i = j \in \{p+1, \dots, p+q\} \\ 0 & \text{for } i = j \in \{p+q+1, \dots, p+q+r\} \\ e_i \wedge e_j & \text{for } i \neq j. \end{cases}$
This leads to a basis for the entire algebra:

$\{1\}, \{e_i\}, \{e_i \wedge e_j\}, \{e_i \wedge e_j \wedge e_k\}, \dots, \{e_1 \wedge e_2 \wedge \dots \wedge e_n\}.$  (1)
Any multivector can be expressed in terms of this
basis.
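The product rules above can be made concrete with a small sketch (our own illustrative code, not from the paper): a basis blade is stored as a bitmask over the basis vectors, and the geometric product of two blades reduces to a sign bookkeeping plus metric contractions.

```python
def basis_product(a, b, metric):
    """Geometric product of basis blades a, b (bitmasks over e_1..e_n),
    where metric[i] in {1, -1, 0} is the square of e_i.
    Returns (sign, blade); sign 0 means the product vanishes."""
    sign = 1
    for i in range(len(metric)):
        if b & (1 << i):                       # take factor e_i of b, low index first
            # e_i must anticommute past the factors of a with index > i
            if bin(a >> (i + 1)).count("1") % 2:
                sign = -sign
            if a & (1 << i):                   # e_i e_i contracts via the metric
                sign *= metric[i]
                a ^= 1 << i
            else:
                a |= 1 << i
    return sign, a

metric3 = [1, 1, 1]                  # signature of G_{3,0,0}
print(basis_product(0b001, 0b001, metric3))  # e1 e1 = 1    -> (1, 0)
print(basis_product(0b001, 0b010, metric3))  # e1 e2 = e12  -> (1, 0b011)
print(basis_product(0b010, 0b001, metric3))  # e2 e1 = -e12 -> (-1, 0b011)
```

The same routine covers $\mathcal{G}_{4,1}$ by passing the metric [1, 1, 1, 1, -1], and a degenerate basis vector (metric 0) makes the product vanish as required.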
3 CONFORMAL GEOMETRY
Geometric algebra $\mathcal{G}_{4,1}$ can be used to treat conformal geometry in a very elegant way. To see how this is possible, we follow the same formulation presented in (H. Li, 2001) and show how the Euclidean vector space $\mathbb{R}^3$ is represented in $\mathbb{R}^{4,1}$. This space has an orthonormal vector basis given by $\{e_i\}$; the $e_{ij} = e_i \wedge e_j$ are the bivector basis, and $e_{23}$, $e_{31}$ and $e_{12}$ correspond to the Hamilton basis. The unit Euclidean pseudo-scalar $I_e := e_1 e_2 e_3$, the pseudo-scalar $I_c := I_e E$ and the bivector $E := e_4 e_5 = e_4 \wedge e_5$ are used for computing the inverses and duals of multivectors.

Zamora-Esquivel J. and Bayro-Corrochano E. (2007). GEOMETRIC ADVANCED TECHNIQUES FOR ROBOT GRASPING USING STEREOSCOPIC VISION. In Proceedings of the Fourth International Conference on Informatics in Control, Automation and Robotics, pages 175-182. DOI: 10.5220/0001627701750182. Copyright (c) SciTePress.
3.1 The Stereographic Projection
The conformal geometry is related to a stereographic projection in Euclidean space. A stereographic projection is a mapping taking points lying on a hypersphere to points lying on a hyperplane. In this case, the projection plane passes through the equator and the sphere is centered at the origin. To make a projection, a line is drawn from the north pole to each point on the sphere, and the intersection of this line with the projection plane constitutes the stereographic projection.
For simplicity, we will illustrate the equivalence between stereographic projections and conformal geometric algebra in $\mathbb{R}^1$. We will be working in $\mathbb{R}^{2,1}$ with the basis vectors $\{e_1, e_4, e_5\}$ having the usual properties. The projection plane will be the x-axis and the sphere will be a circle centered at the origin with unitary radius.
Figure 1: Stereographic projection for 1-D.
Given a scalar $x_e$ representing a point on the x-axis, we wish to find the point $x_c$ lying on the circle that projects to it (see Figure 1). The equation of the line passing through the north pole and $x_e$ is given by $f(x) = -\frac{1}{x_e}x + 1$, and the equation of the circle is $x^2 + f(x)^2 = 1$. Substituting the equation of the line into that of the circle, we get the point of intersection $x_c$, which can be represented in homogeneous coordinates as the vector

$x_c = \frac{2 x_e}{x_e^2 + 1}\, e_1 + \frac{x_e^2 - 1}{x_e^2 + 1}\, e_4 + e_5.$  (2)
From (2) we can infer the coordinates on the circle for the point at infinity as

$e_\infty = \lim_{x_e \to \infty} x_c = e_4 + e_5,$  (3)

$e_o = \frac{1}{2} \lim_{x_e \to 0} x_c = \frac{1}{2}(e_5 - e_4).$  (4)

Note that (2) can be rewritten as

$x_c = x_e + \frac{1}{2} x_e^2 e_\infty + e_o.$  (5)
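A quick numeric check of the embedding (5) (our own sketch, with the coordinate conventions $e_4^2 = 1$, $e_5^2 = -1$, $e_\infty = e_4 + e_5$, $e_o = \frac{1}{2}(e_5 - e_4)$, so a conformal point is stored as its five coordinates):

```python
def conformal(x):
    """Embed a Euclidean point x (3-tuple) as x_c = x + (1/2)x^2 e_inf + e_o,
    stored as the 5 coordinates over (e1, e2, e3, e4, e5)."""
    x2 = sum(v * v for v in x)
    return [*x, (x2 - 1.0) / 2.0, (x2 + 1.0) / 2.0]

def inner(a, b):
    """Inner product of 1-vectors under the diagonal metric (+1,+1,+1,+1,-1)."""
    return sum(a[i] * b[i] for i in range(4)) - a[4] * b[4]

x, y = (1.0, 2.0, 3.0), (-2.0, 0.5, 4.0)
X, Y = conformal(x), conformal(y)
print(inner(X, X))    # conformal points are null vectors: 0
print(inner(X, Y))    # encodes distance: -(1/2)|x - y|^2
print(-0.5 * sum((a - b) ** 2 for a, b in zip(x, y)))
```

The last two printed values agree, which is exactly the identity $x_c \cdot y_c = -\frac{1}{2}(x_e - y_e)^2$ used in the next section.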
3.2 Spheres and Planes
The equation of a sphere of radius $\rho$ centered at point $p_e \in \mathbb{R}^n$ can be written as $(x_e - p_e)^2 = \rho^2$. Since $x_c \cdot y_c = -\frac{1}{2}(x_e - y_e)^2$ and $x_c \cdot e_\infty = -1$, we can factor the expression above to

$x_c \cdot \left( p_c - \frac{1}{2}\rho^2 e_\infty \right) = 0.$  (6)
This finally yields the simplified equation for the sphere as $s = p_c - \frac{1}{2}\rho^2 e_\infty$. Alternatively, the dual of the sphere is represented as the 4-vector $s^* = s I_c$. The sphere can be directly computed from four points as

$s^* = x_{c1} \wedge x_{c2} \wedge x_{c3} \wedge x_{c4}.$  (7)
If we replace one of these points with the point at infinity, we get the equation of a plane:

$\pi^* = x_{c1} \wedge x_{c2} \wedge x_{c3} \wedge e_\infty,$  (8)

so that $\pi$ takes the standard form

$\pi = I_c \pi^* = n + d e_\infty,$  (9)

where n is the normal vector and d represents the Hesse distance.
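Equation (6) can be verified numerically with the same coordinate conventions used above (our own illustrative sketch): a point lies on the sphere exactly when its inner product with $s = p_c - \frac{1}{2}\rho^2 e_\infty$ vanishes.

```python
import math

def conformal(x):
    x2 = sum(v * v for v in x)
    return [*x, (x2 - 1.0) / 2.0, (x2 + 1.0) / 2.0]

def inner(a, b):
    return sum(a[i] * b[i] for i in range(4)) - a[4] * b[4]

def sphere(p, rho):
    """IPNS sphere s = p_c - (1/2) rho^2 e_inf, with e_inf = (0,0,0,1,1)."""
    s = conformal(p)
    s[3] -= 0.5 * rho * rho
    s[4] -= 0.5 * rho * rho
    return s

s = sphere((1.0, 0.0, 0.0), 2.0)
on  = (1.0 + 2.0 * math.cos(0.7), 2.0 * math.sin(0.7), 0.0)  # a point on the sphere
off = (4.0, 0.0, 0.0)                                        # a point off the sphere
print(inner(conformal(on), s))    # ~0
print(inner(conformal(off), s))   # -(1/2)(x-p)^2 + (1/2)rho^2, nonzero
```

In general $x_c \cdot s = -\frac{1}{2}(x_e - p_e)^2 + \frac{1}{2}\rho^2$, so the sign of this product also tells whether the point is inside or outside the sphere.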
3.3 Circles and Lines
A circle z can be regarded as the intersection of two spheres $s_1$ and $s_2$, that is, $z = s_1 \wedge s_2$. The dual form of the circle can be expressed by three points lying on it:

$z^* = x_{c1} \wedge x_{c2} \wedge x_{c3}.$  (10)

Similar to the case of planes, lines can be defined by circles passing through the point at infinity:

$L^* = x_{c1} \wedge x_{c2} \wedge e_\infty.$  (11)

The standard form of the line can be expressed as

$L = l + e_\infty (t \cdot l),$  (12)

where the line in standard form is a bivector with six parameters (Plücker coordinates) but only four degrees of freedom.
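The count of six parameters with four degrees of freedom can be illustrated with the standard direction/moment form of Plücker coordinates in ordinary vector algebra (our own sketch, not the paper's bivector form): a 3D line is the pair (d, m) with moment m = p x d for any point p on the line, subject to the constraint d . m = 0 and to overall scale invariance, which removes two of the six numbers.

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def plucker(p1, p2):
    """Plucker coordinates (direction d, moment m) of the line through p1, p2."""
    d = tuple(b - a for a, b in zip(p1, p2))
    m = cross(p1, d)     # the moment is independent of the chosen point on the line
    return d, m

d, m = plucker((1.0, 0.0, 0.0), (1.0, 1.0, 1.0))
print(dot(d, m))         # Plucker constraint: d . m = 0
# a different point pair on the same line gives the same (d, m) up to scale
d2, m2 = plucker((1.0, 2.0, 2.0), (1.0, 5.0, 5.0))
print(d2, m2)            # here exactly 3*(d, m)
```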
4 DIRECT KINEMATICS
The direct kinematics involves the computation of the position and orientation of the end-effector given the parameters of the joints. The direct kinematics can be easily computed given the lines of the screw axes.
4.1 Rigid Transformations
We can express rigid transformations in conformal geometry by carrying out reflections with respect to planes.

4.1.1 Reflection

The reflection of conformal geometric entities helps us to build any other transformation. The reflection of a point x with respect to the plane $\pi$ equals x minus twice the directed distance between the point and the plane, as shown in Figure 2.
Figure 2: Reflection of a point x with respect to the plane π.
For any geometric entity Q, the reflection with respect to the plane $\pi$ is given by

$Q' = \pi Q \pi^{-1}.$  (13)
4.1.2 Translation
The translation of conformal entities can be done by carrying out two reflections in parallel planes $\pi_1$ and $\pi_2$ (see Figure 3), that is,

$Q' = (\pi_2 \pi_1)\, Q\, (\pi_1^{-1} \pi_2^{-1}) = T_a\, Q\, \widetilde{T}_a,$  (14)

$T_a = (n + d e_\infty)\, n = 1 - \frac{1}{2} a e_\infty = e^{-\frac{a}{2} e_\infty},$  (15)

with $a = 2dn$.
4.1.3 Rotation
The rotation is the product of two reflections in nonparallel planes (see Figure 4):

$Q' = (\pi_2 \pi_1)\, Q\, (\pi_1^{-1} \pi_2^{-1}) = R_\theta\, Q\, \widetilde{R}_\theta,$  (16)
Figure 3: Reflection about parallel planes.
Figure 4: Reflection about nonparallel planes.
or, computing the conformal product of the normals of the planes,

$R_\theta = n_2 n_1 = \cos\left(\frac{\theta}{2}\right) - \sin\left(\frac{\theta}{2}\right) l = e^{-\frac{\theta}{2} l},$  (17)
with $l = n_2 \wedge n_1$ and $\theta$ twice the angle between the planes $\pi_2$ and $\pi_1$. The screw motion, called a motor in (Bayro-Corrochano and Kahler, 2000), related to an arbitrary axis L is $M = T R \widetilde{T}$:

$Q' = (T R \widetilde{T})\, Q\, (T \widetilde{R} \widetilde{T}) = M_\theta\, Q\, \widetilde{M}_\theta,$  (18)

$M_\theta = T R \widetilde{T} = \cos\left(\frac{\theta}{2}\right) - \sin\left(\frac{\theta}{2}\right) L = e^{-\frac{\theta}{2} L}.$  (19)
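That a rotation is the composition of two reflections, as in (16)-(17), is easy to check numerically for ordinary vectors using the reflection formula $n v n = 2(n \cdot v)n - v$ for a unit normal n (a sketch of ours for planes through the origin, where the sandwich of (13) reduces to this formula):

```python
import math

def reflect(v, n):
    """The sandwich n v n for a unit vector n: 2(n.v)n - v."""
    s = 2.0 * sum(a * b for a, b in zip(n, v))
    return tuple(s * b - a for a, b in zip(v, n))

phi = 0.3
n1 = (1.0, 0.0, 0.0)
n2 = (math.cos(phi), math.sin(phi), 0.0)   # plane normals at angle phi

v = (0.2, -0.7, 0.5)
w = reflect(reflect(v, n1), n2)            # two reflections = one rotation

# w equals v rotated by 2*phi about the planes' intersection axis (z here)
c, s = math.cos(2 * phi), math.sin(2 * phi)
print(w)
print((c * v[0] - s * v[1], s * v[0] + c * v[1], v[2]))
```

The two printed vectors agree: the composed reflections rotate by twice the angle between the planes, exactly the factor of two appearing in (17).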
4.2 Kinematic Chains
The direct kinematics for serial robot arms is a succession of motors, and it is valid for points, lines, planes, circles and spheres:

$Q' = \prod_{i=1}^{n} M_i \; Q \; \prod_{i=1}^{n} \widetilde{M}_{n-i+1}.$  (20)
5 BARRETT HAND DIRECT
KINEMATICS
The direct kinematics involves the computation of the position and orientation of the end-effector given the parameters of the joints. The direct kinematics can be easily computed given the lines of the screw axes.
In order to explain the kinematics of the Barrett hand, we show the kinematics of one finger. In this example we will assume that the finger is totally extended. Note that such a hypothetical position is not reachable in normal operation, but it simplifies the explanation.

We start by denoting some points on the finger which help to describe its position.
$x_{1o} = A_w e_1 + A_1 e_2 + D_w e_3,$  (21)

$x_{2o} = A_w e_1 + (A_1 + A_2) e_2 + D_w e_3,$  (22)

$x_{3o} = A_w e_1 + (A_1 + A_2 + A_3) e_2 + D_w e_3.$  (23)
The points $x_{1o}$, $x_{2o}$ and $x_{3o}$ describe the position of each joint and the end of the finger in Euclidean space; see Figure 5.
Figure 5: Barrett hand hypothetical position.
Having defined these points it is quite simple to calculate the axes, which will be used as the motors' axes:

$L_{1o} = -A_w (e_2 \wedge e_\infty) + e_{12},$  (24)

$L_{2o} = (x_{1o} \wedge e_1 \wedge e_\infty) I_c,$  (25)

$L_{3o} = (x_{2o} \wedge e_1 \wedge e_\infty) I_c.$  (26)
When the hand is initialized the fingers move to the home position, that is, $\Phi_2 = 2.46°$ in joint two and $\Phi_3 = 50°$ in joint three. In order to move the finger from the hypothetical position to its home position, the appropriate transformations need to be obtained:
$M_{2o} = \cos(\Phi_2/2) - \sin(\Phi_2/2) L_{2o},$  (27)

$M_{3o} = \cos(\Phi_3/2) - \sin(\Phi_3/2) L_{3o}.$  (28)
Having obtained the transformations, we apply them to the points and lines that must move:

$x_2 = M_{2o}\, x_{2o}\, \widetilde{M}_{2o},$  (29)

$x_3 = M_{2o} M_{3o}\, x_{3o}\, \widetilde{M}_{3o} \widetilde{M}_{2o},$  (30)

$L_3 = M_{2o}\, L_{3o}\, \widetilde{M}_{2o}.$  (31)
The point $x_1 = x_{1o}$ is not affected by the transformation, nor are the lines $L_1 = L_{1o}$ and $L_2 = L_{2o}$; see Figure 6.
Figure 6: Barrett hand at home position.
Since the rotation angles of the axes $L_2$ and $L_3$ are related, we will use fractions of the angle $q_1$ to describe their individual rotation angles. The motors of each joint are computed using $\frac{2}{35} q_4$ to rotate around $L_1$, $\frac{1}{125} q_1$ around $L_2$ and $\frac{1}{375} q_1$ around $L_3$; the angle coefficients were taken from the Barrett hand user manual.
$M_1 = \cos(q_4/35) + \sin(q_4/35) L_1,$  (32)

$M_2 = \cos(q_1/250) - \sin(q_1/250) L_2,$  (33)

$M_3 = \cos(q_1/750) - \sin(q_1/750) L_3.$  (34)
The position of each point is related to the angles $q_1$ and $q_4$ as follows:

$x_1 = M_1\, x_1\, \widetilde{M}_1,$  (35)

$x_2 = M_1 M_2\, x_2\, \widetilde{M}_2 \widetilde{M}_1,$  (36)

$x_3 = M_1 M_2 M_3\, x_3\, \widetilde{M}_3 \widetilde{M}_2 \widetilde{M}_1,$  (37)

$L_3 = M_1 M_2\, L_3\, \widetilde{M}_2 \widetilde{M}_1,$  (38)

$L_2 = M_1\, L_2\, \widetilde{M}_1.$  (39)
Since we already know $x_3$, $L_1$, $L_2$ and $L_3$, we can calculate the velocity of the end of the finger using

$\dot{X}_3 = X_3 \cdot \left( \frac{2}{35} L_1 \dot{q}_4 + \frac{1}{125} L_2 \dot{q}_1 + \frac{1}{375} L_3 \dot{q}_1 \right).$  (40)
6 POSE ESTIMATION
There are many approaches to solve the pose estimation problem (Hartley and Zisserman, 2000). In our approach we project the known mathematical model of the object onto the camera's image. This is possible because after calibration we know the intrinsic parameters of the camera; see Figure 7. The image of the projected mathematical model is compared with the image of the segmented object. If we find a match between them, then the mathematical object is placed in the same position and orientation as the real object. Otherwise we follow a gradient-descent-based algorithm to rotate and translate the mathematical model in order to reduce the error between them. This algorithm runs very fast.

Figure 7: Mathematical model of the object.

Figure 8: Pose estimation of a disk with a fixed camera.
Figure 8 shows the pose estimation result. In this case we have a maximum error of 0.4° in the orientation estimation and 5 mm of maximum error in the position estimation of the object. The problem becomes more difficult to solve when the stereoscopic system is moving. Figure 9 shows how well the stereo system tracks the object. If we want to know the real object's position with respect to the world coordinate system, we must of course know the extrinsic parameters of the cameras. Figure 10 illustrates the object's position and orientation with respect to the robot's hand. In the upper row of this figure we can see an augmented reality position sequence of the object. This shows that we can add the mathematical object to the real image. Furthermore, in the second row of the same image we can see the virtual reality pose estimation result.
Figure 9: Pose estimation of a container.

Figure 10: Object presented in augmented and virtual reality.
7 GRASPING THE OBJECTS
Considering that with cameras we can only see the surface of the observed objects, in this work we treat them as two-dimensional surfaces embedded in 3D space, described by the following function:

$H(s,t) = h_x(s,t) e_1 + h_y(s,t) e_2 + h_z(s,t) e_3,$  (41)

where s and t are real parameters in the range [0, 1]. Such a parametrization allows us to work with different objects like points, conics, quadrics, or even more complex real objects like cups, glasses, etc.
Table 1: Functions of some objects.

Particle:  $H = 3e_1 + 4e_2 + 5e_3$
Cylinder:  $H = \cos(t)e_1 + \sin(t)e_2 + s e_3$
Plane:     $H = t e_1 + s e_2 + (3s + 4t + 2) e_3$
There are many styles of grasping; however, we take into account only three principal styles. Note also that for each style of grasping there are many possible solutions; for another approach see (Ch Borst and Hirzinger, 1999).
7.1 Style of Grasp One
Since our objective is to grasp such objects with the Barrett Hand, we must consider that it has only three fingers, so the problem consists in finding three grasping points for which the system is in equilibrium while holding; this means that the sum of the forces, as well as the sum of the moments, equals zero.

We know the surface of the object, so we can compute its normal vector at each point using

$N(s,t) = \left( \frac{\partial H(s,t)}{\partial s} \wedge \frac{\partial H(s,t)}{\partial t} \right) I_e.$  (42)
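In 3D, the dual in (42) of the wedge of the two tangent vectors is (up to sign) just their cross product, so the normal can be sketched numerically with finite differences. We use the cylinder of Table 1, with the angular parameter scaled by 2π (an assumption of ours so that t in [0, 1] covers the full circumference):

```python
import math

def H(s, t):
    """Cylinder of Table 1, with t scaled by 2*pi (our assumption)."""
    return (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t), s)

def normal(h, s, t, eps=1e-6):
    """N ~ (dH/ds) x (dH/dt): the 3D dual of the wedge of the tangents."""
    ds = [(a - b) / (2 * eps) for a, b in zip(h(s + eps, t), h(s - eps, t))]
    dt = [(a - b) / (2 * eps) for a, b in zip(h(s, t + eps), h(s, t - eps))]
    return (ds[1] * dt[2] - ds[2] * dt[1],
            ds[2] * dt[0] - ds[0] * dt[2],
            ds[0] * dt[1] - ds[1] * dt[0])

n = normal(H, 0.5, 0.125)   # point at 45 degrees around the cylinder
print(n)                    # radial direction, up to scale and sign
```

As expected for a cylinder, the computed normal has no axial (z) component and points along the radial direction of the chosen surface point.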
On surfaces with low friction the force F tends to its projection onto the normal ($F \approx F_n$). To maintain equilibrium, the sum of the forces must be zero: $\sum_{i=1}^{3} \|F_n\| N(s_i, t_i) = 0$ (Fig. 11).
Figure 11: Object and its normal vectors.
This fact restricts the points on the surface at which the forces can be applied. The number of such points is further reduced if we consider that the forces on the object are equal:

$\sum_{i=1}^{3} N(s_i, t_i) = 0.$  (43)
Additionally, in order to maintain the equilibrium of the system, the sum of the moments must be zero:

$\sum_{i=1}^{3} H(s,t) \wedge N(s,t) = 0.$  (44)

The points on the surface with the maximum and minimum distance to the object's center of mass fulfill $H(s,t) \wedge N(s,t) = 0$. The normal vector at such points crosses the center of mass ($C_m$) and does not produce any moment. Before determining the external and internal points, we must compute the center of mass as

$C_m = \int_0^1 \int_0^1 H(s,t)\, ds\, dt.$  (45)
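Equation (45) can be approximated with a simple midpoint rule over the (s, t) grid (a numeric sketch of ours; for the Table 1 cylinder with the 2π-scaled angle, the centroid of the parametrized surface lies on the axis at half height):

```python
import math

def H(s, t):
    return (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t), s)

def center_of_mass(h, n=200):
    """Midpoint-rule approximation of C_m = int_0^1 int_0^1 H(s,t) ds dt."""
    acc = [0.0, 0.0, 0.0]
    for i in range(n):
        for j in range(n):
            p = h((i + 0.5) / n, (j + 0.5) / n)
            for k in range(3):
                acc[k] += p[k]
    return [a / (n * n) for a in acc]

cm = center_of_mass(H)
print(cm)    # ~ [0, 0, 0.5]
```

Note that this is the centroid of the parametrization; a uniform surface centroid would additionally weight by the area element, which happens to be constant for this cylinder.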
Once $C_m$ is calculated we can establish the next restriction:

$(H(s,t) - C_m) \wedge N(s,t) = 0.$  (46)

The values of s and t satisfying (46) form a subspace, and the corresponding points H(s,t) are critical points on the surface (maxima, minima or inflection points).
The constraint imposing that the three forces must be equal is hard to fulfill because it implies that the three points must be symmetric with respect to the center of mass. When such points are not present, we can relax the constraint to allow only two forces to be equal in order to fulfill the hand's kinematic equations. Then the normals $N(s_1,t_1)$ and $N(s_2,t_2)$ must be symmetric with respect to $N(s_3,t_3)$:

$N(s_3,t_3)\, N(s_1,t_1)\, N(s_3,t_3)^{-1} = N(s_2,t_2).$  (47)
7.2 Style of Grasp Two
In the previous style of grasping three contact points were considered. In this section we take into account a greater number of contact points; this generates a style of grasping that holds objects more securely. To increase the number of contact points, the base of the hand is taken into account.

Since the object is described by the equation H(s,t), it is possible to compute a plane $\pi_b$ that divides the object in the middle; this is possible using linear regression, and the same holds for the principal axis $L_p$. See Figure 12.

Figure 12: Planes of the object.

We select only the points at locations with normals parallel to the plane $\pi_b$:

$N(s,t) \wedge \pi_b \approx 0.$  (48)

Now we choose three points separated by 25 mm to generate a plane on the object. In this style of grasping the position of the hand relative to the object is trivial, because we just need to align the center of these points with the center of the hand's base. Also, the orientation is the normal of the plane $\pi_1 = x_1 \wedge x_2 \wedge x_3 \wedge e_\infty$.
7.3 Style of Grasp Three
In this style of grasping the forces $F_1$, $F_2$ and $F_3$ do not intersect the center of mass. They cancel by symmetry, because the forces are parallel:

$N(s_3,t_3) F_3 = N(s_1,t_1) F_1 + N(s_2,t_2) F_2.$  (49)

Also, the forces $F_1$, $F_2$ and $F_3$ lie in the plane $\pi_b$ and are orthogonal to the principal axis $L_p$ ($\pi_b = L_p \cdot N(s,t)$), as can be seen in Figure 13.

A new restriction is then added to reduce the subspace of solutions:

$F_3 = 2F_1 = 2F_2,$  (50)

$N(s_1,t_1) = N(s_2,t_2) = N(s_3,t_3).$  (51)
Figure 13: Forces of grasping.
Finally, the directed distance between the parallel lines applied at $x_1$ and $x_2$ must be equal to 50 mm, and between $x_1$, $x_2$ and $x_3$ it must be equal to 25 mm.

Now we search exhaustively for three points by varying $s_i$ and $t_i$. Figure 14 shows the simulation and result of this grasping algorithm.

Figure 14: Simulation and result of the grasping.

The position of the object relative to the hand must be computed using a coordinate frame on the object and another on the hand.
8 TARGET POSE
Once the three grasping points (P
1
= H(s
1
,t
1
), P
2
=
H(s
2
,t
2
), P
3
= H(s
3
,t
3
)) are calculated, for each fin-
ger it is really easy to determine the angles at the
joints. To determine the angle of the spread (q
4
= β),
Figure 15: Object’s position relative to the hand.
we use
cosβ =
(p
1
C
m
) · (C
m
p
3
)
|
p
1
c
m
||
C
m
p
3
|
. (52)
To calculate each one of the finger angles, we deter-
mine its elongation as
x
3
· e
2
=
|
(p
1
C
m
)
|
A
w
sin(β)
A
1
, (53)
x
3
· e
2
=
|
(p
2
C
m
)
|
A
w
sin(β)
A
1
, (54)
x
3
· e
2
=
|
(p
3
C
m
)
|
+ h A
1
, (55)
where x
3
· e
2
determines the opening distance of the
finger
x
3
· e
2
= (M
2
M
3
x
3
e
M
3
e
M
2
) · e
2
(56)
x
3
· e
2
= A
1
+ A
2
cos(
1
125
q+ I
2
) +
+A
3
cos
4
375
q+ I
2
+ I
3
. (57)
Solving for the angle q we have the opening angle for
each finger. These angles are computed off line for
each style of grasping of each object. They are the
target in the velocity control of the hand.
8.1 Object Pose
We must find the transformation M which allows us
to put the hand in a such way that each finger-end
coincides with the corresponding contact point. For
the sake of simplicity transformation M is divided in
three transformations (M
1
,M
2
,M
3
). With the same
purpose we label the finger ends as X
1
, X
2
and X
3
, and
the contact points as P
1
, P
2
and P
3
.
The first transformation M
1
is the translation be-
tween the object and the hand, which is equal to the
directed distance between the centers of the circles
called Z
h
= X
1
X
2
X
3
y Z
o
= P
1
P
2
P
3
, and it
can be calculated as
M
1
= e
1
2
Z
h
Z
h
e
Z
o
Z
o
e
e
I
c
. (58)
The second transformation allows the alignment of
the planes π
h
= Z
h
= X
1
X
2
X
3
e
and π
o
=
Z
o
e
, which are generated by the new points of the
hand and the object. This transformation is calculated
as M
2
= e
1
2
π
h
π
o
. The third transformation allows
that the points overlap and this can be calculated us-
ing the planes π
1
= Z
o
X
3
e
and π
2
= Z
o
P
3
e
,
which are generated by the circle’s axis and any of the
points M
3
= e
1
2
π
1
π
2
.
These transformations define also the pose of the
object relative to the hand. They are computed off line
in order to know the target position and orientation of
the object with respect to the hand, it will be used to
design a control law for visually guided grasping
GEOMETRIC ADVANCED TECHNIQUES FOR ROBOT GRASPING USING STEREOSCOPIC VISION
181
9 VISUALLY GUIDED GRASPING
Once the target position and orientation of the object
is known for each style of grasping and the hand’s
posture (angles of joints), it is possible to write a con-
trol law using this information and the equation of dif-
ferential kinematics of the hand that it allows by using
visual guidance to take an object.
Basically the control algorithm takes the pose of the
object estimated as shown in the Section 6 and com-
pares with the each one of the target poses computed
in the Section 8 in order to choose as the target the
closest pose, in this way the style of grasping is auto-
matically chosen.
Once the style of grasping is chosen and target
pose is known, the error ε between the estimated and
computed is used to compute the desired angles in the
joints of the hand
α
d
= α
t
e
ε
2
+ (1 e
ε
2
)α
a
(59)
where α
d
is the desired angle of the finger, α
t
is the
target angle computed in the section 8 and α
a
is the
actual angle of the finger. Now the error between the
desired and the actual position is used to compute the
new joint angle using the equation of differential kine-
matics of the Barrett hand given in the Section 5.
9.1 Results
Next we show the results of the combination of the al-
gorithms of pose estimation, visual control and grasp-
ing to create a new algorithm for visually guided
grasping. In the Figure 16 a sequence of images of the
grasping is presented. When the bottle is approached
by the hand the fingers are looking for a possible point
of grasp.
Figure 16: Visually guided grasping.
Now we can change the object or the pose of the
object and the algorithm is computing a new behav-
ior of grasping. The figure (17) shows a sequence of
images changing the pose of the object.
Figure 17: Changing the object’s pose.
10 CONCLUSION
In this paper the authors used conformal geometric al-
gebra to formulate grasping techniques. Using stereo
vision we are able to detect the 3D pose and the intrin-
sic characteristics of the object shape. Based on this
intrinsic information we developed feasible grasping
strategies.
This paper emphasizes the importance of the de-
velopment of algorithms for perception and grasping
using a flexible and versatile mathematical framework
REFERENCES
Bayro-Corrochano, E. and Kahler, D. (2000). Motor algebra
approach for computing the kinematics of robot ma-
nipulators. In Journal of Robotic Systems. 17(9):495-
516.
Ch Borst, M. F. and Hirzinger, G. (1999). A fast and robust
grasp planner for arbitrary 3d objects. In ICRA99, In-
ternational Conference on Robotics and Automation.
pages 1890-1896.
H. Li, D. Hestenes, A. R. (2001). Generalized Homoge-
neous coordinates for computational geometry. pages
27-52, in ((Somer, 2001)).
Hartley and Zisserman, A. (2000). Multiple View Geometry
in Computer Vision. Cambridge University Press, UK,
1st edition.
Ruf, A. (2000). Closing the loop between articulated mo-
tion and stereo vision: a projective approach. In PhD.
Thesis, INP, GRENOBLE.
Somer, G. (2001). Geometric Computing with Clifford Al-
gebras. Springer-Verlag, Heidelberg, 2nd edition.
ICINCO 2007 - International Conference on Informatics in Control, Automation and Robotics
182