of the robot’s effector (Corke, 2011). This approach has the advantage of allowing control by directly measuring the error in the effector’s interaction with the environment, making it robust to inaccuracies in the estimates of the system parameters (Chaumette and Hutchinson, 2006).
This research seeks to contribute to this debate from the point of view of cognitive roboticists. It can be conceived as an effort to assess to what extent it is feasible to build cognitive systems that exploit the benefits of a psychologically oriented CA without abandoning efficient control strategies such as visual servoing. The aim is to verify the potential benefits of building an interactive platform on these technologies and to analyze the resulting flexibility in automating manipulative tasks.
2 COGNITIVE ARCHITECTURES
According to Kelley (2003), two key design properties that underlie the development of any CA are memory and learning. Various types of memory serve as a repository for background knowledge about the world, the current episode, the activity, and oneself, while learning is the main process that shapes this knowledge. Based on these two features, the different approaches can be grouped into three categories: symbolic, sub-symbolic, and hybrid models.
A symbolic CA has the ability to input, output, store, and alter symbolic entities, executing appropriate actions in order to reach its goals (Newell, 1994). The majority of these architectures employ centralized control over the information flow from sensory inputs, through memory, to motor outputs. This approach stresses the executive functions of working memory, with access to a semantic memory in which knowledge generally has a graph-based representation. Rule-based representations of perceptions and actions in procedural memory embody the logical reasoning of human experts.
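As a minimal, purely illustrative sketch (the rules, predicates, and working-memory contents below are hypothetical and not drawn from any particular architecture), such rule-based procedural knowledge can be pictured as condition-action pairs matched against the contents of working memory:

```python
# Hypothetical sketch of rule-based procedural memory in a symbolic CA.
working_memory = {"object_visible": True, "gripper_empty": True}

rules = [
    # each rule: (condition over working memory, symbolic action)
    (lambda wm: wm["object_visible"] and wm["gripper_empty"], "reach_and_grasp"),
    (lambda wm: not wm["gripper_empty"], "place_object"),
]

def select_action(wm):
    """Fire the first rule whose condition holds in working memory."""
    for condition, action in rules:
        if condition(wm):
            return action
    return "idle"

print(select_action(working_memory))  # -> "reach_and_grasp"
```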
Inspired by connectionist ideas, a sub-symbolic CA is composed of a network of processing nodes (Duch et al., 2008). These nodes interact with each other in specific ways, changing the internal state of the system, and interesting emergent properties arise as a result. There are two complementary approaches to memory organization: globalist and localist. In these architectures, the generalization of learned responses to novel stimuli is usually good, but learning new items may lead to problematic interference with existing knowledge (O’Reilly and Munakata, 2000).
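As a loose, equally hypothetical sketch (the network size, weights, and input are arbitrary), the sub-symbolic view can be pictured as activation propagating through weighted connections until the internal state settles into a response:

```python
import numpy as np

# Hypothetical sketch of a sub-symbolic update: a small network of
# processing nodes changes its internal state under an external stimulus.
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.5, size=(4, 4))   # node-to-node connection strengths
state = np.zeros(4)                            # internal state of the four nodes
stimulus = np.array([1.0, 0.0, 0.0, 0.0])      # external input pattern

for _ in range(10):
    # each node integrates weighted activity from the others plus the input
    state = np.tanh(weights @ state + stimulus)

print(state)  # the settled activation pattern is the network's response
```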
A hybrid CA combines the relative strengths of
the first two paradigms (Kelley, 2003). In this sense, symbolic systems are good at processing and executing high-level cognitive tasks, such as planning and deliberative reasoning, resembling human expertise; but they are not the best approach for representing low-level information. Sub-symbolic systems are better suited to capturing context-specificity and to handling low-level information and uncertainty, yet their main shortcoming is the difficulty of representing and handling higher-order cognitive tasks.
3 VISUAL SERVOING
The task in visual servoing (VS) is to use visual fea-
tures, extracted from an image, to control the pose of
the robot’s end-effector in relation to a target. The
camera may be carried by the end-effector (a configuration known as eye-in-hand) or fixed in the workspace (eye-to-hand) (Corke, 2011). The aim of all vision-based control schemes is to minimize an error e(t), which is typically defined by
e(t) = s(m(t), a) − s^*    (1)
The vector m(t) is a set of image measure-
ments used to compute a vector of k visual features
s(m(t),a), based on a set of parameters a represent-
ing potential additional knowledge about the system
(e.g., the camera intrinsic parameters or a 3-D model of the target). The vector s^* contains the desired values of the features.
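As an illustration, the following sketch (with hypothetical pixel measurements and intrinsic parameters; function and variable names are illustrative) evaluates the error of (1) for the common case in which s is a vector of normalized image-point coordinates and the parameter set a is the camera intrinsic matrix:

```python
import numpy as np

def features_from_points(points_px, K):
    """Map pixel measurements m(t) to normalized coordinates s(m(t), a),
    where the parameter set a is the intrinsic matrix K."""
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])
    normalized = (np.linalg.inv(K) @ pts.T).T
    return normalized[:, :2].ravel()

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                      # assumed camera intrinsics
m_t = np.array([[350.0, 260.0], [300.0, 220.0]])     # current image measurements
m_star = np.array([[320.0, 240.0], [280.0, 200.0]])  # measurements at the desired pose

s = features_from_points(m_t, K)          # current features s(m(t), a)
s_star = features_from_points(m_star, K)  # desired features s^*
e = s - s_star                            # error of equation (1)
```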
Depending on the characteristics of the task, a
fixed goal can be considered where changes in s de-
pend only on the camera’s motion. A more general
situation can also be modeled, where the target is
moving and the resulting image depends both on the
camera’s and the target’s motion. In any case, VS
schemes mainly differ in the way s is designed. For
image-based visual servo control (IBVS), s consists of
a set of features that are immediately available in the
image data. For position-based visual servo control
(PBVS), s consists of a set of 3D parameters, which
must be estimated from image measurements. Once s is selected, the design of a velocity controller requires the relation between its time variation and the camera velocity, which is given by
ṡ = L_s V_c    (2)
The spatial velocity of the camera is denoted by V_c = (v_c, ω_c), with v_c the instantaneous linear velocity of the origin of the camera frame and ω_c the instantaneous angular velocity of the camera frame. L_s ∈ R^{k×6} is named the interaction matrix related to s.
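For instance, for a single normalized image point at depth Z, L_s takes the classical 2×6 form used in IBVS (Chaumette and Hutchinson, 2006); the sketch below, with assumed numerical values, simply evaluates (2) for that case:

```python
import numpy as np

def interaction_matrix_point(x, y, Z):
    """Classical 2x6 interaction matrix of a normalized image point at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,      -(1.0 + x**2),  y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y**2, -x * y,        -x],
    ])

x, y, Z = 0.05, -0.02, 1.5                        # assumed feature coordinates and depth
L_s = interaction_matrix_point(x, y, Z)           # interaction matrix related to s
V_c = np.array([0.01, 0.0, 0.02, 0.0, 0.0, 0.1])  # camera velocity (v_c, omega_c)
s_dot = L_s @ V_c                                 # time variation of s, equation (2)
```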
Using (1) and (2), the relation between the camera
velocity and the time variation of e can be defined by