some external feedback. This viewpoint is consis-
tent with Craik’s understanding of complex behaviors
and learning: “We should now have to conceive a ma-
chine capable of modification of its own mechanism
so as to establish that mechanism which was suc-
cessful in solving the problem at hand, and the sup-
pression of alternative mechanisms” (Craik, 1966).
The optimization model offers mechanisms that allow symbols to be grounded with regard to external feedback from problem-solving processes.
By merely observing regularities and invariances, a cognitive system (or “agent” for short) is able to act and to predict without internalizing any form of ontological reality. This machine-learning-motivated
point of view is related to constructivism, in which the
world remains a black box in the sense of Ernst von
Glasersfeld (Glasersfeld, 1987). From his point of view, experience, i.e., the high-dimensional information perceived by the sense organs, is the only contact a cognitive system has with the ontological reality. If it organizes its “experience into viable representation of a world, then one can consider that representation a model, and the ‘outside reality’ it claims to represent, a black box” (Glasersfeld, 1979). A viable representation already implies a functional component: the representation must reach a certain quality with regard to the fulfillment of the agent’s target.
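To make the notion of viability concrete, the following minimal Python sketch (not part of the original argument; the episode format, the predict function, and the success threshold are hypothetical choices made only for illustration) treats viability as a purely functional test: a representation counts as viable if the behavior it supports reaches the agent's target often enough, without any reference to the structure of the ontological reality.

    # Illustrative sketch: viability as a functional criterion only.
    # `predict`, `episodes`, and `threshold` are hypothetical names for this example.
    def is_viable(predict, episodes, threshold=0.8):
        """A representation is viable if the behavior it supports reaches
        the agent's target often enough; no 'true' world model is assumed."""
        successes = sum(1 for observation, target in episodes
                        if predict(observation) == target)
        return successes / len(episodes) >= threshold

    # Toy usage: the world is accessible only through observation/feedback pairs.
    episodes = [((0.1, 0.9), "approach"), ((0.8, 0.2), "avoid"), ((0.2, 0.7), "approach")]
    rule = lambda obs: "approach" if obs[1] > obs[0] else "avoid"
    print(is_viable(rule, episodes))  # True -- the representation "works", nothing more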
2 SYMBOLS AND THEIR MEANING
Before we can discuss the SGP in greater detail, let me review the current state of play. As pointed out above, mainstream AI claims that cognitive operations are carried out on a symbolic level (Newell and Simon, 1976). In this view, I assume that an autonomous agent performs cognitive operations with a symbolic algorithm, i.e., an algorithm that operates on a symbolic level. An agent is typically part of the real world or of a virtual world. Its situatedness in a real environment is referred to as “embodied intelligence” (Pfeifer and Iida, 2003).
An embodied agent should physically interact with its
environment and exploit the laws of physics in that
environment, in order to be deeply grounded in its
world. It is able to perceive its environment with various sensors, e.g., a visual system or tactile sensors, that deliver high-dimensional data. This sensory information is then used to build its cognitive structures.
In the following I assume that an artificial agent
uses an interface algorithm I that performs a map-
ping I : D → S from a data sample d ∈ D to a sym-
bol s ∈ S, i.e., I maps subsymbolic data from a high-dimensional set D of input data onto a set of symbols S. The set of symbols is subject to cognitive manipulations by a symbolic algorithm A. The interface is the basis of many approaches in engineering and artificial intelligence, although not always explicitly stated. The
meaning of a symbol s ∈ S is based on its interpreta-
tion on the symbolic level. On the one hand, symbols are only tokens, which may be defined independently of their shape (Harnad, 1994). On the other hand,
the effect they have on the symbolic algorithm A can
be referred to as the meaning or interpretation of the
symbol. Formally, a symbolic algorithm A performs
(cognitive) operations on a set of symbols S, which is
then the basis of acting and decision making.
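As a purely illustrative sketch of this two-level setup (the nearest-prototype interface and the rule table below are assumptions made only for this example, not a method proposed in the text), the mapping I : D → S and a symbolic algorithm A operating on S could look as follows in Python:

    import math

    # Interface I : D -> S, mapping high-dimensional data samples to discrete symbols.
    PROTOTYPES = {"near": (0.0, 0.0), "far": (5.0, 5.0)}

    def interface(d):
        """Map a subsymbolic data sample d in D to the symbol s in S whose
        prototype is closest -- a stand-in for any learned mapping I."""
        return min(PROTOTYPES, key=lambda s: math.dist(d, PROTOTYPES[s]))

    # Symbolic algorithm A: operates on symbols only, never on the raw data.
    RULES = {"near": "grasp", "far": "move_towards"}

    def symbolic_algorithm(symbol):
        """The 'meaning' of a symbol here is exactly the effect it has on A,
        i.e., which action it triggers."""
        return RULES[symbol]

    sample = (0.4, 0.3)                           # toy 2-D stand-in for sensory input
    print(symbolic_algorithm(interface(sample)))  # -> "grasp"

Any clustering or classification model could take the place of the prototype lookup; the point is only the separation between the subsymbolic mapping I and the symbol manipulation A.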
In this context, Newell and Simon stated that “a physical symbol system has the necessary and sufficient means for general intelligent action” (Newell and Simon, 1976). Even if we assume this to be true
and if we have the means to implement these gen-
eral intelligent algorithms, the question of how we can
get a useful physical symbol system remains unan-
swered. How are symbols defined in this symbol system, and how do they get their meaning? Floridi empha-
sizes that the SGP is an important question in the phi-
losophy of information (Floridi, 2004). It describes
the problem of how words get assigned to meanings
and what meaning is. Related questions have been in-
tensively discussed over the last few decades (Harnad,
1987; Harnad, 1990). Harnad argues that symbols are
bound to a meaning independent of their shape (Har-
nad, 1990). This meaning-shape independence indicates that the ontological reality is not reflected in the shape of a symbol; it is consistent with the observation that the ability to organize perceived input-output relations is independent of the shape of a symbol. This can also be observed in many existing machine learning approaches for artificial agents.
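A hedged toy illustration of this shape independence (reusing the kind of prototype/rule setup sketched above; all names are again hypothetical): if the symbol tokens are renamed consistently in the interface and in the symbolic rules, the agent's behavior does not change.

    import math

    prototypes = {"near": (0.0, 0.0), "far": (5.0, 5.0)}
    rules = {"near": "grasp", "far": "move_towards"}

    def act(d, protos, rule_table):
        # interface step (data -> symbol) followed by the symbolic step (symbol -> action)
        symbol = min(protos, key=lambda s: math.dist(d, protos[s]))
        return rule_table[symbol]

    # Replace every token by an arbitrary new "shape".
    renaming = {"near": "x17", "far": "x42"}
    prototypes2 = {renaming[s]: p for s, p in prototypes.items()}
    rules2 = {renaming[s]: a for s, a in rules.items()}

    sample = (0.4, 0.3)
    assert act(sample, prototypes, rules) == act(sample, prototypes2, rules2)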
While it may not be difficult to ground symbols in one way or another, answering the question of how an autonomous agent is able to solve this task on its own, thereby elaborating its own semantics, turns out to be much more difficult. In biological systems, genetic preconditions and the interaction with the environment and with other autonomous agents seem to be the only sources on which this elaboration is based. Therefore, the interpretation of symbols must be a process intrinsic to the symbolic system itself, without the need for external influence. This process allows the agent
to construct a sort of “mental” representation that
increases its chances of survival in its environment.
Harnad derived three conditions from this assump-
tion: First, no semantic resources are preinstalled in
the autonomous agent (no innatism, or nativism re-