brain structures implicated in error-noting; see
(Kendler & Kendler, 1962, 1969). Finally, various
implementations of the above ideas have shown
success in a wide range of domains (Anderson,
Fults, et al., 2008b). In these cases, however, the “core” of irregularity-fix strategies was separate from the system’s KB.
Irregularity-fix cores for autonomous agents do exist, as the aforementioned implementations
show. But can they – or some improved future
version – do all that has been hypothesized? What
sorts of irregularities might lie beyond any given
core set of fixes? Are we in a Gödelian situation
where, for any core set, there are yet more
irregularities beyond its reach? And if a core can
reach far, will it no longer be concise? Will core
effectiveness scale with the KB? Will a powerful
core also require a powerfully expressive language
and possibly thereby risk inconsistencies within
itself?
There are grounds to think that reach and
conciseness and effectiveness are well within the
capacity of an implementable commonsense core:
much the same grounds cited above for the existence
of a core in the first place. But scalability? As
humans are faced with more and more information,
our effectiveness can sometimes degrade in two
ways. Not only can it take us longer to consider all the data (though in some cases, of course, the extra information makes things go faster), but there is also a heightened likelihood that we will mess up: we will forget something, lose track of where we are, confuse or conflate similar notions, and so on. However, that by itself is not the issue; the issue is whether we cope as well, whether we still notice that things are amiss and bring corrective strategies to bear, as we do when we have a smaller set of facts to deal with. Here I simply state an opinion (in the current absence of empirical data): yes, we do notice our confusion, our lack of progress, and so on, whether working with a large or small KB, on a simple or complex task; and we also respond actively: we start over, ask for help, give up, etc. We do not rotely go on and on, oblivious to the mess we are in.
4 SHOULD THE CORE FIT INTO THE KB?
We now address the last hypothesized item:
consistency within the core. As claimed, the
commonsense core can be implemented and
included as part of an autonomous system. Having
the core sit outside the KB – for instance as a
monitor-and-control Bayesian net apart from the
agent’s world model – is an effective design for
many purposes. Further, its isolation protects it from possible infection by a contradiction in the KB. While the KB may be in the throes of explosive inference, the core is not. Even the beginnings of an explosive KB inference process are readily noted by such a core, which can then redirect KB inference in more productive ways. If the core fixes are expressed in a propositional language, together form a concise set, and are each of the simple sort we have described (ask for help, give up, etc.), it is plausible that there may well be no internal inconsistency among them.
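As a rough illustration of this separated design, consider the following minimal sketch (it is not the implementation reported in Anderson, Fults, et al., 2008b; the class names, thresholds, and fix strategies are invented for the example). A small core monitors the size and consistency of a toy KB and, when inference starts to look explosive, applies one of its simple fixes:

class KB:
    """Toy knowledge base: a set of propositional facts plus one inference step."""
    def __init__(self, facts):
        self.facts = set(facts)

    def inference_step(self):
        # Stand-in for one round of forward inference.  A direct contradiction
        # (p together with "not p") lets a classical reasoner derive anything,
        # modelled here by a flood of junk conclusions.
        derived = set()
        for fact in self.facts:
            if ("not " + fact) in self.facts:
                derived.update("junk_conclusion_" + str(i) for i in range(100))
        self.facts |= derived


class Core:
    """Concise set of irregularity-fix strategies, kept outside the KB."""
    def __init__(self, growth_limit=50):
        self.growth_limit = growth_limit

    def monitor(self, kb, size_before):
        # Note the *beginnings* of an explosive inference process.
        if len(kb.facts) - size_before > self.growth_limit:
            return "explosive inference"
        if any(("not " + fact) in kb.facts for fact in kb.facts):
            return "direct contradiction"
        return None

    def fix(self, anomaly, kb):
        # Simple fixes of the sort described: start over, ask for help, give up.
        if anomaly == "explosive inference":
            kb.facts = {f for f in kb.facts if not f.startswith("junk_conclusion_")}
            return "start over (junk conclusions discarded)"
        if anomaly == "direct contradiction":
            return "ask for help"
        return "give up"


kb = KB({"door is open", "not door is open"})   # KB seeded with a contradiction
core = Core()
size_before = len(kb.facts)
kb.inference_step()
anomaly = core.monitor(kb, size_before)
if anomaly is not None:
    print("core noticed:", anomaly, "-> response:", core.fix(anomaly, kb))

The point of the separation is visible in the sketch: the core consults only coarse signals about the KB (growth, surface contradictions) and never imports the KB's contents, so a contradiction raging in the KB cannot infect the core's own small set of rules.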
Yet there are situations in which it may make
less sense to separate the core from the KB. Here are
four such situations: (i) over time the core trains a
new item into the KB so that what had been a
particular kind of anomaly handled directly by the
core becomes encoded as a familiar event: the core
strategy that had been handling these events is now
largely replicated in the KB as a standard piece of
knowledge about how the world works; (ii) the query “why did you do that?” may require reference to the core, and so the KB reasoner must have some ability to monitor facts about the core: “I did that because I got confused and had to start over” (see the toy sketch after this list); (iii)
“how/why did I do that?” can be asked as an
exercise in self-improvement (maybe it can be done
better), which suggests bidirectional monitoring and
control between core and KB; and (iv) the core itself
may behave in an anomalous manner (and if an
infinite regress of anomaly-handling meta-cores is to
be avoided, then we might as well have all the
anomaly-handling inside a single KB at the outset).
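To make (i) and (ii) concrete, here is a toy sketch of the unified arrangement (the relation names handles and intervened, and the trace format, are invented for illustration rather than drawn from any cited system): former core strategies are stored as ordinary KB facts, core interventions are recorded in the same KB, and the ordinary reasoner can then answer a “why did you do that?” query by consulting both.

# Former core strategies now encoded as ordinary KB knowledge, plus a trace
# of one core intervention.  Relation names are invented for this example.
unified_kb = {
    ("handles", "confusion", "start over"),
    ("handles", "lack of progress", "ask for help"),
    ("intervened", "confusion", "start over", "t17"),
}

def why_did_you_do_that(kb, action):
    """Answer a 'why' query by joining strategy knowledge with the trace."""
    for fact in kb:
        if fact[0] == "intervened" and fact[2] == action:
            _, anomaly, _, when = fact
            return f"At {when} I did that because I detected {anomaly} and chose to {action}."
    return "I have no record of doing that."

print(why_did_you_do_that(unified_kb, "start over"))
# -> At t17 I did that because I detected confusion and chose to start over.

Here the answer to the why-query is derived from the same store as everything else the agent knows, which is what makes (ii) and (iii) straightforward, and also what exposes the core to the risk discussed next.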
On the other hand, combining core and KB raises
the danger of inconsistency infecting the core; how
serious a problem this may be is currently under
investigation.
REFERENCES
Anderson, M., Gomaa, W., Grant, J., Perlis, D., 2008a.
Active logic semantics for a single agent in a static
world. Artificial Intelligence, 172: 1045-1063.
Anderson, M., Fults, S., Josyula, D., Oates, T., Perlis, D.,
Schmill, M., Wilson, S., Wright, D., 2008b. A self-
help guide for autonomous systems. AI Magazine,
29(2):67-76.
Grant, J., 1978. Classifications for inconsistent theories.
Notre Dame Journal of Formal Logic, 3: 435-444.