agents. Also, since no examples from other universities were used for a particular agent, each agent reflects only the view of its own university, so that differences between the universities in the understanding of a concept (i.e., what should be taught by which unit) are preserved.
4.2 Concepts in Action
It is very important to assess the performance of the learner agent on a newly learned concept. As stated before, our main concern in this paper is to have agents learn concepts in order to improve communication. Needless to say, to communicate about a concept an agent must be able to distinguish an instance of the concept (i.e., an object) from other instances. Based on this fact, we conducted an experiment to see how Ag_L classifies the objects in U (i.e., the set of all possible objects) when it learns a new concept. We also chose the set of examples that the majority of agents vote for as the set to be learned by Ag_L.
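As a concrete illustration (not code from the paper), the following minimal Python sketch shows one way such a majority-vote selection could be implemented; the names majority_examples, teacher_examples and min_votes are our own illustrative choices.

from collections import Counter

def majority_examples(teacher_examples, min_votes=None):
    # teacher_examples: one set of proposed example objects per teacher agent
    # min_votes: votes required to keep an example; defaults to a strict majority
    if min_votes is None:
        min_votes = len(teacher_examples) // 2 + 1
    votes = Counter()
    for examples in teacher_examples:
        votes.update(examples)   # each teacher contributes one vote per object
    return {obj for obj, count in votes.items() if count >= min_votes}

# e.g. three teachers proposing course objects for one concept:
proposals = [{"calc1", "algebra", "logic"},
             {"calc1", "algebra", "stats"},
             {"calc1", "topology"}]
print(majority_examples(proposals))   # -> calc1 and algebra (voted for by at least 2 of 3)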
First we enabled Ag_L to learn three different concepts: Greek, Computer Science and Mathematics. We allowed Ag_L to use the most popular way of relying on a group decision, which is to follow the majority vote in selecting the representative examples of the concept. We then trained the learner using different percentages of the positive examples in this area (the n% column of Table 1). These percentages show the classification accuracy of Ag_L when the learner does not utilize the maximum number of examples available from the teachers, i.e., the case in which, due to communication cost, the teachers could not send every possible example they possess to the learner. Table 1 shows the classification results of the learner for the three concepts; it gives the number of correctly classified positive examples and negative examples in two separate columns. We should mention that, since the examples a majority of agents agreed upon define the boundary of the concept in the learner, every other object is treated as a negative example by Ag_L during testing. For instance, for the concept Mathematics, the majority set has 501 positive examples, and all other objects (19061 - 501 = 18560) are considered negative examples for it.
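To make this evaluation protocol concrete, the sketch below assumes a binary set-up in which Ag_L is trained on a randomly chosen fraction of the majority-vote positives and then tested on every object in U, with everything outside the majority set counted as negative. The names universe, majority_positives and train_classifier are hypothetical; the paper does not prescribe this exact interface.

import random

def evaluate_learner(universe, majority_positives, fraction, train_classifier, seed=0):
    # universe:           all objects in U (19061 course objects in the paper)
    # majority_positives: examples the majority of teachers voted for
    # fraction:           share of the positives actually sent to the learner (the n% column)
    # train_classifier:   any routine that returns a predicate object -> bool
    rng = random.Random(seed)
    positives = sorted(majority_positives)
    sent = rng.sample(positives, max(1, int(fraction * len(positives))))
    is_positive = train_classifier(sent)

    tp = sum(1 for o in majority_positives if is_positive(o))
    tn = sum(1 for o in universe
             if o not in majority_positives and not is_positive(o))
    n_pos = len(majority_positives)
    n_neg = len(universe) - n_pos
    return tp, tn, (tp + tn) / (n_pos + n_neg)

Any learning routine that returns a membership predicate can be plugged in as train_classifier; the returned triple corresponds to the correctly classified positive and negative counts and the overall accuracy reported per concept in Table 1.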
One interesting preliminary result, which in fact we expected, was the significant increase in correctly classified examples when the concept is largely unanimous. For example, the programs Mathematics and Greek have more courses in common across the three universities than Computer Science does (which also holds among other universities). As Table 1 shows, the accuracy for Mathematics is much better than for Computer Science. The last row of Table 1 shows the performance of the learner when it is trained with the whole set of examples it possesses for each concept. For instance, the second and third columns show that Ag_L correctly classified 497 positive and 17324 negative examples out of 501 positive and 18560 negative examples, respectively. Therefore Ag_L classified 93% ((497+17324)/(501+18560)) of the objects correctly for Mathematics, while this accuracy is 81% ((429+15104)/(505+18556)) for Computer Science and 90% ((170+16991)/(171+18890)) for Greek. There is a small “dip” for Greek: when Ag_L is trained with 70% of the examples, the accuracy jumps to 91% and then comes back to 90%. Despite this “dip”, the learner shows consistent behavior in classifying positive examples. We conclude that having agents with close viewpoints helps the learner form a concrete understanding of a concept, which naturally leads to better performance.
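The percentages quoted above follow directly from the per-class counts; the short snippet below merely redoes that arithmetic with the numbers as given in the text.

def overall_accuracy(tp, tn, n_pos, n_neg):
    # correctly classified objects over all of U = positives + negatives
    return (tp + tn) / (n_pos + n_neg)

print(f"{overall_accuracy(497, 17324, 501, 18560):.2f}")   # 0.93  Mathematics
print(f"{overall_accuracy(429, 15104, 505, 18556):.2f}")   # 0.81  Computer Science
print(f"{overall_accuracy(170, 16991, 171, 18890):.2f}")   # 0.90  Greek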
To compare the performance of Ag_L with the teacher agents, we had to compare the classification capability of Ag_L with that of Ag_W, Ag_C, and Ag_M, respectively. As discussed earlier, we assume that the teacher agents have learned the concepts in their ontologies before they start to teach a concept to the learner. This learning has been achieved with supervised inductive learning mechanisms, using the example objects that each agent associates with every concept in its ontology. Therefore we simply need to compare the classification efficiency of Ag_L with that of Ag_W, Ag_C, and Ag_M.
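The section does not spell out which inductive learner the teacher agents use, so the fragment below is only a generic placeholder for this assumed setup: each teacher fits an off-the-shelf supervised classifier on the example objects attached to a concept in its ontology. A decision tree is used purely as a stand-in, and featurize is a hypothetical helper mapping an object to a numeric feature vector.

from sklearn.tree import DecisionTreeClassifier   # stand-in; any inductive learner would do

def train_teacher_classifier(positive_objects, negative_objects, featurize):
    # featurize: hypothetical helper turning an example object into a feature vector
    X = [featurize(o) for o in positive_objects + negative_objects]
    y = [1] * len(positive_objects) + [0] * len(negative_objects)
    return DecisionTreeClassifier().fit(X, y)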
Nevertheless, we cannot guarantee that Ag_L learns a concept using the same number of examples as each teacher agent and, obviously, the more examples an agent is given, the better a classifier it can learn. This causes an unbalanced situation in which Ag_L and the other agents cannot be compared. To overcome this problem, we have to prepare a fair setting in which the classification efficiency of the learner agent can be compared with that of each teacher agent. Therefore, we selected a fragment of the positive examples in Ag_L equal in size to the number of positive examples in each teacher agent, so that Ag_L is trained with the same number of examples that the teacher agent used to learn the concept.
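A minimal sketch of this matching step, under the assumption that the fragment is drawn uniformly at random (the paper does not say how it is selected), could look as follows; matched_training_set and its arguments are illustrative names.

import random

def matched_training_set(learner_positives, teacher_positive_count, seed=0):
    # subsample Ag_L's positives so it trains on exactly as many positive
    # examples as the teacher agent used for this concept
    rng = random.Random(seed)
    k = min(teacher_positive_count, len(learner_positives))
    return rng.sample(sorted(learner_positives), k)

The resulting list can then be passed to the same training routine used in the earlier experiments, so that learner and teacher classifiers are compared with an equal number of positive training examples.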
Tables 2, 3 and 4 show the results of comparing Ag_L with Ag_M, Ag_W and Ag_C, respectively. The second column in each table shows the number of examples, both positive and negative, correctly classified by Ag_L out of the 19061 test examples (i.e., the objects in U). The third column shows the number of examples correctly classified by the teacher agent, and finally the fourth column shows the percentage of exam-