be an instrument to facilitate the interaction of a user with the data, keeping in mind
that the user’s situated, contextual presence is indispensable for the creation of meaning.
For instance, one could partially formalize the syntactic part of the interaction process
that goes into the creation of meaning.
Following this direction, it is our conviction that one of the major limitations of
languages for representing ontologies - and in this respect OWL is no exception - stems
from the static assignment of relations between concepts, e.g. “Man is a subclass of
Human”. On the one hand, ontology languages for the semantic web, such as OWL and
RDF, are based on crisp logic and thus cannot handle incomplete, partial knowledge about
a domain of interest. On the other hand, it has been shown (see, for instance, [19]) that
uncertainty exists in almost every aspect of ontology engineering, and that probabilistic
directed Graphical Models (GMs), such as Bayesian Nets (BNs), can provide a suitable tool
for coping with it. Yet, in our view, the main drawback of BNs as a representation
tool is their reliance on class/subclass relationships subsumed under the directed
links of their structure. We argue that an ontology is not just the product of deliberate
reflection on what the world is like, but the realization of semantic interconnections
among concepts, each of which may belong to a different domain.
Indeed, since the seminal work by Anderson on the probabilistic foundations
of memory and categorization, concepts/classes and the relations among them have been
characterized in terms of their predictive capabilities with respect to a given context [20]. Further,
the availability of a category grants the individual the ability to recall patterns of behav-
ior (stereotypes, [21]) built on past interactions with objects of that category. In
these terms, an object is not simply a physical object but a view of an interaction.
Thus, even without entering the fierce dispute over whether ontologies should or should
not be shaped in terms of categories [22], it is clear that to endow ontologies with pre-
dictive capabilities together with properties of reconfigurability, what we name ontology
plasticity, one should relax constraints on the GM structure and allow the use of cyclic
graphs. A further advantage of an effort in this direction is the availability of the large
number of conceptual and algorithmic tools produced by the Machine Learning
community in recent years.
The main idea here is the introduction of a method for the automatic construction of
ontologies, based on an extension of the probabilistic topic model introduced in [1] and
[23].
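For reference, in the standard formulation of such a topic model (notation ours; the specific extension pursued in [23] may differ in detail), the probability of a word $w_i$ in a document is obtained by marginalizing over a set of $T$ latent topics:
\[
P(w_i) \;=\; \sum_{j=1}^{T} P(w_i \mid z_i = j)\, P(z_i = j),
\]
where $z_i$ denotes the latent topic assignment of word $w_i$.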
4.2 Ontology for Light Semantics
The description of both Word – Word and Word – Concept relations, related to the light
part of semantics, is based on an extension of the computational model depicted above
and discussed in [1] and [11]. Here we discuss how to model Word – Word relations,
whereas Word – Concept relations are modeled by using the concept-topic model
proposed in [11]. Alongside the topic model, we consider what we call the
words model, in order to perform well in predicting word association and in capturing the
effects of semantic association and ambiguity on a variety of language-processing and memory
tasks.
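As a minimal illustration of the kind of prediction meant here, the sketch below (ours, under the assumption of an already fitted topic model; it is not the implementation of [1] or [11]) scores the association between a cue word and a target word by marginalizing over topics, i.e. P(w2 | w1) = Σ_z P(w2 | z) P(z | w1):

import numpy as np

# Illustrative sketch, not the authors' code: word-word association from a
# fitted topic model, P(w2 | w1) = sum_z P(w2 | z) P(z | w1), where
# P(z | w1) is recovered from P(w1 | z) and P(z) by Bayes' rule.
def word_association(phi, p_z, w1, w2):
    # phi: (T, V) array of P(word | topic); p_z: (T,) array of P(topic)
    p_z_given_w1 = phi[:, w1] * p_z          # unnormalized posterior over topics
    p_z_given_w1 /= p_z_given_w1.sum()       # normalize
    return float(phi[:, w2] @ p_z_given_w1)  # marginalize over topics

# Toy example: two topics over a three-word vocabulary.
phi = np.array([[0.7, 0.2, 0.1],
                [0.1, 0.3, 0.6]])
p_z = np.array([0.5, 0.5])
print(word_association(phi, p_z, w1=0, w2=1))

In this reading, two words are strongly associated when they tend to be generated by the same topics, which is the property that makes the words model suitable for predicting word-association data.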
The original theory of Griffiths mainly asserts a semantic representation in which
word meanings are represented in terms of a set of probabilistic topics $z_i$, where the