OntoRama (OntoRama, 2007) is an ontology browser for RDF models based on a hyperbolic layout of nodes and arcs. Since the nodes in the center are given more space than those near the circumference, they are rendered at a higher level of detail, while a reasonable overview of the peripheral nodes is preserved. In addition to this pseudo-3D space, OntoRama introduces the idea of cloned nodes in order to reduce the number of crossing arcs and improve readability. The duplicated nodes are displayed in a dedicated color to avoid confusion. Unfortunately, the application does not support editing and can only manage RDF data.
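The focus+context effect behind such a hyperbolic layout can be illustrated with a small radial mapping; the tanh function below is our own illustrative stand-in, not OntoRama's actual projection:

import math

def focus_context_radius(r, scale=1.0):
    # Map an unbounded layout radius r >= 0 into [0, 1): distances near
    # the focus are expanded, the periphery is compressed towards the rim.
    return math.tanh(r / scale)

for r in (0.5, 1.0, 2.0, 4.0):
    print(r, round(focus_context_radius(r), 3))

Nodes at radius 0.5 and 1.0 stay well separated on screen, while nodes at radius 2.0 and 4.0 are squeezed into the remaining sliver near the rim, which is exactly the detail-versus-overview trade-off described above.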
Another interesting work is the OntoViz plug-in (OntoViz, 2007), which displays an ontology as a graph by exploiting an open-source library optimized for graph visualization (Gansner & North, 1999). Intuitively, classes and instances are represented as nodes, while relations are visualized as oriented arcs. Both nodes and arcs are labelled and placed in a way that minimizes overlapping, but not the size of the graph. As a result, the navigation of the graph, supported only by magnification and panning tools, does not provide a good overall view of the ontology, since the graphical elements easily become indistinguishable. OntoViz supports the visualization of several disconnected graphs at once, and users can select a set of classes or instances to visualize. However, the graphs OntoViz generates are static and non-interactive, which makes it less suitable for the visualization of large ontologies.
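The mapping from ontology to drawing is straightforward to picture as input for the cited graph-drawing library: classes and instances become labelled nodes, relations become labelled directed arcs. The sketch below, with invented example triples, serializes such a fragment to the library's dot language; it is an illustration of the idea, not OntoViz's own code:

triples = [
    ("Dog", "subClassOf", "Animal"),
    ("rex", "instanceOf", "Dog"),
    ("Dog", "hasOwner", "Person"),
]

lines = ["digraph ontology {"]
for subject, relation, obj in triples:
    lines.append(f'  "{subject}" -> "{obj}" [label="{relation}"];')
lines.append("}")
print("\n".join(lines))  # feed to dot, e.g.: dot -Tpng -o graph.png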
TGViz (TGVizTab, 2007), similarly to OntoViz, visualizes Protege (Protege, 2007) ontologies as graphs. In this case, however, the placement of nodes and arcs is computed using the spring layout algorithm implemented in the Java TouchGraph library (TouchGraph, 2007).
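The spring model works as follows: nodes connected by an arc attract each other like springs, all node pairs repel, and the layout iteratively settles into a low-energy arrangement. The following minimal force-directed sketch illustrates the idea; it is not the TouchGraph implementation, and the constants are arbitrary:

import math, random

def spring_layout(nodes, edges, iterations=200, k=1.0, step=0.05):
    # Random initial positions in the unit square.
    pos = {n: [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)] for n in nodes}
    for _ in range(iterations):
        force = {n: [0.0, 0.0] for n in nodes}
        # Repulsion between every pair of nodes.
        for a in nodes:
            for b in nodes:
                if a == b:
                    continue
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                dist = math.hypot(dx, dy) or 1e-9
                rep = k * k / dist
                force[a][0] += rep * dx / dist
                force[a][1] += rep * dy / dist
        # Spring attraction along edges.
        for a, b in edges:
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            dist = math.hypot(dx, dy) or 1e-9
            att = dist * dist / k
            force[a][0] -= att * dx / dist
            force[a][1] -= att * dy / dist
            force[b][0] += att * dx / dist
            force[b][1] += att * dy / dist
        # Move each node a small step along its net force.
        for n in nodes:
            pos[n][0] += step * force[n][0]
            pos[n][1] += step * force[n][1]
    return pos

layout = spring_layout(["Animal", "Dog", "Cat"], [("Animal", "Dog"), ("Animal", "Cat")])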
3 PROVIDING SEMANTIC RELATIONS TO WEB-BASED KNOWLEDGE
Enriching traditional Web searching with semantic features is an application of the Semantic Web, based on an explicit representation of the semantics of web resources and real-world objects. It aims to improve both recall (the proportion of relevant material actually retrieved) and precision (the proportion of retrieved material that is actually relevant). Recently, research on information systems has increasingly focused on how to effectively manage and share data in such a heterogeneous and distributed environment. In particular, the investigation of the Semantic Web as an extension of the current World Wide Web aims to make Web content machine-understandable, allowing agents and applications to access a variety of heterogeneous resources (Dolog & Nejdl, 2007).
The Semantic Web has been proposed to deal with problems such as information overload and info-smog, which are responsible for the "lost on the net" effect and make web content inaccessible. Our approach is to index and retrieve information both in a generic and in a specific context, whether or not documents can be mapped onto ontologies, vocabularies and thesauri. To achieve this goal, we perform a semantic analysis process on both the structured and the unstructured parts of documents: the unstructured parts require linguistic analysis and semantic interpretation performed by means of Natural Language Processing (NLP) techniques, while the structured parts require a specific parser, as sketched below.
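A minimal sketch of this two-track process follows; the field names, the toy tokenizer and the helper functions are hypothetical placeholders standing in for the actual parser and NLP pipeline:

import re

def parse_structured(fields):
    # A real parser would walk markup or metadata; here we just read fields.
    return [value.lower() for value in fields.values()]

def analyse_text(text):
    # Stand-in for the NLP pipeline: tokenize and keep content-like words.
    return [w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3]

def analyse_document(doc):
    concepts = parse_structured(doc["fields"])   # structured track: parser
    concepts += analyse_text(doc["body"])        # unstructured track: NLP
    return concepts

doc = {"fields": {"title": "Semantic Web"}, "body": "Agents access heterogeneous resources."}
print(analyse_document(doc))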
3.1 Semantic Analysis Process
A semantic analysis process is carried out in the transition from the old style of serving web-data visualization to the new style of providing a 3D graphical user interface. Firstly, we extend the semantic net of WordNet (WordNet, 2007), a lexical dictionary for the English language that groups nouns, verbs, adjectives and adverbs into sets of synonyms, called synsets, linked by relations such as meronymy, synonymy and hypernymy/hyponymy. By identifying valid and well-founded conceptual relations and links contained in documents, we build a data structure, composed of concepts and correlations between concepts and information, that overlays the result set returned by the search engine.
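As an illustration of the kind of relational data this step works with, the sketch below harvests synsets and relations through Python's NLTK corpus reader for WordNet; NLTK is merely a convenient stand-in here, not the access layer used in our system:

# Requires a one-time nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def concept_edges(word):
    # Return (source, relation, target) triples for every sense of word.
    edges = []
    for synset in wn.synsets(word):
        for hyper in synset.hypernyms():
            edges.append((synset.name(), "hypernym", hyper.name()))
        for mero in synset.part_meronyms():
            edges.append((synset.name(), "meronym", mero.name()))
        lemmas = [l.name() for l in synset.lemmas()]
        for other in lemmas[1:]:
            edges.append((lemmas[0], "synonym", other))
    return edges

print(concept_edges("web")[:10])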
At the same time, we recognized the importance of being able to access a multidisciplinary collection of documents, and evaluated several solutions such as language-specific thesauri and on-line encyclopedias. To achieve this goal we chose a multidisciplinary, multilingual, web-based, free-content encyclopedia, Wikipedia (Wikipedia, 2007), which contains about 1,900,000 encyclopedic entries. We used the great amount of documents included in Wikipedia to extract new knowledge and to define a new semantic net enriching WordNet. We added new terms, new associative relations and their classification, addressing the weaknesses in the constitution of the WordNet semantic net identified in (Harabagiu et al., 1999). In fact, it