then they browse and choose resources that are represented by graphic objects, such as computing nodes, datasets, and DM tools and algorithms corresponding to the chosen DM technique. These resources either reside on the local site or are distributed across different heterogeneous sites with heterogeneous platforms; however, ADMIRE allows users to interact with them transparently at this level. The second step in building a DDM job is to establish links between the chosen tasks, i.e. their execution order. By checking this order, the ADMIRE system can detect independent tasks that can be executed concurrently. Furthermore, users can also use this interface to publish new DM tools and algorithms.
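As an illustration of such a job graph, the following sketch models the tasks of a DDM job and the links between them, and derives which tasks are mutually independent and can therefore run concurrently. The class and method names (DdmJob, addLink, concurrentWaves) are ours, not part of the ADMIRE interface.

import java.util.*;

/** Illustrative model of a DDM job: tasks linked by execution-order edges. */
public class DdmJob {
    private final Map<String, Set<String>> deps = new LinkedHashMap<>(); // task -> prerequisites

    public void addTask(String name) { deps.putIfAbsent(name, new LinkedHashSet<>()); }

    public void addLink(String before, String after) {        // "before" must finish first
        addTask(before); addTask(after); deps.get(after).add(before);
    }

    /** Groups tasks into waves; tasks in the same wave have no mutual dependency
        and can therefore be executed concurrently. */
    public List<Set<String>> concurrentWaves() {
        Map<String, Set<String>> remaining = new LinkedHashMap<>();
        deps.forEach((t, d) -> remaining.put(t, new LinkedHashSet<>(d)));
        List<Set<String>> waves = new ArrayList<>();
        while (!remaining.isEmpty()) {
            Set<String> ready = new LinkedHashSet<>();
            remaining.forEach((t, d) -> { if (d.isEmpty()) ready.add(t); });
            if (ready.isEmpty()) throw new IllegalStateException("cycle in execution order");
            ready.forEach(remaining::remove);
            remaining.values().forEach(d -> d.removeAll(ready));
            waves.add(ready);
        }
        return waves;
    }

    public static void main(String[] args) {
        DdmJob job = new DdmJob();
        job.addLink("preprocess-site-A", "mine-site-A");
        job.addLink("preprocess-site-B", "mine-site-B");
        job.addLink("mine-site-A", "integrate");
        job.addLink("mine-site-B", "integrate");
        // The two preprocessing tasks (and the two mining tasks) end up in the same wave.
        System.out.println(job.concurrentWaves());
    }
}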
This layer also allows the results of a DDM application to be visualized, represented and evaluated. The discovered knowledge can be represented in several predefined forms, such as graphical or geometric ones. ADMIRE supports different visualization techniques, each applicable to data of certain types (discrete, continuous, point, scalar or vector) and dimensions (1-D, 2-D, 3-D). It also supports interactive visualization, which allows users to view the DDM results from different perspectives, such as layers and levels of detail, and helps them to understand these results better.
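As a rough illustration of how a visualization technique might be selected from the data type and dimensionality, the sketch below maps these two characteristics to a rendering choice; the technique names and the mapping itself are only examples, not ADMIRE's actual catalogue.

/** Illustrative dispatch from data characteristics to a rendering technique. */
public class VisualizationSelector {
    enum DataType { DISCRETE, CONTINUOUS, POINT, SCALAR, VECTOR }

    static String techniqueFor(DataType type, int dimensions) {
        switch (type) {
            case DISCRETE:   return dimensions <= 2 ? "bar chart"    : "3-D histogram";
            case CONTINUOUS: return dimensions == 1 ? "line plot"    : "surface plot";
            case POINT:      return dimensions <= 2 ? "scatter plot" : "3-D point cloud";
            case SCALAR:     return dimensions == 2 ? "heat map"     : "iso-surface";
            case VECTOR:     return "streamline field";
            default:         throw new IllegalArgumentException("unknown type");
        }
    }

    public static void main(String[] args) {
        System.out.println(techniqueFor(DataType.SCALAR, 2));  // heat map
        System.out.println(techniqueFor(DataType.POINT, 3));   // 3-D point cloud
    }
}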
Besides the GUI, there are four modules in this layer:
DDM task management, Data/Resource management,
interpretation and evaluation.
The first module spans both the Interface and Core layers of ADMIRE. Its part in the Interface layer is responsible for mapping the user requirements, expressed through the selected DM tasks and their resources, onto a corresponding executing schema of tasks. Another role of this part is to check the coherence between the DM tasks of this executing schema for a given DDM job. The purpose of this check is, as mentioned above, to detect independent tasks; the schema is then refined to obtain an optimal execution. After verifying the executing schema, this module stores it in a task repository that will be used by the lower part of the task management module, in the core layer, to execute the DDM job.
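The following sketch illustrates, under our own simplified representation, what such a coherence check and the hand-over to a task repository could look like; the TaskSpec record and the repository keyed by a job identifier are assumptions made only for illustration.

import java.util.*;

/** Sketch of the Interface-layer part of task management: it verifies an
    executing schema and hands it to a task repository used by the core layer. */
public class SchemaVerifier {
    /** A schema entry: a task plus the tasks whose output it needs. */
    record TaskSpec(String name, List<String> inputsFrom) {}

    /** Coherence check: every referenced input must be produced by some task in the schema. */
    static void checkCoherence(List<TaskSpec> schema) {
        Set<String> known = new HashSet<>();
        schema.forEach(t -> known.add(t.name()));
        for (TaskSpec t : schema)
            for (String in : t.inputsFrom())
                if (!known.contains(in))
                    throw new IllegalStateException(t.name() + " depends on unknown task " + in);
    }

    public static void main(String[] args) {
        List<TaskSpec> schema = List.of(
            new TaskSpec("clean", List.of()),
            new TaskSpec("cluster", List.of("clean")));
        checkCoherence(schema);                       // would throw on an incoherent schema
        Map<String, List<TaskSpec>> taskRepository = new HashMap<>();
        taskRepository.put("job-42", schema);         // stored for the core-layer executor
        System.out.println("schema verified and stored: " + taskRepository.keySet());
    }
}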
The second module allows users to browse the necessary resources among the set of resources proposed by ADMIRE. This module manages the metadata of all the published datasets and resources (computing nodes, DM algorithms and tools). Its part in the Interface layer relies on this metadata, stored in two repositories (a dataset repository and a resource repository), to supply an appropriate set of resources for a given DM task. The data/resource management module spans both the Interface and Core layers of ADMIRE, because modules in the ADMIRE core layer also need to interact with data and resources to perform data mining tasks as well as integration tasks. In order to mask the grid platform, the data/resource management module is built on top of a data grid middleware, e.g. DGET (Kechadi, 2005).
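A minimal sketch of this metadata-driven browsing is given below, assuming a simple record-based representation of dataset and resource metadata; the field names and the matching rule are illustrative and not taken from ADMIRE or DGET.

import java.util.*;
import java.util.stream.*;

/** Sketch of the Interface-layer side of data/resource management:
    metadata lookup over a dataset repository and a resource repository. */
public class ResourceBrowser {
    record DatasetMeta(String name, String site, String format) {}
    record ResourceMeta(String name, String kind, Set<String> supportedTasks) {}

    private final List<DatasetMeta> datasetRepository = new ArrayList<>();
    private final List<ResourceMeta> resourceRepository = new ArrayList<>();

    void publishDataset(DatasetMeta d)   { datasetRepository.add(d); }
    void publishResource(ResourceMeta r) { resourceRepository.add(r); }

    /** Returns the resources applicable to a given DM task, e.g. "clustering". */
    List<ResourceMeta> resourcesFor(String dmTask) {
        return resourceRepository.stream()
                .filter(r -> r.supportedTasks().contains(dmTask))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        ResourceBrowser browser = new ResourceBrowser();
        browser.publishDataset(new DatasetMeta("sensor-logs", "site-A", "csv"));
        browser.publishResource(new ResourceMeta("k-means", "algorithm", Set.of("clustering")));
        browser.publishResource(new ResourceMeta("apriori", "algorithm", Set.of("association rules")));
        System.out.println(browser.resourcesFor("clustering")); // only k-means matches
    }
}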
The third module interprets the DDM results into different ordered presentation forms. The result models integrated and mined by the knowledge map module in the ADMIRE core layer are explained and evaluated here.
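A minimal sketch of this idea, assuming a toy cluster model and two hypothetical presentation forms, is given below.

import java.util.*;

/** Minimal sketch of the interpretation module: the same mined model rendered
    into different presentation forms. The forms and interface are illustrative only. */
public class Interpretation {
    interface Interpreter { String render(Map<String, Double> clusterCentres); }

    static final Interpreter TEXTUAL   = m -> "centres: " + m;
    static final Interpreter GRAPHICAL = m -> "plot of " + m.size() + " cluster centres";

    public static void main(String[] args) {
        Map<String, Double> model = Map.of("c1", 0.3, "c2", 0.8);
        System.out.println(TEXTUAL.render(model));
        System.out.println(GRAPHICAL.render(model));
    }
}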
The last module deals with the evaluation of the DDM results by providing different evaluation techniques. Of course, measuring the effectiveness or usefulness of these results is not always straightforward. This module also allows experienced users to add new tools or techniques to evaluate the mined knowledge.
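The sketch below illustrates one way such a pluggable evaluation module could be organised, with built-in measures and user-registered ones; the measure names and the registration mechanism are assumptions, not the ADMIRE interface.

import java.util.*;
import java.util.function.*;

/** Sketch of a pluggable evaluation module: built-in measures plus the ability
    for experienced users to register new ones under a name. */
public class EvaluationModule {
    private final Map<String, ToDoubleFunction<double[]>> measures = new LinkedHashMap<>();

    EvaluationModule() {
        // Example built-in measure: mean per-record score of a mined model.
        measures.put("mean-score", scores -> Arrays.stream(scores).average().orElse(Double.NaN));
    }

    /** Experienced users publish a new evaluation technique here. */
    void register(String name, ToDoubleFunction<double[]> measure) { measures.put(name, measure); }

    double evaluate(String name, double[] scores) { return measures.get(name).applyAsDouble(scores); }

    public static void main(String[] args) {
        EvaluationModule eval = new EvaluationModule();
        eval.register("max-score", scores -> Arrays.stream(scores).max().orElse(Double.NaN));
        double[] scores = {0.7, 0.9, 0.8};
        System.out.println(eval.evaluate("mean-score", scores)); // approximately 0.8
        System.out.println(eval.evaluate("max-score", scores));  // 0.9
    }
}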
3.2 Core Layer
The ADMIRE core layer is composed of three
parts: knowledge discovery, task management and
data/resource management.
The role of the first part is to mine the data, integrate the data and discover the required knowledge; it is the central part of this layer. This part contains three modules: data preprocessing; distributed data mining (DDM), with two sub-components, local data mining (LDM) and integration/coordination; and the knowledge map.
The first module carries out the local data preprocessing of a given task, such as data cleaning, data transformation, data reduction, data projection, data standardisation, data density analysis, etc. The preprocessed data are the input of the DDM module. Its LDM component performs the data mining tasks locally. The specific characteristic of ADMIRE, compared with other current DDM systems, is that different mining algorithms can be used within a local DM task to deal with different kinds of data, so this component is responsible for executing these algorithms. The local results are then integrated and/or coordinated by the second component of the DDM module to produce a global model. The preprocessing and data mining algorithms are chosen from a set of predefined algorithms in the ADMIRE system. Moreover, users can publish new algorithms to improve performance.
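The sketch below illustrates this flow under strong simplifications: each site picks an algorithm according to the kind of its local data, the independent local tasks run concurrently, and the local models are merged into a single global summary. The algorithm choice and the merge rule are placeholders, not the real ADMIRE components.

import java.util.*;
import java.util.concurrent.*;

/** Sketch of the DDM module: local mining per site, then integration into a global model. */
public class DdmModule {
    record LocalModel(String site, String algorithm, double summary) {}

    /** LDM component: pick an algorithm for the local data kind and "mine" the data. */
    static LocalModel mineLocally(String site, String dataKind, double[] data) {
        String algorithm = dataKind.equals("numeric") ? "k-means" : "frequent-itemsets";
        double summary = Arrays.stream(data).average().orElse(0.0); // stand-in for a real model
        return new LocalModel(site, algorithm, summary);
    }

    /** Integration component: combine the local models into one global model. */
    static double integrate(List<LocalModel> locals) {
        return locals.stream().mapToDouble(LocalModel::summary).average().orElse(0.0);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2); // independent local tasks run concurrently
        Future<LocalModel> a = pool.submit(() -> mineLocally("site-A", "numeric", new double[]{1, 2, 3}));
        Future<LocalModel> b = pool.submit(() -> mineLocally("site-B", "categorical", new double[]{4, 5, 6}));
        List<LocalModel> locals = List.of(a.get(), b.get());
        pool.shutdown();
        System.out.println(locals);
        System.out.println("global model summary: " + integrate(locals)); // 3.5
    }
}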
The results of local DM, such as association rules, classification models and clusterings, should be collected and analysed in the light of domain knowledge. This is the role of the last module, the knowledge map. This module generates significant, interpretable rules, models and knowledge. Moreover, the knowledge map also controls the whole data mining process by proposing different mining strategies, as well as integration and coordination strategies, to achieve the best performance.
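The sketch below conveys this idea in a toy form: local rules are filtered against simple domain knowledge (a confidence threshold and a list of known-trivial rules), and a next mining strategy is proposed from what survives. The rule representation and the thresholds are assumptions made only for illustration.

import java.util.*;

/** Toy sketch of the knowledge map: filter local results with domain knowledge,
    then propose a strategy for the rest of the mining process. */
public class KnowledgeMap {
    record Rule(String text, double confidence) {}

    private final List<Rule> accepted = new ArrayList<>();

    /** Domain knowledge here is a confidence threshold and a set of known-trivial rules. */
    void collect(List<Rule> localResults, double minConfidence, Set<String> trivialRules) {
        for (Rule r : localResults)
            if (r.confidence() >= minConfidence && !trivialRules.contains(r.text()))
                accepted.add(r);
    }

    /** A (toy) control decision: if few rules survive, suggest re-mining with a lower support. */
    String proposeStrategy() {
        return accepted.size() < 2 ? "re-mine locally with lower support" : "integrate accepted rules";
    }

    public static void main(String[] args) {
        KnowledgeMap km = new KnowledgeMap();
        km.collect(List.of(new Rule("A => B", 0.92), new Rule("B => B", 0.99)),
                   0.9, Set.of("B => B"));            // "B => B" is discarded as trivial
        System.out.println(km.proposeStrategy());     // re-mine locally with lower support
    }
}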
The task management part plays an important role in the ADMIRE framework. It manages all the execution plans created from the interface layer. This part reads an executing schema from the task repository and then