Table 2: Conference results.

Matcher    | Prec | Rec  | F0.5-Meas | F1-Meas | F2-Meas
CODI       | 0.74 | 0.57 | 0.70      | 0.64    | 0.60
LogMap     | 0.85 | 0.50 | 0.75      | 0.63    | 0.54
MaasMatch  | 0.83 | 0.42 | 0.69      | 0.56    | 0.47
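The three F-measures in Table 2 are instances of the weighted F-beta score, F_beta = (1 + beta^2) * P * R / (beta^2 * P + R), which favours precision for beta < 1 and recall for beta > 1. A small Python check against CODI's row (P = 0.74, R = 0.57), with values rounded to the table's precision:

```python
def f_beta(precision, recall, beta):
    """Weighted harmonic mean of precision and recall (F-beta score)."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# CODI's Conference-track scores from Table 2.
p, r = 0.74, 0.57
f_half = round(f_beta(p, r, 0.5), 2)  # 0.70
f_one  = round(f_beta(p, r, 1.0), 2)  # 0.64
f_two  = round(f_beta(p, r, 2.0), 2)  # 0.60
```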
Table 3: Average size of alignments, number of incoherent alignments, and average degree of incoherence.

Matcher    | Size | Inc. Alignments | Degree of Inc. | Reasoning problems
CODI       | 9.5  | 0/91            | 0%             | 0
LogMap     | 8    | 8/91            | 2%             | 0
MaasMatch  | 7.5  | 21/91           | 4%             | 0
select a set of proposed tags to use in the new or modified content. Second, we will use CODI as a lazy alignment tool on the server side; in this way we perform alignments and ontology enrichment using only our database information, and therefore obtain a coherent set of knowledge. Third, with the database knowledge properly tagged, grouped, and given its own representation (as ontologies), we will assign or suggest tags, content, and activities with LogMap in real time when necessary. We thereby arrive at a mixed strategy that combines the advantages of CODI and LogMap. Once this test is done, the next step will be to combine our schema, tags, ontologies, and generated taxonomies with a set of taxonomies built from every verb and noun of interest in our case: something like WordNet, but specialized and limited to our target field of interest.
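The mixed strategy above (CODI offline for coherent enrichment, LogMap online for real-time suggestion) can be sketched as follows. This is a minimal illustration under stated assumptions: `codi_align` and `logmap_suggest` are hypothetical stand-ins (exact-label matching and a dictionary lookup), not the real CODI or LogMap interfaces.

```python
def codi_align(db_terms, onto_terms):
    """Offline step (CODI's role): build a coherent alignment using
    only database vocabulary. Stand-in: exact-label matching, which
    is trivially coherent."""
    targets = set(onto_terms)
    return {t: t for t in db_terms if t in targets}

def logmap_suggest(term, alignment):
    """Online step (LogMap's role): suggest a tag in real time by
    reusing the precomputed alignment; None if no match is known."""
    return alignment.get(term)

# Offline enrichment from the database vocabulary.
db_terms = ["finance", "loan", "mortgage"]
onto_terms = ["loan", "mortgage", "interest"]
alignment = codi_align(db_terms, onto_terms)

# Real-time suggestion when new content arrives.
tag = logmap_suggest("mortgage", alignment)
```

The design point is the split itself: the expensive, coherence-preserving matcher runs once offline, while the fast matcher only consults (and extends) the precomputed result at request time.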
Regarding the algorithms tested, there is still much work to do. First, on the runtime-coherence trade-off, we need a method that reduces runtime as LogMap does while preventing the loss of coherence, as CODI does. This is a great challenge and a key step towards actually using ontology matching in real systems.
Second, almost every algorithm depends on already-mapped knowledge; only a few perform automatic mapping, and those that do have very low confidence in the generated data. Machine learning techniques could improve this by producing more useful and reliable ontologies. Third, precision-recall ratios must be improved, along with the ability to face new problems and to scale.
These three challenges are the main and permanent lines of improvement for every algorithm developed so far, so they cannot be removed from the scope.
ACKNOWLEDGEMENTS
This work is part of the project "TSI-090500-2011-36
- Ministerio de Industria, Turismo y Comercio", and
was also supported by Sandra Castro, Noelia Gil and
J.M. Castro from Intellectia Bank S.A.
REFERENCES
Batini, C. and Lenzerini, M. (1986). A comparative analysis of methodologies for database schema integration. ACM Computing Surveys, 18(4).
Bellahsene, Z., Bonifati, A., Duchateau, F., and Velegrakis, Y. (2011). On Evaluating Schema Matching and Mapping.
David, J., Euzenat, J., Scharffe, F., and dos Santos, C. T. (2010). The Alignment API 4.0. IOS Press, 1:1-8.
Euzenat, J. and Shvaiko, P. (2007). Ontology Matching. Springer.
Huber, J., Sztyler, T., Nößner, J., and Meilicke, C. (2011). CODI: Combinatorial optimization for data integration: results for OAEI 2011. In (Shvaiko et al., 2011).
Jiménez-Ruiz, E., Morant, A., and Cuenca Grau, B. (2011). LogMap results for OAEI 2011. In Proc. of the 6th International Workshop on Ontology Matching (OM), volume 814. CEUR Workshop Proceedings (CEUR-WS.org). http://ceur-ws.org/Vol-814/.
Sakarkar, G. and Upadhye, S. (2010). A survey of software agent and ontology. International Journal of Computer Applications, 1(7).
Schadd, F. C. and Roos, N. (2011). MaasMatch results for OAEI 2011. In (Shvaiko et al., 2011).
Shvaiko, P. and Euzenat, J. (2008). Ten challenges for ontology matching.
Shvaiko, P., Euzenat, J., Heath, T., Quix, C., Mao, M., and Cruz, I. F., editors (2011). Proceedings of the 6th International Workshop on Ontology Matching, Bonn, Germany, October 24, 2011, volume 814 of CEUR Workshop Proceedings. CEUR-WS.org.
Spaccapietra, S., Parent, C., and Dupont, Y. (1992). Model Independent Assertions for Integration of Heterogeneous Schemas. VLDB Journal, 1:81-126.
Description and Evaluation of Algorithms for Ontology Matching
495