Authors:
Arnaud Renard; Sylvie Calabretto and Béatrice Rumpler
Affiliation:
Université de Lyon, CNRS, INSA-Lyon and LIRIS, France
Keyword(s):
Evaluation Model, Framework, Error Correction, Textual Documents, Distance and Similarity Measure, Metrics, Information Retrieval.
Related Ontology Subjects/Areas/Topics:
Biomedical Engineering; Data Engineering; Enterprise Information Systems; Health Information Systems; Information Systems Analysis and Specification; Knowledge Management; Ontologies and the Semantic Web; Society, e-Business and e-Government; Tools, Techniques and Methodologies for System Development; Web Information Systems and Technologies
Abstract:
In this article, we present a solution to overcome the difficulties of comparatively evaluating error correction systems and mechanisms. An overview of existing error correction approaches shows that most of them introduce their own evaluation process, with the drawback this entails: it is not clear whether one approach is better suited than another to correct a specific type of error. Each evaluation process is, however, not completely original, and some similarities can be observed between them. We build on this observation to propose a general ``evaluation design pattern'' fitted to the case of error correction in textual documents. The underlying idea is to provide a standard way to integrate the required resources according to the family (defined beforehand in the evaluation model) to which they belong. Moreover, we developed a platform based on the OSGi specifications that provides a framework supporting the proposed evaluation model.
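The abstract's idea of integrating resources by family could be sketched as a simple registry, in the spirit of OSGi-style service registration. This is a minimal, hypothetical illustration: the family names and class names below are assumptions for the sake of the example, not the paper's actual API.

```java
import java.util.*;

// Hypothetical sketch of a registry that groups evaluation resources by the
// family they belong to, as an evaluation model might define them.
public class EvaluationRegistry {
    // Illustrative resource families (assumed, not taken from the paper).
    enum Family { CORPUS, ERROR_MODEL, CORRECTION_APPROACH, METRIC }

    private final Map<Family, List<String>> resources = new EnumMap<>(Family.class);

    // Register a resource under its family, creating the family's list on demand.
    public void register(Family family, String resourceName) {
        resources.computeIfAbsent(family, f -> new ArrayList<>()).add(resourceName);
    }

    // Look up all resources registered for a given family.
    public List<String> lookup(Family family) {
        return resources.getOrDefault(family, Collections.emptyList());
    }

    public static void main(String[] args) {
        EvaluationRegistry registry = new EvaluationRegistry();
        registry.register(Family.METRIC, "levenshtein-distance");
        registry.register(Family.CORRECTION_APPROACH, "dictionary-lookup");
        System.out.println(registry.lookup(Family.METRIC));
    }
}
```

In an actual OSGi deployment, each resource would instead be published as a service (e.g. via a `BundleContext`) with its family carried in the service properties, so that bundles can be added or swapped without recompiling the platform.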