
Deep Features Extraction for Endoscopic Image Matching

Authors: Houda Chaabouni-Chouayakh 1,2; Manel Farhat 1,2 and Achraf Ben-Hamadou 1,2

Affiliations: 1 Centre de Recherche en Numérique de Sfax, 3021, Sfax, Tunisia; 2 Laboratory of Signals, Systems, Artificial Intelligence and Networks (SM@RT), Sfax, Tunisia

Keyword(s): Endoscopic Images, Deep Learning, Image Feature Matching, Adaptive Triplet Loss.

Abstract: Image feature matching is a key step in creating endoscopic mosaics of the bladder inner walls, which help urologists in lesion detection and patient follow-up. Endoscopic images, however, are particularly difficult to match because they are weakly textured and cover a limited surface area per frame. Deep learning techniques have recently gained popularity in a variety of computer vision tasks; the ability of convolutional neural networks (CNNs) to learn rich and optimal features contributes to the success of these methods. In this paper, we present a novel deep learning-based approach for endoscopic image matching. Instead of standard hand-crafted image descriptors, we design a CNN to extract feature vectors from local interest points. We propose an efficient approach to train our CNN without manually annotated data, using an adaptive triplet loss which has the advantage of improving inter-class separability as well as intra-class compactness. The training dataset is constructed automatically; each sample is a triplet of patches: an anchor, one positive sample (a perspective transformation of the anchor) and one negative sample. The experimental results show that, at the end of training, the learned representation space is more discriminative: the anchor becomes closer to the positive sample and farther from the negative one in the embedding space. Comparison with the well-known hand-crafted SIFT descriptor in terms of recall and precision shows the effectiveness of the proposed approach, which reaches the top recall value at a precision of 0.97. (More)
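As a rough illustration of the training scheme sketched in the abstract, the PyTorch snippet below builds a small patch-embedding CNN, forms a triplet (anchor, perspective-warped positive, unrelated negative) and applies a triplet margin loss. It is a minimal sketch under stated assumptions: the network layout, the patch size, the corner shifts in make_triplet and the fixed margin of 0.5 are illustrative choices, and the standard nn.TripletMarginLoss stands in for the paper's adaptive triplet loss, whose exact formulation is not given on this page.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.transforms.functional import perspective

class PatchEmbeddingNet(nn.Module):
    """Small CNN mapping a 32x32 grayscale patch to an L2-normalized descriptor (hypothetical layout)."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return F.normalize(self.fc(x), dim=1)   # unit-length embeddings

def make_triplet(anchor):
    """Positive = perspective warp of the anchor; negative = a different patch (stand-in here)."""
    h, w = anchor.shape[-2:]
    src = [[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]]      # original corners
    dst = [[2, 3], [w - 4, 1], [w - 2, h - 3], [1, h - 2]]      # illustrative corner shifts
    positive = perspective(anchor, src, dst)
    negative = torch.rand_like(anchor)                          # real negatives would be other interest-point patches
    return anchor, positive, negative

net = PatchEmbeddingNet()
loss_fn = nn.TripletMarginLoss(margin=0.5)   # fixed margin as a stand-in for the adaptive loss
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

a, p, n = make_triplet(torch.rand(1, 1, 32, 32))
opt.zero_grad()
loss = loss_fn(net(a), net(p), net(n))       # pull anchor toward positive, push it from negative
loss.backward()
opt.step()

In a real pipeline the anchor patches would be cropped around detected interest points in the endoscopic frames, and negatives would be drawn from patches around different interest points rather than random noise.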

CC BY-NC-ND 4.0

Paper citation in several formats:
Chaabouni-Chouayakh, H.; Farhat, M. and Ben-Hamadou, A. (2022). Deep Features Extraction for Endoscopic Image Matching. In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 4: VISAPP; ISBN 978-989-758-555-5; ISSN 2184-4321, SciTePress, pages 925-932. DOI: 10.5220/0010833700003124

@conference{visapp22,
author={Houda Chaabouni{-}Chouayakh and Manel Farhat and Achraf Ben{-}Hamadou},
title={Deep Features Extraction for Endoscopic Image Matching},
booktitle={Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 4: VISAPP},
year={2022},
pages={925-932},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010833700003124},
isbn={978-989-758-555-5},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 4: VISAPP
TI - Deep Features Extraction for Endoscopic Image Matching
SN - 978-989-758-555-5
IS - 2184-4321
AU - Chaabouni-Chouayakh, H.
AU - Farhat, M.
AU - Ben-Hamadou, A.
PY - 2022
SP - 925
EP - 932
DO - 10.5220/0010833700003124
PB - SciTePress
ER -