
Pair-GAN: A Three-Validated Generative Model from Single Pairs of Biomedical and Ground Truth Images

Authors: Clara Brémond-Martin; Huaqian Wu; Cédric Clouchoux and Kévin François-Bouaou

Affiliation: Witsee, 33 Av. des Champs-Élysées, 75008 Paris, France

Keyword(s): GAN, Single Input, Auto-Encoder, Biomedical, Pair, Segmentation.

Abstract: Generating synthetic pairs of raw and ground truth (GT) images is a strategy to reduce the amount of acquisition and annotation required from biomedical experts. Pair-image generation strategies from single-input paired images (SIP) rely on patch-pyramid (PP) or dual-branch generators, but the resulting synthetic images are not natural. With few input images, adversarial auto-encoders (AAE) synthesise more natural raw images. Here we propose Pair-GAN, a combination of a PP with an auto-encoder generator at each level, for biomedical image synthesis from a SIP. The PP allows synthesis from a SIP, while the AAE generators make the image content more natural. We use two biomedical datasets containing raw and GT images. Our architecture is compared with seven state-of-the-art methods updated for SIP, using qualitative, similarity and segmentation metrics, Kullback-Leibler divergences between synthetic and original feature image representations, computational costs and statistical analyses. Pair-GAN generates the most qualitative and natural outputs, similar to the original pairs and with complex shapes not produced by the other methods, at the cost of increased memory needs. Future work may use this generative procedure for multimodal biomedical dataset synthesis to support automatic processing such as classification or segmentation with deep learning tools.
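
The abstract describes a patch pyramid whose levels each contain an auto-encoder generator and which emits a raw/GT image pair. The sketch below is only a rough PyTorch illustration of that idea, not the authors' implementation: module names, channel counts, pyramid depth and the two-channel (raw, GT) output convention are assumptions, and the adversarial discriminators and training losses are omitted.

# Illustrative sketch only: a coarse-to-fine pyramid of auto-encoder
# generators that each refine a 2-channel tensor interpreted as a
# (raw, ground-truth) image pair. All sizes and layer choices here are
# assumptions made for the example, not the Pair-GAN implementation.
import torch
import torch.nn as nn


class AEGenerator(nn.Module):
    """Small convolutional auto-encoder used at one pyramid level."""

    def __init__(self, in_ch: int = 2, base: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base, in_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


class PatchPyramidGenerator(nn.Module):
    """Stack of auto-encoder generators from coarse to fine resolution.

    Each level upsamples the previous level's (raw, GT) pair, perturbs it
    with noise and refines it with its own auto-encoder.
    """

    def __init__(self, levels: int = 3):
        super().__init__()
        self.levels = nn.ModuleList(AEGenerator() for _ in range(levels))
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, noise: torch.Tensor) -> torch.Tensor:
        out = self.levels[0](noise)              # coarsest (raw, GT) pair
        for gen in self.levels[1:]:
            x = self.up(out)                     # grow resolution by 2x
            out = gen(x + 0.1 * torch.randn_like(x))
        return out                               # finest (raw, GT) pair


if __name__ == "__main__":
    z = torch.randn(1, 2, 32, 32)                # coarse noise input
    pair = PatchPyramidGenerator()(z)            # -> (1, 2, 128, 128)
    raw, gt = pair[:, :1], pair[:, 1:]
    print(raw.shape, gt.shape)

In a full adversarial setup each pyramid level would also have its own discriminator judging the refined pair at that scale, which is where the "GAN" part of the method enters; only the generator path is sketched here.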

CC BY-NC-ND 4.0


Paper citation in several formats:
Brémond-Martin, C.; Wu, H.; Clouchoux, C. and François-Bouaou, K. (2024). Pair-GAN: A Three-Validated Generative Model from Single Pairs of Biomedical and Ground Truth Images. In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 3: VISAPP; ISBN 978-989-758-679-8; ISSN 2184-4321, SciTePress, pages 37-52. DOI: 10.5220/0012318300003660

@conference{visapp24,
author={Clara Brémond{-}Martin and Huaqian Wu and Cédric Clouchoux and Kévin François{-}Bouaou},
title={Pair-GAN: A Three-Validated Generative Model from Single Pairs of Biomedical and Ground Truth Images},
booktitle={Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 3: VISAPP},
year={2024},
pages={37-52},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012318300003660},
isbn={978-989-758-679-8},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 3: VISAPP
TI - Pair-GAN: A Three-Validated Generative Model from Single Pairs of Biomedical and Ground Truth Images
SN - 978-989-758-679-8
IS - 2184-4321
AU - Brémond-Martin, C.
AU - Wu, H.
AU - Clouchoux, C.
AU - François-Bouaou, K.
PY - 2024
SP - 37
EP - 52
DO - 10.5220/0012318300003660
PB - SciTePress
ER -