
Unsupervised Few-Shot Image Segmentation with Dense Feature Learning and Sparse Clustering

Authors: Kuangdai Leng 1; Robert Atwood 2; Winfried Kockelmann 3; Deniza Chekrygina 1 and Jeyan Thiyagalingam 1

Affiliations: 1 Scientific Computing Department, Science and Technology Facilities Council, Rutherford Appleton Laboratory, Didcot, U.K.; 2 Diamond Light Source, Rutherford Appleton Laboratory, Didcot, U.K.; 3 ISIS Neutron and Muon Source, Science and Technology Facilities Council, Rutherford Appleton Laboratory, Didcot, U.K.

Keyword(s): Unsupervised Learning, Image and Video Segmentation, Representation Learning, Regional Adjacency Graph.

Abstract: Fully unsupervised semantic segmentation of images has been a challenging problem in computer vision. Many deep learning models have been developed for this task, most of which use representation learning guided towards segmentation by unsupervised or self-supervised loss functions. In this paper, we conduct dense or pixel-level representation learning using a fully-convolutional autoencoder; the learned dense features are then reduced onto a sparse graph, where segmentation is encouraged from three aspects: normalised cut, similarity and continuity. Our method is one- or few-shot, minimally requiring only one image (i.e., the target image). To mitigate overfitting caused by few-shot learning, we compute the reconstruction loss using augmented size-varying patches sampled from the image(s). We also propose a new adjacency-based loss function for continuity, which allows the number of superpixels to be arbitrarily large, whereby the creation of the sparse graph can remain fully unsupervised. We conduct quantitative and qualitative experiments using computer vision images and videos, which show that segmentation becomes more accurate and robust with our sparse loss functions and patch reconstruction. As a comprehensive application, we use our method to analyse 3D images acquired from X-ray and neutron tomography. These experiments and applications show that a model trained with one or a few images can be highly robust for predicting many unseen images with similar semantic contents; our method can therefore be useful for segmenting videos and 3D images of this kind with lightweight model training in 2D.
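The abstract names three sparse clustering losses and a size-varying patch reconstruction but gives no implementation detail here. The following is a minimal PyTorch sketch of two of those ingredients under common formulations: a soft normalised-cut relaxation in the classic Shi-Malik form and a simple adjacency loss that penalises assignment differences across superpixel edges, plus a size-varying patch crop. All function names (soft_ncut_loss, adjacency_continuity_loss, random_patch) and the specific formulas are assumptions for illustration, not the authors' code.

import torch

def soft_ncut_loss(assign, affinity):
    # Soft normalised-cut relaxation (Shi & Malik style): for each cluster k,
    # assoc_k = q_k^T W q_k and denom_k = q_k^T d, with d the node degrees.
    degree = affinity.sum(dim=1)
    assoc = torch.einsum('ik,ij,jk->k', assign, affinity, assign)
    denom = torch.einsum('ik,i->k', assign, degree).clamp_min(1e-8)
    return assign.shape[1] - (assoc / denom).sum()

def adjacency_continuity_loss(assign, edges):
    # Penalise assignment differences across adjacent superpixels; edges is
    # an (m, 2) index tensor over the region-adjacency graph.
    diff = assign[edges[:, 0]] - assign[edges[:, 1]]
    return diff.pow(2).sum(dim=1).mean()

def random_patch(img, min_size=64, max_size=160):
    # Crop a size-varying square patch, a stand-in for the augmented
    # patch sampling used for the reconstruction loss.
    _, h, w = img.shape
    s = int(torch.randint(min_size, min(max_size, h, w) + 1, (1,)))
    top = int(torch.randint(0, h - s + 1, (1,)))
    left = int(torch.randint(0, w - s + 1, (1,)))
    return img[:, top:top + s, left:left + s]

# Toy usage: 6 superpixels, 2 clusters, chain-shaped adjacency.
torch.manual_seed(0)
logits = torch.randn(6, 2, requires_grad=True)
assign = torch.softmax(logits, dim=1)
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [4, 5]])
affinity = torch.zeros(6, 6)
affinity[edges[:, 0], edges[:, 1]] = 1.0
affinity = affinity + affinity.T
loss = soft_ncut_loss(assign, affinity) + 0.1 * adjacency_continuity_loss(assign, edges)
loss.backward()  # gradients would flow back into the dense features
patch = random_patch(torch.rand(3, 200, 200))  # a (3, s, s) crop

In the full method, assign would presumably be obtained by pooling the autoencoder's dense features over superpixels rather than from free logits, and the reconstruction loss on the sampled patches would enter the total objective alongside the graph losses.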

License: CC BY-NC-ND 4.0

Paper citation in several formats:
Leng, K.; Atwood, R.; Kockelmann, W.; Chekrygina, D. and Thiyagalingam, J. (2024). Unsupervised Few-Shot Image Segmentation with Dense Feature Learning and Sparse Clustering. In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP; ISBN 978-989-758-679-8; ISSN 2184-4321, SciTePress, pages 575-586. DOI: 10.5220/0012380700003660

@conference{visapp24,
author={Kuangdai Leng and Robert Atwood and Winfried Kockelmann and Deniza Chekrygina and Jeyan Thiyagalingam},
title={Unsupervised Few-Shot Image Segmentation with Dense Feature Learning and Sparse Clustering},
booktitle={Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP},
year={2024},
pages={575--586},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012380700003660},
isbn={978-989-758-679-8},
issn={2184-4321},
}

TY - CONF
JO - Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP
TI - Unsupervised Few-Shot Image Segmentation with Dense Feature Learning and Sparse Clustering
SN - 978-989-758-679-8
IS - 2184-4321
AU - Leng, K.
AU - Atwood, R.
AU - Kockelmann, W.
AU - Chekrygina, D.
AU - Thiyagalingam, J.
PY - 2024
SP - 575
EP - 586
DO - 10.5220/0012380700003660
PB - SciTePress
ER -