Enhanced Generative Data Augmentation for Semantic Segmentation via Stronger Guidance

Quang-Huy Che, Duc-Tri Le, Bich-Nga Pham, Duc-Khai Lam, Vinh-Tiep Nguyen

2025

Abstract

Data augmentation is crucial for pixel-wise annotation tasks such as semantic segmentation, where labeling is labor-intensive and requires significant effort. Traditional methods, which rely on simple transformations such as rotations and flips, create new images but often lack diversity along key semantic dimensions and fail to alter high-level semantic properties. To address this issue, generative models have emerged as an effective solution for augmenting data with synthetic images. Controllable generative models enable data augmentation for semantic segmentation by conditioning on prompts and visual references derived from the original image. However, these models struggle to generate synthetic images that accurately reflect the content and structure of the original image, because effective prompts and visual references are difficult to construct. In this work, we introduce an effective data augmentation pipeline for semantic segmentation built on a controllable diffusion model. Our method combines efficient prompt generation via Class-Prompt Appending with Visual Prior Blending to strengthen attention to labeled classes in real images, allowing the pipeline to generate a precise number of augmented images while preserving the structure of the segmentation-labeled classes. In addition, we implement a class-balancing algorithm to ensure a balanced training dataset when merging the synthetic and original images. Evaluated on the PASCAL VOC dataset, our pipeline demonstrates its effectiveness in generating high-quality synthetic images for semantic segmentation. Our code is available at this https URL.
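
As a rough illustration of the guidance described above, the sketch below shows how Class-Prompt Appending might be implemented. It is a minimal, assumption-laden sketch, not the authors' code: the build_class_prompt helper is hypothetical, the VOC id-to-name table is standard, and the commented diffusers calls use common public ControlNet checkpoints that may differ from those used in the paper.

import numpy as np

# PASCAL VOC class ids 1..20 (0 is background, 255 is the ignore label).
VOC_CLASSES = {
    1: "aeroplane", 2: "bicycle", 3: "bird", 4: "boat", 5: "bottle",
    6: "bus", 7: "car", 8: "cat", 9: "chair", 10: "cow",
    11: "dining table", 12: "dog", 13: "horse", 14: "motorbike", 15: "person",
    16: "potted plant", 17: "sheep", 18: "sofa", 19: "train", 20: "tv monitor",
}

def build_class_prompt(mask: np.ndarray, base_prompt: str = "a photo of") -> str:
    # Illustrative Class-Prompt Appending: append the name of every class
    # present in the label mask, so the diffusion model is explicitly
    # guided toward all annotated classes rather than only the dominant one.
    present = sorted(int(i) for i in np.unique(mask) if int(i) in VOC_CLASSES)
    names = [VOC_CLASSES[i] for i in present]
    return f"{base_prompt} {' and '.join(names)}" if names else base_prompt

# Assumed generation step with a segmentation-conditioned ControlNet
# (public checkpoints shown; the paper's exact models may differ):
#
#   from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
#   controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-seg")
#   pipe = StableDiffusionControlNetPipeline.from_pretrained(
#       "runwayml/stable-diffusion-v1-5", controlnet=controlnet)
#   prompt = build_class_prompt(mask)
#   image = pipe(prompt=prompt, image=color_coded_mask).images[0]

Visual Prior Blending and the class-balancing merge add further conditioning and dataset-level balancing on top of this step; see the paper for their exact formulations.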


Paper Citation


in Harvard Style

Che Q., Le D., Pham B., Lam D. and Nguyen V. (2025). Enhanced Generative Data Augmentation for Semantic Segmentation via Stronger Guidance. In Proceedings of the 14th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM; ISBN 978-989-758-730-6, SciTePress, pages 251-262. DOI: 10.5220/0013175900003905


in Bibtex Style

@conference{icpram25,
author={Quang-Huy Che and Duc-Tri Le and Bich-Nga Pham and Duc-Khai Lam and Vinh-Tiep Nguyen},
title={Enhanced Generative Data Augmentation for Semantic Segmentation via Stronger Guidance},
booktitle={Proceedings of the 14th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM},
year={2025},
pages={251--262},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013175900003905},
isbn={978-989-758-730-6},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 14th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM
TI - Enhanced Generative Data Augmentation for Semantic Segmentation via Stronger Guidance
SN - 978-989-758-730-6
AU - Che Q.
AU - Le D.
AU - Pham B.
AU - Lam D.
AU - Nguyen V.
PY - 2025
SP - 251
EP - 262
DO - 10.5220/0013175900003905
PB - SciTePress
ER -