GCCNet: Global Context Constraint Network for Semantic Segmentation

Hyunwoo Kim, Huaiyu Li, Seok-Cheol Kee

2019

Abstract

State-of-the-art semantic segmentation can be achieved by variants of fully convolutional networks (FCNs), which consist of a feature-encoding stage and a deconvolution stage. However, they struggle with missing or inconsistent labels. To alleviate these problems, we use image-level multi-class encoding as global contextual information. By incorporating object classification into the objective function, we reduce incorrect pixel-level segmentation. Experimental results show that our algorithm outperforms other methods trained on the same volume of training data.
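The global-context idea sketched in the abstract amounts to a joint objective: a pixel-wise segmentation loss plus an image-level classification loss that penalizes predictions for classes absent from the image. A minimal sketch follows; the function names, the label convention (-1 for unlabeled pixels), and the weight `lam` are illustrative assumptions, not the paper's actual formulation.

```python
import math

def pixel_ce(pred_probs, labels):
    # Mean pixel-wise cross-entropy over labeled pixels only;
    # a label of -1 marks a missing/unlabeled pixel and is skipped
    # (hypothetical convention for illustration).
    losses = [-math.log(p[y]) for p, y in zip(pred_probs, labels) if y >= 0]
    return sum(losses) / len(losses)

def image_bce(class_scores, image_labels):
    # Multi-label binary cross-entropy on image-level class presence:
    # the "global context constraint" term.
    total = 0.0
    for s, t in zip(class_scores, image_labels):
        total += -(t * math.log(s) + (1 - t) * math.log(1 - s))
    return total / len(class_scores)

def joint_loss(pred_probs, labels, class_scores, image_labels, lam=0.5):
    # Joint objective: segmentation loss plus the global classification
    # constraint, weighted by lam (an assumed hyperparameter).
    return pixel_ce(pred_probs, labels) + lam * image_bce(class_scores, image_labels)

# Toy example: 2 labeled pixels over 3 classes, 3 image-level classes.
loss = joint_loss(
    pred_probs=[[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],
    labels=[0, 1],
    class_scores=[0.9, 0.8, 0.2],
    image_labels=[1, 1, 0],
)
```

In practice both terms would be computed on the outputs of a shared FCN backbone, so gradients from the image-level term constrain the pixel-level predictions.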

Paper Citation


in Harvard Style

Kim H., Li H. and Kee S. (2019). GCCNet: Global Context Constraint Network for Semantic Segmentation. In Proceedings of the 5th International Conference on Vehicle Technology and Intelligent Transport Systems - Volume 1: VEHITS, ISBN 978-989-758-374-2, pages 380-387. DOI: 10.5220/0007705703800387


in Bibtex Style

@conference{vehits19,
author={Hyunwoo Kim and Huaiyu Li and Seok-Cheol Kee},
title={GCCNet: Global Context Constraint Network for Semantic Segmentation},
booktitle={Proceedings of the 5th International Conference on Vehicle Technology and Intelligent Transport Systems - Volume 1: VEHITS},
year={2019},
pages={380-387},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007705703800387},
isbn={978-989-758-374-2},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 5th International Conference on Vehicle Technology and Intelligent Transport Systems - Volume 1: VEHITS
TI - GCCNet: Global Context Constraint Network for Semantic Segmentation
SN - 978-989-758-374-2
AU - Kim H.
AU - Li H.
AU - Kee S.
PY - 2019
SP - 380
EP - 387
DO - 10.5220/0007705703800387