
A General Context Learning and Reasoning Framework for Object Detection in Urban Scenes

Authors: Xuan Wang 1; Hao Tang 1,2 and Zhigang Zhu 1,3

Affiliations: 1 The Graduate Center - CUNY, New York, NY 10016, U.S.A.; 2 Borough of Manhattan Community College - CUNY, New York, NY 10007, U.S.A.; 3 The City College of New York - CUNY, New York, NY 10031, U.S.A.

Keyword(s): Deep Learning, Context Understanding, Convolutional Neural Networks, Graph Convolutional Network.

Abstract: Contextual information has been widely used in many computer vision tasks. However, existing approaches design task-specific contextual mechanisms for different tasks. In this work, we propose a general context learning and reasoning framework for object detection with three components: local contextual labeling, contextual graph generation and spatial contextual reasoning. With simple user-defined parameters, local contextual labeling automatically enlarges small object labels to include more local contextual information. A Graph Convolutional Network learns over the generated contextual graph to build a semantic space. A general spatial relation is used in spatial contextual reasoning to optimize the detection results. All three components can be easily added to and removed from a standard object detector. In addition, our approach automates the training process to find the optimal combinations of user-defined parameters. The general framework can be easily adapted to different tasks. In this paper we compare our framework with a previous multistage context learning framework specifically designed for storefront accessibility detection, and with a state-of-the-art detector for pedestrian detection. Experimental results on two urban scene datasets demonstrate that our proposed general framework achieves the same performance as the specifically designed multistage framework on storefront accessibility detection, and improved performance on pedestrian detection over the state-of-the-art detector.
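
For illustration, the sketch below shows one plausible way the local contextual labeling step could enlarge small ground-truth boxes by a user-defined factor before a standard detector is trained. The function name and the scale and small_thresh parameters are assumptions made for this sketch, not the paper's actual implementation.

import numpy as np

def enlarge_small_labels(boxes, image_size, scale=1.5, small_thresh=32 * 32):
    # Enlarge ground-truth boxes of small objects so their labels include
    # surrounding local context (a sketch; the scale factor and the
    # small-object area threshold are assumed user-defined parameters).
    # boxes:      (N, 4) array of [x1, y1, x2, y2] pixel coordinates
    # image_size: (width, height) used to clip the enlarged boxes
    boxes = np.asarray(boxes, dtype=np.float32).copy()
    img_w, img_h = image_size

    widths = boxes[:, 2] - boxes[:, 0]
    heights = boxes[:, 3] - boxes[:, 1]
    small = (widths * heights) < small_thresh

    # Grow each small box around its center by the user-defined scale.
    cx = (boxes[:, 0] + boxes[:, 2]) / 2.0
    cy = (boxes[:, 1] + boxes[:, 3]) / 2.0
    new_w = widths * np.where(small, scale, 1.0)
    new_h = heights * np.where(small, scale, 1.0)

    # Clip so the enlarged labels stay inside the image.
    boxes[:, 0] = np.clip(cx - new_w / 2.0, 0, img_w)
    boxes[:, 1] = np.clip(cy - new_h / 2.0, 0, img_h)
    boxes[:, 2] = np.clip(cx + new_w / 2.0, 0, img_w)
    boxes[:, 3] = np.clip(cy + new_h / 2.0, 0, img_h)
    return boxes

Growing each box around its center and clipping to the image bounds keeps the enlarged labels valid while adding surrounding context around small objects such as storefront signage or distant pedestrians.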

License: CC BY-NC-ND 4.0

Paper citation in several formats:
Wang, X.; Tang, H. and Zhu, Z. (2023). A General Context Learning and Reasoning Framework for Object Detection in Urban Scenes. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 5: VISAPP; ISBN 978-989-758-634-7; ISSN 2184-4321, SciTePress, pages 91-102. DOI: 10.5220/0011637600003417

@conference{visapp23,
  author={Xuan Wang and Hao Tang and Zhigang Zhu},
  title={A General Context Learning and Reasoning Framework for Object Detection in Urban Scenes},
  booktitle={Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 5: VISAPP},
  year={2023},
  pages={91-102},
  publisher={SciTePress},
  organization={INSTICC},
  doi={10.5220/0011637600003417},
  isbn={978-989-758-634-7},
  issn={2184-4321},
}

TY - CONF
JO - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 5: VISAPP
TI - A General Context Learning and Reasoning Framework for Object Detection in Urban Scenes
SN - 978-989-758-634-7
IS - 2184-4321
AU - Wang, X.
AU - Tang, H.
AU - Zhu, Z.
PY - 2023
SP - 91
EP - 102
DO - 10.5220/0011637600003417
PB - SciTePress
ER -