
Authors: Ryogo Takemoto 1; Yuya Nagamine 1; Kazuki Yoshihiro 1; Masatoshi Shibata 2; Hideo Yamada 2; Yuichiro Tanaka 3; Shuichi Enokida 4 and Hakaru Tamukoh 3

Affiliations: 1 Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kitakyushu, Fukuoka, 808-0196, Japan; 2 AISIN CORPORATION, 2-1 Asahi-machi, Kariya, Aichi, 448-8650, Japan; 3 Research Center for Neuromorphic AI Hardware, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kitakyushu, Fukuoka, 808-0196, Japan; 4 Faculty of Computer Science and Systems Engineering, Kyushu Institute of Technology, 680-4 Kawazu, Iizuka, Fukuoka, 820-8502, Japan

Keyword(s): Image Processing, Human Recognition, Human Detection, HOG, MRCoHOG, GMM-MRCoHOG, FPGA.

Abstract: High-speed and accurate human recognition is necessary to realize safe autonomous mobile robots. Recently, human recognition methods based on deep learning have been studied extensively. However, these methods consume large amounts of power. Therefore, this study focuses on the Gaussian mixture model of multiresolution co-occurrence histograms of oriented gradients (GMM-MRCoHOG), a feature extraction method for human recognition with lower computational costs than deep learning-based methods, and aims to implement it in hardware for high-speed, high-accuracy, and low-power human recognition. A digital hardware implementation method of GMM-MRCoHOG has been proposed previously; however, it requires numerous look-up tables (LUTs) to store the state spaces of GMM-MRCoHOG, thereby impeding the realization of human recognition systems. This study proposes a LUT reduction method that overcomes this drawback by standardizing the basis function arrangements of the Gaussian mixture distributions in GMM-MRCoHOG. Experimental results show that the proposed method is as accurate as the previous method, and that the memory required to store the state spaces in LUTs is reduced to 1/504th of that required by the previous method.
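To make the LUT-sharing idea concrete, the Python sketch below illustrates how standardizing the arrangement of Gaussian basis functions lets many mixture distributions reuse one precomputed table, so each distribution only needs its own small weight vector instead of a full per-distribution LUT. This is an illustrative sketch under assumed parameters, not the authors' implementation: the grid size, number of bases, the shared isotropic sigma, and the names gaussian_basis_lut and mixture_value are hypothetical, and the paper's quantization and fixed-point hardware details are not reproduced.

# Minimal sketch of LUT sharing via a standardized basis arrangement.
# All shapes and parameters below are illustrative assumptions.
import numpy as np

def gaussian_basis_lut(grid, means, sigma):
    # Precompute basis-function responses over a quantized input grid.
    # grid  : (G, 2) quantized feature points
    # means : (K, 2) centers of the standardized Gaussian bases
    # sigma : shared isotropic standard deviation
    # Returns a (G, K) table: one row per quantized input, one column per basis.
    diff = grid[:, None, :] - means[None, :, :]          # (G, K, 2)
    sq_dist = np.sum(diff ** 2, axis=-1)                 # (G, K)
    return np.exp(-0.5 * sq_dist / sigma ** 2)

# One standardized basis arrangement, shared by every distribution.
K = 8
grid = np.stack(np.meshgrid(np.arange(32), np.arange(32)), -1).reshape(-1, 2)
means = np.random.default_rng(0).uniform(0, 32, size=(K, 2))
shared_lut = gaussian_basis_lut(grid, means, sigma=4.0)  # one LUT, reused

# Each of the (here, 100) mixtures now differs only in its K mixture weights,
# instead of requiring its own full grid-sized lookup table.
weights = np.random.default_rng(1).dirichlet(np.ones(K), size=100)

def mixture_value(mixture_idx, quantized_input_idx):
    # Evaluate one mixture at one quantized input via the shared LUT.
    return shared_lut[quantized_input_idx] @ weights[mixture_idx]

In this sketch the memory saving comes from replacing many grid-sized tables with a single shared table plus per-mixture weight vectors; the 1/504 reduction reported in the paper depends on its specific state-space sizes, which are not reproduced here.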

CC BY-NC-ND 4.0


Paper citation in several formats:
Takemoto, R.; Nagamine, Y.; Yoshihiro, K.; Shibata, M.; Yamada, H.; Tanaka, Y.; Enokida, S. and Tamukoh, H. (2023). Memory-Efficient Implementation of GMM-MRCoHOG for Human Recognition Hardware. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP; ISBN 978-989-758-634-7; ISSN 2184-4321, SciTePress, pages 648-655. DOI: 10.5220/0011698400003417

@conference{visapp23,
author={Ryogo Takemoto and Yuya Nagamine and Kazuki Yoshihiro and Masatoshi Shibata and Hideo Yamada and Yuichiro Tanaka and Shuichi Enokida and Hakaru Tamukoh},
title={Memory-Efficient Implementation of GMM-MRCoHOG for Human Recognition Hardware},
booktitle={Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP},
year={2023},
pages={648-655},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011698400003417},
isbn={978-989-758-634-7},
issn={2184-4321},
}

TY - CONF
JO - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP
TI - Memory-Efficient Implementation of GMM-MRCoHOG for Human Recognition Hardware
SN - 978-989-758-634-7
IS - 2184-4321
AU - Takemoto, R.
AU - Nagamine, Y.
AU - Yoshihiro, K.
AU - Shibata, M.
AU - Yamada, H.
AU - Tanaka, Y.
AU - Enokida, S.
AU - Tamukoh, H.
PY - 2023
SP - 648
EP - 655
DO - 10.5220/0011698400003417
PB - SciTePress
ER -