
Human Action Recognition using Multi-Kernel Learning for Temporal Residual Network

Authors: Saima Nazir 1; Yu Qian 2; Muhammad Haroon Yousaf 3; Sergio A. Velastin 4; Ebroul Izquierdo 5 and Eduard Vazquez 2

Affiliations: 1 University of Engineering and Technology Taxila, Pakistan, Queen Mary University of London, U.K., Cortexica Vision Systems Ltd., U.K. ; 2 Cortexica Vision Systems Ltd., U.K. ; 3 University of Engineering and Technology Taxila, Pakistan ; 4 Queen Mary University of London, U.K., Universidad Carlos III de Madrid, Spain, Cortexica Vision Systems Ltd., U.K. ; 5 Queen Mary University of London, U.K.

Keyword(s): Deep Learning, Residual Network, Spatio-Temporal Network, Temporal Residual Network, Human Action Recognition.

Abstract: Deep learning has led to a series of breakthroughs in the field of human action recognition. Given the powerful representational ability of residual networks (ResNet), performance in many computer vision tasks, including human action recognition, has improved. Motivated by the success of ResNet, we use the residual network and its variations to obtain feature representations. Bearing in mind the importance of appearance and motion information for action representation, our network uses both for feature extraction. Appearance and motion features are then fused for action classification using a multi-kernel support vector machine (SVM). We also investigate the fusion of dense trajectories with the proposed network to boost its performance. We evaluate the proposed methods on a benchmark dataset (HMDB-51); the results show that multi-kernel learning performs better than fusing the classification scores from the deep network's SoftMax layer. Our proposed method also performs well compared to recent state-of-the-art methods.
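For readers who want a concrete picture of the fusion step described in the abstract, the sketch below combines per-modality kernels with fixed weights and trains an SVM on the precomputed fused kernel. This is a minimal illustration, not the paper's implementation: the feature dimensions, RBF kernel choice, fusion weights and hyper-parameters are all assumptions made here for the example.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder features standing in for pre-extracted appearance and motion
# descriptors (e.g. from a ResNet and its temporal variant); shapes are assumed.
n_train, n_test, dim = 200, 50, 2048
app_train = rng.standard_normal((n_train, dim))
app_test = rng.standard_normal((n_test, dim))
mot_train = rng.standard_normal((n_train, dim))
mot_test = rng.standard_normal((n_test, dim))
y_train = rng.integers(0, 51, n_train)  # 51 action classes, as in HMDB-51

def fused_kernel(Xa, Xm, Ya, Ym, w_app=0.5, w_mot=0.5, gamma=1e-3):
    """Weighted sum of per-modality RBF kernels (fixed-weight kernel fusion)."""
    return w_app * rbf_kernel(Xa, Ya, gamma=gamma) + w_mot * rbf_kernel(Xm, Ym, gamma=gamma)

# Train an SVM on the precomputed fused kernel and classify test clips.
K_train = fused_kernel(app_train, mot_train, app_train, mot_train)   # (n_train, n_train)
K_test = fused_kernel(app_test, mot_test, app_train, mot_train)      # (n_test, n_train)

clf = SVC(kernel="precomputed", C=10.0)
clf.fit(K_train, y_train)
pred = clf.predict(K_test)

In practice the kernel weights would be learned or validated rather than fixed at 0.5, which is where multi-kernel learning departs from this simple fixed-weight combination.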

License: CC BY-NC-ND 4.0


Paper citation in several formats:
Nazir, S.; Qian, Y.; Yousaf, M.; Velastin, S.; Izquierdo, E. and Vazquez, E. (2019). Human Action Recognition using Multi-Kernel Learning for Temporal Residual Network. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 5: VISAPP; ISBN 978-989-758-354-4; ISSN 2184-4321, SciTePress, pages 420-426. DOI: 10.5220/0007371104200426

@conference{visapp19,
author={Nazir, Saima and Qian, Yu and Yousaf, Muhammad Haroon and Velastin, Sergio A. and Izquierdo, Ebroul and Vazquez, Eduard},
title={Human Action Recognition using Multi-Kernel Learning for Temporal Residual Network},
booktitle={Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 5: VISAPP},
year={2019},
pages={420-426},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007371104200426},
isbn={978-989-758-354-4},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 5: VISAPP
TI - Human Action Recognition using Multi-Kernel Learning for Temporal Residual Network
SN - 978-989-758-354-4
IS - 2184-4321
AU - Nazir, S.
AU - Qian, Y.
AU - Yousaf, M.
AU - Velastin, S.
AU - Izquierdo, E.
AU - Vazquez, E.
PY - 2019
SP - 420
EP - 426
DO - 10.5220/0007371104200426
PB - SciTePress