Authors:
Anurag Bagchi (1); Jazib Mahmood (1); Dolton Fernandes (1) and Ravi Kiran Sarvadevabhatla (2)
Affiliations:
(1) International Institute of Information Technology, Hyderabad, India; (2) Center for Visual Information Technology (CVIT), IIIT Hyderabad, India
Keyword(s):
Temporal Activity Localization, Graph Convolution Networks, Multi-modal Fusion, Audio.
Abstract:
State-of-the-art architectures for untrimmed video Temporal Action Localization (TAL) have considered only the RGB and Flow modalities, leaving the information-rich audio modality unexploited. Audio fusion has been explored for the related but arguably easier problem of trimmed (clip-level) action recognition; TAL, however, poses a unique set of challenges. In this paper, we propose simple but effective fusion-based approaches for TAL. To the best of our knowledge, our work is the first to jointly consider audio and video modalities for supervised TAL. We experimentally show that our schemes consistently improve performance for state-of-the-art video-only TAL approaches. Specifically, they help achieve new state-of-the-art performance on large-scale benchmark datasets: ActivityNet-1.3 (54.34 mAP@0.5) and THUMOS14 (57.18 mAP@0.5). Our experiments include ablations involving multiple fusion schemes, modality combinations, and TAL architectures. Our code, models, and associated data are available at https://github.com/skelemoa/tal-hmo.
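To make the fusion idea mentioned in the abstract concrete, the following is a minimal sketch (not the authors' released code; see the repository above for the actual implementation) of one plausible fusion scheme: concatenating per-snippet audio features with visual RGB+Flow features before a video-only TAL backbone. All dimensions, names, and the projection layer are assumptions for illustration only.

import torch
import torch.nn as nn

class EncodingLevelFusion(nn.Module):
    """Hypothetical feature-level fusion of audio and visual snippet features."""

    def __init__(self, visual_dim=2048, audio_dim=128, fused_dim=2048):
        super().__init__()
        # Project the concatenated features back to the size the downstream
        # TAL head expects, so the video-only backbone can stay unchanged.
        self.proj = nn.Linear(visual_dim + audio_dim, fused_dim)

    def forward(self, visual_feats, audio_feats):
        # visual_feats: (batch, T, visual_dim) snippet-level RGB+Flow features
        # audio_feats:  (batch, T, audio_dim) snippet-level audio features
        fused = torch.cat([visual_feats, audio_feats], dim=-1)
        return self.proj(fused)

# Usage sketch: the fused features replace the original video-only features
# as input to an off-the-shelf TAL head.
fusion = EncodingLevelFusion()
v = torch.randn(2, 100, 2048)   # 100 temporal snippets per video
a = torch.randn(2, 100, 128)
fused = fusion(v, a)            # shape: (2, 100, 2048)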