Authors:
Zainy M. Malakan 1,2; Nayyer Aafaq 1; Ghulam Mubashar Hassan 1 and Ajmal Mian 1

Affiliations:
1 Department of Computer Science and Software Engineering, The University of Western Australia, Australia
2 Department of Information Science, Faculty of Computer Science and Information System, Umm Al-Qura University, Saudi Arabia
Keyword(s):
Storytelling, Image Captioning, Visual Description.
Abstract:
Automatic natural language description of visual content is an emerging and fast-growing topic that has attracted extensive research attention in recent years. However, unlike typical ‘image captioning’ or ‘video captioning’, coherent story generation from a sequence of images is a relatively less studied problem. Story generation poses the challenges of diverse language style, context modeling, coherence, and latent concepts that are not even visible in the visual content. Contemporary methods fall short of modeling the context and visual variance, and generate stories that lack language coherence across multiple sentences. To this end, we propose a novel framework, Contextualize, Attend, Modulate and Tell (CAMT), that models the temporal relationship among the image sequence in both the forward and backward directions. The contextual information and the regional image features are projected into a joint space and subjected to an attention mechanism that captures the spatio-temporal relationships among the images. Before the attentive representations of the input images are fed into a language model, gated modulation between the attentive representation and the input word embeddings is performed to capture the interaction between the inputs and their context. To the best of our knowledge, this is the first method that exploits such a modulation technique for story generation. We evaluate our model on the Visual Storytelling Dataset (VIST) using both automatic and human evaluation measures and demonstrate that our CAMT model achieves better performance than existing baselines.
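To illustrate the gated modulation step described in the abstract, the following is a minimal PyTorch sketch. The module name GatedModulation, the feature dimensions, and the single linear-plus-sigmoid gate are assumptions made for illustration only; they are not the exact formulation used in CAMT.

```python
import torch
import torch.nn as nn


class GatedModulation(nn.Module):
    """Illustrative sketch: a learned sigmoid gate, computed from the attended
    visual/contextual representation and the current word embedding, rescales
    the word embedding before it enters the language model."""

    def __init__(self, ctx_dim: int, embed_dim: int):
        super().__init__()
        # Gate is computed from the concatenation of context and word embedding.
        self.gate = nn.Linear(ctx_dim + embed_dim, embed_dim)

    def forward(self, ctx: torch.Tensor, word_emb: torch.Tensor) -> torch.Tensor:
        # ctx:      (batch, ctx_dim)   attentive visual/contextual representation
        # word_emb: (batch, embed_dim) embedding of the current input word
        g = torch.sigmoid(self.gate(torch.cat([ctx, word_emb], dim=-1)))
        # Element-wise modulation: the gate controls how much of each embedding
        # dimension is passed on to the language model.
        return g * word_emb


# Toy usage for one decoding step over a batch of 4 stories (dimensions assumed).
ctx = torch.randn(4, 512)   # attended image/context features
emb = torch.randn(4, 300)   # input word embeddings
modulated = GatedModulation(512, 300)(ctx, emb)
print(modulated.shape)      # torch.Size([4, 300])
```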