SEARCHING MOVIES BASED ON USER DEFINED SEMANTIC EVENTS

Bart Lehane, Noel E. O’Connor, Hyowon Lee

Abstract

The number and size of digital video databases are continuously growing. Unfortunately, most, if not all, of the video content in these databases is stored without any indexing or analysis and without any associated metadata. Where metadata does exist, it is usually the result of a manual annotation process rather than automatic indexing. Locating clips and browsing content is therefore difficult, time consuming and generally inefficient. Managing a set of movies is particularly challenging given their creative production process and the individual style of directors. This paper proposes a method of searching video data in order to retrieve semantic events, thereby facilitating management of video databases. An interface is created which allows users to perform searches using the proposed method. In order to assess the searching method, this interface is used to conduct a set of experiments in which users are timed completing a set of tasks using both the proposed method and an alternative, keyframe-based retrieval method. These experiments evaluate the searching method and demonstrate its versatility.

References

  1. Bordwell, D. and Thompson, K. (1997). Film Art: An Introduction. McGraw-Hill.
  2. Chen, L., Rizvi, S. J., and Özsu, M. T. (2003). Incorporating audio cues into dialog and action scene detection. In Proceedings of SPIE Conference on Storage and Retrieval for Media Databases, pages 252-264.
  3. Kang, H.-B. (2003). Emotional event detection using relevance feedback. In Proceedings of the International Conference on Image Processing.
  4. Lehane, B. and O'Connor, N. (2006). Movie indexing via event detection. In Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), Incheon, Korea.
  5. Lehane, B., O'Connor, N., and Murphy, N. (2004a). Action sequence detection in motion pictures. In The international Workshop on Multidisciplinary Image, Video, and Audio Retrieval and Mining.
  6. Lehane, B., O'Connor, N., and Murphy, N. (2004b). Dialogue scene detection in movies using low and midlevel visual features. In International Workshop on Image, Video, and Audio Retrieval and Mining.
  7. Lehane, B., O'Connor, N., and Murphy, N. (2005). Dialogue scene detection in movies. In International Conference on Image and Video Retrieval (CIVR), Singapore, 20-22 July 2005, pages 286-296.
  8. Lienhart, R., Pfeiffer, S., and Effelsberg, W. (1999). Scene determination based on video and audio features. In Proceedings of IEEE Conference on Multimedia Computing and Systems, pages 685-690.
  9. Li, Y. and Kuo, C.-C. J. (2001). Movie event detection by using audiovisual information. In Proceedings of the Second IEEE Pacific Rim Conference on Multimedia: Advances in Multimedia Information Processing.
  10. Li, Y. and Kuo, C.-C. J. (2003). Video Content Analysis using Multimodal Information. Kluwer Academic Publishers.
  11. Nam, J., Alghoniemy, M., and Tewfik, A. H. (1998). Audiovisual content-based violent scene characterization. In Proceedings of International Conference on Image Processing (ICIP), volume 1, pages 351-357.
  12. Rasheed, Z. and Shah, M. (2003). Scene detection in hollywood movies and tv shows. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
  13. Rui, Y., Huang, T. S., and Mehrotra, S. (1998). Constructing table-of-content for video. In ACM Journal of Multimedia Systems, pages 359-368.
  14. Sundaram, H. and Chang, S.-F. (2000). Determining computable scenes in films and their structures using audio-visual memory models. In ACM Multimedia 2000.
  15. Yeung, M. and Yeo, B.-L. (1996). Time constrained clustering for segmentation of video into story units. In Proceedings of International Conference on Pattern Recognition.
  16. Yeung, M. and Yeo, B.-L. (1997). Video visualisation for compact presentation and fast browsing of pictorial content. In IEEE Transactions on Circuits and Systems for Video Technology, pages 771-785.
  17. Zhai, Y., Rasheed, Z., and Shah, M. (2004). A framework for semantic classification of scenes using finite state machines. In International Conference on Image and Video Retrieval.
  18. Zhou, J. and Tavanapong, W. (2002). ShotWeave: A shot clustering technique for story browsing for large video databases. In Proceedings of the Workshops XMLDM, MDDE, and YRWS on XML-Based Data Management and Multimedia Engineering - Revised Papers.


Paper Citation


in Harvard Style

Lehane B., O’Connor N. E. and Lee H. (2006). SEARCHING MOVIES BASED ON USER DEFINED SEMANTIC EVENTS. In Proceedings of the International Conference on Signal Processing and Multimedia Applications - Volume 1: SIGMAP, (ICETE 2006) ISBN 978-972-8865-64-1, pages 232-239. DOI: 10.5220/0001569802320239


in Bibtex Style

@conference{sigmap06,
author={Bart Lehane and Noel E. O’Connor and Hyowon Lee},
title={SEARCHING MOVIES BASED ON USER DEFINED SEMANTIC EVENTS},
booktitle={Proceedings of the International Conference on Signal Processing and Multimedia Applications - Volume 1: SIGMAP, (ICETE 2006)},
year={2006},
pages={232-239},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0001569802320239},
isbn={978-972-8865-64-1},
}


in EndNote Style

TY - CONF
JO - Proceedings of the International Conference on Signal Processing and Multimedia Applications - Volume 1: SIGMAP, (ICETE 2006)
TI - SEARCHING MOVIES BASED ON USER DEFINED SEMANTIC EVENTS
SN - 978-972-8865-64-1
AU - Lehane B.
AU - O’Connor N. E.
AU - Lee H.
PY - 2006
SP - 232
EP - 239
DO - 10.5220/0001569802320239