gives the lowest redundancy among the key frames according to the CR and PSNR values. All these results demonstrate the feasibility and efficiency of the proposed method. Our method produces a video summary with a small number of key frames at a low computational cost, since it is based on the PCA algorithm coupled with HAC. In some cases, however, our approach does not give the best result compared with the other state-of-the-art methods. This is due to the information lost when applying PCA, which ranges from 7% to 20%. This is a compromise: we gain in computational complexity and running time, but we lose some information.
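To make this trade-off concrete, the sketch below (not the authors' code; the feature matrix, the reduced dimension, and the scikit-learn calls are illustrative assumptions) shows how the share of information retained or lost by a PCA projection can be read off the explained variance ratio:

```python
# Minimal sketch of the PCA information-loss check (assumed setup, not the
# paper's implementation): X stands in for the per-frame feature vectors.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((200, 64))      # placeholder feature matrix (n_frames x n_features)

pca = PCA(n_components=10)     # hypothetical reduced dimension
pca.fit(X)

retained = pca.explained_variance_ratio_.sum()
print(f"variance retained: {retained:.2%}, lost: {1 - retained:.2%}")
```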
5 CONCLUSIONS
In this paper, we have proposed a simple and effective technique for key frame extraction based on local description (interest points) and a new interest point matching method. This matching method combines a local description around each interest point with spatial constraints coupled with geometric invariants. From the matched points we computed a repeatability matrix for each shot, then applied PCA and HAC to extract the key frames: the unsupervised classification groups frames with similar content into clusters, and choosing the center of each cluster as a key frame eliminates redundancy. The experiments showed that the proposed algorithm produces a set of images that covers all significant events in the video while minimizing the information redundancy among these key frames. We also reviewed several state-of-the-art methods; most of them are based on a global image description, whereas our approach relies on local description.
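The extraction step itself can be summarized by the following sketch (a hypothetical implementation under stated assumptions, not the paper's code: the repeatability matrix, the reduced dimension, and the number of clusters are placeholders). PCA reduces the per-shot repeatability matrix, HAC groups the projected frames, and the frame nearest to each cluster center is kept as a key frame:

```python
# Illustrative pipeline sketch (assumptions, not the authors' code):
# reduce the per-shot repeatability matrix with PCA, cluster the frames
# with hierarchical agglomerative clustering (HAC), and keep the frame
# closest to each cluster center as the key frame of that cluster.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

def extract_key_frames(R, n_components=5, n_clusters=4):
    """Return key-frame indices for one shot from its repeatability matrix R."""
    Z = PCA(n_components=n_components).fit_transform(R)
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(Z)

    key_frames = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        center = Z[members].mean(axis=0)
        # frame whose projection lies nearest to the cluster center
        nearest = members[np.argmin(np.linalg.norm(Z[members] - center, axis=1))]
        key_frames.append(int(nearest))
    return sorted(key_frames)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    R = rng.random((60, 60))   # placeholder repeatability matrix for one shot
    print(extract_key_frames(R))
```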
As a first perspective, we will apply other unsupervised clustering methods and evaluate how effective it is to use PCA before clustering. As a second perspective, once key frames have been extracted from all the videos in the database, we will build a visual summary composed of the most representative objects in the video database. The user will then be able to initiate a visual query by selecting one or more of these objects.
REFERENCES
Ueda, H., Miyatake, T., and Yoshizawa, S., 1991. An interactive natural-motion-picture dedicated multimedia authoring system. Proc. ACM CHI Conference, 343-350.
Pentland, A., Picard, R., Davenport, G., and Haase, K., 1994. Video and image semantics, advanced tools for telecommunications. IEEE Multimedia, 73-75.
Zhuang, Y., Rui, Y., Huang, T. S., Mehrotra, S., 1998. Key Frame Extraction Using Unsupervised Clustering. ICIP'98, Chicago, USA, 866-870.
Girgensohn, A., Boreczky, J., 2000. Time-Constrained Keyframe Selection Technique. Multimedia Tools and Applications, 347-358.
Gong, Y., and Liu, X., 2000. Generating optimal video summaries. Proc. IEEE Int. Conference on Multimedia and Expo, 3:1559-1562.
Mundur, P., Rao, Y., and Yesha, Y., 2006. Keyframe-based video summarization using Delaunay clustering. International Journal on Digital Libraries, vol. 6, no. 2, pp. 219-232.
Luo, J., Papin, C., Costello, K., 2009. Towards extracting semantically meaningful key frames from personal video clips: from humans to computers. IEEE Transactions on Circuits and Systems for Video Technology, 19(2), 289-301.
Guironnet, M., Pellerin, D., Guyader, N., Ladret, P., 2007. Video summarization based on camera motion and a subjective evaluation method. EURASIP Journal on Image and Video Processing, 12.
Chen, F., Delannay, D., Vleeschouwer, C., 2011. An autonomous framework to produce and distribute personalized team-sport video summaries: a basketball case study. IEEE Transactions on Multimedia, 13(6), 1381-1394.
Truong, B. T., Venkatesh, S., 2007. Video abstraction: a systematic review and classification. ACM Transactions on Multimedia Computing, Communications, and Applications, 3(1).
Cai et al., 2005. A Study of Video Scenes Clustering Based on Shot Key Frames. Wuhan University Journal of Natural Sciences, 966-970.
Lowe, D. G., 2004. Distinctive image features from scale-invariant keypoints. Int. J. Computer Vision, vol. 60, no. 2, pp. 91-110.
Bahroun, S., Gharbi, H., and Zagrouba, E., 2014. Local
query on satellite images based on interest points.
International Geoscience and Remote Sensing
Symposium, Quebec.
Gharbi, H., Bahroun, S., and Zagrouba, E., 2014. Robust
interest points matching based on local description and
spatial constraints. International Conference on Image,
Vision and Computing, Paris.
Park, K. T., Lee, J. Y., Rim, K. W., Moon, Y. S., 2005. Key
frame extraction based on shot coverage and distortion.
LNCS, 3768:291-300.
Wolf, W., 1996. Key frame selection by motion analysis. Int. Conf. on Acoustics, Speech and Signal Processing.
Barhoumi, W., and Zagrouba, E., 2013. On-the-fly extraction of key frames for efficient video summarization. AASRI Procedia, 4, 78-84.
Ciocca, G., and Schettini, R., 2006. An innovative algorithm for key frame extraction in video summarization. J. of