A Full Reference Video Quality Measure based on Motion Differences and Saliency Maps Evaluation

B. Ortiz-Jaramillo, A. Kumcu, L. Platisa, W. Philips

Abstract

While subjective assessment is recognized as the most reliable means of quantifying video quality, objective assessment has proven to be a desirable alternative. Existing video quality indices achieve reasonable prediction of human quality scores: they predict quality degradation due to spatial distortions well, but degradation due to temporal distortions less so. In this paper, we propose a perception-based quality index whose novelty is the direct use of motion information both to extract temporal distortions and to model human visual attention. Temporal distortions are computed from optical flow and common vector metrics, and the results of psychovisual experiments are used to model human visual attention. Results show that the proposed index is competitive with current state-of-the-art quality indices. Additionally, the proposed index is much faster than other indices that also include a temporal distortion measure.
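The abstract states that temporal distortions are computed from optical flow and "common vector metrics", weighted by a saliency-based attention model. A minimal sketch of one plausible such metric is below: the saliency-weighted mean Euclidean difference between the motion-vector fields of the reference and distorted videos. The function name, data layout, and the choice of Euclidean distance are illustrative assumptions, not the authors' exact formulation.

```python
import math

def temporal_distortion(flow_ref, flow_dist, saliency):
    """Saliency-weighted mean difference between two motion-vector fields.

    flow_ref, flow_dist: 2-D grids (lists of rows) of (u, v) motion vectors,
        e.g. per-pixel optical flow of the reference and distorted frames.
    saliency: 2-D grid of non-negative attention weights of the same shape.
    Returns a scalar distortion; 0.0 means the motion fields agree everywhere.
    """
    num, den = 0.0, 0.0
    for ref_row, dist_row, sal_row in zip(flow_ref, flow_dist, saliency):
        for (ur, vr), (ud, vd), w in zip(ref_row, dist_row, sal_row):
            # Euclidean length of the vector difference, weighted by saliency.
            num += w * math.hypot(ur - ud, vr - vd)
            den += w
    return num / den if den > 0.0 else 0.0

# Toy example: a 1x2 field where the second pixel loses its motion.
flow_ref  = [[(1.0, 0.0), (0.0, 1.0)]]
flow_dist = [[(1.0, 0.0), (0.0, 0.0)]]
weights   = [[1.0, 1.0]]
print(temporal_distortion(flow_ref, flow_dist, weights))  # 0.5
```

In a full pipeline, the flow fields would come from an optical-flow estimator such as Lucas-Kanade (reference 8) and the weights from a spatio-temporal saliency model (reference 9); this sketch only shows how a vector metric and saliency map could combine into a single score.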

References

  1. Barron, J., Fleet, D., Beauchemin, S., and Burkitt, T. A. (1992). Performance of optical flow techniques. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 236-242.
  2. Chikkerur, S., Sundaram, V., Reisslein, M., and Karam, L. (2011). Objective video quality assessment methods: A classification, review, and performance comparison. IEEE Transactions on Broadcasting, 57(2):165-182.
  3. Daly, S. (1998). Engineering observations from spatiovelocity and spatiotemporal visual models. In Human Vision and Electronic Imaging III.
  4. Hammett, S. and Larsson, J. (2012). The effect of contrast on perceived speed and flicker. Journal of Vision, 12(12):1-8.
  5. ITU-R Recommendation BT.500-11 (1998). Methodology for the subjective assessment of the quality of television pictures. ITU, Geneva, Switzerland.
  6. Kingdom, F. and Prins, N. (2010). Psychophysics: A Practical Introduction. Elsevier, London, 1st edition.
  7. Li, S., Ma, L., and Ngan, K. (2011). Video quality assessment by decoupling additive impairments and detail losses. In Third International Workshop on Quality of Multimedia Experience (QoMEX).
  8. Lucas, B. and Kanade, T. (1981). An iterative image registration technique with an application to stereo vision. In Image Understanding Workshop.
  9. Marat, S., Phuoc, T., Granjon, L., Guyader, N., Pellerin, D., and Guérin-Dugué, A. (2009). Modelling spatio-temporal saliency to predict gaze direction for short videos. International Journal of Computer Vision, 82(3):231-243.
  10. Max Planck Institute for Biological Cybernetics (2013). Brain research on non-human primates. http://hirnforschung.kyb.mpg.de/en/homepage.html.
  11. Moorthy, A. and Bovik, A. (2010). Efficient video quality assessment along temporal trajectories. IEEE Transactions on Circuits and Systems for Video Technology, 20(3):1653-1658.
  12. Pinson, M. and Wolf, S. (2004). A new standardized method for objectively measuring video quality. IEEE Transactions on Broadcasting, 50(3):312-322.
  13. Seshadrinathan, K. and Bovik, A. (2010). Motion tuned spatio-temporal quality assessment of natural videos. IEEE Transactions on Image Processing, 19(11):335-350.
  14. Seshadrinathan, K., Soundararajan, R., Bovik, A., and Cormack, L. (2010). Study of Subjective and Objective Quality Assessment of Video. IEEE Transactions on Image Processing, 19(6):1427-1441.
  15. Video Quality Experts Group (VQEG) (2003). The validation of objective models of video quality assessment, Phase II. http://www.its.bldrdoc.gov/vqeg/vqeg-home.aspx.
  16. Wang, Z., Bovik, A., Sheikh, H., and Simoncelli, E. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612.
  17. Wang, Z. and Li, Q. (2007). Video quality assessment using a statistical model of human visual speed perception. Journal of the Optical Society of America A, 24(12):B61-B69.
  18. Watson, A. and Ahumada, A. (1985). Model of human visual-motion sensing. Journal of the Optical Society of America A, 2(2):322-342.
  19. Yao, P., Evans, G., and Calway, A. (2001). Face tracking and pose estimation using affine motion parameters. In Proceedings of the 12th Scandinavian Conference on Image Analysis.
  20. Yuen, M. and Wu, H. (1998). A survey of hybrid MC/DPCM/DCT video coding distortions. Signal Processing, 70:247-278.
  21. Zhang, F., Li, S., Ma, L., Wong, Y., and Ngan, K. (2011). IVP subjective quality video database. http://ivp.ee.cuhk.edu.hk/research/database/subjective/index.shtml.


Paper Citation


in Harvard Style

Ortiz-Jaramillo B., Kumcu A., Platisa L. and Philips W. (2014). A Full Reference Video Quality Measure based on Motion Differences and Saliency Maps Evaluation. In Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: PANORAMA, (VISIGRAPP 2014) ISBN 978-989-758-004-8, pages 714-722. DOI: 10.5220/0004870607140722


in Bibtex Style

@conference{panorama14,
author={B. Ortiz-Jaramillo and A. Kumcu and L. Platisa and W. Philips},
title={A Full Reference Video Quality Measure based on Motion Differences and Saliency Maps Evaluation},
booktitle={Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: PANORAMA, (VISIGRAPP 2014)},
year={2014},
pages={714-722},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004870607140722},
isbn={978-989-758-004-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Computer Vision Theory and Applications - Volume 2: PANORAMA, (VISIGRAPP 2014)
TI - A Full Reference Video Quality Measure based on Motion Differences and Saliency Maps Evaluation
SN - 978-989-758-004-8
AU - Ortiz-Jaramillo B.
AU - Kumcu A.
AU - Platisa L.
AU - Philips W.
PY - 2014
SP - 714
EP - 722
DO - 10.5220/0004870607140722