Which Saliency Detection Method is the Best to Estimate the Human Attention for Adjective Noun Concepts?

Marco Stricker, Syed Saqib Bukhari, Mohammad Al Naser, Saleh Mozafari, Damian Borth, Andreas Dengel

2017

Abstract

This paper asks: how salient is human gaze for Adjective Noun Concepts (also known as Adjective Noun Pairs, or ANPs)? In earlier work, the behavior of human gaze attention over ANPs was studied with an eye-tracking setup, since such knowledge can help in building better sentiment classification systems. However, because eye-tracking data collection is time-consuming, that study covered only a handful of the thousands of existing ANPs. How can the same knowledge be gathered for a large number of ANPs, as would be required, for example, to design a better ANP-based sentiment classification system? To address this objective automatically, without an eye-tracking setup, this work investigates whether existing saliency detection methods can reproduce human gaze behavior for ANPs. To this end, we evaluate ten state-of-the-art saliency detection methods against ground truth given by human gaze patterns over ANP images. We find that the Graph-Based Visual Saliency (GBVS) method produces saliency heatmaps that most closely match the human gaze patterns over ANPs.
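
The paper reports its comparison in prose only; the snippet below is a minimal Python sketch of the kind of evaluation described: scoring one method's saliency map against a human gaze heatmap with Pearson's linear correlation coefficient (CC), a standard metric in saliency benchmarking (see Borji et al., 2013 in the references). The function names and the synthetic test data are illustrative assumptions, not the authors' code.

import numpy as np

def normalize(m):
    # Zero-mean, unit-variance normalization so maps with different
    # dynamic ranges become directly comparable.
    m = m.astype(np.float64)
    return (m - m.mean()) / (m.std() + 1e-12)

def correlation_coefficient(saliency_map, gaze_heatmap):
    # Pearson CC between a model saliency map and a human gaze heatmap
    # of the same shape; 1.0 means perfect linear agreement.
    s = normalize(saliency_map).ravel()
    g = normalize(gaze_heatmap).ravel()
    return float(np.mean(s * g))

# Hypothetical usage: in a study like this one, each of the ten methods
# would be ranked by its mean CC over all ANP images.
rng = np.random.default_rng(0)
gaze = rng.random((240, 320))               # stand-in for an eye-tracking heatmap
pred = gaze + 0.1 * rng.random((240, 320))  # stand-in for a method's saliency map
print(f"CC = {correlation_coefficient(pred, gaze):.3f}")

A fuller comparison would also include fixation-based metrics such as AUC or NSS, as is common in the saliency benchmarking literature cited above.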

References

  1. Al-Naser, M., Chanijani, S. S. M., Bukhari, S. S., Borth, D., and Dengel, A. (2015). What makes a beautiful landscape beautiful: Adjective noun pairs attention by eye-tracking and gaze analysis. In Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia, pages 51-56. ACM.
  2. Borji, A., Sihite, D. N., and Itti, L. (2013). Quantitative analysis of human-model agreement in visual saliency modeling: a comparative study. IEEE Transactions on Image Processing, 22(1):55-69.
  3. Borth, D., Ji, R., Chen, T., Breuel, T., and Chang, S.-F. (2013). Large-scale visual sentiment ontology and detectors using adjective noun pairs. In Proceedings of the 21st ACM international conference on Multimedia, pages 223-232. ACM.
  4. Fang, Y., Chen, Z., Lin, W., and Lin, C.-W. (2011). Saliency-based image retargeting in the compressed domain. In Proceedings of the 19th ACM international conference on Multimedia, pages 1049-1052. ACM.
  5. Goferman, S., Zelnik-Manor, L., and Tal, A. (2010). Context-aware saliency detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE.
  6. Harel, J., Koch, C., and Perona, P. (2006). Graph-based visual saliency. In Advances in neural information processing systems, pages 545-552.
  7. Itti, L. and Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision research, 40(10):1489-1506.
  8. Judd, T., Durand, F., and Torralba, A. (2012). A benchmark of computational models of saliency to predict human fixations. Technical report, MIT Computer Science and Artificial Intelligence Laboratory.
  9. Mancas, M. (2009). Relative influence of bottom-up and top-down attention. In Attention in Cognitive Systems: 5th International Workshop on Attention in Cognitive Systems, WAPCV 2008, Fira, Santorini, Greece, May 12, 2008, Revised Selected Papers. Springer.
  10. Mancas, M., Couvreur, L., Gosselin, B., Macq, B., et al. (2007). Computational attention for event detection. In Proceedings of the Fifth International Conference on Computer Vision Systems (ICVS).
  11. Mancas, M., Mancas-Thillou, C., Gosselin, B., and Macq, B. (2006). A rarity-based visual attention map - application to texture description. In 2006 IEEE International Conference on Image Processing, pages 445-448. IEEE.
  12. Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1):62-66.
  13. Riche, N., Mancas, M., Duvinage, M., Mibulumukini, M., Gosselin, B., and Dutoit, T. (2013). Rare2012: A multi-scale rarity-based saliency detection with its comparative statistical analysis. Signal Processing: Image Communication, 28(6):642-658.
  14. Vikram, T. N., Tscherepanow, M., and Wrede, B. (2011). A random center surround bottom up visual attention model useful for salient region detection. In 2011 IEEE Workshop on Applications of Computer Vision (WACV), pages 166-173. IEEE.
  15. Zhang, L., Gu, Z., and Li, H. (2013). Sdsp: A novel saliency detection method by combining simple priors. In 2013 IEEE International Conference on Image Processing, pages 171-175. IEEE.


Paper Citation


in Harvard Style

Stricker M., Bukhari S., Al Naser M., Mozafari S., Borth D. and Dengel A. (2017). Which Saliency Detection Method is the Best to Estimate the Human Attention for Adjective Noun Concepts? In Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-758-220-2, pages 185-195. DOI: 10.5220/0006198901850195


in BibTeX Style

@conference{icaart17,
author={Marco Stricker and Syed Saqib Bukhari and Mohammad Al Naser and Saleh Mozafari and Damian Borth and Andreas Dengel},
title={Which Saliency Detection Method is the Best to Estimate the Human Attention for Adjective Noun Concepts?},
booktitle={Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2017},
pages={185-195},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006198901850195},
isbn={978-989-758-220-2},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Which Saliency Detection Method is the Best to Estimate the Human Attention for Adjective Noun Concepts?
SN - 978-989-758-220-2
AU - Stricker M.
AU - Bukhari S.
AU - Al Naser M.
AU - Mozafari S.
AU - Borth D.
AU - Dengel A.
PY - 2017
SP - 185
EP - 195
DO - 10.5220/0006198901850195