5.2.2 Efficiency
Although our method combines several saliency priors,
it is time efficient and can easily be used as a prepro-
cessing step for different applications. This is mainly
because the saliency computation in our method is
performed on image patches or superpixels, which
are far fewer in number than the pixels of the image.
Moreover, the priors can be computed in parallel, as
sketched below.
We compare the running time of our implementation
(in C++) with those of other competing methods. We
use the authors' MATLAB implementations of (Itti
et al., 1998), (Goferman et al., 2010), (Achanta
et al., 2009), (Li et al., 2013), (Harel et al., 2006),
and (Hou et al., 2012), and the C++ implementations
of (Cheng et al., 2011) and (Perazzi et al., 2012), on
an Intel Core 2 Extreme 3.00 GHz CPU with 4 GB
RAM. Table 1 lists the average running times of 8
competing methods along with PARAM. For (Hou and
Zhang, 2007) we obtain the results from the publicly
available executable of (Cheng et al., 2011), which
does not report running times. The method of (Achanta
et al., 2009) is the fastest, but performs considerably
worse (see Figures 3-6).
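Since the priors are mutually independent, they can be evaluated concurrently. The following is a minimal C++ sketch of this idea using std::async; the prior functions are placeholder stubs and the product fusion rule is illustrative, not the exact formulation used by PARAM.

    #include <cstddef>
    #include <future>
    #include <vector>

    using Scores = std::vector<double>;

    // Placeholder priors: each maps N per-superpixel descriptors to N
    // scores. Real implementations would compute spectral rarity, color
    // compactness, and the boundary prior, respectively.
    Scores rarityPrior(const Scores& f)      { return Scores(f.size(), 1.0); }
    Scores compactnessPrior(const Scores& f) { return Scores(f.size(), 1.0); }
    Scores boundaryPrior(const Scores& f)    { return Scores(f.size(), 1.0); }

    Scores fusePriorsInParallel(const Scores& feats) {
        // The priors share no mutable state, so each runs on its own thread.
        auto r = std::async(std::launch::async, rarityPrior, std::cref(feats));
        auto c = std::async(std::launch::async, compactnessPrior, std::cref(feats));
        auto b = std::async(std::launch::async, boundaryPrior, std::cref(feats));

        Scores rs = r.get(), cs = c.get(), bs = b.get();
        Scores saliency(feats.size());
        for (std::size_t i = 0; i < saliency.size(); ++i)
            saliency[i] = rs[i] * cs[i] * bs[i];  // illustrative fusion rule
        return saliency;
    }

    int main() {
        Scores feats(200, 0.5);  // e.g., one descriptor per superpixel
        return fusePriorsInParallel(feats).empty() ? 1 : 0;
    }

Because each prior already operates on a few hundred superpixels rather than millions of pixels, the per-thread work is small and the overall cost is dominated by the slowest prior.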
6 CONCLUSIONS
We have presented a bottom-up saliency estimation
method for images using low-level cues. We have
proposed a novel graph-based feature rarity computa-
tion that utilizes the concepts of spectral clustering (Ng
et al., 2001): the eigenvectors of the Laplacian of the
affinity matrix of a graph, with image elements as
nodes, give a good measure of rarity. In addition, we
exploit the spatial compactness of color, and we use
the cue of boundary prior by statistically modeling
the background in color space. We show, both quali-
tatively (Figure 1) and quantitatively using the Precision-
Recall metric (Figure 2), that these components com-
plement each other. We also give a comparative study
of the performance of our method against 9 state-of-the-
art methods, using three different measures of evalu-
ation on two popular real-world benchmark datasets.
Since our method is not restricted to global spatial
feature rarity, but also utilizes the boundary cue as
well as spectral-clustering-based feature rarity, it gives
better performance and in most cases accurately
detects the salient object.
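As an illustration of the spectral rarity component, the following is a minimal C++ sketch using the Eigen library. The Gaussian affinity over per-superpixel features, the value of sigma, and the mapping from the second eigenvector to a rarity score are assumptions made for the sketch, not the exact formulation of our method.

    #include <Eigen/Dense>
    #include <cmath>

    // Rows of F hold one feature vector (e.g., mean Lab color) per superpixel.
    Eigen::VectorXd spectralRarity(const Eigen::MatrixXd& F, double sigma = 0.1) {
        const int n = static_cast<int>(F.rows());

        // Gaussian affinity between superpixel features (illustrative choice).
        Eigen::MatrixXd W(n, n);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                W(i, j) = std::exp(-(F.row(i) - F.row(j)).squaredNorm()
                                   / (2.0 * sigma * sigma));

        // Normalized graph Laplacian L = I - D^{-1/2} W D^{-1/2}.
        Eigen::VectorXd dInvSqrt = W.rowwise().sum().cwiseSqrt().cwiseInverse();
        Eigen::MatrixXd L = Eigen::MatrixXd::Identity(n, n)
                          - dInvSqrt.asDiagonal() * W * dInvSqrt.asDiagonal();

        // Eigenvectors of the symmetric Laplacian (ascending eigenvalues).
        Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(L);
        Eigen::VectorXd v = es.eigenvectors().col(1);  // Fiedler-like vector

        // Illustrative rarity score: deviation of each element's embedding
        // coordinate from the mean of the embedding.
        return (v.array() - v.mean()).abs().matrix();
    }

    int main() {
        Eigen::MatrixXd F = Eigen::MatrixXd::Random(50, 3);  // 50 superpixels
        return spectralRarity(F).size() == 50 ? 0 : 1;
    }

Elements whose embedding coordinates lie far from the bulk of the cluster structure receive high scores, which is the intuition behind using the spectral embedding as a rarity measure.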
REFERENCES
Treisman, A. M. and Gelade, G. (1980). A feature-
integration theory of attention. Cognitive Psychology,
12(1):97–136.
Achanta, R., Hemami, S., Estrada, F., and Susstrunk, S.
(2009). Frequency-tuned salient region detection. In
CVPR.
Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., and
Susstrunk, S. (2010). SLIC Superpixels. Technical
report, EPFL.
Arbelaez, P., Maire, M., Fowlkes, C., and Malik, J. (2011).
Contour detection and hierarchical image segmenta-
tion. TPAMI, 33(5):898–916.
Cheng, M.-M., Zhang, G.-X., Mitra, N. J., Huang, X., and
Hu, S.-M. (2011). Global contrast based salient region
detection. In CVPR.
Dolson, J., Baek, J., Plagemann, C., and Thrun, S.
(2010). Upsampling range data in dynamic environ-
ments. In CVPR.
Goferman, S., Zelnik-Manor, L., and Tal, A. (2010).
Context-aware saliency detection. In CVPR.
Harel, J., Koch, C., and Perona, P. (2006). Graph-based
visual saliency. In NIPS, pages 545–552.
Hou, X., Harel, J., and Koch, C. (2012). Image signature:
Highlighting sparse salient regions. TPAMI, 34(1).
Hou, X. and Zhang, L. (2007). Saliency detection: A spec-
tral residual approach. In CVPR, pages 1–8.
Itti, L., Koch, C., and Niebur, E. (1998). A model of
saliency-based visual attention for rapid scene anal-
ysis. TPAMI, 20(11).
Jiang, H., Wang, J., Yuan, Z., Wu, Y., Zheng, N., and Li,
S. (2013). Salient object detection: A discriminative
regional feature integration approach. In CVPR.
Koch, C. and Ullman, S. (1987). Shifts in selective visual
attention: Towards the underlying neural circuitry. In
Matters of Intelligence, volume 188. Springer Nether-
lands.
Li, J., Levine, M. D., An, X., Xu, X., and He, H. (2013).
Visual saliency based on scale-space analysis in the
frequency domain. TPAMI, 35(4).
Ng, A. Y., Jordan, M. I., and Weiss, Y. (2001). On spectral
clustering: Analysis and an algorithm. In NIPS, pages
849–856.
Perazzi, F., Krahenbuhl, P., Pritch, Y., and Hornung, A.
(2012). Saliency filters: Contrast based filtering for
salient region detection. In CVPR.
Schauerte, B. and Stiefelhagen, R. (2012). Quaternion-based
spectral saliency detection for eye fixation prediction.
In ECCV, pages 116–129.
Shi, J. and Malik, J. (2000). Normalized cuts and image
segmentation. TPAMI, 22(8):888–905.
Tatler, B. W. (2007). The central fixation bias in scene view-
ing: Selecting an optimal viewing position indepen-
dently of motor biases and image feature distributions.
Journal of Vision, 7(14).
Wei, Y., Wen, F., Zhu, W., and Sun, J. (2012). Geodesic
saliency using background priors. In ECCV.
Yang, C., Zhang, L., Lu, H., Ruan, X., and Yang, M.-H.
(2013). Saliency detection via graph-based manifold
ranking. In CVPR.
Zhou, D., Weston, J., Gretton, A., Bousquet, O., and
Schölkopf, B. (2004). Ranking on data manifolds. In
NIPS.