
Authors: Zahra Kouchaki 1 and Ali Motie Nasrabadi 2

Affiliations: 1 Islamic Azad University, Islamic Republic of Iran; 2 Shahed University, Islamic Republic of Iran

Keyword(s): Saliency Map, Visual Attention, Nonlinear Fusion, Neural Network, Object Detection.

Related Ontology Subjects/Areas/Topics: Computer Vision, Visualization and Computer Graphics; Image and Video Analysis; Visual Attention and Image Saliency

Abstract: This study presents a novel combinational visual attention system that applies both bottom-up and top-down information and can be employed in further processing such as object detection and recognition. This biologically plausible model uses a nonlinear fusion of feature maps instead of simple superposition, employing a specific Artificial Neural Network (ANN) as the combination operator. After 42 feature maps are extracted by Itti's model, they are weighted purposefully using several training images with their corresponding target masks, so that the target is highlighted in the final saliency map. In effect, the weights of the 42 feature maps are proportional to their influence on locating the target in the final saliency map. The lack of bottom-up information is compensated by applying top-down information via the available target masks. The model can automatically detect the conceptual features of the desired object using only the target information. We model the process of combining the 42 feature maps into a saliency map with a neural network that resembles a biological neural network. Experimental results comparing our model with the basic saliency model on 32 test images indicate a noticeable improvement in finding the target in the first hit.
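The paper itself does not include code, but the fusion step it describes — replacing the linear superposition of Itti-style feature maps with a small neural network that maps each pixel's 42-dimensional feature vector to a saliency value — can be sketched as follows. This is a minimal illustration only: the function name, the single-hidden-layer architecture, the tanh/sigmoid activations, and the weight shapes are all assumptions, since the abstract does not specify the ANN's structure or training procedure.

```python
import numpy as np

def nonlinear_fusion(feature_maps, w_hidden, b_hidden, w_out, b_out):
    """Fuse a stack of feature maps into one saliency map via a small MLP.

    Illustrative sketch only -- the actual ANN architecture in the paper
    is not specified in the abstract.

    feature_maps : (n_maps, H, W) array, e.g. the 42 maps from Itti's model
    w_hidden     : (n_maps, n_hidden) hidden-layer weights
    b_hidden     : (n_hidden,) hidden-layer biases
    w_out        : (n_hidden, 1) output weights
    b_out        : (1,) output bias
    """
    n_maps, height, width = feature_maps.shape
    # Treat each pixel as one sample: rows are pixels, columns are maps.
    x = feature_maps.reshape(n_maps, -1).T            # (H*W, n_maps)
    hidden = np.tanh(x @ w_hidden + b_hidden)         # nonlinear hidden layer
    saliency = 1.0 / (1.0 + np.exp(-(hidden @ w_out + b_out)))  # in (0, 1)
    return saliency.reshape(height, width)
```

In this framing, training the weights against target masks (as the paper describes) would amount to regressing the network's per-pixel output toward the mask, so that maps predictive of the target receive larger effective weights.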

CC BY-NC-ND 4.0


Paper citation in several formats:
Kouchaki, Z. and Motie Nasrabadi, A. (2012). A NONLINEAR FEATURE FUSION BY VARIADIC NEURAL NETWORK IN SALIENCY-BASED VISUAL ATTENTION. In Proceedings of the International Conference on Computer Vision Theory and Applications (VISIGRAPP 2012) - Volume 2: VISAPP; ISBN 978-989-8565-03-7; ISSN 2184-4321, SciTePress, pages 457-461. DOI: 10.5220/0003859204570461

@conference{visapp12,
author={Zahra Kouchaki and Ali {Motie Nasrabadi}},
title={A NONLINEAR FEATURE FUSION BY VARIADIC NEURAL NETWORK IN SALIENCY-BASED VISUAL ATTENTION},
booktitle={Proceedings of the International Conference on Computer Vision Theory and Applications (VISIGRAPP 2012) - Volume 2: VISAPP},
year={2012},
pages={457-461},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0003859204570461},
isbn={978-989-8565-03-7},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the International Conference on Computer Vision Theory and Applications (VISIGRAPP 2012) - Volume 2: VISAPP
TI - A NONLINEAR FEATURE FUSION BY VARIADIC NEURAL NETWORK IN SALIENCY-BASED VISUAL ATTENTION
SN - 978-989-8565-03-7
IS - 2184-4321
AU - Kouchaki, Z.
AU - Motie Nasrabadi, A.
PY - 2012
SP - 457
EP - 461
DO - 10.5220/0003859204570461
PB - SciTePress