Subjective Annotations for Vision-based Attention Level Estimation
Andrea Coifman, Péter Rohoska, Miklas S. Kristoffersen, Sven E. Shepstone, Zheng-Hua Tan
2019
Abstract
Attention level estimation systems have high potential in many use cases, such as human-robot interaction, driver modeling, and smart home systems, since the ability to measure a person's attention level opens the possibility of natural interaction between humans and computers. Estimating a human's visual focus of attention has recently been actively addressed in the field of HCI. However, most previous works do not consider attention as a subjective, cognitive attentive state. New research in the field also faces a lack of datasets annotated with attention levels in a given context. The novelty of our work is two-fold: first, we introduce a new annotation framework that tackles the subjective nature of attention level and use it to annotate more than 100,000 images with three attention levels; second, we introduce a novel method to estimate attention levels, relying purely on geometric features extracted from RGB and depth images, and evaluate it with a deep learning fusion framework. The system achieves an overall accuracy of 80.02%. Our framework and attention level annotations are made publicly available.
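The abstract describes fusing geometric features from RGB and depth streams into a three-level attention classifier. The sketch below is a minimal, hypothetical illustration of such a late-fusion setup, not the authors' published architecture: feature dimensions, layer sizes, and branch structure are assumptions made for the example.

```python
# Minimal sketch (not the paper's exact model): a late-fusion MLP that maps
# geometric features extracted from RGB and depth streams to one of three
# attention levels. All dimensions and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionLevelFusion(nn.Module):
    def __init__(self, rgb_feat_dim=12, depth_feat_dim=8, hidden=64, n_levels=3):
        super().__init__()
        # Per-modality branches encode geometric features (e.g. head pose or
        # gaze angles) derived from each stream.
        self.rgb_branch = nn.Sequential(nn.Linear(rgb_feat_dim, hidden), nn.ReLU())
        self.depth_branch = nn.Sequential(nn.Linear(depth_feat_dim, hidden), nn.ReLU())
        # Fusion head concatenates both encodings and predicts the attention level.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_levels),
        )

    def forward(self, rgb_feats, depth_feats):
        fused = torch.cat(
            [self.rgb_branch(rgb_feats), self.depth_branch(depth_feats)], dim=-1
        )
        return self.classifier(fused)  # raw logits over the three attention levels

# Example usage with random stand-in features for a batch of 4 frames.
model = AttentionLevelFusion()
logits = model(torch.randn(4, 12), torch.randn(4, 8))
predicted_level = logits.argmax(dim=-1)  # 0, 1, or 2
```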
Paper Citation
in Harvard Style
Coifman A., Rohoska P., Kristoffersen M., Shepstone S. and Tan Z. (2019). Subjective Annotations for Vision-based Attention Level Estimation. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 5: VISAPP; ISBN 978-989-758-354-4, SciTePress, pages 249-256. DOI: 10.5220/0007311402490256
in BibTeX Style
@conference{visapp19,
author={Andrea Coifman and Péter Rohoska and Miklas S. Kristoffersen and Sven E. Shepstone and Zheng-Hua Tan},
title={Subjective Annotations for Vision-based Attention Level Estimation},
booktitle={Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 5: VISAPP},
year={2019},
pages={249-256},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007311402490256},
isbn={978-989-758-354-4},
}
in EndNote Style
TY - CONF
JO - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 5: VISAPP
TI - Subjective Annotations for Vision-based Attention Level Estimation
SN - 978-989-758-354-4
AU - Coifman A.
AU - Rohoska P.
AU - Kristoffersen M.
AU - Shepstone S.
AU - Tan Z.
PY - 2019
SP - 249
EP - 256
DO - 10.5220/0007311402490256
PB - SciTePress