Elbattah, M. (2020). GitHub Repository:
https://github.com/Mahmoud-Elbattah/NCTA2020
Elbattah, M., Carette, R., Dequen, G., Guérin, J. L., & Cilia,
F. (2019). Learning Clusters in Autism Spectrum
Disorder: Image-Based Clustering of Eye-Tracking
Scanpaths with Deep Autoencoder. In Proceedings of
the 41st Annual International Conference of the IEEE
Engineering in Medicine and Biology Society (EMBC),
(pp. 1417-1420). IEEE.
Fuhl, W. (2020). Fully Convolutional Neural Networks for
Raw Eye Tracking Data Segmentation, Generation, and
Reconstruction. arXiv preprint arXiv:2002.10905.
Guillon, Q., Hadjikhani, N., Baduel, S., & Rogé, B. (2014).
Visual social attention in autism spectrum disorder:
Insights from eye tracking studies. Neuroscience &
Biobehavioral Reviews, 42, 279-297.
Henderson, J. M. (2003). Human gaze control during real-
world scene perception. Trends in Cognitive Sciences,
7(11), 498-504.
Huey, E. B. (1908). The psychology and pedagogy of reading. New York, NY: The Macmillan Company.
Jacob, R. J. (1995). Eye tracking in advanced interface design. In W. Barfield & T. A. Furness (Eds.), Virtual Environments and Advanced Interface Design (pp. 258–288). New York: Oxford University Press.
Javal, L. (1878). Essai sur la physiologie de la lecture [Essay on the physiology of reading]. Annales d'Oculistique, 80, 240–274.
Javal, L. (1879). Essai sur la physiologie de la lecture [Essay on the physiology of reading]. Annales d'Oculistique, 82, 242–253.
Khalighy, S., Green, G., Scheepers, C., & Whittet, C.
(2015). Quantifying the qualities of aesthetics in
product design using eye-tracking technology.
International Journal of Industrial Ergonomics, 49, 31-
43.
Khushaba, R. N., Wise, C., Kodagoda, S., Louviere, J., Kahn, B. E., & Townsend, C. (2013). Consumer neuroscience: Assessing the brain response to marketing stimuli using electroencephalogram (EEG) and eye tracking. Expert Systems with Applications, 40(9), 3803-3812.
Kingma, D. P., & Ba, J. (2015). Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR).
Klein, A., Yumak, Z., Beij, A., & van der Stappen, A. F. (2019). Data-driven gaze animation using recurrent neural networks. In Proceedings of the ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG) (pp. 1-11). ACM.
Le, B. H., Ma, X., & Deng, Z. (2012). Live speech driven head-and-eye motion generators. IEEE Transactions on Visualization and Computer Graphics, 18(11), 1902-1914.
Lee, S. P., Badler, J. B., & Badler, N. I. (2002). Eyes alive. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques (pp. 637-644).
Ma, X., & Deng, Z. (2009). Natural eye motion synthesis
by modeling gaze-head coupling. In Proceedings of the
IEEE Virtual Reality Conference (pp. 143-150). IEEE.
Majaranta, P., & Bulling, A. (2014). Eye tracking and eye-based human–computer interaction. In S. Fairclough & K. Gilleade (Eds.), Advances in Physiological Computing (Human–Computer Interaction Series). London: Springer.
Mele, M. L., & Federici, S. (2012). Gaze and eye-tracking
solutions for psychological research. Cognitive
Processing, 13(1), 261-265.
Meißner, M., Pfeiffer, J., Pfeiffer, T., & Oppewal, H.
(2019). Combining virtual reality and mobile eye
tracking to provide a naturalistic experimental
environment for shopper research. Journal of Business
Research, 100, 445-458.
Oyekoya, O., Steptoe, W., & Steed, A. (2009). A saliency-
based method of simulating visual attention in virtual
scenes. In Proceedings of the 16th ACM Symposium on
Virtual Reality Software and Technology (pp. 199-206).
Steptoe, W., Oyekoya, O., & Steed, A. (2010). Eyelid kinematics for virtual characters. Computer Animation and Virtual Worlds, 21(3-4), 161-171.
Trutoiu, L. C., Carter, E. J., Matthews, I., & Hodgins, J. K.
(2011). Modeling and animating eye blinks. ACM
Transactions on Applied Perception (TAP), 8(3), 1-17.
Zemblys, R., Niehorster, D. C., & Holmqvist, K. (2019).
gazeNet: End-to-end eye-movement event detection
with deep neural networks. Behavior Research
Methods, 51(2), 840-864.
Zhai, S. (2003). What's in the eyes for attentive input.
Communications of the ACM, 46(3), 34-39.