3:15 PM
Synthetic Gaze Data Augmentation for Improved User Calibration
by Gonzalo Garde, Andoni Larumbe-Bergera, Sonia Porta, Rafael Cabeza and Arantxa Villanueva
Abstract, Slides, Video
In this paper, we focus on the calibration possibilities of a deep-learning-based gaze estimation process applying transfer learning, comparing its performance when the pretrained model uses a general dataset versus a gaze-specific dataset. Subject calibration has been demonstrated to improve gaze accuracy in high-performance eye trackers. Hence, we explore the potential of a deep learning gaze estimation model for subject calibration employing fine-tuning procedures. A pretrained ResNet-18 network, which performs well in many computer vision tasks, is fine-tuned using user-specific data in a few-shot adaptive gaze estimation approach. We study the impact of pretraining the model with a synthetic dataset, U2Eyes, before addressing gaze estimation calibration on a real dataset, I2Head. The results show that the success of individual calibration largely depends on the balance between fine-tuning and standard supervised learning procedures, and that using a gaze-specific dataset to pretrain the model improves accuracy when few images are available for calibration. This paper shows that calibration is feasible in low-resolution scenarios, providing outstanding accuracies below 1.5° of error.
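As a rough sketch of the few-shot fine-tuning idea described above (assuming PyTorch and torchvision; the frozen layers, learning rate, and epoch count are illustrative choices, not the authors' exact configuration):

import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 backbone; in the paper the pretraining is done either on
# a general dataset or on the synthetic gaze dataset U2Eyes.
model = models.resnet18(weights="IMAGENET1K_V1")

# Replace the classification head with a 2-D regression head (gaze x, y).
model.fc = nn.Linear(model.fc.in_features, 2)

# Freeze the early layers; only the last block and the head adapt to the user.
for name, param in model.named_parameters():
    if not (name.startswith("layer4") or name.startswith("fc")):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.MSELoss()

def calibrate(model, calib_images, calib_targets, epochs=20):
    """Few-shot fine-tuning on a handful of user-specific calibration samples."""
    model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        pred = model(calib_images)           # (N, 2) gaze estimates
        loss = loss_fn(pred, calib_targets)  # against ground-truth gaze points
        loss.backward()
        optimizer.step()
    return model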
3:35 PM
Eye Movement Classification With Temporal Convolutional Networks
by Carlos Eduardo Elmadjian, Candy Veronica Gonzales and Carlos Hitoshi Morimoto
Abstract, Slides, Video
Recently, deep learning approaches have been proposed to detect eye movements such as fixations, saccades, and smooth pursuits from eye tracking data. These are end-to-end methods that have been shown to surpass traditional ones, requiring no ad hoc parameters. In this work we propose the use of temporal convolutional networks (TCNs) for automated eye movement classification and investigate the influence of feature space, scale, and context window sizes on the classification results. We evaluated the performance of TCNs against a state-of-the-art 1D CNN-BLSTM model using GazeCom, a publicly available dataset. Our results show that TCNs can outperform the 1D CNN-BLSTM, achieving F-scores of 94.2% for fixations, 89.9% for saccades, and 73.7% for smooth pursuits at the sample level, and 89.6%, 94.3%, and 60.2% at the event level. We also discuss the advantages of TCNs over sequential networks for this problem, and how these scores can be further improved by feature space extension.
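A minimal sketch of a TCN of this kind, assuming PyTorch; the channel widths, dilation schedule, and two input features are illustrative, and symmetric rather than causal padding is used for brevity:

import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    """One dilated 1-D convolutional block with a residual connection."""
    def __init__(self, channels, dilation, kernel_size=3):
        super().__init__()
        pad = (kernel_size - 1) * dilation // 2   # keeps sequence length
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=pad, dilation=dilation)
        self.relu = nn.ReLU()
    def forward(self, x):
        return self.relu(x + self.conv(x))

class EyeMovementTCN(nn.Module):
    """Sample-level classifier: fixation / saccade / smooth pursuit."""
    def __init__(self, in_features=2, channels=32, n_classes=3, levels=4):
        super().__init__()
        self.inp = nn.Conv1d(in_features, channels, kernel_size=1)
        self.blocks = nn.Sequential(
            *[TCNBlock(channels, dilation=2 ** i) for i in range(levels)])
        self.out = nn.Conv1d(channels, n_classes, kernel_size=1)
    def forward(self, x):          # x: (batch, features, time)
        return self.out(self.blocks(self.inp(x)))  # (batch, classes, time)

# Example: a batch of 8 windows, 2 gaze features (x, y), 257 samples each.
logits = EyeMovementTCN()(torch.randn(8, 2, 257))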
3:55 PM
A Web-Based Eye Tracking Data Visualization Tool
by Hristo Bakardzhiev, Marloes van der Burgt, Eduardo Martins, Bart van den Dool, Chyara Jansen, David van Scheppingen, Guenter Wallner and Michael Burch
Abstract, Slides, Video
Visualizing eye tracking data can provide insights in many research fields. However, visualizing such data efficiently and cost-effectively is challenging without well-designed tools. Easily accessible web-based approaches equipped with intuitive and interactive visualizations are a promising solution. Many such tools already exist; however, they mostly rely on one specific visualization technique. In this paper, we describe a web application that uses a combination of different visualization methods for eye tracking data. The visualization techniques are interactively linked to provide several perspectives on the eye tracking data. We conclude the paper by discussing challenges, limitations, and future work.
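To illustrate the kind of complementary views such a tool might link (a sketch only, using Python and matplotlib rather than the web stack of the actual application; the fixation data are synthetic):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical fixation data: x, y in screen pixels, duration in ms.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1920, 40), rng.uniform(0, 1080, 40)
dur = rng.uniform(80, 400, 40)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))

# View 1: heatmap (attention density aggregated over fixation durations).
ax1.hexbin(x, y, C=dur, gridsize=20, reduce_C_function=np.sum, cmap="hot")
ax1.set_title("Gaze heatmap")

# View 2: scanpath (temporal order of fixations, marker size = duration).
ax2.plot(x, y, "-", color="gray", linewidth=0.8)
ax2.scatter(x, y, s=dur, alpha=0.6)
ax2.set_title("Scanpath")

for ax in (ax1, ax2):
    ax.set_xlim(0, 1920)
    ax.set_ylim(1080, 0)   # invert y: screen coordinates grow downward
plt.show()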
4:35 PM
Judging Qualification, Gender, and Age of Observer Based on Gaze Patterns When Looking at Faces
by Pawel Kasprowski, Katarzyna Harezlak, Piotr Fudalej and Pawel Fudalej
Abstract, Slides, Video
The research aimed to compare eye movement patterns of people looking at faces with different but subtle teeth imperfections. Both non-specialists and dental experts took part in the experiment. The research outcome includes the analysis of eye movement patterns depending on specialization, gender, age, face gender, and level of teeth deformation. The study was performed using novel, not widely explored features of eye movements derived from recurrence plots and Gaze Self-Similarity Plots. It turned out that most features differ significantly between laypeople and specialists. Significant differences were also found for gender and age among the observers. No differences were found when comparing the gender of the face being observed or the levels of imperfection. Interestingly, it was possible to determine which features are sensitive to gender and which to qualification.
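A minimal sketch of the recurrence-plot idea underlying these features, assuming NumPy; the distance threshold is an illustrative parameter, and only one derived measure (recurrence rate) is shown:

import numpy as np

def recurrence_matrix(fixations, radius=50.0):
    """Binary recurrence plot: R[i, j] = 1 if fixations i and j land
    within `radius` pixels of each other."""
    d = np.linalg.norm(fixations[:, None, :] - fixations[None, :, :], axis=-1)
    return (d <= radius).astype(int)

def recurrence_rate(R):
    """Share of recurrent fixation pairs (excluding the diagonal)."""
    n = R.shape[0]
    off_diag = R.sum() - n          # diagonal entries are always 1
    return off_diag / (n * (n - 1))

fixations = np.array([[100, 120], [105, 118], [400, 300], [102, 125]], float)
R = recurrence_matrix(fixations)
print(recurrence_rate(R))           # 0.5 for this toy sequence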
4:55 PM
Predicting Reading Speed From Eye-Movement Measures
by Ádám Nárai, Kathleen Kay Amora, Zoltán Vidnyánszky and Béla Weiss
Abstract, Slides, Video
Examining eye-movement measures makes it possible to understand the intricacies of reading processes. Previous studies have identified some eye-movement measures, such as fixation time and the number of progressive and regressive saccades, as possible major indices for measuring silent reading speed; however, these have not been intensively and systematically investigated. The purpose of this study was to exhaustively reveal the functions of different global eye-movement measures and their contribution to reading speed using linear regression analysis. Twenty-four young adults underwent an eye-tracking experiment while reading text paragraphs. Reading speed and a set of twenty-three eye-movement measures, including properties of saccades, glissades and fixations, were estimated. Correlation analysis indicated multicollinearity between several eye-movement measures, and accordingly, linear regression with elastic net regularization was used to model reading speed with eye-movement explanatory variables. Regression analyses revealed the capability of progressive saccade frequency and of the number of progressive saccades normalized by the number of words to predict reading speed. Furthermore, the results supported claims in the existing literature that reading speed depends on fixation duration, as well as on the amplitude, number and percentage of progressive saccades, and also indicated the potential importance of glissade measures for a deeper understanding of reading processes. Our findings indicate that the applied linear regression modeling approach could eventually identify important eye-movement measures related to different reading performance metrics, which could potentially improve the assessment of reading abilities.
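A minimal sketch of this modeling approach, assuming scikit-learn; the data here are synthetic stand-ins for the 24 subjects and 23 measures, not the study's data:

import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical design matrix: rows = readers, columns = eye-movement
# measures (fixation duration, saccade amplitudes, glissade properties, ...).
rng = np.random.default_rng(1)
X = rng.normal(size=(24, 23))     # 24 subjects, 23 measures
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=24)  # speed

# Elastic net copes with multicollinearity between correlated measures by
# combining L1 (sparsity) and L2 (grouping) penalties; CV picks the strength.
model = make_pipeline(StandardScaler(),
                      ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5))
model.fit(X, y)
coefs = model.named_steps["elasticnetcv"].coef_
print("non-zero predictors:", np.flatnonzero(coefs))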
5:15 PM
Investigating the Effect of Inter-Letter Spacing Modulation on Data-Driven Detection of Developmental Dyslexia Based on Eye-Movement Correlates of Reading: A Machine Learning Approach
by János Szalma, Kathleen Kay Amora, Zoltán Vidnyánszky and Béla Weiss
Abstract, Slides, Video
Developmental dyslexia is a reading disability estimated to affect between 5 and 10 percent of the population. However, current screening methods are limited, as they tell very little about the oculomotor processes underlying natural reading. Accordingly, investigating the eye-movement correlates of reading in a machine learning framework could potentially enhance the detection of poor readers. Here, the capability of eye-movement measures to classify dyslexic and control young adults (24 dyslexic, 24 control) was assessed on eye-tracking data acquired during reading of isolated sentences presented at five inter-letter spacing levels. The set of 65 eye-movement features included properties of fixations, saccades and glissades. Classification accuracy and the importance of features were assessed for all spacing levels by aggregating the results of five feature selection methods. The highest classification accuracy (73.25%) was achieved for an increased spacing level, while the worst classification performance (63%) was obtained for the minimal spacing condition. However, the classification performance did not differ significantly between these two spacing levels (p=0.28). The most important features contributing to the best classification performance across the spacing levels were as follows: median of progressive and all saccade amplitudes, median of fixation duration, and interquartile range of forward glissade duration. Selection frequency was even for the median of fixation duration, while the median amplitude of all and of forward saccades exhibited complementary distributions across the spacing levels. The results suggest that although the importance of features may vary with the size of inter-letter spacing, the classification performance remains invariant.
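A minimal sketch of aggregating feature selection methods in this spirit, assuming scikit-learn; the two selectors and the synthetic data are stand-ins, not the paper's five methods or its measurements:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in data: 48 subjects (24 dyslexic, 24 control), 65 features.
X, y = make_classification(n_samples=48, n_features=65, n_informative=8,
                           random_state=0)

# Two of several possible selection methods; the paper aggregates five.
selectors = {
    "anova": SelectKBest(f_classif, k=10),
    "mutual_info": SelectKBest(mutual_info_classif, k=10),
}

# Aggregate importance as the frequency with which each feature is selected.
votes = np.zeros(X.shape[1])
for sel in selectors.values():
    sel.fit(X, y)
    votes[sel.get_support(indices=True)] += 1
top = np.argsort(votes)[::-1][:10]

# Cross-validated accuracy of a classifier on the consensus features.
acc = cross_val_score(LogisticRegression(max_iter=1000), X[:, top], y, cv=5)
print("mean accuracy:", acc.mean())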
Eye-Movement Patterns and Viewing Biases During Visual Scene Processing
In this presentation, I will review eye-movement patterns and viewing biases of observers watching an image onscreen. It will mainly consist of discussing eye-tracking data collected in different experimental conditions, involving different populations (e.g., young children vs. adults [1], neurotypical observers vs. observers with ASD (Autism Spectrum Disorders) [2]) and different kinds of stimuli (e.g., natural scenes, webpages, paintings [3]). The discussion will, however, be strongly oriented towards the computational modelling of visual attention. Most existing approaches make strong assumptions about eye movements and about the existence of a universal saliency map indicating where we look.
I aim to push forward the idea that we have to change this paradigm [4] and put observers at the center of the design of saliency models. Observers have to become the key ingredient when it comes to simulating our visual behavior.
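As a toy illustration of this alternative view, the sketch below samples a scanpath by weighting a bottom-up saliency map with an observer-dependent proximity prior, instead of reading fixations directly off a universal saliency map; the sampling rule and parameters are illustrative, not those of the cited models:

import numpy as np

def sample_scanpath(saliency, n_fix=10, sigma=80.0, rng=None):
    """Toy saccadic model: each fixation is drawn from a map combining
    bottom-up saliency with an observer-specific saccade-amplitude prior
    (a Gaussian centered on the current fixation)."""
    rng = rng or np.random.default_rng()
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    fix = (h // 2, w // 2)                 # start at the image center
    path = [fix]
    for _ in range(n_fix - 1):
        prior = np.exp(-((ys - fix[0]) ** 2 + (xs - fix[1]) ** 2)
                       / (2 * sigma ** 2))
        p = (saliency * prior).ravel()
        p /= p.sum()
        idx = rng.choice(p.size, p=p)      # stochastic fixation selection
        fix = (idx // w, idx % w)
        path.append(fix)
    return path

# Example with a synthetic saliency map.
sal = np.random.default_rng(2).random((240, 320))
print(sample_scanpath(sal, n_fix=5))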
Please find below some references to my work.
[1] Le Meur, O., Coutrot, A., Liu, Z., Rämä, P., Le Roch, A., and Helo, A. (2017). Visual attention saccadic models learn to emulate gaze patterns from childhood to adulthood. IEEE Transactions on Image Processing, 26(10), 4777-4789.
[2] Le Meur, O., Nebout, A., Cherel, M., and Etchamendy, E. (2020). From Kanner Autism to Asperger Syndromes, the Difficult Task to Predict Where ASD People Look at. IEEE Access, 8, 162132-162140.
[3] Le Meur, O., Le Pen, T., and Cozot, R. (2020). Can we accurately predict where we look at paintings? PLOS ONE, 15(10), e0239980.
[4] Le Meur, O., and Liu, Z. (2015). Saccadic model of eye movements for free-viewing condition. Vision Research, 116, 152-164.