A total of ten participants were recruited, which generated a medium volume of gaze data. Recruitment was by random selection of people visiting the canteen on the survey day, provided they were willing to take part in the study and gave written informed consent. The inclusion of ten participants for a pilot feasibility study was determined by the length of the lunch break, about half an hour per person, over an entire lunch period of two hours between 12:00 noon and 2:00 p.m. on the day of data collection, and by the use of a single computer-based terminal equipped with visual analytics for data gathering. Even though participation was randomly determined, participation bias cannot be ruled out: those inherently more interested in nutrition may have been more likely to take part than those less interested. The study was carried out by one member of the study personnel on the survey day; the software developed for this pilot study integrated a self-training sequence for those conducting the study, allowing easy use and applicability.
Dietary patterns included non-vegetarian and vegetarian; non-vegetarian was defined as omnivore, healthy omnivore nutrition (Deutsche Gesellschaft für Ernährung e. V., 2020), and paleo-diet. Vegetarian
was defined as ovo-lacto-vegetarian, ovo-vegetarian,
lacto-vegetarian and vegan diets; raw-food, whole-
food and flexitarian diets; and pesco-vegetarian diet.
Dietary pattern was assumed to indicate long-term nu-
tritional pattern and behavior. Food choices were de-
fined by canteen menus that were chosen by the study
participants on the survey-day. Food choices were
assumed to indicate short-term nutritional behavior.
There were five menus to choose from on the study day.
Background data were collected on age, gender, and
country of birth.
For the mobile hardware platform, a standard mid-range laptop with a built-in, detachable low-cost eye-tracking device was used. When estimating the computing power required for the mobile software solution, we determined that a low-cost laptop would not be able to simultaneously handle the multi-threading necessary for synchronous visualization of the stimuli and the storage of gaze data. We hence used a standard mid-range laptop for the pilot study; its specifications were an Intel i5 CPU, equipped with a Tobii EyeX eye tracker.
As seen in Figure 1, our software covered the whole data-acquisition process, including human-machine interaction, e.g., for the questionnaire and for calibration of the eye-tracking device. We provide this software as free, open-source software (the source code can be downloaded at https://www.hs-osnabrueck.de/prof-dr-julius-schoening/chira2020). Written in C++, it is capable of providing a timed graphical user interface for presenting the stimuli at defined times and is interoperable with different low-cost eye-tracking devices. The software, given as open source for public research, is built on the free, open-source widget toolkit Qt.
A simple calibration routine was implemented to suit the low-cost eye-tracking device while minimizing the time each participant spent on the study. As shown in Figure 1(b), the participant was asked to fixate on a cross in the middle of the screen. Once the computer recognized a fixation of over one second within an offset of 50 pixels, the software re-calibrated the gaze tracking.
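The calibration step above can be sketched as follows. This is a minimal, hypothetical Python illustration of the described logic, not the C++ implementation of the study software; the sample rate, function names, and the mean-offset correction are our assumptions.

```python
import math

FIXATION_DURATION_S = 1.0   # required fixation time (from the paper)
MAX_OFFSET_PX = 50.0        # allowed distance from the cross (from the paper)

def is_valid_fixation(samples, cross, sample_rate_hz=60):
    """samples: list of (x, y) gaze points; cross: (x, y) target position.

    Accept the fixation when one second's worth of the most recent
    samples all lie within 50 pixels of the on-screen cross.
    """
    needed = int(FIXATION_DURATION_S * sample_rate_hz)
    if len(samples) < needed:
        return False
    recent = samples[-needed:]
    return all(math.dist(p, cross) <= MAX_OFFSET_PX for p in recent)

def calibration_offset(samples, cross, sample_rate_hz=60):
    """Mean offset vector (dx, dy) to subtract from subsequent gaze samples."""
    needed = int(FIXATION_DURATION_S * sample_rate_hz)
    recent = samples[-needed:]
    dx = sum(p[0] for p in recent) / len(recent) - cross[0]
    dy = sum(p[1] for p in recent) / len(recent) - cross[1]
    return dx, dy
```

In such a scheme, the computed offset would simply be subtracted from all later gaze coordinates, which keeps the routine fast enough to run between stimuli.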
The study design required that the stimuli presentation, as illustrated in Figure 1(c), be flexible, showing the day's dishes available at the canteen. The stimuli were not compiled into the software. Rather, the stimuli, e.g., photos of dishes, were placed in a specified folder next to the executable. In this way, the pilot study arrived at an easy, simple, and flexible solution for interchangeable stimuli. The order as well as the duration of each stimulus was set by a naming convention. For example, the photograph titled "dish1_3000.jpg" was shown for 3000 ms, followed by "dish2_1242.jpg", and so on. To avoid restarting the software repeatedly, a guided dialog was provided for the survey instructor, allowing back-and-forth movements with instruction repeats. Thus, the software allowed easy, on-site training for people conducting the study, thereby saving time on study-specific schooling and training of personnel.
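The naming convention can be illustrated with a short sketch. This is an assumed reading of the convention (index and duration encoded in the file name, separated by an underscore), not the actual parsing code of the C++ software:

```python
import re

# "dish1_3000.jpg" -> stimulus 1, shown for 3000 ms (per the convention above)
STIMULUS_PATTERN = re.compile(r"^dish(\d+)_(\d+)\.jpg$")

def build_schedule(filenames):
    """Return (filename, duration_ms) tuples sorted by stimulus index,
    ignoring any files that do not follow the naming convention."""
    schedule = []
    for name in filenames:
        match = STIMULUS_PATTERN.match(name)
        if match:
            index, duration_ms = int(match.group(1)), int(match.group(2))
            schedule.append((index, name, duration_ms))
    return [(name, ms) for _, name, ms in sorted(schedule)]
```

Reading the schedule from the folder at start-up is what makes the stimuli interchangeable without recompiling the software.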
To bring participants back after lunch, a reward for the study participants was built into the study design. Participants received a visualization of their gaze in a format compatible with standard multimedia players (Schöning et al., 2017a). This is described in detail in the following section.
3.3 Analysis
For the scene-perception analysis, the gaze trajectories on each dish were visualized for our investigation. To this end, static heat-maps, as shown in Figure 2, were generated for each shown dish per participant by a simple Python script. In addition to the five different dishes, cf. Subsection 3.1, a white screen was presented as well for zero-error correction calibration. For further analyses, the differences between the gaze patterns when hungry and when satiated were visualized.
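A heat-map of this kind can be sketched as follows. This is a hedged reconstruction of what such a "simple Python script" might do (2-D histogram of gaze points over the screen, smoothed with a Gaussian kernel); the resolution, bin counts, and kernel width are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def gaze_heatmap(points, width=1920, height=1080, bins=(48, 27), sigma=1.5):
    """points: iterable of (x, y) gaze coordinates in pixels.

    Returns a 2-D array (rows x columns) normalized to [0, 1] that can be
    rendered as a static heat-map overlay on the dish photograph.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # Bin gaze samples into a coarse grid over the screen.
    hist, _, _ = np.histogram2d(
        ys, xs, bins=bins[::-1], range=[[0, height], [0, width]]
    )
    # Separable Gaussian blur turns raw counts into a smooth density map.
    radius = int(3 * sigma)
    offsets = np.arange(-radius, radius + 1)
    kernel = np.exp(-offsets**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, hist
    )
    blurred = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, blurred
    )
    return blurred / blurred.max() if blurred.max() > 0 else blurred
```

One such map per dish and participant is then enough to place the hungry and satiated conditions side by side for visual inspection.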
Following hypothesis 1), the resulting heat-maps were arranged next to each other for visual inspection. In addition to the heat-maps per participant,
CHIRA 2020 - 4th International Conference on Computer-Human Interaction Research and Applications