particular, could have altered our results, as male individuals often show a stronger interest in technical innovations and report lower insecurity towards them (Goswami & Dutta, 2015).
Possible age-related differences could be explored more clearly in future studies if intermediate age groups between 30 and 49 were included in the sample. In this regard, age should be entered as a continuous independent variable in an appropriate regression model rather than categorized into age groups, in order to strengthen the results (Streiner, 2002; Sauerbrei & Royston, 2010).
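To illustrate this recommendation, a minimal sketch of such an analysis is given below, using the Python package statsmodels; the data file, variable names, and model terms are hypothetical placeholders, and the repeated-measures structure of the design is ignored for brevity.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per participant and condition (RL vs. VR),
    # with columns "score", "age", "age_group", and "condition".
    df = pd.read_csv("assessment_data.csv")

    # Categorized age: information within each age group is discarded.
    model_grouped = smf.ols("score ~ C(age_group) * condition", data=df).fit()

    # Continuous age: preserves the full age information and statistical power.
    model_continuous = smf.ols("score ~ age * condition", data=df).fit()

    print(model_grouped.summary())
    print(model_continuous.summary())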
A systematic bias from participant recruitment cannot be fully ruled out, as the advertisements on digital bulletin boards might have led to an overrepresentation of individuals with a stronger interest in digital and immersive research technologies.
4.3 Conclusion
This study investigated the effects on participants’ cognitive performance and affective state of conducting an assessment in a VR scenario created from an image of the real world.
Regarding technical aspects, the lab.js builder, with its modular principle of basic components, proved suitable for developing experimental tests in RL as well as in VR, provided it is adapted correctly. Neither the virtual environment, the handheld controller, nor the headset disturbed the participants. In conclusion, VR environments for cognitive assessment seem to have no significant effect on participants’ affective state compared to RL, offering a promising opportunity for further use of VR without any loss of cognitive capacity, e.g., in treating or preventing mental illness.
Other applications of VR in educational contexts have reported contrasting findings (Parong et al., 2021), with participants’ affective state showing slightly higher arousal. Differences in the presented content and a longer duration of the performance-related context might account for this discrepancy from our findings. However, in their valuable groundwork on predicting affective states within VR from subjects’ head movements, Holzwarth et al. (2021) reported relatively stable affective states during the entry phase of a VR scenario.
Admittedly, a more thorough assessment with a larger and more representative sample could produce different results and should therefore be conducted in a future study. Nevertheless, the finding is promising, as it supports the use of this new technology to develop innovative paradigms for both basic and applied research. In this context, we have already shown the feasibility of cognitive performance assessments with an age-diverse sample in a comparable study (Vahle et al., 2021). Other studies have successfully induced avatar-age-group-specific differences in physical and cognitive performance, e.g., Vahle and Tomasik (2021).
Thus, the present research provides crucial groundwork for applying this novel technique to self-reflexive stereotype research, in which young participants experience the embodiment of a virtual old-age avatar.
ACKNOWLEDGEMENTS
We would like to thank all those who volunteered to
participate in our study, the team of Pointreef for its
continuous technical support of the VR scenario, and
Hannah Butt for her support during data collection.
REFERENCES
Anderson-Hanley, C., Arciero, P.J., Brickman, A.M., Nimon, J.P., Okuma, N., Westen, S.C., Merz, M.E., Pence, B.D., Woods, J.A., Kramer, A.F., & Zimmerman, E.A. (2012). Exergaming and older adult cognition: A cluster randomized clinical trial. American Journal of Preventive Medicine, 42(2), 109–119. https://doi.org/10.1016/j.amepre.2011.10.016
Berch, D.B., Krikorian, R., & Huha, E.M. (1998). The Corsi block-tapping task: Methodological and theoretical considerations. Brain and Cognition, 38(3), 317–338. https://doi.org/10.1006/brcg.1998.1039
Berkovits, I., Hancock, G.R., & Nevitt, J. (2000). Bootstrap resampling approaches for repeated measure designs: Relative robustness to sphericity and normality violations. Educational and Psychological Measurement, 60(6), 877–892. https://doi.org/10.1177/00131640021970961
Bradley, M.M., & Lang, P.J. (1994). Measuring emotion: The self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry, 25(1), 49–59. https://doi.org/10.1016/0005-7916(94)90063-9
Bridges, D., Pitiot, A., MacAskill, M.R., & Peirce, J.W. (2020). The timing mega-study: comparing a range of experiment generators, both lab-based and online. PeerJ, 8, e9414. https://doi.org/10.7717/peerj.9414
Goswami, A., & Dutta, S. (2015). Gender differences in technology usage — A literature review. Open Journal of Business and Management, 4(1), 51–59.
Henninger, F., Shevchenko, Y., Mertens, U.K., Kieslich, P.J., & Hilbig, B.E. (2019). lab.js: A free, open, online study builder. https://doi.org/10.31234/osf.io/fqr49