5.2 Factors Influencing the Reading of Captioning
In Experiment 2, we compared how two caption contents, A and B, and two levels of simplicity, the original and the simplified texts, affected the task. We found a significant difference in processing time depending on the content of the captions rather than on their number of characters. Previous studies have pointed out that the content of captioning has a greater influence on reading than the number of characters or the display speed; we found the same tendency for Japanese, in which phonetic and ideographic characters are mixed.
Whereas previous studies have relied on subjective evaluations such as questionnaires, the present study demonstrated an objective evaluation index based on the simple measure of key-input speed.
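To make the index concrete, the following minimal Python sketch shows one way a key-input-speed measure could be computed; the keystroke log, time windows, and values are illustrative assumptions rather than the logging setup used in our experiments.

# Minimal sketch of a key-input-speed index; the keystroke log and
# time windows are illustrative assumptions, not the experimental setup.
def input_speed(timestamps, window_start, window_end):
    """Keystrokes per second within [window_start, window_end)."""
    count = sum(window_start <= t < window_end for t in timestamps)
    duration = window_end - window_start
    return count / duration if duration > 0 else 0.0

# Example: typing speed while a caption is shown vs. a no-caption baseline.
keystrokes = [0.4, 0.9, 1.3, 2.0, 2.6, 3.5, 4.8, 6.1, 6.4, 6.9]  # seconds
baseline = input_speed(keystrokes, 0.0, 3.0)  # before the caption appears
during = input_speed(keystrokes, 3.0, 7.0)    # while the caption is shown
print(f"baseline={baseline:.2f}/s, caption={during:.2f}/s, "
      f"slowdown={1.0 - during / baseline:.0%}")

A drop in key-input speed during caption presentation can then serve as an objective proxy for the reading load imposed by the caption.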
6 CONCLUSIONS
We conducted experiments to clarify the effectiveness of an objective evaluation index for determining appropriate captioning for people who are deaf or hard of hearing (DHH), assuming a setting in which such people are assisted by captions presented using AR technology. In an experiment in which symbols, icons, or captions were presented in the peripheral vision of people who are DHH, keystroke speed varied appropriately with the presented information. In other words, key-input speed can be used to evaluate the reading of captioning.
In our experiment using captions of varying content and simplicity, our proposed objective evaluation method confirmed that content affected the reading of captioning.
Compared with the experiment using symbols, the standard deviation of task processing time tended to be larger in the experiment using captions, indicating large individual differences in the reading of captions. It has been pointed out that the effectiveness of captions is strongly related to individual reading skill (Lewis & Jackson, 2001); this point must be clarified in future work.
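As a simple illustration of this variability comparison, the following sketch computes per-condition means and standard deviations of task processing time; the sample values below are placeholders, not data from our experiments.

from statistics import mean, stdev

# Placeholder task processing times (s); not data from the experiments.
times_symbols = [4.1, 4.3, 4.0, 4.5, 4.2]
times_captions = [4.0, 6.8, 3.9, 7.5, 5.1]  # wider spread across readers

for label, times in (("symbols", times_symbols), ("captions", times_captions)):
    print(f"{label}: mean = {mean(times):.2f} s, sd = {stdev(times):.2f} s")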
ACKNOWLEDGEMENTS
This work was supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI (Grant Number JP18H03660).
REFERENCES
Bavelier, D., Tomann, A., et al., 2000. Visual attention to the periphery is enhanced in congenitally deaf individuals. Journal of Neuroscience, 20(17), RC93.
Bosworth, R.G., Dobkins, K.R., 2002. Visual field asymmetries for motion processing in deaf and hearing signers. Brain and Cognition, 49(1), 170–181.
Brünken, R., Plass, J.L., Leutner, D., 2003. Direct measurement of cognitive load in multimedia learning. Educational Psychologist, 38(1), 53–61.
Constantinou, V., Loizides, F., et al., 2016. A personal tour of cultural heritage for deaf museum visitors. Progress in Cultural Heritage: Documentation, Preservation, and Protection. EuroMed 2016. Lecture Notes in Computer Science, 10059. Springer, Cham, 214–221.
Díaz-Cintas, J., Remael, A., 2007. Audiovisual translation:
Subtitling. Manchester & Kinderhook, St. Jerome.
Findlater, L., Chinh, B., et al., 2019. Deaf and Hard-of-Hearing Individuals' Preferences for Wearable and Mobile Sound Awareness Technologies. 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19), 46, 1–13.
Gonzalez Vargas, J.C., Fabregat, R., Carrillo-Ramos, A., Jove, T., 2020. Survey: Using Augmented Reality to Improve Learning Motivation in Cultural Heritage Studies. Applied Sciences, 10(3), 897.
Ishiguro, Y., Rekimoto, J., 2011. Peripheral vision annotation: Noninterference information presentation method for mobile augmented reality. AH ’11: 2nd Augmented Human International Conference, (8), 1–5.
Kato, N., Kitamura, M., Namatame, M., et al., 2020. How to make captioning services for deaf and hard of hearing visitors more effective in museums? 12th International Conference on Education Technology and Computers (ICETC ’20), 157–160.
Lewis, M., Jackson, D., 2001. Television Literacy: Comprehension of Program Content Using Closed Captions for the Deaf. The Journal of Deaf Studies and Deaf Education, 6(1), 43–53.
Namatame, M., et al., 2019. Can exhibit-explanations in sign language contribute to the accessibility of aquariums? HCI International 2019, 289–294.
Olwal, A., Balke, K., Votintcev, D., et al., 2020. Wearable Subtitles: Augmenting Spoken Communication with Lightweight Eyewear for All-day Captioning. Annual ACM Symposium on User Interface Software and Technology (UIST '20), 1108–1120.
Shadiev, R., Hwang, W., et al., 2014. Review of speech-to-text recognition technology for enhancing learning. Journal of Educational Technology & Society, 17(4), 65–84.
Yoon, J.O., Kim, M., 2011. The effects of captions on deaf students' content comprehension, cognitive load, and motivation in online learning. American Annals of the Deaf, 156(3), 283–289.