a characteristic of the learning support system. The learner stopped and produced nonverbal vocalizations. For reference, the views from the Kinect sensor are shown in Figures 17 and 18. These results show that we can obtain a noncontact recording of a moving learner's voice while they interact with the learning support system. We therefore conclude that KIKIMIMI is a suitable system for automating post-evaluation.
5 CONCLUSIONS
In this paper, we proposed an evaluation system called "KIKIMIMI" for automating the post-evaluation of a learning support system based on reactions in the learner's voice. The focus of this study was on recording the learner's voice for post-evaluation. We selected a voice-separation technique because it enables noncontact recording and can locally capture sound from a targeted direction. We captured the learner's head-position information, together with time stamps, from the depth sensor. In this way, we aimed to obtain a noncontact recording of a moving learner's voice while they interact with the learning support system.
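As a rough illustration of the underlying idea, the sketch below combines the two ingredients named above: a head position reported by the depth sensor steers a delay-and-sum beamformer over the microphone signals. The array geometry, sample rate, and function names are assumptions made for illustration, not the authors' actual implementation.

```python
# Minimal sketch: delay-and-sum beamforming steered at a tracked head
# position. Array geometry, sample rate, and naming are assumptions,
# not the KIKIMIMI implementation itself.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 16000     # Hz (assumed)

def delay_and_sum(frames, mic_positions, head_position):
    """Align and average multichannel audio toward head_position.

    frames:        (n_mics, n_samples) synchronized audio frames
    mic_positions: (n_mics, 3) microphone coordinates in meters
    head_position: (3,) head coordinate from the depth sensor, in meters
    """
    # Propagation delay from the head to each microphone.
    distances = np.linalg.norm(mic_positions - head_position, axis=1)
    delays = distances / SPEED_OF_SOUND
    # Advance each channel so the target's wavefronts line up,
    # relative to the closest microphone.
    shifts = np.round((delays - delays.min()) * SAMPLE_RATE).astype(int)
    aligned = np.stack([np.roll(ch, -s) for ch, s in zip(frames, shifts)])
    # Coherent average: the target adds in phase, diffuse noise does not.
    return aligned.mean(axis=0)
```

Because the steering point is updated from the depth sensor, the beam can follow a learner who moves while playing, which is what makes the noncontact recording possible.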
In this research, we confirmed that KIKIMIMI operated in an environment with distracting background noise while automating a post-evaluation. These results suggest that KIKIMIMI can capture an objective recording of the learner's voice.
In the future, we will use KIKIMIMI as a system for automating the post-evaluation of learning support systems.
ACKNOWLEDGEMENTS
This work was supported in part by Grants-in-Aid for Scientific Research (B). We are particularly grateful to Ms. Midori Aoki for producing the illustration in Figure 2.