tion, Register, and Timbre, could be used for the emotion recognition of violin music in future work, and the proposal could be extended to the emotion recognition of other string instruments such as the cello. Besides classical music, the violin also performs other genres, e.g., folk and pop music, drawing the instrument out of its classical shell.
Not limited to the home environment, the proposal could be applied to the emotion analysis of background music in other settings such as cafés, restaurants, supermarkets, bars, and nursing homes, where an automatic background music recommendation system could be further developed to improve individual emotion in one-to-one communication or the atmosphere in many-to-many communication. Moreover, it could also be used in medical research such as psychotherapy, where emotional music is expected to help people relieve stress or overcome other psychological disorders.
ACKNOWLEDGEMENTS
This work was supported by the Japan Society for the Promotion of Science (JSPS) under KAKENHI Grant 21300080.