As future work, we will finish annotating the remaining kaomoji. We will also attempt to construct a classification method using deep learning frameworks such as Chainer and TensorFlow to improve the accuracy of emotion estimation (cf. emoji2vec (Eisner et al., 2016)). In addition, because this paper does not extract the emotions that kaomoji express, we will annotate kaomoji emotions based on the Plutchik model (Plutchik, 1980).
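To make the planned classification concrete, the following is a minimal sketch of a kaomoji emotion classifier. It is a simple nearest-centroid baseline over character features, not the deep learning approach described above, and the annotated training pairs are hypothetical toy data used only for illustration:

```python
from collections import Counter
import math

# Hypothetical toy annotations: kaomoji paired with Plutchik-style labels.
TRAIN = [
    ("(^_^)", "joy"),
    ("(^o^)", "joy"),
    ("(T_T)", "sadness"),
    ("(;_;)", "sadness"),
]

def features(kaomoji):
    # Character unigrams as a sparse feature vector.
    return Counter(kaomoji)

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Build one centroid (summed character counts) per emotion label.
centroids = {}
for kao, label in TRAIN:
    centroids.setdefault(label, Counter()).update(features(kao))

def classify(kaomoji):
    # Assign the label whose centroid is most similar in character space.
    feats = features(kaomoji)
    return max(centroids, key=lambda lab: cosine(feats, centroids[lab]))

print(classify("(^-^)"))  # nearest centroid by shared characters
```

A deep learning version would replace the character-count centroids with learned embeddings (in the spirit of emoji2vec) and a trained classifier, but the input/output contract would stay the same.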
ACKNOWLEDGEMENT
This work was supported by JSPS KAKENHI Grant
Number 15K21592.
REFERENCES
Bedrick, S., Beckley, R., Roark, B., and Sproat, R. (2012).
Robust kaomoji detection in Twitter. In Proceedings
of the Second Workshop on Language in Social Me-
dia, LSM ’12, pages 56–64, Stroudsburg, PA, USA.
Association for Computational Linguistics.
Eisner, B., Rocktäschel, T., Augenstein, I., Bosnjak, M.,
and Riedel, S. (2016). emoji2vec: Learning emoji
representations from their description. Proceedings of
The Fourth International Workshop on Natural Lan-
guage Processing for Social Media, pages 48–54.
Hall, E. T. (1976). Beyond Culture. Anchor Books.
Kazama, K., Mizuki, S., and Sakaki, T. (2016). Study of
sentiment analysis using emoticons on Twitter. The
30th Annual Conference of the Japanese Society for
Artificial Intelligence, 3H3-OS-17a-4.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and
Dean, J. (2013). Distributed representations of words
and phrases and their compositionality. In Burges, C.
J. C., Bottou, L., Welling, M., Ghahramani, Z., and
Weinberger, K. Q., editors, Advances in Neural In-
formation Processing Systems 26, pages 3111–3119.
Curran Associates, Inc.
Onishi, C. and Okumura, N. (2014). An investigation of the
usage of kaomoji for emotions judgment and kaomoji
recommendation. In The 13th IASTED International
Conference on Artificial Intelligence and Applications
AIA2014. #816-014.
Plutchik, R. (1980). Chapter 1 - a general psychoevolution-
ary theory of emotion. In Plutchik, R. and Kellerman,
H., editors, Theories of Emotion, pages 3–33. Aca-
demic Press.
Ptaszynski, M., Maciejewski, J., Dybala, P., Rzepka, R., and
Araki, K. (2010a). CAO: A fully automatic emoticon
analysis system. In Fox, M. and Poole, D., editors,
AAAI. AAAI Press.
Ptaszynski, M., Maciejewski, J., Dybala, P., Rzepka, R., and
Araki, K. (2010b). CAO: A fully automatic emoticon
analysis system based on theory of kinesics. IEEE
Transactions on Affective Computing, 1(1):46–59.
Ptaszynski, M., Maciejewski, J., Dybala, P., Rzepka, R.,
Araki, K., and Momouchi, Y. (2012). Science of
Emoticons. IGI Global.
Tanaka, Y., Takamura, H., and Okumura, M. (2005). Ex-
traction and classification of facemarks. In Proceed-
ings of the 10th International Conference on Intel-
ligent User Interfaces, IUI ’05, pages 28–34, New
York, NY, USA. ACM.
Urabe, Y., Rafal, R., and Araki, K. (2013). Emoticon recommendation for Japanese computer-mediated communication. In 2013 IEEE Seventh International Conference on Semantic Computing (ICSC), pages 25–31.
Yamada, T., Tsuchiya, S., Kuroiwa, S., and Ren, F. (2007).
Classification of facemarks using n-gram. In 2007
International Conference on Natural Language Pro-
cessing and Knowledge Engineering, pages 322–327.