and Watanabe, 2003), interpretation of avatars' facial
expressions (Koda and Ishida, 2006), description languages
for avatars' multimodal behavior (Prendinger,
2004), and so on. However, there have been few cases
of avatars being used for feeling extraction. In this section,
we describe representative studies that use avatars for
feeling extraction.
Takahashi et al. (2005) constructed
TelMeA, an asynchronous communication
support system that presents the relations among
participants, and between contents and
conversations, through the behavior of static avatars. The
purpose of TelMeA is similar to ours, because
TelMeA was designed to ease the interpretation of feelings
that are difficult to express verbally by combining
contexts with the behaviors of avatars. However,
we defined feeling expressions by avatars as a part of
subjective annotation, and planned to use them like
collaborative tags for information retrieval and classification
in contents sharing. For this reason, we verified
the consistency of feelings elicited by avatars.
Moreover, our avatars can express feelings toward
contents not only with a clear context, but also with
an unclear context, such as photos.
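To illustrate the idea of treating feeling annotations like collaborative tags, the following sketch builds an inverted index from feelings to contents, just as a tag search would. The data, identifiers, and index structure are purely illustrative assumptions, not the paper's actual system:

```python
# Hypothetical sketch: feeling annotations used like collaborative tags
# for retrieval. Content IDs and feeling labels are made up.
from collections import defaultdict

# content_id -> feelings annotated by users (analogous to tags)
annotations = {
    "photo_01": ["joy", "surprise"],
    "photo_02": ["sadness"],
    "photo_03": ["joy"],
}

# Build an inverted index from each feeling to the contents carrying it.
index = defaultdict(set)
for content, feelings in annotations.items():
    for feeling in feelings:
        index[feeling].add(content)

# Retrieval by feeling then works exactly like tag-based search.
print(sorted(index["joy"]))  # ['photo_01', 'photo_03']
```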
Another case, PrEmo (Desmet, 2003), is a tool
for assessing emotional responses toward consumer products.
In PrEmo, avatars have 14 behaviors,
consisting of 7 positive and 7 negative behaviors.
Users rate each avatar based on the feelings elicited
by the products. This tool enables product impression
analysis based on users' feelings. The purpose of
PrEmo is similar to ours, because it was designed to
analyze feelings elicited by targets. However, the results
of feeling analysis for each product using PrEmo
were mapped all together in the emotion space structured
by the 14 avatar behaviors. Therefore, users cannot
easily share the feelings elicited by each product.
Moreover, in PrEmo, the rating for each avatar
indicates only whether the feeling that the avatar represents is
present in the feeling elicited by the product. In contrast,
our avatars can express not only the
presence of feelings, but also their degrees.
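The distinction between presence-only ratings and degree-based ratings can be sketched as two minimal record types. All names and the 0.0–1.0 scale here are illustrative assumptions, not taken from PrEmo or from our system:

```python
# Hypothetical sketch: presence-only ratings (PrEmo-style) vs.
# degree-based ratings. Field names and scale are illustrative.
from dataclasses import dataclass

@dataclass
class PresenceRating:
    """Presence-only: the feeling an avatar represents is there or not."""
    feeling: str
    present: bool

@dataclass
class DegreeRating:
    """Presence plus intensity on an assumed 0.0-1.0 scale."""
    feeling: str
    degree: float  # 0.0 = absent, 1.0 = strongest expression

def to_presence(rating: DegreeRating, threshold: float = 0.0) -> PresenceRating:
    """A degree rating can always be collapsed to a presence rating,
    but the reverse loses the intensity information."""
    return PresenceRating(rating.feeling, rating.degree > threshold)

joy = DegreeRating("joy", 0.7)
print(to_presence(joy))  # PresenceRating(feeling='joy', present=True)
```

The one-way conversion makes the point concrete: degrees strictly generalize presence.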
9 CONCLUSIONS
We proposed subjective annotation, in which users add
subjective information, such as feelings and intentions,
to contents. Since it is particularly difficult to verbalize
feelings, we adopted avatars to express them.
To use an avatar as the interface of subjective annotation,
we assessed the consistency of feelings elicited by avatars
over time for an individual, as well as the consistency
within a group of people. The results indicated
consistency in both cases, although the variation
in arousal was wider than that in valence.
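One simple way to compare the spread of valence and arousal ratings is to look at their standard deviations across repeated sessions. This is an illustrative sketch with made-up numbers, not the authors' actual procedure or data:

```python
# Illustrative sketch (invented data, not the study's): comparing the
# spread of valence and arousal ratings from repeated sessions.
from statistics import stdev

# Ratings for one stimulus across five sessions, on an assumed 0-1 scale.
valence = [0.60, 0.65, 0.55, 0.60, 0.62]
arousal = [0.40, 0.70, 0.30, 0.60, 0.50]

# A larger standard deviation means less consistent ratings.
print(f"valence spread: {stdev(valence):.3f}")
print(f"arousal spread: {stdev(arousal):.3f}")
# With this sample data, arousal varies more than valence,
# mirroring the pattern reported above.
```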
In addition, a comparison was conducted between
avatars and collaborative tags regarding feeling
expressiveness and satisfaction level. The results
indicated that avatars are more suitable than tags for
expressing feelings, particularly for contents
that include no context and no message, such
as photos. Overall, avatars can be used to express
subjective annotation. In future studies, we
will improve the control interfaces of avatars to make
them more intuitive and continue to verify the practical
usefulness of subjective annotation with avatars.
REFERENCES
Desmet, P. M. (2003). Measuring emotions. In Funology:
from usability to enjoyment, pages 111–123. Kluwer
Academic Publishers.
Ekman, P. and Friesen, W. V. (1971). Constants across cul-
tures in the face and emotion. Personality and Social
Psychology, 17(2):124–129.
Golder, S. A. and Huberman, B. A. (2006). Usage patterns
of collaborative tagging systems. Journal of Informa-
tion Science, 32(2):198–208.
Inoue, M. and Kobayashi, T. (1985). The research domain
and scale construction of adjective-pairs in a semantic
differential method in Japan. The Japanese Journal of
Educational Psychology, 33(3):253–260.
Ishii, Y. and Watanabe, T. (2003). An embodied video com-
munication system in which self-referentiable avatar
is superimposed for virtual face-to-face scene. Jour-
nal of the Visualization Society of Japan, 23(1):357–
360.
Koda, T. and Ishida, T. (2006). Cross-cultural comparison
of interpretation of avatars’ facial expressions. Trans-
actions of Information Processing Society of Japan,
47(3):731–738.
Lang, P. J. (1995). The emotion probe: Studies of motiva-
tion and attention. American Psychologist, 50(5):372–
385.
Mathes, A. (2004). Folksonomy - cooperative classification
and communication through shared metadata.
Master's thesis, Graduate School of Library and
Information Science, University of Illinois at Urbana-Champaign.
Prendinger, H. (2004). MPML: A markup language for controlling
the behavior of life-like characters. Journal of
Visual Languages and Computing, 15(2):183–203.
Takahashi, T., Bartneck, C., Katagiri, Y., and Arai, N.
(2005). TelMeA - expressive avatars in asynchronous
communications. International Journal of Human-
Computer Studies (IJHCS), 62(2):193–209.
Zajonc, R. B. (1968). Attitudinal effects of mere exposure.
Journal of Personality and Social Psychology, 9:1–27.