Directed Effort
A Generic Measurand for Higher Level Behavior Analysis

Benedikt Gollan¹ and Alois Ferscha²

¹ Pervasive Computing Applications, Research Studios Austria, Thurngasse 8/20, Vienna, Austria
² Institute for Pervasive Computing, Johannes Kepler University, Linz, Austria
Keywords:
Automatic Behavior Analysis, Movement Tracking, Pattern Recognition, Estimation of Engagement.
Abstract:
Behavior and body language are essential components of human interaction. In this paper, we propose a
meta-level representation of human behavior, called Directed Effort, for interpretative, higher level applications
in human-computer interaction systems. A theoretical framework is described which is derived from behav-
ioral and psychological sciences and which is designed to represent the commitment and interest of people
towards objects via behavior analysis in real-life scenarios. Directed Effort, a score which allows the inter-
pretation of detected behavior changes, is introduced as a generic measurand. Furthermore, a prototypical
implementation is documented to show the potential of the computed meta-level description of behavior.
1 INTRODUCTION
Behavior and body language are crucial aspects of
any kind of interaction between humans. Besides the
pure entropy of information which can be represented
bit-wise, it is the subtle indicators like the sarcastic
tone of voice, the rolling of the eyes or the impa-
tient tapping of the foot through which we transmit
the majority of the information which is necessary to
successfully interpret messages. Human interaction
is largely based on the meticulous interpretation of
meta-information for which, even among humans, experts are rare. This poses an immense challenge to all researchers and designers of human-computer interaction systems that aim at creating natural and intuitive user interfaces, trying to generate the best possible human-like interaction experience.
Human behavior encompasses any observable
physical activity. Yet, the description of behavior
not only has to cover different aspects of behavior
depending on the respective field of application, but
can also imply many different layers of abstraction.
These can range from a technical analysis of body
pose based on joint coordinates and orientations to a
higher-level interpretation of body language, or from
the pure analysis of movement speed data to an in-
terpretation of underlying motivations and intentions.
An ideal representation of behavior will of course in-
clude all behavioral aspects and layers, whereas ac-
tual applications and implementations will set limita-
tions to what is necessary, useful and technically fea-
sible.
As human interaction is largely based on the inter-
pretation of human behavior, we need to create repre-
sentations of behavior which allow further interpre-
tations of intentions that go beyond explicit user in-
put. In this work, we try to approach such a poten-
tial higher level description which may be applica-
ble to represent human commitment and interest. For
this purpose, we propose a generic meta-level descrip-
tion of behavior which is designed to be generally
valid, generic, independent from sensor technology,
and quantifiable to be suitable for various human-
computer interaction applications.
1.1 Related Work
Commitment and behavior have often been ap-
proached from different scientific fields which re-
sulted in numerous strategies towards specific aspects
and effects related to behavior, commitment and attention. Elaborate surveys on human behavior analysis have been composed by Ji et al. (Ji and Liu, 2010) and Candamo et al. (Candamo et al., 2010).
Concentrating on technical realizations, the implementations can be divided into the already indicated categories of action or activity recognition and a semantic representation of behavior.

(Gollan, B. and Ferscha, A. Directed Effort - A Generic Measurand for Higher Level Behavior Analysis. DOI: 10.5220/0004696500830090. In Proceedings of the International Conference on Physiological Computing Systems (PhyCS-2014), pages 83-90. ISBN: 978-989-758-006-2. Copyright © 2014 SCITEPRESS, Science and Technology Publications, Lda.)

Activity analysis can be covered by template-based or state-space-based approaches. Employing templates, Bobick and Davis (Bobick and Davis, 2001) first used
motion energy images and motion history images to
represent and categorize activities. Blank et. al
(Blank et al., 2005) extracted the 3D silhouette of
people to enable a fast and robust classification of
activities. State-space approaches interpret human
behavior as a state machine with different postures
as states. These are often represented via Hidden-
Markov-Models (HMMs) (Peursum et al., 2007), (Shi
et al., 2006).
Semantic descriptions of behavior have gained
momentum lately. The importance of body orienta-
tion in human interaction is investigated by Hieta-
nen (Hietanen, 2002) with the result that head and
upper body orientation are vital components which
nonverbally transfer the aspect of engagement in an
interaction. Guevara and Umemuro (Guevara and
Umemuro, 2010) implemented a behavior analysis to
infer psychological states, finding that walking ve-
locity and motion load are suitable for predicting sad-
ness or neutral states of mind.
Usually, in human-computer interaction research, commitment is interpreted as equivalent to visual focus and no further
attention model is integrated. Smith et al. (Smith
et al., 2006) created a head tracking and gaze esti-
mation system which detects whether passing people
are actually watching a shop window. Yakiyama et
al. (Yakiyama et al., 2009) estimate commitment lev-
els towards target objects via a laser sensor and on
the basis of computed distance, basic orientation and
movement speed. According to Knudsen (Knudsen,
2007) orienting movements are used to optimize the
resolution of (visual) information about the object.
2 THE CREATION AND
CONTROL OF BEHAVIOR
Human behavior addresses any observable physical
activity of the individual. To enable a successful in-
terpretation, it is essential to understand basic under-
lying behavioral mechanisms. This chapter will describe a model, derived from psychology and behavior research, of how behavior is motivated, controlled and initialized.
Following Bongers (Bongers et al., 2009) and Di-
jksterhuis (Dijksterhuis and Aarts, 2010), behavior is
controlled via a sequential motivation chain (fig. 1):
Input stimuli are processed and filtered regarding their
value, importance and salience. The successive stim-
uli compete with already existing Motivations for
their realization. The result of this competition is a
set of prioritized Goals that describe the intrinsic, often unconscious intentions of a person. These cannot be assessed from the outside and may be as simple as 'being hungry', or complex structures which cannot be verbalized at all. To actually achieve these abstract Goals, we make concrete Plans, like navigating to the next restaurant, to satisfy the underlying motivation.
Finally, the actual realization of these plans leads to
the execution of observable physical behavior.
With outward behavior visualizing and representing inner states, a thorough analysis of physical behavior holds great potential for a suitable analysis of the level and even orientation of inner commitment. Elementary changes in the motivation chain, e.g. triggered by sudden extrinsic stimuli (a siren, etc.), will cause a sequence of re-prioritizations of Goals and Plans and finally result in alterations of physical behavior. Our approach is directed at the observation and interpretation of such behavioral changes, to infer alterations and re-prioritizations in the unfortunately unobservable motivation chain.
The proposed model correlates very well with
existing behavior control findings. Posner (Posner,
1980) investigated the connection between extrinsic
behavior, e.g. head turning, eye movements towards
selected stimuli and inner processes which describe
all completely mental activities. Posner found the relation between covert and overt attention to be not a close but a functional one, showing a ‘striking tendency of attention to move to the target prior to an eye movement’. Hoffman (Hoffman and Subramaniam,
1995) showed that, being ordered to direct gaze to-
wards a certain location, one cannot attend objects at
a different location. This existence of a neural struc-
tural connection between exterior and inner focus
was supported by Moore (Moore and Fallah, 2001),
Perry (Perry, 2000) and Rizzolatti (Rizzolatti et al.,
1987). On the other hand, experiments carried out by
Hunt and Kingstone (Hunt and Kingstone, 2003) in-
dicate that in case of bottom-up controlled, reflexive
processes, overt and covert attention are strongly related, whereas for top-down controlled processes, inferring backwards from eye gaze alone to covert attention is error-prone. The difference between reflexive and voluntary controlled mechanisms is supported by Müller and Rabbitt (Müller and Rabbitt, 1989), who detected higher reorientation performances for reflexive reorientation of attentional focus.
PhyCS2014-InternationalConferenceonPhysiologicalComputingSystems
84
Figure 1: Schematic illustration of the intrinsic and exterior processes of behavior control. (1) Incoming stimuli are filtered according to top-down and bottom-up processes (SEEV model (Wickens and McCarley, 2008)). (2) Succeeding stimuli enter and alter the Motivation Chain and influence the distribution of Attention Resources. (3) The realization of Plans expresses itself in observable behavior. (4) Behavior changes can be tracked, quantified and interpreted.
3 EFFORT AS KEY-PARAMETER
FOR BEHAVIOR ANALYSIS
Having a model of how behavior is generated, the next
step is to find a suitable representation which is char-
acteristic for all kinds of behavior and especially de-
scribes changes of behavior in a qualitative and quan-
titative way.
Every alteration of existing plans and accordingly
of current behavior is characterized by its demand for
a certain amount of Mental Effort, which includes the
process of filtering stimuli input and deciding to com-
mit to a source of information and consequently a
rescheduling of future tasks. Furthermore, it requires
Physical Effort, ‘an important concept, ... required to access different sources of information, using whatever mechanism is necessary: eyes, head, body, hands or even the walking feet’ (Wickens and McCarley, 2008), to actually alter physical behavior.
In this context, the principle of the economy of
movement represents a crucial aspect. As Bitgood
states: ‘To overcome the economy of movement motivation, ..., the perceived benefits of approaching an attractive object must outweigh the perceived cost of the effort’ (Bitgood, 2006). Generally, people tend to optimize their behavior concerning energy consumption and effort, physical or mental, as excessive mental effort, like excessive physical effort, generally produces an unpleasant state that is to be avoided. ‘Hence, people tend to be inherently effort conserving, particularly when placed in high demanding environments...’ (Wickens and McCarley, 2008). Consequently, we as-
sume that one will stick to his current comportment
until given valid reason to change, thus evading un-
necessary investment of effort.
To give an example, in spatial contexts, behavior
control is influenced by the effort which is necessary
to access information sources. This physical effort involves all overt processes, from eye movements over body posture to movement parameters. Kahneman states that ‘because of the connection between effort and arousal, physiological measures of arousal can be used to measure the exertion of effort’ (Kahneman, 1973). According to Knudsen (Knudsen, 2007), Ori-
enting Movements are used to optimize the resolution
of (visual) information about the object. To demon-
strate these principles, a sample scenario of a person
moving through a shopping mall is displayed in fig-
ure 2. Passing a public display, there are different be-
haviors that can be adopted. In the sample scenario,
the described options (a)-(c) differ in the amount of
energy and time which are invested to engage with
the display and the presented content. As can be observed, the more effort is invested, the higher the commitment to the object will be.
Figure 2: Different kinds of behavior in a mall scenario when passing a public display. (a, white) Passers-by may not perceive the display at all and show no reaction, (b, yellow) turn their head towards the screen but continue their current general behavior of approaching their destination, or (c, green) actually change their path, investing time and commitment in perceiving the presented information.

Bringing it all together as illustrated in figure 1: in the complete process of (i) filtering stimuli, over (ii) alteration of the motivation chain, to the (iii) allocation of attention resources and finally the (iv) execution of related plans to satisfy underlying motivations, it is Effort which represents the critical threshold of whether attention resources are assigned and behavior is changed. At the same time, it represents an observable indicator of physical commitment to a source of information. With effort being such an important (though not the only) regulating factor in behavior control, we propose physical effort as the basis for a generic and generally applicable higher level representation of behavior.
4 BEHAVIOR ANALYSIS
FRAMEWORK
Providing a valid measurement of behavior demands three important elements: (a) directly measuring significant target behavior, (b) measuring a relevant dimension of the target behavior and (c) ensuring that the data is representative for the given use-case (Cooper et al., 2007). In this chapter, a behavior measurement framework based on interpreting behavioral changes is proposed, which tries to match these prerequisites as closely as possible.
To approach an actual implementation, changes of
behavior need to be detected, quantified and evalu-
ated. We assume an interaction system with any kind
and number of sensor-based tracking system(s) (cam-
eras, distance sensors, depth sensors, etc.) with which
behavioral data can be collected. The collected be-
havior data depends on the choice of sensor and appli-
cation scenario and may include any measurable data
that describes movement in a characteristic way like
skeleton joint coordinates, gaze direction, etc.
Our proposed framework (fig. 3) defines four important variables, which are the behavioral parameters b_i(t), Effort e_i(t), Effect f_i(t) and Directed Effort DE(t).

b_i(t) = s(i, t)    (1)

e_i(t) = Ψ(b_i(t), B_i[t_0; t−1])    (2)

f_i(t) = Φ(b_i(t), x)    (3)

DE(t) = Σ_i α_i · e_i(t) · f_i(t)    (4)
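The four variables above can be sketched per frame in Python. This is a minimal sketch under the assumption of scalar behavior parameters, with `effort` and `effect` standing in for the abstract functions Ψ and Φ; all names are ours, not the paper's:

```python
from typing import Callable, Dict

def directed_effort_frame(
    b: Dict[str, float],        # behavior parameters b_i(t), equ. 1
    B: Dict[str, float],        # learned reference behavior B_i
    effort: Callable[[float, float], float],  # Psi, equ. 2
    effect: Callable[[str, float], float],    # Phi w.r.t. the OOI location, equ. 3
    alpha: Dict[str, float],    # application-specific weights
) -> float:
    """Equ. 4: DE(t) = sum_i alpha_i * e_i(t) * f_i(t)."""
    return sum(alpha[i] * effort(b[i], B[i]) * effect(i, b[i]) for i in b)

# Toy usage: one head-turn parameter, normalized by a 180 degree maximum,
# with a neutral effect of 1.0 towards the object of interest.
de = directed_effort_frame(
    b={"head": 20.0}, B={"head": 0.0},
    effort=lambda bt, Bt: (bt - Bt) / 180.0,
    effect=lambda i, bt: 1.0,
    alpha={"head": 1.0},
)
```

The callables keep the sketch as generic as the framework itself: each application plugs in its own effort normalization and effect evaluation per behavior parameter.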
Behavior parameters b_i (equ. 1) describe the sensor data s(i, t), extracted to exclusively describe a single aspect of behavior i. These could range from movement speed, orientation or location of single body joints to the emitted volume of whole groups of people. The selection of these behavioral features heavily depends on the application and the choice of sensor. For later evaluation of the data, it is necessary to isolate the characteristic parameters of the aspired behavior aspect as well as possible in the feature extraction process. Please note that the behavior parameters may range from mere numeric to directed dimensions like vectors, angles, etc. Due to this variety, the notation has been confined to an abstract placeholder b_i.
The extracted behavior data is used to calculate
changes in behavior individually for each behavior
parameter per frame t. For this purpose, training data B_i is collected to describe recent reference behavior, which is used to detect the amount of alterations. The process of calculating the amount of effort e_i(t), which is required to execute the detected change of behavior
(equ. 2) is represented via equation 5.

Figure 3: Visualization of behavior analysis framework components. Tracking data is collected from sensor(s). Feature extraction algorithms provide behavior data which is used to calculate reference behavior and current behavior changes. The changes of behavior are processed to effort scores and evaluated in relation to the location of the object of interest to obtain effect information. Finally, effort and effect are combined into a single expressive feature called Directed Effort, which describes the level of engagement of behavior changes towards an object of interest.

In this function, effort is represented via a percental representation in relation to the maximal possible change of
behavior for the respective behavior parameter. E.g., turning the head by an angle of 20° and assuming a maximum turn of 180° would result in an effort of 20°/180° = 11.1% for the behavioral parameter of head turn. In spite of the issue of defining these maximum values, this process provides a normalized level of effort scores throughout completely unrelated parameters of behavior, making them comparable.
Ψ(b_i(t), B_i[t_0; t−1]) = (b_i(t) − B_i(t)) / b_i,max    (5)
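For a concrete feel of equation 5, the head-turn example from the text can be computed in a few lines; the function and variable names are ours, not the paper's:

```python
def effort(b_t: float, B_t: float, b_max: float) -> float:
    """Equ. 5: change of a behavior parameter relative to its
    maximal possible change, yielding a normalized effort score."""
    return (b_t - B_t) / b_max

# Head-turn example from the text: current angle 20 degrees, reference
# behavior 0 degrees, assumed maximum turn of 180 degrees.
e_head = effort(20.0, 0.0, 180.0)
print(round(e_head * 100, 1))  # → 11.1 (percent)
```

Because every parameter is normalized against its own maximum, scores from unrelated behavior parameters end up on the same scale, which is what makes them comparable.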
These effort scores e_i represent the overall and unevaluated invested effort scores. Accumulating over i would deliver the overall amount of invested effort.
Yet, the proposed isolated representation of the ef-
fort per behavior parameter is necessary for the fol-
lowing evaluation of the orientation and effect of the
invested effort. Assuming that all behavior changes
are causally determined, our approach tries to inter-
pret the effect of the detected behavior to deduce the
source of the behavioral change and thus draw con-
clusions on the underlying motivations.
Given the vast amount of potential objects of in-
terest and the high possibility that the actual sources
of the behavior change are unknown and outside of
the observable area, the evaluation has to be restricted
to distinct objects. This restriction will not allow a
general identification of sources of interest but at least
enables the observation and evaluation of distinct, single targets. By defining the location x of an object of interest (OOI) in the observed scene, it is possible to analyze the effect f_i of the change on potential commitment to this OOI. The evaluation of the behavior
data (equ. 3) in relation to the location of a reference
OOI is represented by Φ (equ. 6). Again, this function needs to be adapted to the specific behavior parameter, and a percental representation is proposed.
Φ(b_i(t), x) = b_i(t, x) / b_i,opt    (6)
As a final step, the extracted information concerning the amount and the effect of the invested Physical Effort is combined into a single expressive value called Directed Effort (equ. 4), which represents the effective invested Effort that has been evaluated as contributive regarding a potential OOI. In other words, Directed Effort scores describe how much effort, directed at a specific object, has been invested. The already calculated values e_i(t) and f_i(t) hold the amount and effect of the respective behavior parameters i. To combine them into a single expressive score, the products of the corresponding effort and effect scores are accumulated and weighted with a factor α_i. This weighting again depends on the application scenario and choice of parameters. By interpreting these Directed Effort scores, we hope to be able to draw conclusions on the distribution of attention resources which has evoked the observed behavior.
5 EXEMPLARY
IMPLEMENTATION
To demonstrate the functionality of the proposed
framework, a sample implementation is described in
the following. A public display scenario has been se-
lected, in which the commitment of passers-by to the
displayed content is supposed to be investigated. To
enable behavior analysis, a large-scale public display
has been equipped with a depth sensor which allows
an accurate tracking of body pose and extraction of
movement features of passers-by.
First, the aspired behavior parameters b_i need to be identified and defined. In the given scenario, behavior can be unraveled into the three existing degrees of freedom of movement, which are movement direction, body orientation and velocity.

Figure 4: Visualization of behavior parameters (a) b_v, b_ϕ and (b) b_τ.
Body orientation is set identical to head orientation, as it turned out to be the orientation component with the highest relevance and the best tracking results in our implementation. The effort component which derives from movement direction and body orientation is calculated as the difference of the detected current angles b_τ and b_ϕ (fig. 4) to learned current reference values B_τ and B_ϕ. In this application, the 'learned' components B_i are implemented as an exponentially decreasing low-pass filter (equ. 10). Finally, the alteration is set in relation to the maximum change per parameter, which is ±180° for movement direction and ±90° for head orientation.
To calculate the effort invested in a change of speed, acceleration values b_v can be analyzed. Yet, here a different approach is followed, since the relation to a theoretical maximal acceleration value does not adequately describe real circumstances. In this case, an acceleration from 5 m/s to 10 m/s would result in the same effort as an increase from 15 m/s to 20 m/s, although in the first case the speed has been doubled. This is why a different percentaged representation has been chosen, which describes the percentaged change in relation to recent velocity. Note that only effort from behavior changes is analyzed at this point, although, of course, maintaining a high speed will involve immense physical effort. The inclusion of these constant aspects of effort is part of current research for the generalization and enlargement of the framework.
e_v(t) = b_v(t) / B_v    (7)

e_ϕ(t) = (b_ϕ(t) − B_ϕ) / 90°    (8)

e_τ(t) = |b_τ(t) − B_τ| / 180°    (9)

B_i = (1/30) · Σ_{n=0}^{30} (1/n) · b_i(t − n)    (10)
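A possible reading of equations 7-9 with an exponentially decreasing reference in the spirit of equation 10; since the decay weights of the filter are ambiguous in the print version, the `decay` factor below is an assumption of this sketch:

```python
def reference(history, window=30, decay=0.9):
    """Low-pass reference behavior B_i: an exponentially decreasing
    weighted average over the last `window` frames (newest weighted
    highest). The concrete decay weights are an assumption."""
    recent = list(reversed(history[-window:]))      # newest frame first
    weights = [decay ** n for n in range(len(recent))]
    return sum(w * b for w, b in zip(weights, recent)) / sum(weights)

def effort_speed(b_v, B_v):
    """Equ. 7: acceleration set in relation to recent velocity."""
    return b_v / B_v

def effort_orientation(b_phi, B_phi):
    """Equ. 8: head orientation change, normalized by the 90 degree maximum."""
    return (b_phi - B_phi) / 90.0

def effort_direction(b_tau, B_tau):
    """Equ. 9: movement direction change, normalized by the 180 degree maximum."""
    return abs(b_tau - B_tau) / 180.0
```

A constant walking speed over the whole window yields a reference equal to that speed regardless of the chosen decay, so the sketch behaves sensibly at steady state.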
To analyze the effect of the invested effort towards the display location, the orientation of the behavioral changes has to be evaluated in relation to the location of the display, which results in a measure of the contributivity of the detected activities to the display location. E.g., a person making a turn of 90° will always result in the same detected physical effort, no matter where in the scene the activity took place. But it is the interpretation of the orientation towards the display which elevates the detected physical effort from a pure mathematical description of activity to an expressive representation of potential commitment, which is Directed Effort. It enables us to interpret whether activities are directed towards or away from the display, bringing the person to a more or less attentive state.
To evaluate the acceleration parameter b_v(t) (fig. 5(a)), a rule-based evaluation function Φ_v is selected. First, the absolute effective fraction of the acceleration vector a towards the display location is analyzed to evaluate to what degree the activity is related to the display. Second, a rule-based evaluation of the contributivity is applied, which relies on the assumption that any activity which increases the stay in the range of the display or improves the perception of the display is considered as contributive. Hence, all decelerations and movements into the display sector are interpreted as positive effect, whereas all accelerations which are directed away from the display are evaluated as negative effect (equ. 13).
Figure 5: Computation of the Effect of (a) the acceleration b_v(t), by computing the component f_v(t) that is directed at the object (orthogonal projection of b_v(t) onto the direct connection to the object location), (b) b_ϕ(t), by the change of movement angle towards the object, and (c) b_τ(t), by the change of orientation angle f_τ(t) towards the object.
Φ_v(t, x) = ρ_v(t) · b_v,x(t)    (11)

b_v,x(t) = ((x · b_v(t)) / |x|²) · x    (12)

ρ_v(t) = 1, if b_v(t) < 0 ∨ ω[t] ∈ [ε_1, ε_2]; −1, else    (13)
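For a 2D ground plane, equations 11-13 can be sketched as follows; the sector test inside ρ_v is reduced to a boolean flag here, since the thresholds ε_1, ε_2 are application-specific and not given in the text:

```python
def project_onto(b_v, x):
    """Equ. 12: orthogonal projection of the acceleration vector b_v
    onto the direct connection x to the object location (2D vectors)."""
    dot = b_v[0] * x[0] + b_v[1] * x[1]
    norm_sq = x[0] ** 2 + x[1] ** 2
    return (dot / norm_sq * x[0], dot / norm_sq * x[1])

def rho(decelerating, into_display_sector):
    """Equ. 13, rule-based sign: deceleration or movement into the
    display sector counts as contributive (+1), all other
    accelerations as detrimental (-1)."""
    return 1 if decelerating or into_display_sector else -1

# An acceleration (1, 1) projected onto a display direction (1, 0):
# only the x-component is effective towards the display.
component = project_onto((1.0, 1.0), (1.0, 0.0))
```

The projection isolates the fraction of the activity that actually relates to the display, and the rule-based sign decides whether that fraction counts towards or away from it.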
PhyCS2014-InternationalConferenceonPhysiologicalComputingSystems
88
Figure 6: Directed Effort scores plotted over time with corresponding behavior illustration. The scene is divided into phases (I-V). (I) Person walking parallel to the display with head turned towards the screen. (II) and (III) Person stopping to watch, movement direction still parallel and head turned. (IV) Person approaching the screen, changing direction and assuming a close and comfortable position with head and shoulders oriented in the same direction. (V) Turning and leaving the scene. The dotted curves show the results for the single components of Directed Effort, which are aggregated to an overall score of Directed Effort (solid line). Positive amplitudes express positive contributions; negative amplitudes indicate activities which are directed away from the display.
Regarding the effect of variations in movement direction and orientation, changes are considered beneficial if the movement causes the angle towards the display location to decrease (equs. 14, 15). Again, the effect scores are percentaged in relation to the maximal value.
Φ_ϕ(t) = (B_ϕ,x − b_ϕ,x(t)) / 180°    (14)

Φ_τ(t) = (B_τ,x − b_τ,x(t)) / 180°    (15)

DE = Σ_i α_i · e_i(t) · f_i(t)    (16)
Bringing it all together, the effort and effect scores are weighted with parameters α_i and accumulated into the final Directed Effort score. The weighting parameters are necessary to achieve realistic, well-balanced effort scores. They have to be established during a test phase of the system for each application.
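The angle-based effect scores and the final accumulation of equation 16 then reduce to a few lines; the numeric values below are purely illustrative, since the weights α_i must be tuned per application as stated above:

```python
def effect_angle(B_angle_to_ooi, b_angle_to_ooi):
    """Equs. 14/15: a change is beneficial if the angle towards the
    display decreases; percentaged against the 180 degree maximum."""
    return (B_angle_to_ooi - b_angle_to_ooi) / 180.0

def directed_effort(efforts, effects, alpha):
    """Equ. 16: weighted accumulation of effort * effect per parameter."""
    return sum(alpha[i] * efforts[i] * effects[i] for i in efforts)

# A head turn from 40 degrees off the display down to 10 degrees off
# yields a positive effect score.
f_head = effect_angle(40.0, 10.0)

# Illustrative frame: slight acceleration towards the display, head
# turned away; the per-parameter contributions partly cancel out.
de = directed_effort(
    efforts={"v": 0.2, "phi": 0.5, "tau": 0.1},
    effects={"v": 1.0, "phi": -0.5, "tau": 1.0},
    alpha={"v": 1.0, "phi": 1.0, "tau": 1.0},
)
```

A negative effect score flips the sign of that parameter's contribution, which is what later separates contributive from detrimental actions in the DE curves.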
Referring to the stated requirements from chapter 4, it can be summarized that the framework and the exemplary implementation can claim to fulfill the specifications. The described behavior measurement procedure directly measures significant behavior in a useful dimension and works on data from real-life scenarios.
6 RESULTS
To evaluate the framework functionality, plotting DE
scores over time provides useful information. It re-
sults in effort curves which are useful to present the
applicability and the potential of our effort-based be-
havior analysis approach. The resulting curves suc-
ceed in adequately describing behavior and signal
changes of behavior via strong signal peaks. In figure
6, the effort curves have been plotted for the single DE
parameters acceleration, moving direction, and shoul-
der and head orientation which contribute to the over-
all Directed Effort curve, using an example from a
database which was gathered during an installation at
a public event. This sample has been divided into five
sections which mark different behavioral segments.
As can be observed, the calculated DE curve
gives a suitable expression of invested physical ef-
fort and enables detection and interpretation of be-
havioral changes. The beginning of each of the sec-
tions which indicate different kinds of activities is
marked with a strong peak in the DE curve, indicat-
ing a high effort level and a substantial change in
behavior as demanded in our theoretical approach.
The height and width of the peak represent an in-
dicator for the strength of the behavior modification
and the algebraic sign clearly separates contributive
from detrimental actions. Smaller behavioral changes
like the deceleration in sections II and III result in a
lower peaks than changes which include strong devi-
ations in in more than one of the single parameters,
as in section IV and V. In the latter sections, the per-
son not only alters movement speed, but as well the
movement direction and body orientation resulting in
higher DE scores.
The noise-like patterns which occur throughout
the sample are mainly caused by oscillations in the acceleration parameter, which derive from the gait fre-
quency and the slightly strolling walking style of the
subject. Overall, DE curves show promising stability
and at the same time reactivity to behavior modifica-
tions and seem to adequately describe the qualitative
commitment of people towards objects like public dis-
plays.
7 CONCLUSIONS
AND OUTLOOK
In this paper, we have presented an approach towards a higher level interpretative description of behavior to express engagement and commitment via the detection of behavior changes. Such an approach can never claim to be able to predict the exact focus of attention of a person, but can only try to provide a model which approximates reality through iterative refinement. The more detailed a description of behavior and context we accomplish, the better we will perform in interpreting human behavior. Yet, the proposed methods may provide a first step towards behavior-based attention estimation.
REFERENCES
Bitgood, S. (2006). Not another step! Economy of movement and pedestrian choice point behavior in shopping malls. Environment and Behavior, 38(3):394–405.
Blank, M., Gorelick, L., Shechtman, E., Irani, M., and
Basri, R. (2005). Actions as space-time shapes. In
Computer Vision, 2005. ICCV 2005. Tenth IEEE Inter-
national Conference on, volume 2, pages 1395–1402
Vol. 2.
Bobick, A. and Davis, J. (2001). The recognition of human
movement using temporal templates. Pattern Analy-
sis and Machine Intelligence, IEEE Transactions on,
23(3):257–267.
Bongers, K. C., Dijksterhuis, A., and Spears, R. (2009).
Self-esteem regulation after success and failure to at-
tain unconsciously activated goals. Journal of Exper-
imental Social Psychology, 45(3):468 – 477.
Candamo, J., Shreve, M., Goldgof, D., Sapper, D., and Kas-
turi, R. (2010). Understanding transit scenes: A sur-
vey on human behavior-recognition algorithms. Intel-
ligent Transportation Systems, IEEE Transactions on,
11(1):206–224.
Cooper, J. O., Heron, T. E., and Heward, W. L. (2007).
Applied behavior analysis. Pearson/Merrill-Prentice
Hall, Upper Saddle River and N.J, 2 edition.
Dijksterhuis, A. and Aarts, H. (2010). Goals, Attention,
and (Un)Consciousness. Annual Review of Psychol-
ogy, 61(1).
Guevara, J. E. and Umemuro, H. (2010). Unobtrusive esti-
mation of psychological states based on human move-
ment observation. e-Minds, 2(6).
Hietanen, J. (2002). Social attention orienting integrates vi-
sual information from head and body orientation. Psy-
chological Research, 66(3):174–179.
Hoffman, J. E. and Subramaniam, B. (1995). The role of vi-
sual attention in saccadic eye movements. Perception
& psychophysics, 57(6):787–795.
Hunt, A. R. and Kingstone, A. (2003). Covert and overt
voluntary attention: linked or independent? Brain
research. Cognitive brain research, 18(1):102–105.
Ji, X. and Liu, H. (2010). Advances in view-invariant hu-
man motion analysis: A review. Systems, Man, and
Cybernetics, Part C: Applications and Reviews, IEEE
Transactions on, 40(1):13–24.
Kahneman, D. (1973). Attention and effort. Prentice-Hall,
Englewood Cliffs, N.J.
Knudsen, E. I. (2007). Fundamental components of atten-
tion. Annual Review of Neuroscience, 30(1):57–78.
Moore, T. and Fallah, M. (2001). Control of eye move-
ments and spatial attention. Proceedings of the Na-
tional Academy of Sciences of the United States of
America, 98(3):1273–1276.
Müller, H. J. and Rabbitt, P. M. (1989). Reflexive and voluntary orienting of visual attention: time course of activation and resistance to interruption. Journal of Experimental Psychology: Human Perception and Performance, 15(2):315–330.
Perry, R. J. (2000). The neurology of saccades and covert
shifts in spatial attention: An event-related fMRI
study. Brain, 123(11):2273–2288.
Peursum, P., Venkatesh, S., and West, G. (2007). Tracking-
as-recognition for articulated full-body human motion
analysis. In Computer Vision and Pattern Recognition,
2007. CVPR ’07. IEEE Conference on, pages 1–8.
Posner, M. I. (1980). Orienting of attention. Quarterly Jour-
nal of Experimental Psychology, 32(1):3–25.
Rizzolatti, G., Riggio, L., Dascola, I., and Umiltà, C.
(1987). Reorienting attention across the horizontal
and vertical meridians: evidence in favor of a premo-
tor theory of attention. Neuropsychologia, 25(1A):31–
40.
Shi, Y., Bobick, A., and Essa, I. (2006). Learning tem-
poral sequence model from partially labeled data. In
Computer Vision and Pattern Recognition, 2006 IEEE
Computer Society Conference on, volume 2, pages
1631–1638.
Smith, K. C., Ba, S. O., Odobez, J.-M., and Gatica-Perez, D.
(2006). Tracking attention for multiple people: Wan-
dering visual focus of attention estimation. Idiap-RR
Idiap-RR-40-2006, IDIAP. Submitted for publication.
Wickens, C. D. and McCarley, J. S. (2008). Applied atten-
tion theory. CRC Press, Boca Raton.
Yakiyama, Y., Thepvilojanapong, N., Iwai, M., Mihirogi,
O., Umeda, K., and Tobe, Y. (2009). Observing real-
world attention by a laser scanner. IPSJ Online Trans-
actions, 2:93–106.
PhyCS2014-InternationalConferenceonPhysiologicalComputingSystems
90