THE IMPORTANCE OF HUMAN MENTAL WORKLOAD IN WEB
DESIGN
Luca Longo, Fabio Rusconi, Lucia Noce and Stephen Barrett
Department of Computer Science and Statistics, Trinity College Dublin, Dublin, Ireland
Keywords:
Human Mental Workload, Interaction Design, Web Design, Usability, Human Factors.
Abstract:
The focus of this study is the introduction of the construct of Human Mental Workload (HMW) in Web
design, aimed at supporting current interaction design practices. An experiment has been conducted using
the original Wikipedia and Google web-interfaces and two slightly modified versions of them. Three subjective
psychological mental workload assessment techniques (NASA-TLX, Workload Profile and SWAT), together with a
well-established usability assessment tool (SUS), have been adopted. T-tests have been performed to study the
statistical significance of the differences between the original and modified web-pages, in terms of the workload
required by typical tasks and of perceived usability. Preliminary results show that, in one ideal case, increments of
usability correspond to decrements of generated workload, confirming the negative impact of the structural changes
on the interface. In another case, changes are significant in terms of usability but not in terms of generated workload,
thus raising research questions and underlining the importance of Human Mental Workload in Interaction Design.
1 INTRODUCTION
Human Mental Workload (HMW) is a multidimen-
sional complex construct mainly applied in Cognitive
Science and sporadically used in Human Computer
Interaction. Although a well-established definition is
absent in the literature, the goal of measuring mental
workload is to quantify the mental cost of performing
tasks and to estimate system and operator
performance. Intuitively, mental workload is the
amount of mental work necessary for a person to com-
plete a task over a period of time. To be more pre-
cise, the construct emerges from the interaction be-
tween the requirements of a given task, the circum-
stances under which it is performed, the context and
the skills, behaviours, emotional state and perceptions
of the operator (Kantowitz, 1988). In the field of In-
teraction Design, the application of the construct of
Human Mental Workload may have important practi-
cal implications in the interface design process and in
the evaluation of system usability. For instance, as-
sessments of the mental workload of users on tasks
performed on a web-site may be useful for evaluating
both the behavior of end-users upon it and the usabil-
ity of the web-site itself.
This research investigates the application of the
construct of Human Mental Workload in the field
of Web Design. In detail, the main objective is to
compare different HMW assessment techniques over
tasks performed on selected web-sites, along with the
study of their correlation with a well-known subjective
assessment tool of interface usability. The issues
investigated are:
- analysis of the distribution of mental workload levels over selected web-tasks, produced by three subjective assessment procedures, along with the study of their inter-correlations;
- examination of the correlation of the outputs produced by the three HMW procedures with the output of a well-established usability assessment technique (SUS - System Usability Scale);
- impact analysis of HMW in Web Design.
The remainder of this paper is organized as follows.
We review related work in the field of Human Mental
Workload and describe the main subjective assessment
tools used in this research. We then analyse applications
of HMW in Human-Computer Interaction, introduce
the experiments conducted in line with the research
questions, and present the outcomes followed by a
critical discussion. We conclude by highlighting the
reasons and advantages that HMW can potentially
bring to the field of Interaction and Web Design.
Longo L., Rusconi F., Noce L. and Barrett S.
THE IMPORTANCE OF HUMAN MENTAL WORKLOAD IN WEB DESIGN.
DOI: 10.5220/0003960204030409
In Proceedings of the 8th International Conference on Web Information Systems and Technologies (WEBIST-2012), pages 403-409
ISBN: 978-989-8565-08-2
Copyright © 2012 SCITEPRESS (Science and Technology Publications, Lda.)
2 RELATED WORK
Human Mental Workload (HMW) is a multifaceted
complex construct mainly applied in psychology and
other cognitive sciences. A plethora of definitions ex-
ists in the literature (Hancock and Meshkati, 1988;
Wickens, 1987; Cain, 2007; Gopher and Donchin,
1986). Intuitively, mental workload or cognitive
workload is the amount of mental work necessary for
a person to complete a task over a given period of
time. Generally, it is not an inherent property, rather
it emerges from the interaction between the require-
ments of a task, the circumstances under which it is
performed, and the skills, behaviors and perceptions
of the operator. The operational and practical nature
of the construct of human mental workload has, in
the last few decades, been attracting interest in
Neuroscience, Physiology and even Computer Science
(Kramer and Sirevaag, 1987; Kantowitz, 1988;
Donnell and Eggemeier, 1998; Young and Stanton,
2001). The construct has a wide field of application
(Donnell and Eggemeier, 1998; Tracy and Albers,
2006; Xie and Salvendy, 2000; Gwizdka, 2009a;
Gwizdka, 2009b) and this research domain may have
an important impact in the future, above all in
Human-Computer Interaction. The concept has become
increasingly important since modern interactive systems
and interfaces may impose severe demands on mental
workload or information-processing capabilities.
There exist three major types of mental workload
measures: performance-based, subjective and
physiological. The rationale behind performance-based
measures is that performance on a selected secondary
task will decrease as a function of the demands of a
selected primary task. Subjective measures include
self-assessments using uni-dimensional or
multi-dimensional scales. The former provide a single
measure of overall mental workload; the latter take
into consideration individual dimensions of mental
workload, and are therefore more accurate in
determining the source of any potential workload problem.
Physiological measures are based on the premise that
mental workload will generate changes in the body
such as pupil dilation, changes in skin conductance,
body pressure and heart rate. Although they are ac-
curate and can work on a continuous scale, the equip-
ment they require is generally impractical for experi-
ments as it requires trained staff.
In this paper we focus on subjective multi-
dimensional measures and we use three well-
established tools: The NASA Task Load Index
(NASA-TLX) (Kantowitz, 1988); The Simplified
Subjective Workload Assessment Technique (SWAT)
(Luximon and Goonetilleke, 2001); The Workload
Profile (WP) (Tsang and Velazquez, 1996). In the
following paragraphs, we briefly describe each tech-
nique, introducing the formal models in section 3.
NASA-TLX (Hart, 2006) uses six dimensions to
estimate mental workload: mental demand, physical
demand, temporal demand, performance, effort and
frustration. Each of these is rated on a scale from 0
to 100. The final mental workload index is a weighted
average of the six areas that provides an overall score.
The weights are obtained via paired comparisons,
which require the operator to choose, for each pair
formed from the six dimensions, which one is more
relevant to mental workload. The number of times a
dimension is chosen by the operator represents the
weight of that dimension's scale for a given task
(Kantowitz, 1988).
SWAT is a subjective multi-dimensional rating
procedure that uses three areas to evaluate mental
workload: time load, mental effort load and
psychological stress load, each on a three-level scale.
In this paper we have adopted a simplified version of
the SWAT model, the Continuous SWAT dimensions with
weight (Luximon and Goonetilleke, 2001), which uses
paired comparisons among the three dimensions exactly
as in the NASA-TLX model. The final mental workload
is the average of the weighted areas (Luximon and
Goonetilleke, 2001).
WP is a subjective workload assessment tech-
nique, based on the Multiple Resource Theory (MRT)
of Wickens (Wickens, 1987). In this procedure eight
dimensions are considered: perceptual/central pro-
cessing, response selection and execution, spatial pro-
cessing, verbal processing, visual processing, audi-
tory processing, manual output and speech output.
The WP procedure asks operators, after the execution
of a task, to provide the proportion of attentional
resources used, in the range 0 to 1. The overall
workload rating is computed by summing the 8 scores.
The three subjective techniques have low
implementation requirements along with low intrusiveness
and high subject acceptability. These properties
have promoted new research in which the construct of
Human Mental Workload has been adopted for evaluating
alternative interfaces. Tracy and Albers adopted
three different techniques for measuring the mental
workload generated by web-site designs: NASA-TLX, the
Sternberg Memory Test and a tapping test (Tracy and
Albers, 2006; Albers, 2011). They proposed a technique
to identify the sub-areas of a web-site in which
end-users manifested higher mental workload during
interaction. In turn, this allowed designers to modify
those critical regions to enhance their interfaces.
Zhu and Hou (Zhu and Hou, 2009) noted how roles
can be useful in interface design and proposed a
role-based method to measure mental workload. This
can be applied in the field of Human-Computer
Interaction for dynamically adjusting the workload
levels of humans to enhance their interaction performance.
3 METHODOLOGY
To investigate the research issues, we designed
four web-tasks, described in table 1, on top of two
major web-sites: wikipedia.com and google.com, using
the interfaces depicted in figure 1. Nineteen people
aged between 19 and 35 years, with different cultural,
linguistic and ethnic backgrounds, participated in
the experiment.
(a) W1 (Original Wikipedia) (b) W2 (Mod. Wikipedia)
(c) G1 (Original Google) (d) G2 (Mod. Google)
Figure 1: Web-interfaces used in experiment web-tasks.
Task W1 was designed to be performed on the
original interface of Wikipedia (screenshot 1 a), while
task W2 on a modified version (screenshot 1 b).
Similarly, task G1 was designed to be executed on the
original Google interface (screenshot 1 c) and task G2
on the modified version (screenshot 1 d). For the
interface of figure 1 b, the layout was modified by
removing the left menu and the search box of the
original Wikipedia, along with the various tabs at
the top of the page. For the interface of figure 1 d,
the structural changes involved the removal of the left
menu and the application of a different background.
Volunteers were asked to interact naturally with the
web-browser and to execute the designed tasks as best
they could. After the completion of each task, they
were asked to fill in 4 questionnaires (in random order):
- NASA-TLX questionnaire, as shown in table 2, along with the pair-wise comparisons among the 6 dimensions. The question related to the "physical demand" dimension (NTQ_2) was omitted due to the cognitive nature of the tasks, thus its pair-wise comparison was automatically assigned to the other 5 dimensions (answer scale: 0 [strongly disagree] to 100 [strongly agree]);
- WP questionnaire, as in table 3 (answer scale: 0 [strongly disagree] to 100 [strongly agree]);
- SWAT questionnaire, as described in table 4. Users were asked to choose, for each of the three areas (time load, mental effort load, psychological stress load), one of three possible levels;
- SUS questionnaire, as reported in (Brook, 2008) and shown in table 5, for evaluating the usability of the interface used in the performed task.
NASA-TLX. The NASA-TLX model is not based
on a simple average of the 6 questions of table 2
(NTQs), but on a weighted aggregation.
These questions are formally expressed as:

NTQ_x : [0..100],  x_1 = {M | PD | T | E | F},  x_2 = {P}

The weight of each of the 6 considered areas needs to
be computed. Users were asked to decide, for each
possible pair of the 6 areas (binomial coefficient),
'which of the two contributed more to their workload
during the task' (example: Mental or Physical Demand?
Physical Demand or Performance? and so forth). This
cross tabulation generates 15 preferences:

C(6,2) = 6! / (2! (6-2)!) = 15
The weights are the numbers of preferences, for each
area, in the set of 15 answers (the number of times
each factor was selected). They range from 0 (not
relevant) to 5 (more important than any other factor).
Formally: NTW_x : [0..5]. The final human mental
workload score (HMW) of a user is computed by
multiplying the score of each NTQ question by its
computed weight NTW (note that the Performance value
is brought back to the original NASA-TLX scale):

NASA_HMW : [0..100]
NASA_HMW = ( Σ_{x ∈ x_1} NTQ_x · NTW_x  +  Σ_{x ∈ x_2} (100 - NTQ_x) · NTW_x ) · (1/15)
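As an illustration, the weighting and aggregation above can be sketched in a few lines of Python. The ratings and pairwise-comparison outcomes are invented example data, not values collected in this study:

```python
from itertools import combinations

dims = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

# Hypothetical ratings on the 0..100 scales
ratings = {"mental": 70, "physical": 10, "temporal": 40,
           "performance": 80, "effort": 60, "frustration": 30}

# Hypothetical winners of the 15 pairwise comparisons
preferences = (["mental"] * 5 + ["effort"] * 4 + ["temporal"] * 3 +
               ["frustration"] * 2 + ["performance"])
assert len(list(combinations(dims, 2))) == len(preferences) == 15

# Weight of a dimension = number of times it was preferred (0..5)
weights = {d: preferences.count(d) for d in dims}

# Performance is reverse-scored (100 - rating), as in the formula above
nasa_hmw = sum((100 - ratings[d] if d == "performance" else ratings[d]) * weights[d]
               for d in dims) / 15.0
print(round(nasa_hmw, 2))  # 52.67
```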
WP - Workload Profile. For each question of the
Workload Profile questionnaire (table 3), subjects
provided a number between 0 and 100: WPQ_i : [0..100] ⊂ ℝ.
A rating of 0 indicates that the task placed no demand
on the dimension being rated, while a rating of 100
means that the task required full attention. The
ratings of the individual dimensions, that is, the
answers to each of the questions in table 3, are
rescaled to the range 0 to 1 and summed into an
overall mental workload rating.
THEIMPORTANCEOFHUMANMENTALWORKLOADINWEBDESIGN
405
Table 1: Experiment Tasks on Google and Wikipedia.
Task | Description
W1 | Using www.wikipedia.com find out how many people currently live in Sydney.
W2 | Only using http://en.wikipedia.org/wiki/Main_Page find out how many people currently live in Sydney.
G1 | Using www.google.com find out how many years passed between the foundation of Apple Computer Inc. and the year of the 14th FIFA world cup.
G2 | Using www.google.com find out how many years passed between the foundation of the Microsoft Corp. and the year of the 23rd Olympic games.
Table 2: NASA Task Load Index (NASA-TLX) questionnaire.
Label | Question | Area
NTQ_1 | How much mental and perceptual activity was required (e.g. thinking, deciding, calculating, remembering, looking, searching, etc.)? Was the task easy or demanding, simple or complex, exacting or forgiving? | Mental Demand
NTQ_2 | How much physical activity was required (e.g. pushing, pulling, turning, controlling, activating, etc.)? Was the task easy or demanding, slow or brisk, slack or strenuous, restful or laborious? | Physical Demand
NTQ_3 | How much time pressure did you feel due to the rate or pace at which the tasks or task elements occurred? Was the pace slow and leisurely or rapid and frantic? | Temporal Demand
NTQ_4 | How hard did you have to work (mentally and physically) to accomplish your level of performance? | Effort
NTQ_5 | How successful do you think you were in accomplishing the goals of the task set by the experimenter (or yourself)? How satisfied were you with your performance in accomplishing these goals? | Performance
NTQ_6 | How insecure, discouraged, irritated, stressed and annoyed versus secure, gratified, content, relaxed and complacent did you feel during the task? | Frustration
Table 3: Workload Profile (WP) questionnaire.
Label | Question | Area
WPQ_1 | How much attention was required for activities like remembering, problem-solving, decision-making, perceiving (detecting, recognizing and identifying objects)? | Perceiving / Remembering / Solving / Deciding
WPQ_2 | How much attention was required for selecting the proper response channel (manual - keyboard/mouse, or speech - voice) and its execution? | Selection / Execution of Response
WPQ_3 | How much attention was required for spatial processing (spatially paying attention around you)? | Task and Space
WPQ_4 | How much attention was required for verbal material (e.g. reading, processing linguistic material, listening to verbal conversations)? | Verbal Material
WPQ_5 | How much attention was required for executing the task based on the information visually received (eyes)? | Visual Resources
WPQ_6 | How much attention was required for executing the task based on the information auditorily received (ears)? | Auditory Resources
WPQ_7 | How much attention was required for manually responding to the task (e.g. keyboard/mouse usage)? | Manual Response
WPQ_8 | How much attention was required for producing the speech response (e.g. engaging in a conversation, talk, answering questions)? | Speech Response
Table 4: Simplified Subjective Workload Assessment Technique (SWAT) questionnaire.
Label | Possibilities | Value | Area
SWATQ_1 | Often have spare time. Interruptions or overlap among activities occur infrequently or not at all. | 1 | Time Load
        | Occasionally have spare time. Interruptions or overlap among activities occur frequently. | 2 |
        | Almost never have spare time. Interruptions or overlap among activities are very frequent, or occur all the time. | 3 |
SWATQ_2 | Very little conscious mental effort or concentration required. Activity is almost automatic, requiring little or no attention. | 1 | Mental Effort Load
        | Moderate conscious mental effort or concentration required. Complexity of activity is moderately high due to uncertainty, unpredictability, or unfamiliarity. Considerable attention required. | 2 |
        | Extensive mental effort and concentration are necessary. Very complex activity requiring total attention. | 3 |
SWATQ_3 | Little confusion, risk, frustration, or anxiety exists and can be easily accommodated. | 1 | Psychological Stress Load
        | Moderate stress due to confusion, frustration, or anxiety noticeably adds to workload. Significant compensation is required to maintain adequate performance. | 2 |
        | High to very intense stress due to confusion, frustration, or anxiety. High to extreme determination and self-control required. | 3 |
WP_HMW : [0..8] ⊂ ℝ
WP_HMW = ( Σ_{i=1}^{8} WPQ_i ) · (1/100)
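A minimal sketch of this aggregation in Python, with invented ratings standing in for a subject's answers:

```python
# Hypothetical answers to WPQ_1..WPQ_8, each on the 0..100 scale
wpq = [60, 40, 20, 70, 80, 0, 50, 0]

# Overall rating: sum of the eight proportions of attentional resources,
# after rescaling each answer to the range 0..1
wp_hmw = sum(wpq) / 100.0
print(wp_hmw)  # 3.2 on the 0..8 scale
```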
SWAT - Simplified Subjective Workload Assessment
Technique. For the SWAT questions (table 4) subjects
provided a number between 1 and 3, selecting the
appropriate option among the 3 possibilities:
SWATQ_i : [1..3] ⊂ ℕ. Afterwards, they indicated a
preference for each pair combination among the 3
dimensions (Time Load, Mental Effort and Psychological
Stress). This cross tabulation generates 3 preferences,
WEBIST2012-8thInternationalConferenceonWebInformationSystemsandTechnologies
406
Table 5: System Usability Scale (SUS) questionnaire.
Label | Question
SUSQ_1 | I think that I would like to use this interface frequently
SUSQ_2 | I found the interface unnecessarily complex
SUSQ_3 | I thought the interface was easy to use
SUSQ_4 | I think that I would need the support of a technical person to be able to use this interface
SUSQ_5 | I found the various functions in this interface were well integrated
SUSQ_6 | I thought there was too much inconsistency in this interface
SUSQ_7 | I would imagine that most people would learn to use this interface quickly
SUSQ_8 | I found the interface very unmanageable (irritating or tiresome) to use
SUSQ_9 | I felt very confident using the interface
SUSQ_10 | I needed to learn a lot of things before I could get going with this interface
so each dimension has at most 2 occurrences:

C(3,2) = 3! / (2! (3-2)!) = 3,  SWATW_i : [0..2]

The numbers of occurrences, for each dimension, are
used to weight the original scores, producing a final
human mental workload assessment as follows:

SWAT_HMW : [1..3] ⊂ ℝ
SWAT_HMW = ( Σ_{i=1}^{3} SWATQ_i · SWATW_i ) · (1/3)
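The SWAT weighting can be sketched analogously; the levels and pairwise preferences below are invented example data:

```python
# Hypothetical SWAT levels (1..3) for the three dimensions
swat_q = {"time": 2, "effort": 3, "stress": 1}

# Hypothetical winners of the 3 pairwise comparisons -> weights in 0..2
prefs = ["effort", "effort", "time"]
weights = {d: prefs.count(d) for d in swat_q}

swat_hmw = sum(swat_q[d] * weights[d] for d in swat_q) / 3.0
print(round(swat_hmw, 2))  # 2.67
```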
SUS - System Usability Scale. The original answers
of the SUS questionnaire (Bangor et al., 2008) use a
Likert scale, bounded in the range 1 to 5; in that
scheme, individual scores are not meaningful on their
own: for odd questions (SUSQ_i with i = {1|3|5|7|9})
the score contribution is the scale position minus 1,
while for even questions (SUSQ_i with i = {2|4|6|8|10})
the contribution is 5 minus the scale position. In
this experiment, volunteers were asked to answer the
questions on a scale ranging from 0 to 100,
SUSQ_i : [0..100] ⊂ ℕ, so the overall score is
computed as:

SUS_TOT : [0..100],  i_1 = {1, 3, 5, 7, 9},  i_2 = {2, 4, 6, 8, 10}
SUS_TOT = ( Σ_{i ∈ i_1} SUSQ_i + Σ_{i ∈ i_2} (100 - SUSQ_i) ) · (1/10)
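This scoring scheme can be sketched as follows, with invented answers on the 0..100 scale used in the experiment:

```python
# Hypothetical answers to SUSQ_1..SUSQ_10 on the 0..100 scale
sus_q = {1: 80, 2: 20, 3: 90, 4: 10, 5: 70, 6: 30, 7: 80, 8: 20, 9: 70, 10: 10}

odd = sum(sus_q[i] for i in (1, 3, 5, 7, 9))          # positive statements
even = sum(100 - sus_q[i] for i in (2, 4, 6, 8, 10))  # negative, reverse-scored
sus_tot = (odd + even) / 10.0
print(sus_tot)  # 80.0
```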
4 RESULTS AND DISCUSSION
Experimental results are shown in table 6 and in figure
2 while the correlations between the outcomes of each
computational model are presented in table 7.
Figure 2: Boxplot of data grouped by model.
The Paired T-Test procedure was used to compare
the mean difference between the results of the HMW-
based models (NASA-TLX, WP, SWAT) and the ones
of the usability model (SUS) on the tasks (W1 against
Table 6: Distributions of data grouped by model.
Task | NASA (Avg/Std) | SWAT (Avg/Std) | WP (Avg/Std) | SUS (Avg/Std)
W1 | 21.3 / 14.3 | 46.1 / 15.2 | 26.0 / 14.8 | 78.3 / 18.7
W2 | 52.4 / 13.4 | 74.3 / 13.2 | 38.7 / 15.0 | 34.3 / 22.5
G1 | 41.9 / 15.1 | 66.6 / 18.7 | 34.9 / 16.2 | 84.9 / 13.7
G2 | 43.0 / 14.7 | 66.5 / 19.4 | 33.7 / 15.7 | 45.2 / 19.3
Table 7: Correlations of models.
Task | NASA/SWAT | SUS/NASA | WP/NASA | SUS/SWAT | WP/SWAT | WP/SUS
W1 | 0.75 | -0.17 | -0.10 | -0.08 | -0.07 | -0.27
W2 | 0.52 | -0.29 | 0.50 | 0.04 | 0.35 | 0.27
G1 | 0.82 | -0.22 | 0.75 | 0.01 | 0.48 | -0.14
G2 | 0.88 | -0.03 | 0.80 | -0.15 | 0.76 | -0.12
W2, and G1 against G2). In particular, we tested the
null hypothesis H_0 at a 95% confidence level
(CI = 95%). The results are listed in tables 8 and 9.
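The paired T statistic used here can be reproduced with a short script. The two score lists below are invented example data, not the study's measurements; |T| is then compared against the critical value of Student's t with n-1 degrees of freedom at the chosen confidence level:

```python
import math

# Hypothetical per-subject workload scores for two versions of an interface
w1 = [20, 35, 15, 40, 25, 30, 18, 22]
w2 = [50, 60, 45, 70, 55, 48, 52, 58]

# Paired T statistic: mean of the per-subject differences over its standard error
d = [b - a for a, b in zip(w1, w2)]
n = len(d)
mean_d = sum(d) / n
var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
t_stat = mean_d / math.sqrt(var_d / n)
print(round(t_stat, 2))
# |T| is compared against the two-tailed critical value for n-1 = 7 degrees
# of freedom at the 95% level (about 2.365): here H_0 would be rejected.
```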
The experiments conducted are preliminary, aimed
at showing the potential of Human Mental Workload in
Interaction and Web Design. From table 7, no clear
correlation appears between the three HMW-based tools
used (NASA-TLX, WP, SWAT) and the usability assessment
technique (SUS) (columns 3, 5, 7). Further investigation
needs to be carried out to discover correlations (if
any). On the other hand, the three HMW-based models
highly correlate with each other, underlining a common
view of the workloads generated by the tasks (columns
2, 4, 6). This suggests further experiments could
consider only one of them, reducing the volunteers'
answer-set. The only exception is task W1, for the
correlations WP/NASA and WP/SWAT: here the proportions
of attentional resources required by the task do not
clearly emerge. A possible explanation may lie in the
uncertainty faced by volunteers in answering the WP
questions (table 3) for task W1. Tables 8 and 9 show
the paired T-tests of the four models for the tasks
conducted on Wikipedia (W1 against W2) and on Google
(G1 against G2). The first line (W1, W2) clearly
shows that the null hypothesis is rejected for all
four models (NASA-TLX, WP, SWAT, SUS): the two
interfaces used in W1 and W2 generated statistically
different workloads and usability scores. In turn,
this fact might be interpreted, by a web-designer,
THEIMPORTANCEOFHUMANMENTALWORKLOADINWEBDESIGN
407
Table 8: Paired T-test for NASA-TLX and WP.
Task | NASA (T, P, H_0) | WP (T, P, H_0)
W1, W2 | 5.98, <0.001, Rej. | 3.39, 0.003, Rej.
G1, G2 | 0.37, 0.718, Acc. | 0.81, 0.428, Acc.
Table 9: Paired T-test for SWAT and SUS.
Task | SWAT (T, P, H_0) | SUS (T, P, H_0)
W1, W2 | 5.56, <0.001, Rej. | 6.59, <0.001, Rej.
G1, G2 | 0.02, 0.987, Acc. | 7.57, 0.000, Rej.
negatively: the structural changes introduced in
interface W2 negatively affect perceived usability
and have a higher impact on the mental workload
required of end-users. The second line of both tables
(G1, G2) shows that the null hypothesis is accepted
for the three HMW-based tools, underlining no
statistical difference, in workload levels, between
the two interfaces. On the contrary, for the paired
T-test of the two SUS outcomes, the hypothesis is
rejected: the usability of the two interfaces (G1 and
G2) is perceived as statistically different. Considering
the above interpretations, some potential research
questions arise:
- Is SUS (or another usability assessment tool) sufficient for designing usable interfaces?
- Can the application of Human Mental Workload be an alternative or supporting procedure in the field of Interaction and Web Design?
5 CONCLUSIONS
The main aim of this contribution is to show the po-
tential of the construct of Human Mental Workload in
Interaction and Web Design. It has been shown how
assessments of workload can be achieved on web-
based tasks and how they can be applied for evalu-
ating the impact of structural changes on web-sites.
Three subjective psychological techniques for assess-
ing mental workload were described: NASA Task
Load Index, Workload Profile and a simplified Sub-
jective Workload Assessment Technique. An exper-
iment on two popular web-sites’ interfaces was con-
ducted: Wikipedia and Google. Four similar informa-
tion search tasks were performed by 19 volunteers,
two on the original interfaces and two on modified
versions. Results show that, in the tasks performed on
Wikipedia, increments in required mental workload
correspond to decrements in perceived usability and
vice-versa, underlining an inverse correlation. This
ideal case does not occur in the tasks performed on
the two different Google interfaces where, despite a
decrement of perceived usability, the mental workload
remains stationary. This suggests that usability
should not be used in isolation for evaluating
interactive interfaces; instead, an analysis of the
behaviour of end-users on typical web-based tasks, on
a given interface, should also be taken into account.
Our proposal is the application of mental workload as
an evaluation measure. This preliminary evidence
should be supported by further investigations and
applications. Future work will focus on further
experiments on other web-sites, towards a
generally-applicable paradigm for aggregating workload
and usability scores.
ACKNOWLEDGEMENTS
We are grateful to Noce L. and Rusconi F. who con-
tributed to the design and execution of experiments.
REFERENCES
Albers, M. (2011). Tapping as a Measure of Cognitive
Load and Website Usability. Proceedings of the 29th
ACM international conference on Design of commu-
nication, pages 25–32.
Bangor, A., Kortum, P., and Miller, J. (2008). An empirical
evaluation of the System Usability Scale (SUS). In-
ternational Journal of Human-Computer Interaction,
24(6):574-594.
Brook, J. (2008). SUS: A quick and dirty usability scale. In-
ternational Journal of Human-Computer Interaction,
24(6):574-594.
Cain, B. (2007). A Review of the Mental Workload Litera-
ture. Technical Report, Defence Research and Devel-
opment.
Donnell, R. O. and Eggemeier, F. (1998). Modeling mental
workload. Cognitive Technology, 3:9–31.
Gopher, D. and Donchin, E. (1986). Mental Workload Dy-
namics in Adaptive Interface Design. Handbook of
Perception and Human Performance, 2(41):1–49.
Gwizdka, J. (2009a). Assessing Cognitive Load on Web
Search Tasks. Ergonomics Open Journal, 2:114–123.
Gwizdka, J. (2009b). Distribution of Cognitive Load in Web
Search. Journal of the American Society for Informa-
tion Science & Technology, 61(11):2167–2187.
Hancock, P. and Meshkati, N. (1988). Human Mental Work-
load. Elsevier.
Hart, S. G. (2006). Nasa-Task Load Index (Nasa-Tlx); 20
Years Later. Human Factors and Ergonomics Society
Annual Meeting Proceedings, 50(9):904–908.
Kantowitz, B. (1988). Development of Nasa-TLX (Task
Load Index): Results of Empirical and Theoretical
Research. Human Mental Workload, 52:139–183.
Kramer, A. and Sirevaag, E. (1987). A Psychophysiological
Assessment of Operator Workload During Simulated
Flight Missions. Human Factors, 29(2):145–160.
WEBIST2012-8thInternationalConferenceonWebInformationSystemsandTechnologies
408
Luximon, A. and Goonetilleke, R. S. (2001). Simpli-
fied subjective workload assessment technique. Er-
gonomics, 44(3):229–243.
Tracy, J. P. and Albers, M. J. (2006). Measuring Cognitive
Load to Test the Usability of Web Sites. Usability and
Information Design, pages 256–260.
Tsang, P. and Velazquez, V. (1996). Diagnosticity and
multidimensional subjective workload ratings. Er-
gonomics, 39(3):358–381.
Wickens, C. (1987). Information processing, decision mak-
ing, and cognition. Cognitive engineering in the de-
sign of human-computer interaction and expert sys-
tems.
Xie, B. and Salvendy, G. (2000). Prediction of Men-
tal Workload in Single and Multiple Task Envi-
ronments. International Journal of Cognitive Er-
gonomics, 4(3):213–242.
Young, M. and Stanton, N. (2001). Mental Workload: The-
ory, Measurement, and Application. International
Encyclopedia of Ergonomics and Human Factors,
1:507-509.
Zhu, H. and Hou, M. (2009). Restrain mental workload with
roles in hci. In Proceedings of Science and Technology
for Humanity, pages 387 – 392.
THEIMPORTANCEOFHUMANMENTALWORKLOADINWEBDESIGN
409