For example, a user may adopt the following monthly
password change practice, which complies with strict
policies but is obviously insecure:
January: P1a1s1s1@
February: P2a2s2s2@
March: P3a3s3s3@
April: P4a4s4s4@
…
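To see how little this practice resists an informed guess, consider the following Python sketch (the template function is our own hypothetical reconstruction of the pattern above): once an attacker learns a single password in the series, every other month's password follows mechanically.

```python
# A minimal sketch of the (hypothetical) pattern above: the only thing
# that changes each month is the month's number, so learning one
# password reveals the whole year's sequence.

def monthly_password(month: int) -> str:
    """Reproduce the user's template: P<d>a<d>s<d>s<d>@ with d = month number."""
    d = str(month)
    return f"P{d}a{d}s{d}s{d}@"

for month in range(1, 5):
    print(monthly_password(month))  # P1a1s1s1@, P2a2s2s2@, P3a3s3s3@, P4a4s4s4@
```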
Although most CISOs are aware of the above,
they may retain ineffective policies for reasons that
are more “political” than scientific. They might
be afraid of “sending the wrong message that we are
relaxing our security requirements” or of being
blamed in case of a compromise.
The relationship between business politics and
the use and management of information systems has
been studied since the early 1980s (Markus, 1983).
The field of information security management could
also benefit from an interdisciplinary approach that
takes into account the political and power aspects of
information security decision making and behaviour.
The political and power-related perspective could
also illuminate the adoption of similarly ineffective
practices, such as publishing complex policies and
lengthy notifications. CISOs may develop several
policies that span hundreds of pages and send lengthy
emails to employees, although they know that few
people will actually read them. This paradoxical
behaviour could be explained if we examine how
executives in an organization deal with responsibility.
In this case, CISOs may aim to ensure that the
responsibility for compliance lies with the users.
Understanding business politics and power
balances in organizations could also shed light on top
management’s attitude towards information security.
Although top management may acknowledge the
importance of information security, they could be
reluctant to enforce policies that require a large
amount of “political capital” to be spent.
The information systems research community has
used various frameworks to study socio-technical
issues, such as the above. A socio-technical
framework that applies to information security
management is Actor-Network Theory (Latour, 2007;
Tsohou et al., 2015).
Actor-Network Theory (ANT) provides a
theoretical framework for studying how networks of
actors are formed to achieve agreed objectives. ANT
considers both human and non-human (e.g.,
technological) actors that enrol in a network with a
specific objective. In previous research, Tsohou et al.
(2015) have applied ANT to show how information
security executives can achieve the effective
implementation of an information security awareness
programme.
In conclusion, we note that information
security behaviour depends on a complex system of
motives, interests, alliances, and conflicts.
Accordingly, we should enhance our theoretical
models to account for the social, political and power-
related aspects of information security practice.
3 THE PRIVACY PARADOX
The term privacy paradox has been used to refer to
situations where people, although they state that they
value their privacy, disclose their personal
information for very small rewards (Kokolakis,
2017). Addressing the privacy paradox requires a
study of human psychology and behaviour. Several
researchers have focused on how people make
decisions concerning information disclosure
(Acquisti et al., 2015). They noted that human
decision making is biased and, thus, people often fail
to make decisions that are optimal and aligned
with their values and objectives. Some of the biases
that influence privacy decision making are the
following:
Optimism Bias. People systematically tend to
believe that others are at higher risk of experiencing
a negative event than they are themselves (Baek et
al., 2014). As a result, people tend to be immune to
fear appeals: they understand the risks, but remain
optimistic that “it won’t happen to them”.
Affect Heuristic. The affect heuristic refers to a
cognitive shortcut in which current emotion
influences judgements and decisions. Privacy-related
decisions are therefore hard to predict, since they
may depend on the emotion of the moment.
Hyperbolic Time Discounting. Hyperbolic time
discounting refers to the common tendency to
attribute greater importance to present gains or losses
than to future ones. The consequences of information
disclosure may only materialise in the future. Thus,
the immediate gratification of sharing information
may outweigh future privacy risks.
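A standard one-parameter model of hyperbolic discounting (stated here only as background; this specific form is not drawn from the cited sources) values a gain or loss A that occurs after a delay D at

V = A / (1 + kD),

where k > 0 captures how steeply an individual discounts the future: the larger k, the less weight future privacy losses carry at the moment of disclosure.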
If we assume that personal information disclosure
decisions are based on a calculation of the expected
loss of privacy and the potential gains of disclosure,
then we should expect that the above biases would
affect these calculations. Optimism bias would lead
to an undervaluation of the expected loss of privacy,
since people tend to be optimistic that a privacy
violation will not affect them.
Also, due to hyperbolic time discounting,
individuals would underestimate the future
consequences of information disclosure.
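The following minimal Python sketch illustrates how these two biases together can flip the outcome of such a calculation; the model and all parameter values are hypothetical illustrations, not empirical estimates.

```python
# A minimal privacy-calculus sketch. The model and all numbers are
# hypothetical illustrations, not empirical estimates.

def perceived_net_benefit(gain_now: float,
                          expected_loss: float,
                          delay_years: float,
                          optimism: float = 1.0,  # 1.0 = unbiased; < 1 shrinks the perceived loss
                          k: float = 0.0) -> float:  # 0.0 = no hyperbolic discounting
    """Immediate gain minus the perceived, hyperbolically
    discounted future privacy loss: V = A / (1 + k * D)."""
    discounted_loss = expected_loss / (1.0 + k * delay_years)
    return gain_now - optimism * discounted_loss

# Unbiased calculus: a small reward does not outweigh a larger future loss.
print(perceived_net_benefit(gain_now=5, expected_loss=20, delay_years=2))  # -15.0

# Biased calculus: optimism halves the perceived loss and hyperbolic
# discounting (k = 1.5) shrinks it further, so disclosure now "pays off".
print(perceived_net_benefit(5, 20, 2, optimism=0.5, k=1.5))  # 2.5
```

Under the unbiased parameters the expected loss dominates and the rational choice is to withhold the information; with optimism bias and steep discounting, the very same trade-off appears profitable, which is precisely the paradoxical disclosure behaviour described above.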