rationally done if people are fully aware of how their
data is used and can control it. Many have worked
on the problem of closing the privacy perception gap
by educating users, and the strategies for this vary.
One strategy is to make privacy policies easier to
understand. Kelley et al. propose standardized quick-summary charts (“nutrition labels”) for privacy policies, giving users quick access to what personal information is being used (Kelley et al., 2009). The labels went through many iterations, culminating in a grid of colorful symbols indicating which information would always be collected, which information the user could opt out of sharing, which information the user could opt into sharing, and which information would not be collected. The researchers found
subjects were receptive to this display and could more quickly and accurately answer
questions about privacy policies represented in this
fashion. While this is a successful way to educate
users, the authors did not measure how this education affected participants’ desire to use the software behind the
privacy policy.
Another gap-closing strategy is to standardize the
format and language of End User License Agreements
(EULAs) themselves. Kunze asserts that a standard format would balance power between users and developers and provide a fair mechanism for resolving disputes (Kunze, 2008). This work argued that improving EULAs could improve “virtual world” software, since standardization provides both legal and economic benefits. By requiring EULAs to be written in plain language, Kunze asserts, companies would benefit from informed-consent agreements stronger than what is often standard practice (users signing the EULA without reading it). When conventional EULAs that give all the power to the developers are challenged in court, the agreements often fail to hold (Kunze, 2008). By giving more power to the user, these agreements become more reliable, not just easier to understand. By examining both sides of EULAs, Kunze argues that this standardization would help both companies and users by balancing power and making the agreements more binding (Kunze, 2008).
A similar practice of making a unified format
could be used with privacy policies; many have tried
similar approaches, such as icons (Holtz et al., 2011;
?) or short privacy notices (Utz et al., 2019) (cookie
notices often used to comply with GDPR). Hoping
to automatically standardize privacy policies, Hark-
ous et al. developed Polisis (a machine learning-
based privacy policy analyzer) to interpret and present
privacy policies at a higher level more accessible to
users (Harkous et al., 2018).
Other work turns privacy notices and policies into
something users can interact with to learn more about
a site’s privacy practices. The Pribots project (Hark-
ous et al., 2016) attempts to close the privacy per-
ception gap with a virtual entity with which users
may converse to learn about their privacy choices and
settings. Other work suggests that boosting the usability of privacy settings themselves will aid educated users (Lipford et al., 2008; Liu et al., 2011).
In “Noticing notice,” Good et al. used a partic-
ipant’s acceptance of software as an indication that
education was successful (Good et al., 2007). Their
work is similar to this paper’s contributions, but does
not capture the case where users identify a product as beneficial yet still avoid using it. In “Noticing
notice,” participants were asked to read EULAs and
were shown a summary of the EULA before or af-
ter the consent page (depending on which group they were assigned to). When compared with the group that
was not given any summary, Good et al. noted that
those who were pre-briefed with a summary spent
more time installing the software and often declined
agreements. The authors asserted that effective edu-
cation leads to a change in behavior, but it is not ob-
vious that “knowing” leads to “doing”.
In this paper, we further examine the underlying assumption of “Noticing notice” (Good et al., 2007) that education about a site’s data practices will change users’ behavior.
3 DO PEOPLE ACT ON THEIR
EDUCATION?
We hypothesize that educating people about the risks associated with a social networking site has a smaller effect on their behavior than would be expected of rational actors. To limit the scope of this paper, we focus on one social network: Facebook. To break our hypothesis down into testable questions, we split it into three parts.
Question 1. Do people realize that there is an in-
herent risk to using social media?
We expect that they do: many security breaches and controversial policies have been in the news recently, educating people about the risks of using the Internet. This, combined with a broad definition of risk (a possible negative impact on one’s life), suggests that most people will be aware of the risks. Identifying whether people see this inherent risk motivates the other two questions, because people must make non-trivial decisions based on their education.