A Formal Modeling Scheme for Analyzing a Software System Design
against the GDPR
Evangelia Vanezi, Georgia M. Kapitsaki, Dimitrios Kouzapas and Anna Philippou
Department of Computer Science, University of Cyprus, Nicosia, Cyprus
Keywords:
GDPR, Privacy Protection, π-calculus, Static Analysis, Privacy by Design.
Abstract:
Since the EU General Data Protection Regulation (GDPR) came into effect in May 2018, designing software
systems that conform to the GDPR principles has become vital. Modeling languages can be a facilitator for
this process, following the principles of model-driven development. In this paper, we present our work on the
usage of a π-calculus-based language for modeling and reasoning about the GDPR provisions of 1) lawfulness
of processing by providing consent, 2) consent withdrawal, and 3) right to erasure. A static analysis method
based on type checking is proposed to validate that a model conforms to associated privacy requirements. This
is the first step towards a rigorous Privacy-By-Design methodology for analyzing and validating a software
system model against the GDPR. A use case is presented to discuss and illustrate the framework.
1 INTRODUCTION
Privacy protection is now more crucial than ever, and
this is reflected in the new EU General Data Protec-
tion Regulation (GDPR) (European Parliament and
Council of the European Union, 2015). The GDPR
has been applied since May 25th, 2018 not only to or-
ganizations established in the EU, but also to non-EU
established organizations that process personal data¹
of individuals who are located in the EU. The GDPR
defines multiple principles and rights for EU residents
in regard to personal data collection, storage and pro-
cessing. All existing systems must be reviewed and
adapt their processes accordingly, and new systems
must be built and function in a way that ensures their
GDPR compliance.
As this change is recent, there have been only a few
attempts to understand the GDPR, mainly from the
legal perspective, and even fewer discussing mech-
anisms to support GDPR compliance (Gjermundrød
et al., 2016; Hintze and LaFever, 2017). In the con-
text of this work, our aim is to discuss the GDPR with
regard to data collection and processing as this need
arises in software systems, and to propose a rigor-
ous methodology for embedding GDPR compliance
into a software engineering development methodol-
ogy, specifically into the design and modeling phase,
¹ As defined in GDPR Article 4(1), personal data is any information relating to an identified or identifiable natural person ('data subject').
using a π-calculus (Milner et al., 1992) based for-
mal modeling scheme exploiting the Privacy calcu-
lus (Kouzapas and Philippou, 2017). Additionally,
we present an extension to the Privacy calculus in-
tegrating the notion of granting and withdrawing con-
sent. The long-term objective of our work is to
provide a rigorous framework for developing GDPR-
compliant systems that can support the entire soft-
ware development methodology, thus pursuing con-
formance to Privacy by Design (PbD) or, as referred
to in the GDPR, Data Protection By Design (Rubin-
stein, 2011).
Specifically, in this paper we describe the integra-
tion of the following principles of the GDPR within a
formal framework for reasoning about privacy-related
properties: lawfulness of processing by providing
consent, consent withdrawal, and right to erasure. In
particular, we build on the Privacy calculus, a pro-
cess calculus based on the π-calculus with groups
extended with constructs for reasoning about private
data (Cardelli et al., 2000). We extend the Privacy
Calculus syntax and semantics, in order to include
the aforementioned GDPR principles. Furthermore,
we associate the calculus with a type system that
validates the proper processing of data inside a sys-
tem and we prove that a well-typed system satisfies
the above mentioned GDPR requirements. We argue
that the resulting framework can be used for provid-
ing models of software systems while supporting the
static analysis of GDPR compliance during the design
Vanezi, E., Kapitsaki, G., Kouzapas, D. and Philippou, A.
A Formal Modeling Scheme for Analyzing a Software System Design against the GDPR.
DOI: 10.5220/0007722900680079
In Proceedings of the 14th International Conference on Evaluation of Novel Approaches to Software Engineering (ENASE 2019), pages 68-79
ISBN: 978-989-758-375-9
Copyright © 2019 by SCITEPRESS Science and Technology Publications, Lda. All rights reserved
phase. Furthermore, we discuss the possibility for fu-
ture additions to the framework, incorporating more
GDPR provisions, in order to support an almost fully
compliant software model design and verification.
We consider the type system accompanying our
framework to be a promising dimension of our ap-
proach, since it enhances the methodology with the
ability to statically validate models against formally-
defined privacy policies. This approach is comple-
mentary to verification by model-checking and has
the advantage that it does not require the construction
of a system’s state space, which may suffer from the
state-space explosion problem. Moreover, this dimension
of our proposal can be transferred to the development
phase of software systems through proper tools to-
wards the generation of GDPR-compliant code. Thus,
as future work we envision the development of the
framework into programming semantics and analy-
sis tools for supporting the software engineering de-
velopment phase and the construction of privacy-
respecting code following the Privacy by Design prin-
ciple.
The remainder of the paper is structured as fol-
lows. In Section 2, we discuss some background
work: we present the Principle of Data Protection by
Design and the use of modeling languages during the
design phase of the software development cycle. We
then provide a quick overview of the π-calculus and
the Privacy calculus. In Section 3, we introduce the
Privacy calculus as a language to model a software
system handling private data, and in Section 4 we pro-
pose an extension to the Privacy calculus that incorpo-
rates the GDPR principles of lawfulness of processing
by providing consent, consent withdrawal and right to
erasure into the software’s model. Section 5 presents
this framework formally and it develops a type system
for guaranteeing compliance to privacy-related prop-
erties pertaining to the GDPR principles under inves-
tigation. Finally, Section 6 concludes the paper and
discusses possible directions for future work. Due to
lack of space, proofs of results are omitted. Furthermore,
for a complete exposition of the Privacy calculus the
reader is referred to (Kouzapas and Philippou, 2017).
2 BACKGROUND AND RELATED
WORK
2.1 Data Protection by Design
Privacy by Design was introduced by the Dutch Data
Protection Authority (Data Protection and Privacy
Commissioners, 2010) and the former Information
and Privacy Commissioner of Ontario, Canada, Ann
Cavoukian (Cavoukian, 2008), whose joint project
resulted in a published report (Hes and Borking, 1998),
as explained in (Hustinx, 2010). It advocates
that privacy should be incorporated into systems
by default and should be a priority from the beginning
of a system’s design. Similarly, the GDPR (European
Parliament and Council of the European Union, 2015)
defines Data Protection by Design in Article 25(1) as
“implementing the appropriate technical and organi-
zational measures in order to meet the requirements
of the Regulation and protect the rights of data sub-
jects”. Consequently, the obligation of software de-
velopers to build systems designed to be compliant to
the GDPR becomes apparent.
By its definition, Data Protection by Design needs
no specific distinct analysis, handling and mecha-
nisms in order to be fulfilled. If all principles and
rights are correctly embedded in a system’s specifi-
cations model and are transferred into the succeeding
implementation, then Data Protection by Design will
also be guaranteed. Moreover, Data Protection by
Design advocates the proactive consideration of privacy
issues: ensuring a software system’s compliance should
take place from the design phase, and possible pitfalls
should be anticipated and prevented before they can
materialise, rather than remedied on a reactive basis.
In the literature, different approaches to PbD have
been proposed. PriS is a security requirements engi-
neering method that integrates privacy requirements
in the early stages of the system development pro-
cess (Kalloniatis et al., 2008). It focuses on the impact
of privacy requirements onto the organizational pro-
cesses and on suggesting implementation techniques
for realizing these requirements. privacyTracker con-
siders the GDPR and proposes a framework that sup-
ports some of its basic principles including data trace-
ability and allowing a user to get a cryptographically
verifiable snapshot of her data trail (Gjermundrød
et al., 2016). PbD for Internet of Things and relevant
steps that should be followed for assessing applica-
tions is discussed in (Perera et al., 2016).
2.2 Software Design and Modeling
During the design phase, a software system model is
usually developed capturing the requirements speci-
fied during the requirements analysis step. Such a
model presents a view of the software at a suitable
level of abstraction by visualizing, specifying and
documenting all artifacts and aspects of software sys-
tems. To this effect, modeling languages have been
developed and are employed for providing system
models, a notable example being the Unified Mod-
eling Language (UML) (Fowler, 2004). Extensions
to UML have been proposed, such as UMLsec that
allows security-relevant information to be expressed
in a system specification (Jürjens, 2002), the Privacy-aware
Context Profile (PCP) that can be exploited in
context-aware applications (Kapitsaki and Venieris,
2008) and the Privacy UML profile that can be used
to capture the privacy policies of a software sys-
tem (Basso et al., 2015). These models can be sub-
sequently used to test whether the design meets the
system specifications, but also to produce the sys-
tem source code skeleton following the paradigm of
Model Driven Engineering (MDE) (Schmidt, 2006;
Kapitsaki et al., 2008). A previous work using mod-
eling to analyze the system design and propose secu-
rity and privacy controls that can improve it can be
found in (Ahmadian et al., 2018). In this framework,
a privacy impact assessment (PIA) methodology is in-
troduced. Other works rely on ontologies and model-
ing in the Web Ontology Language (OWL) or the Re-
source Description Framework (RDF), such as Linked
USDL Privacy (Kapitsaki et al., 2018). However, due
to their semi-formal nature and lack of precise semantics,
modeling languages such as the above do not support
the formal verification of property satisfaction on
system models.
Formal methods aim to address this challenge by
providing languages and tools for the rigorous speci-
fication, development and verification of systems. In
this paper, we focus on one such formalism, namely
the π-calculus and we argue that it can provide the
basis of a modeling language for software systems
that may additionally support verification, including
the validation of conformance to privacy requirements
from the design phase of a software system therefore
fulfilling the Data Protection by Design GDPR re-
quirement. The π-calculus has been used for formal-
izing aspects of the UML modeling language (Lam,
2008), as well as for transforming the basic work-
flow patterns of Business Process Model and Nota-
tion (BPMN) into equivalent π-calculus code (Bous-
setoua et al., 2015). It has also formed the theo-
retical basis of the Business Process Modeling Lan-
guage (BPML) (Havey, 2005) and of Microsoft’s
XLANG (Thatte, 2001).
2.3 Modeling Software in a π-calculus
based Language
The π-calculus is a simple, concise but very expres-
sive process calculus that can describe concurrent pro-
cesses whose configuration may evolve during com-
putation, such as mobile and web applications (Mil-
Table 1: Privacy Calculus Basic Syntax.

Functionality           π-Calculus Term
Concurrency             P | Q                           (1)
Input                   c?(x).P                         (2)
Output                  c!⟨v⟩.P                         (3)
Restriction             (ν n)P                          (4)
Replication             !P                              (5)
Matching                if e then P else Q              (6)
Nil (stopped) process   0                               (7)
Store                   r [ι ⊗ δ]                       (8)
Systems                 G[P] | G[S] | S || S | (ν n)S   (9)
ner et al., 1992). It is associated with formal se-
mantics which precisely define the behavior of a sys-
tem, based on which dynamic (e.g. model checking)
and static (e.g. type checking) analyses of systems
properties can be performed (Sangiorgi and Walker,
2003). A software system model expressed in a π-
calculus based language will include representations
of all components and entities of the system as pro-
cesses that are able to interact and communicate with
each other through communication channels. Channels
can be dynamically created and even sent from
one process to another, creating new communication
links between the processes, thus enabling the dy-
namic evolution of a system.
The core functionalities provided by the basic π-
calculus syntax are summarized in Table 1, lines (1)-
(7). Concurrent composition P | Q allows the parallel
execution of processes, i.e. system entities. Simi-
larly to software systems, concurrent processes may
proceed independently, each executing its own code,
while communicating with each other on shared chan-
nels referred to as names. In its basic form, the π-
calculus supports synchronous communication which
occurs through input and output of messages on chan-
nels. The input term, c?(x).P, describes a process
waiting to receive a message on channel c, result-
ing in variable substitution storing the message in
x, before proceeding as P. The output construct,
c!hvi.P, describes a process sending v as a message
on channel c and then proceeds as P. Restriction,
or new-name creation, (ν n)P, allows for the allo-
cation of a new name n inside the scope of process
P. Replication, P, allows the creation of multiple
copies of process P. Matching or conditional con-
struct, if e then P else Q, can evolve as P if the
condition e is evaluated as true, or as Q otherwise. Fi-
nally, the terminated process, 0, defines a process that
has no interactions.
The operational semantics of the π-calculus consists
of a set of rules that describe the behavior of
each of the above constructs, thus making it possible
to construct the possible executions of a defined system. For
instance, in the case of the parallel composition con-
struct, it specifies that whenever two processes are
able to send/receive a message respectively on the
same channel, then computation proceeds by the ex-
change of the message in a synchronized step after
which the processes will evolve to their subsequent
state. As an example, consider the following com-
munication implementing the transmission of value
42 on channel a between two concurrent processes,
where −→ captures the evolution and the label τ
specifies that the transition is an internal communication:

a!⟨42⟩.P | a?(x).Q  −τ→  P | Q{42/x}

Note that, on the receiving side, after the interaction
the value 42 is substituted for variable x in the process
continuation, as described by the notation Q{42/x}.
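To make this rendezvous discipline concrete, here is a minimal sketch (our own illustration, not part of the calculus) of a synchronous π-calculus-style channel in Python: `send` blocks until a matching `receive` takes the value, mirroring the communication step above.

```python
# A toy synchronous channel: a!<42>.P | a?(x).Q  --tau-->  P | Q{42/x}.
# The sender blocks until the receiver has actually taken the value,
# approximating the synchronized step of the pi-calculus semantics.
import queue
import threading

class Channel:
    """A pi-calculus-style channel: send blocks until a receiver takes the value."""
    def __init__(self):
        self._data = queue.Queue(maxsize=1)
        self._ack = queue.Queue(maxsize=1)

    def send(self, value):          # models a!<value>
        self._data.put(value)
        self._ack.get()             # wait until the receiver has the value

    def receive(self):              # models a?(x)
        value = self._data.get()
        self._ack.put(None)         # release the blocked sender
        return value

def demo():
    a = Channel()
    result = []

    def sender():                   # process a!<42>.P
        a.send(42)                  # the continuation P would run here

    def receiver():                 # process a?(x).Q
        x = a.receive()             # Q continues with 42 substituted for x
        result.append(x)

    t1 = threading.Thread(target=sender)
    t2 = threading.Thread(target=receiver)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return result[0]
```

Running `demo()` performs exactly one internal communication and yields the transmitted value.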
2.4 The Privacy Calculus
Many different extensions of the π-calculus have been
considered in the literature. Among these works, nu-
merous studies have focused on security and privacy
properties. One such extension created with the pur-
pose of describing and analyzing cryptographic pro-
tocols is the spi-calculus (Abadi and Gordon, 1997),
whereas in (Cardelli et al., 2000) the π-calculus is ex-
tended with the notion of groups as types for chan-
nels which are used to statically prohibit the leakage
of secrets. Type systems have also been employed
in process calculi to reason about access control that
is closely related to privacy. For instance, the work
on the Dπ-calculus has introduced sophisticated type
systems for controlling the access to resources ad-
vertised at different locations (Hennessy et al., 2005;
Hennessy and Riely, 2002; Hennessy, 2007). Further-
more, discretionary access control has been consid-
ered in (Bugliesi et al., 2009) which similarly to our
work employs the π-calculus with groups, while role-
based access control (RBAC) has been considered
in (Braghin et al., 2006; Dezani-Ciancaglini et al.,
2010; Compagnoni et al., 2008). In addition, autho-
rization policies and their analysis via type checking
has been considered in a number of papers includ-
ing (Fournet et al., 2007; Backes et al., 2008; Bengt-
son et al., 2011).
In this paper, we extend the work of (Kouzapas
and Philippou, 2017), where the authors propose a
formal framework, the Privacy calculus, for captur-
ing privacy requirements. This framework is based
on (Cardelli et al., 2000). Specifically, as shown in
Table 1, lines 8 and 9, the Privacy calculus introduces
the notion of personal data, id ⊗ c, which associates an
identity, id, with a value, c. It also extends process terms
with stores of private data, r [id ⊗ c], which are processes
that store and provide access to private data.
These stores are accessed through names, r, called
references. Processes are organized in groups, G[P],
which associate each entity P with an identifier/role G
in the system.
As such, personal data as defined in the GDPR can
be represented in the Privacy calculus by associating
each piece of plain data with an identifier value, id ⊗ c.
The Privacy calculus also enables a form in which the
identity component is hidden, associating data with an
anonymized identity. The collection of stores in a system
can be thought of as a database table, managed by a
database-manager process. Other processes can hold a
reference to a store in order to access, read, and process
the data. The type system ensures that data manipulation
conforms to a system policy, which specifies how each
process/role in the system may process private data.
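The store-and-policy discipline just described can be illustrated with a small toy (our own construction; the group names and the policy table are hypothetical, and the calculus enforces this statically rather than at run time as done here):

```python
# A toy store holding a pair id (x) value, accessed through a reference,
# with a policy table stating which groups (roles) may read private data.
class Store:
    def __init__(self, owner_id, value=None):
        self.owner_id = owner_id    # the identity component of id (x) value
        self.value = value          # the data component

# Hypothetical policy: only the notification role may read the data.
POLICY = {"NotificationService": {"read"},
          "InterfaceService": set()}

def read(store, group):
    """Return (id, value) if the group's role permits reading."""
    if "read" not in POLICY.get(group, set()):
        raise PermissionError(f"group {group} may not read private data")
    return store.owner_id, store.value
```

For example, `read(store, "NotificationService")` succeeds, while the same call for `"InterfaceService"` raises a `PermissionError`, mimicking a policy violation that the type system would reject at design time.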
Furthermore, the Privacy calculus provisions al-
low the analysis of privacy-related requirements. To
begin with, the notion of a group ensures secrecy.
This is achieved by associating channels with groups,
which are types for channels. No channel of a cer-
tain group can be communicated to processes outside
the scope of the group, as in (Cardelli et al., 2000).
This task can be checked statically. The Privacy cal-
culus goes a step further by associating processes with
groups, and uses the group memberships of processes
to distinguish their roles within systems. The frame-
work is accompanied by a type system for capturing
privacy-related notions and a privacy language for expressing
privacy policies. The policies make it possible to specify
how the various agents, who are referred to by
their groups, or roles, may or may not handle the
system’s sensitive data. In the current work we will not
be defining privacy policies in contrast to (Kouzapas
and Philippou, 2017). Instead, the proposed valida-
tion by type checking will specifically analyze con-
formance to the specific GDPR principles and rights
under discussion.
3 A SOFTWARE SYSTEM
SPECIFICATION MODEL CASE
In order to present the rationale of our approach, as-
sume that we want to build a part of an online banking
system with the following specifications:
1. The system waits to receive the following data
from the user: her phone number.
2. The user’s phone number is classified as personal
data.
3. The system receives and saves the data provided
by the user to the system database.
4. When a transaction occurs, a notification system
process reads the user’s phone number from the
database and sends a text message to the number.
The system’s functionality is then terminated.
These specifications can be modeled in the Privacy
calculus as follows. A user component will provide
a private data input. The system entities will receive
the input, manage the input’s storage, read the data
stored, and manipulate them. Initial communication
between the user and the system is realized through
channel c. Groups will be used in order to define the
distinct system entities, i.e. the interacting subsys-
tems expressing different functionality. A user entity
process, U, of group User, creates the private session
communication channel a, and shares it with the sys-
tem. Next, she waits to receive on channel a a store
reference and, when received, she writes her phone
number on it.
The functionalities of writing and reading data
directly to or from the store are considered black
boxes. This is a typical practice during the modeling
phase, when some aspects of a system are abstracted
and only become concrete during the implementation
phase. As such, the abstractions of writing and read-
ing directly from the store will be implemented during
the system development, for instance, by calling a re-
spective RESTful service that offers an interface for
communicating with the system data store. In addi-
tion, these abstractions enable the Right of Access of
the data subject to her private data as required by the
GDPR. The behavior of the user can thus be modeled
as follows:
U ::= (ν a)(User[c!⟨a⟩.a?(x).x!⟨id ⊗ phone⟩.P₀])
The system entity DBase of group DBaseService
is responsible for holding and managing the data stor-
age, i.e. the user’s store. For the purpose of the ex-
ample we will assume that a store already exists for
the specific user, filled with the user identifier and a
placeholder for the phone number:
DBase ::= DBaseService[r [id ⊗ _]]
Another system entity, S₁, of group
InterfaceService, already considered to be holding
a reference to the store, waits for an input from
the user on the public communication channel c to
be used for establishing a private communication
session between the two entities. It then proceeds
to send the store reference to the user through that
channel:

S₁ ::= InterfaceService[c?(x).x!⟨r⟩.0]
Finally, the system entity S₂, of group
NotificationService, also considered to be holding
the reference to the user’s store, takes as input
from the store the respective data, and uses it to send
a text notification to that phone number. This process
will be triggered when a new transaction occurs and
can be modeled as follows:

S₂ ::= NotificationService[r?(x ⊗ y).y!⟨notification⟩.0]
The whole system is defined as the parallel composition
of the above processes, comprising a system of group Sys:

System ::= Sys[U | DBase | S₁ | S₂]
To further understand how the Privacy calculus
models interact, we proceed to provide an execution
of the system.
System
  −τ→ Sys[(ν a)(User[a?(x).x!⟨id ⊗ phone⟩.P₀] | InterfaceService[a!⟨r⟩.0]) | DBase | S₂]
  −τ→ Sys[(ν a)(User[r!⟨id ⊗ phone⟩.P₀] | InterfaceService[0]) | DBase | S₂]
  −τ→ Sys[(ν a)(User[P₀] | InterfaceService[0]) | DBaseService[r [id ⊗ phone]] | S₂]
  −τ→ Sys[(ν a)(User[P₀] | InterfaceService[0]) | DBaseService[r [id ⊗ phone]]
        | NotificationService[phone!⟨notification⟩.0]]
The first step captures the synchronization between
the user process and S₁, where the former sends
the fresh channel a over the public channel c, resulting
in a substitution of variable x by name a within S₁.
In the second step, S₁ sends the store reference r
to the user process, with reference r substituting
variable x in the user process and subsequently being
used by the user to update her private information on
the store managed by the DBaseService. After the
interaction we can observe that the previously empty
store now contains the phone number of the user.
Finally, process S₂ receives the phone number of the
user from the database and is ready to send a
notification to the user via her phone number.
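The execution just described can be mimicked by a small runnable sketch (our own abstraction; the phone number, the representation of the store reference as a callable, and the channel mechanics are illustrative placeholders, not the calculus semantics):

```python
# U sends the fresh channel a on c; S1 replies with the store reference r
# on a; U writes id (x) phone into the store; S2 reads it and "sends" the
# notification. Queues stand in for channels, a dict for the store.
import queue
import threading
import time

def run_system():
    c = queue.Queue()               # public channel c
    store = {}                      # the store, initially empty (placeholder)
    events = []

    def user():                     # U = (nu a)(User[c!<a>.a?(x).x!<id (x) phone>])
        a = queue.Queue()           # fresh private channel a
        c.put(a)                    # c!<a>
        r = a.get()                 # a?(x): receive the store reference
        r("id-42", "555-0100")      # x!<id (x) phone>: write to the store

    def s1():                       # S1 = InterfaceService[c?(x).x!<r>.0]
        a = c.get()                 # c?(x)
        a.put(lambda i, v: store.update({i: v}))   # x!<r>: send reference r

    def s2():                       # S2 = NotificationService[r?(x (x) y).y!<notification>.0]
        while not store:            # r?(x (x) y): wait until the store holds data
            time.sleep(0.001)
        (ident, phone), = store.items()
        events.append(f"notification sent to {phone}")   # y!<notification>

    threads = [threading.Thread(target=f) for f in (user, s1, s2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return events
```

Each τ-step of the trace corresponds to one queue hand-off here; the final state, with the notification dispatched to the stored phone number, matches the last configuration above.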
4 GDPR AND THE PRIVACY
CALCULUS
The GDPR defines multiple principles and rights re-
garding personal data collection, storage, and pro-
cessing. While these principles were mostly wel-
comed, various challenges have been recognized in
terms of effectively integrating the new requirements
into computing infrastructures. Furthermore, a ques-
tion that arises is how to verify that a system adheres
to privacy requirements as enunciated by the GDPR.
In this work, we address this challenge by providing
an initial step towards a formal framework for rea-
soning about privacy-related concepts. On the one
hand, such a framework would provide solid foun-
dations towards understanding the notion of privacy
while, on the other hand, it would provide the basis
for system modeling and analysis tools for privacy-
respecting code. Beyond proposing the incorporation
of such a framework into a software engineering
methodology, we concentrate on GDPR Article 6,
Lawfulness of Processing, and the requirements de-
fined therein regarding consent, Article 7(3), Consent
Withdrawal and Article 17, The Right to Erasure. We
then evaluate the Privacy calculus as a formalism to
reason about the GDPR, we discuss necessary addi-
tions to address the above articles and we demonstrate
them with the use of examples. In the next section we
formally define these extensions and present a static
methodology for assessing the conformance of a sys-
tem to these requirements.
4.1 Storing and Processing Personal
Data
We recall that according to the first principle of Ar-
ticle 5 (“Lawfulness, fairness and transparency”),
Article 6(1) (“Lawfulness of Processing”), and
Recital 40 (“Lawfulness of data processing”), private
data may be collected, stored, and processed by
a system, only if the users (data owners) have given
their consent for the intended collection and process-
ing. The consent should be given explicitly, freely and
willingly by the user, based on one or more specific
purposes stated and described in the privacy policy.
In order to reason about the above concepts, a for-
malism should contain the notion of private data as
a basic entity. This is already the case in the Privacy
calculus, where private data are considered to be data
structures representing pieces of information related
to individuals along with the identities of the associ-
ated individuals. Furthermore, the formalism should
provide a clear definition of the notion of consent and
ensure that consent is given before any storage or pro-
cessing of the data is performed. Naturally, this con-
sent should be given by the owner of the data to be
processed.
In order to enable the above, we propose the following
extension to the Privacy calculus. Initially, the
calculus should enable the provision (and withdrawal)
of consent, whereas the notion of a user’s identifier
should become a first-class entity to allow the verifi-
cation that a consent (or withdrawal of consent) for
a piece of private data is indeed given by the appro-
priate party. To achieve this, we extend the Privacy
calculus with (i) additional constructs for providing
consent, receiving consent and creating an associated
store upon the receipt, and withdrawing consent and
emptying the associated store, and (ii) the introduc-
tion of special identifiers and their association with
system processes. The implementation of the con-
sent construct should ensure that provision of consent
should precede any storage of personal data in a store,
except when the system is handling anonymized data.
No processing of private data will be enabled before
the consent of the user. As such, we associate the
grant of consent by a user with the creation of a store
within the software. Once consent is obtained, the
data is saved within the store and its reference can be
communicated to other entities in the system, which
may in turn process the data as needed by the software
specification and specified by its privacy policy. Fur-
thermore, the store reference is shared with the con-
senting entity, i.e. the user, giving the user direct ac-
cess to the data associated with her within the system
at any point in time.
To ensure the association between a process entity
and its associated data, a user process is annotated by
a unique identifier value, {P}_id, where id is embedded
within all communications requiring the matching be-
tween a data owner and their data. Note that while
the actions of providing consent and writing personal
data to the store are implemented by single constructs
within the calculus, they should be thought of as
black boxes including functionality that will be im-
plemented in detail at coding level. We point out that
this abstraction will need to be appropriately imple-
mented for safeguarding identifiers and channel com-
munication from potential hostile entities, possibly by
embedding a communication protection mechanism,
such as a public-private key cryptography scheme, as
in (Abadi and Gordon, 1997). We illustrate the above
concepts by extending the example of Section 3.
Example 4.1. We extend the model case of Section 3
with the following additions:
1. The user should provide consent for the storage
and usage of her phone number by the system
prior to any storage and processing of the phone
number by the system.
2. No store will be associated with the user within
the system before the user explicitly provides her
consent.
To model these additions, we extend the descrip-
tion of the various entities as follows.
User entity process U is annotated by its identifier.
Furthermore, once a private session is achieved with
the database manager it provides its consent via chan-
nel a while simultaneously receiving a reference to
its newly-created store. This is achieved via construct
a consent(x). As in the previous version of the ex-
ample, the user proceeds to write her data to the store.
This is modeled as follows:
{U}_id ::= (ν a)(User[c!⟨a⟩.a consent(x).x!⟨id ⊗ phone⟩.0])
Moving on to the process of receiving and storing
the private data of the user by the software system,
in the proposed extension a new entity S₁ of
group DBInterfaceService is defined, combining the
database and the interface service. This process
dynamically creates a new reference for a store and
puts it at the disposal of the user as soon as she gives
her consent (the actual store is not present in
the definition below, since it will be dynamically
created once consent is given, as implemented by the
semantics of our calculus presented in the next section).
The fresh store reference also needs to be communicated
to the notification service, which is the purpose
of the communication on channel d below. This is
modeled as follows:

S₁ ::= DBInterfaceService[c?(y).(ν r)(y consent(r).d!⟨r⟩.0)]
System process S₂ receives the reference to the
store and then continues to perform its task as in the
previous example:

S₂ ::= NotificationService[d?(p).p?(x ⊗ y).y!⟨notification⟩.0]
The system definition as a whole is comprised of
the parallel composition of the three processes:

System ::= Sys[{U}_id | S₁ | S₂]
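A minimal sketch (our own, not the calculus semantics) of the consent-gated storage described in this example: a store for a user exists only after that user grants consent, and any attempt to store data without consent, or on behalf of another identity, is rejected. Class and method names are hypothetical.

```python
# Consent-gated storage: no store exists, and no personal data may be
# written, before the data owner has explicitly granted consent.
class ConsentError(Exception):
    pass

class DBInterface:
    def __init__(self):
        self.stores = {}            # user id -> store contents

    def grant_consent(self, user_id):
        """Models y consent(r): create a fresh store, return its reference."""
        self.stores[user_id] = None          # fresh, empty store
        return user_id                       # the reference r (here: a key)

    def write(self, ref, user_id, value):
        """Models x!<id (x) phone>: storage is lawful only after consent."""
        if ref not in self.stores:
            raise ConsentError("no consent given: storage is not lawful")
        if ref != user_id:
            raise ConsentError("consent must come from the data owner")
        self.stores[ref] = value
```

Writing before `grant_consent` raises `ConsentError`; after consent, the write succeeds, which is the discipline the extended type system is meant to enforce statically rather than through run-time checks.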
4.2 Withdrawing Consent and
Forgetting Personal Data
The Right to Erasure is stated in Article 17 of the
GDPR, defining the right of a user to have her personal
data erased from a system without undue delay
if one of several stated grounds applies, including
consent withdrawal. As defined in GDPR Article 7(3):
“the data subject shall have the right to withdraw his
or her consent at any time”. In this case, the individual’s
personal data should be deleted or become anonymized
in such a way that the data subject can no longer be
identified by them.
In this work, we examine consent withdrawal as the cause for erasure enforcement, though the machinery can be easily applied to other erasure causes. In this case, we may assume that consent withdrawal originates directly from the user process and, given such a directive, the software is obliged to forget the user's data. Thus, in the Privacy calculus, when a deletion instruction is given, the associated store is emptied, so that its reference leads to no data. Subsequently, no data will be available to any entity possessing a reference to the store and attempting to read from or write to it. This functionality is implemented by appropriate rules added to the Privacy calculus semantics, specifying an abstraction of GDPR-compliant behavior to be appropriately instantiated during implementation.
Example 4.2. Let us consider Example 4.1, where a user provided consent and sent her phone number to a system. To implement the Right to Erasure, let us assume that the user may withdraw her consent at a later stage. As such, the example must be expanded with the following specifications:
1. The user should be able to withdraw her consent.
2. When a user's consent is withdrawn, all her personal data should be deleted.
3. No further storage or processing of the specific user's personal data is allowed unless the user provides her consent again.
A user process {U}_id as in Example 4.1 who withdraws her consent at some point after submitting her data can be modeled as follows:
{U}_id ::= (ν a)(User[c!⟨a⟩.a consent(x).x!⟨id ⊗ phone⟩.x withdraw.0])
The semantics of the consent withdrawal instruction ensure that it will synchronize with the associated store, resulting in the automatic deletion of all content within the store and rendering all attempts to process the store futile (again through appropriate enunciation of the semantic rules for store processing). Note that system entities S_1 and S_2 require no change in their definitions. Furthermore, the entire system's definition remains unaltered.
5 THE EXTENDED PRIVACY
CALCULUS
In this section we formally describe the Extended Privacy calculus and its operational semantics, and we propose a type-checking methodology for ensuring
ENASE 2019 - 14th International Conference on Evaluation of Novel Approaches to Software Engineering
Table 2: Extended Syntax of the Privacy Calculus.
Name              Definition
(identity values) ι ::= id | ⊥ | x
(data values)     δ ::= c | ∗ | x
(private data)    ι ⊗ δ  where ι ≠ x ⟹ δ = c and ι = x ⟹ δ = y
(identifiers)     u ::= a | r | x
(terms)           t ::= a | r | ι ⊗ δ | δ | x
(constants)       v ::= a | r | id ⊗ c | c
(placeholders)    k ::= x | x ⊗ y | ⊗x
(processes)       P ::= 0 | u!⟨t⟩.P | u?(k).P | (ν n)P
                    | P|P | !P | if e then P else P
                    | r[ι ⊗ δ] | u consent(x).P
                    | u consent(r).P | u withdraw.P
(systems)         S ::= G[P] | G[{P}_id] | G[S] | S || S | (ν n)S
that well-typed processes respect the GDPR requirements pertaining to consent, as discussed in the previous section. Due to lack of space we only present the extensions we propose to the Privacy calculus and refer the reader to (Kouzapas and Philippou, 2017) for its complete exposition.
5.1 Formal Definition
The extended syntax of the Privacy calculus proposed for our modeling scheme can be found in Table 2. The syntax first defines the values used. Assume a set of constants c ∈ C and a set of variables x, y ∈ V. A special case of values, denoted by ι, refers to identities: ι ranges over (i) identities, id, (ii) the hidden identity, ⊥, denoting anonymized data, and (iii) variables, x. Data values, ranged over by δ, include constants, c, the empty constant, ∗, and variables. Private data ι ⊗ δ associate identities id with data c. Variable placeholders for private data are written as x ⊗ y.
The set of identifiers used in the definition of the Privacy calculus is defined next. Assume a set of names n ∈ N, partitioned into channel names, a, and store references, r. We use meta-variable u to range over names or variables. Values, ranged over by v, include names, private data, or data; placeholders, ranged over by k, include the variable structures of the calculus; and terms, ranged over by t, include both values and placeholders.
The syntax of processes follows. It includes the Privacy calculus constructs together with constructs for creating private data stores upon consent and emptying private data stores upon consent withdrawal. Term u consent(x).P denotes a process that gives consent for the creation of a new store via channel u. The dual operation is term u consent(r).P, where the process receives a consent request on channel u and creates a new store on reference r. During such an interaction the consenting process (u consent(x).P) receives the new store reference on variable x. Term u withdraw.P denotes a process ready to withdraw consent on a store reference u.
The syntax of systems is also extended with the construct G[{P}_id], which denotes a system running a process endowed with an identity. Processes with identities are used to abstract the individual users within a system whose private data are being processed.
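The new syntactic constructs can be made concrete as a small abstract syntax tree. This is an illustrative representation only, covering just the extensions (consent, withdrawal, identity); the class and field names are assumptions.

```python
from dataclasses import dataclass

# Illustrative AST for the new constructs of the extended syntax.
@dataclass
class Consent:        # u consent(x).P — give consent, bind the store reference to x
    u: str; x: str; cont: object

@dataclass
class ConsentStore:   # u consent(r).P — accept a consent request, create store r
    u: str; r: str; cont: object

@dataclass
class Withdraw:       # u withdraw.P — withdraw consent on store reference u
    u: str; cont: object

@dataclass
class WithId:         # {P}_id — a process endowed with an identity
    ident: str; proc: object

# The user of Example 4.1, abstracted: give consent on channel a, then continue.
user = WithId("id42", Consent("a", "x", None))
```

A syntax-directed type checker, as sketched in Section 5.2, would dispatch on these node classes.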
The computation steps of the Privacy calculus are defined via a labeled transition relation. Specifically, we write S --ℓ--> S′ to denote that a system S can take a step and evolve into system S′ by executing the action indicated by label ℓ. Whenever two parallel processes/systems can execute two dual actions, they synchronize with each other and their composition executes an internal computation step, indicated by τ, as follows:
S_1 --ℓ_1--> S_1′    S_2 --ℓ_2--> S_2′    ℓ_1 dual ℓ_2
------------------------------------------------------
             S_1 | S_2 --τ--> S_1′ | S_2′
The labels of the Privacy calculus are extended with the labels:

a consent(r)    a consent(r)@id    a̅ consent(r)@id
r withdraw      r withdraw@id      r̅ withdraw@id

where

a consent(r)@id dual a̅ consent(r)@id
r withdraw@id dual r̅ withdraw@id

Label a consent(r) denotes the basic action of providing consent on channel a and receiving a store reference r, whereas label a consent(r)@id is the same action lifted to a user identity, id. Dually, action a̅ consent(r)@id is the acceptance of consent on channel a and the creation of a new store with reference r and identity id. The withdraw labels are: r withdraw, denoting a withdrawal on reference r; r withdraw@id, lifting a withdrawal to the user level; and r̅ withdraw@id, denoting the receipt of a withdrawal on reference r with identity id.
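The duality of labels, which drives synchronization into a τ step, can be sketched as follows. The `co-` prefix stands in for the overline marking co-actions; the function names and string encoding of labels are illustrative assumptions.

```python
# Label duality: a co-action (overlined in the calculus) is written here
# with a "co-" prefix purely for illustration.
def dual(l1: str, l2: str) -> bool:
    return l1 == "co-" + l2 or l2 == "co-" + l1

def synchronize(label1: str, label2: str):
    # Two parallel systems whose labels are dual compose into an internal
    # step tau; otherwise no synchronization takes place.
    return "tau" if dual(label1, label2) else None

assert synchronize("a consent(r)@id", "co-a consent(r)@id") == "tau"
assert synchronize("r withdraw@id", "a consent(r)@id") is None
```

This mirrors the synchronization rule above: the composite system observes τ exactly when the components' labels are dual.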
The labeled transition semantics is defined in Table 3. The first rule extends structural congruence with a garbage collection rule that collects stores no longer containing useful private data. Rule [SConsentU1] describes the interaction of a process giving consent on channel a; the process observes the a consent(r) label. Next, rule [SConsentU2] lifts a consent action to a process associated with an identity, using label a consent(r)@id. The dual action of receiving consent is performed via label a̅ consent(r)@id. After the label is observed, a new store, r[id ⊗ ∗],
Table 3: Additions to Privacy Calculus Labeled Transition Semantics.
Rule Name      Structure

[GarbageC]     (ν r)(r[⊥ ⊗ ∗]) ≡ 0

[SConsentU1]   a consent(x).P --a consent(r)--> P{r/x}

[SConsentU2]   P --a consent(r)--> P′
               ------------------------------------------
               {P}_id --a consent(r)@id--> {P′}_id

[SConsentS1]   a consent(r).P --a̅ consent(r)@id--> P | r[id ⊗ ∗]

[SWithdraw1]   r withdraw.P --r withdraw--> P

[SWithdraw2]   P --r withdraw--> P′
               ------------------------------------------
               {P}_id --r withdraw@id--> {P′}_id

[SWithdraw3]   r[id ⊗ c] --r̅ withdraw@id--> r[⊥ ⊗ ∗]

[PId]          P --ℓ--> P′    ℓ ∉ {a consent(r), r withdraw}
               ---------------------------------------------
               {P}_id --ℓ--> {P′}_id

[SId]          {P}_id --ℓ--> {P′}_id
               ------------------------------------------
               S[{P}_id] --ℓ--> S[{P′}_id]

[SOut2]        r[⊥ ⊗ ∗] --r!⟨⊥ ⊗ ∗⟩--> r[⊥ ⊗ ∗]

[SInp2]        r[⊥ ⊗ ∗] --r?(id ⊗ c)--> r[⊥ ⊗ ∗]
is created on reference r with identity id. Following the duality of labels, a process that gives consent interacts with a process that receives consent, creating a new store and exchanging the corresponding reference. Withdrawal follows a similar pattern. A process with a withdraw prefix observes the label r withdraw (rule [SWithdraw1]), which is lifted to a user process via label r withdraw@id (rule [SWithdraw2]). Finally, rule [SWithdraw3] describes a store receiving a withdrawal request via label r̅ withdraw@id; as a result, it deletes the corresponding private data and assigns the anonymous identity and the empty value to its memory. The next two rules bridge the gap between the new constructs {P}_id and S[{P}_id] and the rest of the calculus. Finally, two new rules define the interaction of the empty store.
5.2 Typing GDPR Compliance
As already discussed, the objective of this work is to propose a formal framework for modeling software systems, together with associated analysis techniques for guaranteeing that systems comply with privacy-related requirements. In this section, we present such an analysis technique and show that the intended requirements regarding the provision and withdrawal of consent can be statically checked by typechecking.
For instance, consider the following system:
User[a!⟨id ⊗ c⟩.P] || Sys[a?(x ⊗ y).Q]
This system describes a situation where private data are sent (resp. received) via a non-reference channel. This leads to the dissemination and processing of private data without proper permission and consent and, while it is a legal process of the calculus, it clearly fails to satisfy the GDPR requirements. Note that the exchange of private data can instead be achieved by communicating the reference to the associated private data store. A similar error would occur if, upon acquisition of private data from a store, a subcomponent of a system created a new store and thereby copied the data. This would lead to a situation where the withdrawal of consent by the user would not reach the newly-defined store, and thus the system would continue to hold the data, violating the user's right to have her data forgotten. This suggests that copying of private data should (if carried out at all) be done with care, ensuring that a link is maintained between the various copies of user data. In the sequel, we propose an approach for recognizing these and other undesirable situations via typechecking, and we prove that well-typed processes do not exhibit errors such as the ones discussed above. We
Table 4: Typing Rules.
Rule          Structure

[TCons]       Γ, x : G[t[g]]; Λ ⊢ P    Γ ⊢ u : G′[G[t[g]]]
              ---------------------------------------------
              Γ; Λ ⊢ u consent(x).P

[TStore]      Γ, r : G[t[g]]; Λ ⊢ P    Γ ⊢ u : G′[G[t[g]]]
              ---------------------------------------------
              Γ; Λ, r ⊢ u consent(r).P

[TWithdraw]   Γ; Λ ⊢ P    Γ ⊢ u : G[t[g]]
              ---------------------------------------------
              Γ; Λ ⊢ u withdraw.P

[TId]         Γ; Λ ⊢ P
              ---------------------------------------------
              Γ; Λ, id ⊢ {P}_id
first define the set of types used to characterize the channels and values manipulated by the typing system. A type, T, has the following form:

g ::= nat | bool | (g_1, ..., g_n) | ...
T ::= t[g] | G[T] | g

A type g is a primitive data type, e.g. natural numbers and booleans, or a structure of primitive data (g_1, ..., g_n). Type t[g] is used to type private data values id ⊗ c, where constant c is of type g. Channels are typed with the G[T] type, denoting a channel that can only send channels or values of type T between participants belonging to group G.
Our type system extends that of the Privacy calculus with the rules in Table 4. It is based on the notion of a typing context, defined as follows:

Γ ::= Γ, t : T | ∅        Λ ::= Λ, r | Λ, id | ∅
A typing context Γ, or shared typing environment, maps values and variables to types. The operator Γ_1, Γ_2 is defined as Γ_1 ∪ Γ_2. A typing context Λ, or linear typing environment, contains store references r and identities id. It is used to track names in a system and to ensure that store references and identities are unique within a system. The operator Λ_1, Λ_2 is defined as Λ_1 ∪ Λ_2 whenever Λ_1 ∩ Λ_2 = ∅, and is undefined otherwise. Typing judgments have the following forms:

Γ ⊢ t : T        Γ; Λ ⊢ P        Γ; Λ ⊢ S
The first judgment states that a term t has type T under typing context Γ. The second states that a process P is well typed under the typing contexts Γ and Λ; similarly for the typing judgment for systems.
The first typing rule in Table 4 checks the correct usage of the u consent(x).P term. It first checks that process P is correctly typed under a typing environment in which variable x can carry private data, i.e. x has a store reference type. Moreover, it checks that the type of channel u can carry values of the type of x. Variable x is not present in the typing context of the conclusion because x is bound in process P. Rule [TStore] follows a similar logic. In addition, it checks that name r does not already have a store present in P, by adding r to environment Λ: if r were already present in Λ, it would mean that a store (or a consent for a store) is already present in process P. Rule [TWithdraw] checks that a withdrawal can only be performed on a channel that carries private data, i.e. a reference channel. The final rule, [TId], types a process equipped with an identity. The rule tracks the identity in the Λ environment; its presence there ensures that it cannot be used as an identity by another process in the system, because Λ cannot be composed with another linear environment containing the same identity.
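The uniqueness guarantee enforced through Λ can be illustrated by a short sketch of linear environment composition; representing Λ as a set of names, and the function name `compose`, are illustrative assumptions rather than the paper's notation.

```python
# Sketch of the linear environment Λ as a set of store references and
# identities: the composition Λ1, Λ2 is defined only when the two
# environments are disjoint, enforcing uniqueness across a system.
def compose(lam1: set, lam2: set) -> set:
    clash = lam1 & lam2
    if clash:
        # Mirrors the side condition Λ1 ∩ Λ2 = ∅: composition is undefined.
        raise TypeError(f"linear names used twice: {clash}")
    return lam1 | lam2

compose({"r1", "id42"}, {"r2"})   # well-defined: the environments are disjoint
# compose({"r1"}, {"r1", "id7"})  # would raise: store reference r1 used twice
```

Typing a parallel composition would compose the components' linear environments this way, so a second store on the same reference, or a reused identity, fails to type check.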
A main property of the type system is that the execution of computation steps by a statically-typed system yields systems that are also well typed.
Theorem 5.1 (Type Preservation). Let S be a system such that Γ; Λ ⊢ S for some Γ and Λ. Whenever S --ℓ--> S′, then there exist Γ′, Λ′ such that Γ′; Λ′ ⊢ S′.
Type preservation is an important theorem stating the soundness of the typing system. Moreover, as discussed above, cases of mishandling private data are not allowed in statically checked processes. This is formally captured by a type safety theorem. First, we define the notion of an error process, i.e. a process that mishandles private data.
Definition 5.1 (Error Process). A process P is an error process whenever it has one of the forms:

r[ι ⊗ δ] | r[ι′ ⊗ δ′],
a consent(r).P with r ∈ fn(P),
a!⟨id ⊗ c⟩.P,  a?(x ⊗ y).P,
r!⟨v⟩.P,  and  r?(x).P

A system is an error system whenever either it is of the form G[P] with P ≡ (ν ñ)(P_1 | ... | P_n) and there exists i, 1 ≤ i ≤ n, such that P_i is an error process; or it is of the form G[S] and S is an error system; or it is of the form (ν ñ)(S_1 || ... || S_n) and there exists i, 1 ≤ i ≤ n, such that S_i is an error system.
The class of error systems captures cases where a system mishandles private data. The first case is one where a system defines more than one store on the same reference, which can lead to situations where the two stores hold inconsistent private data.
The next case captures the situation where one store is created with consent and another without consent. For example, in the system:

Sys[a consent(r).(P | r[id ⊗ c])]

a second store will be created without consent.
The next two cases capture situations where private data are sent (resp. received) via a non-reference channel, possibly leading to data processing without the user's consent. The final two cases characterize as errors the sending (resp. receiving) of non-private data via a reference; e.g. the system

User[r!⟨a⟩.P] || Sys[r[id ⊗ c]]

cannot perform any meaningful interaction on reference r.
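The first class of errors, two stores on the same reference, lends itself to a simple check. The following sketch flattens a parallel composition into a list of components and flags duplicated store references; the representation as ("store", reference) pairs is an illustrative assumption.

```python
# Toy detector for the first class of error processes: two stores defined
# on the same reference within a parallel composition.
def duplicate_store_refs(parallel_components):
    seen, duplicates = set(), set()
    for kind, ref in parallel_components:
        if kind == "store":
            # A reference seen twice marks an error system.
            (duplicates if ref in seen else seen).add(ref)
    return duplicates

ok  = [("store", "r1"), ("store", "r2")]
bad = [("store", "r1"), ("store", "r1")]
assert duplicate_store_refs(ok) == set()
assert duplicate_store_refs(bad) == {"r1"}
```

In the type system this check is not performed dynamically: the linear environment Λ rules out such systems statically, since r cannot occur in two composed linear environments.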
Based on the sound type system we can statically infer that a system is safe with respect to handling private data, i.e. a statically-checked system will never reduce to an error.
Theorem 5.2 (Type Safety). Let S be a system such that Γ ⊢ S for some Γ. Whenever S --ℓ--> S′, then S′ is not an error system.
The proof follows from the fact that error systems cannot be successfully type checked, together with Theorem 5.1, which states that reduction preserves typing.
6 CONCLUSIONS
In this paper, we have extended the Privacy calculus
of (Kouzapas and Philippou, 2017) with features to
support the granting and withdrawal of consent by
users, and for checking associated requirements as
enunciated by the GDPR. While we have not con-
sidered privacy policies as in (Kouzapas and Philip-
pou, 2017), where type checking is performed to en-
sure satisfaction of requirements pertaining to the pur-
pose of usage, as well as the permissions endowed
to each user in the system, we believe that the two
approaches can easily be integrated being orthogonal
to each other. Our vision is that a π-calculus-based
framework can be developed and used by software
engineers to model systems during the design phase,
as well as in order to analyze and validate privacy-
related properties, such as the GDPR-based provi-
sions of Lawfulness of Processing, and the Right to
Erasure and Consent Withdrawal. Analyzing and validating a system's design model is a required step of software system creation. The modeling scheme introduced provides information on conformance and may reveal infringements. We point out, however, that this process cannot guarantee that software engineers will implement the system in precise conformance with its design specification, unless MDE techniques relying on the Privacy calculus are employed.
As future work, we intend to embed more GDPR provisions into the Privacy calculus. We will also work towards offering tools for easily expressing a system's specification model in our π-calculus-based formalism. Furthermore, tools for statically checking actual code implementations, and tools for formally translating verified models into verified developed systems, can be created, thus assisting the transition from the design phase to the development phase by exploiting type-checking techniques at the coding level. Relevant work that does not capture privacy requirements, but was developed in the context of the π-calculus and applies automated static analysis techniques to software code, can be found in (Yoshida et al., 2013; Ng et al., 2015). Moreover, in future work we will focus on the notion of purpose, describing and validating purpose-based properties using a purpose-based, customizable privacy policy language, and providing a mechanism for privacy policy validation embedded in the Privacy calculus. Our ultimate goal is to provide a holistic approach to privacy management in software system development.
REFERENCES
Abadi, M. and Gordon, A. D. (1997). A calculus for crypto-
graphic protocols: The spi calculus. In Proceedings of
the 4th ACM Conference on Computer and Communi-
cations Security, pages 36–47. ACM.
Ahmadian, A. S., Strüber, D., Riediger, V., and Jürjens, J. (2018). Supporting privacy impact assessment by model-based privacy analysis. In ACM Symposium on Applied Computing, pages 1142–1149.
Backes, M., Hritcu, C., and Maffei, M. (2008). Type-
checking zero-knowledge. In Proceedings of the 2008
ACM Conference on Computer and Communications
Security, CCS 2008, pages 357–370.
Basso, T., Montecchi, L., Moraes, R., Jino, M., and
Bondavalli, A. (2015). Towards a uml profile for
privacy-aware applications. In Proceedings of the
IEEE International Conference on Computer and In-
formation Technology; Ubiquitous Computing and
Communications; Dependable, Autonomic and Se-
cure Computing; Pervasive Intelligence and Com-
puting (CIT/IUCC/DASC/PICOM, 2015), pages 371–
378. IEEE.
Bengtson, J., Bhargavan, K., Fournet, C., Gordon, A. D.,
and Maffeis, S. (2011). Refinement types for secure
implementations. ACM Transactions on Program-
ming Languages and Systems, 33(2):8.
Boussetoua, R., Bennoui, H., Chaoui, A., Khalfaoui, K.,
and Kerkouche, E. (2015). An automatic approach to
transform bpmn models to pi-calculus. In Proceedings
of the International Conference of Computer Systems
and Applications (AICCSA, 2015), pages 1–8. IEEE.
Braghin, C., Gorla, D., and Sassone, V. (2006). Role-based
access control for a distributed calculus. Journal of
Computer Security, 14(2):113–155.
Bugliesi, M., Colazzo, D., Crafa, S., and Macedonio, D.
(2009). A type system for discretionary access con-
trol. Mathematical Structures in Computer Science,
19(4):839–875.
Cardelli, L., Ghelli, G., and Gordon, A. D. (2000). Secrecy
and group creation. In International Conference on
Concurrency Theory, pages 365–379. Springer.
Cavoukian, A. (2008). Privacy by design. Information
Commissioner’s Office.
Compagnoni, A. B., Gunter, E. L., and Bidinger, P. (2008).
Role-based access control for boxed ambients. Theo-
retical Computer Science, 398(1-3):203–216.
Data Protection and Privacy Commissioners (2010). Res-
olution on privacy by design. In Proceedings of
ICDPPC’10.
Dezani-Ciancaglini, M., Ghilezan, S., Jaksic, S., and Pan-
tovic, J. (2010). Types for role-based access control
of dynamic web data. In Proceedings of WFLP’10,
LNCS 6559, pages 1–29. Springer.
European Parliament and Council of the European Union
(2015). General data protection regulation. Official
Journal of the European Union.
Fournet, C., Gordon, A., and Maffeis, S. (2007). A type
discipline for authorization in distributed systems. In
20th IEEE Computer Security Foundations Sympo-
sium, CSF 2007, 6-8 July 2007, Venice, Italy, pages
31–48.
Fowler, M. (2004). UML distilled: a brief guide to the stan-
dard object modeling language. Addison-Wesley Pro-
fessional.
Gjermundrød, H., Dionysiou, I., and Costa, K. (2016).
privacytracker: A privacy-by-design gdpr-compliant
framework with verifiable data traceability controls.
In Proceedings of the International Conference on
Web Engineering, pages 3–15. Springer.
Havey, M. (2005). Essential business process modeling. O'Reilly Media, Inc.
Hennessy, M. (2007). A distributed Pi-calculus. Cambridge
University Press.
Hennessy, M., Rathke, J., and Yoshida, N. (2005). safedpi:
a language for controlling mobile code. Acta Infor-
matica, 42(4-5):227–290.
Hennessy, M. and Riely, J. (2002). Resource access control
in systems of mobile agents. Information and Compu-
tation, 173(1):82–120.
Hes, R. and Borking, J. (1998). Privacy enhancing tech-
nologies: the path to anonymity. ISBN, 90(74087):12.
Hintze, M. and LaFever, G. (2017). Meeting upcoming gdpr
requirements while maximizing the full value of data
analytics.
Hustinx, P. (2010). Privacy by design: delivering
the promises. Identity in the Information Society,
3(2):253–255.
Jürjens, J. (2002). UMLsec: Extending UML for secure systems development. In Proceedings of the International Conference on The Unified Modeling Language, pages 412–425. Springer.
Kalloniatis, C., Kavakli, E., and Gritzalis, S. (2008). Ad-
dressing privacy requirements in system design: the
pris method. Requirements Engineering, 13(3):241–
255.
Kapitsaki, G., Ioannou, J., Cardoso, J., and Pedrinaci, C.
(2018). Linked usdl privacy: Describing privacy poli-
cies for services. In 2018 IEEE International Confer-
ence on Web Services (ICWS), pages 50–57. IEEE.
Kapitsaki, G. M., Kateros, D. A., Pappas, C. A., Tselikas,
N. D., and Venieris, I. S. (2008). Model-driven devel-
opment of composite web applications. In Proceed-
ings of the 10th International Conference on Informa-
tion Integration and Web-based Applications & Ser-
vices, pages 399–402. ACM.
Kapitsaki, G. M. and Venieris, I. S. (2008). Pcp: privacy-
aware context profile towards context-aware applica-
tion development. In Proceedings of the 10th Inter-
national Conference on Information Integration and
Web-based Applications & Services, pages 104–110.
ACM.
Kouzapas, D. and Philippou, A. (2017). Privacy by typing in
the π-calculus. Logical Methods in Computer Science,
13(4).
Lam, V. S. (2008). On π-calculus semantics as a formal
basis for uml activity diagrams. Prooceedings of the
International Journal of Software Engineering and
Knowledge Engineering, 18(04):541–567.
Milner, R., Parrow, J., and Walker, D. (1992). A calculus
of mobile processes, parts I and II. Information and
Computation, 100(1):1–77.
Ng, N., de Figueiredo Coutinho, J. G., and Yoshida, N.
(2015). Protocols by default - safe MPI code gener-
ation based on session types. In Proceedings of Inter-
national Conference on Compiler Construction, CC
2015, pages 212–232.
Perera, C., McCormick, C., Bandara, A. K., Price, B. A.,
and Nuseibeh, B. (2016). Privacy-by-design frame-
work for assessing internet of things applications and
platforms. In Proceedings of the 6th International
Conference on the Internet of Things, pages 83–92.
ACM.
Rubinstein, I. S. (2011). Regulating privacy by design.
Berkeley Technology Law Journal, 26:1409.
Sangiorgi, D. and Walker, D. (2003). The pi-calculus: a
Theory of Mobile Processes. Cambridge University
Press.
Schmidt, D. C. (2006). Model-driven engineer-
ing. COMPUTER-IEEE COMPUTER SOCIETY-,
39(2):25.
Thatte, S. (2001). Xlang: Web services for business process
design. Microsoft Corporation, 2001.
Yoshida, N., Hu, R., Neykova, R., and Ng, N. (2013). The
Scribble protocol language. In Proceedings of TGC
2013, Revised Selected Papers, pages 22–41.