Secure Benchmarking using Electronic Voting
Vivek Agrawal and Einar Arthur Snekkenes
Department of Information Security and Communication Technology,
Norwegian University of Science and Technology, Gjøvik, Norway
Keywords:
Ballot, Benchmarking, Electronic Voting, Response, Secure Benchmark.
Abstract:
It is a common practice in the industry to organize benchmark processes to establish information security
performance evaluation standards. A benchmarking system collects information security-related data from
the organization to establish a standard. The information shared by the organization often contains sensitive
data (details of the vulnerability, Cyber attacks). The present benchmarking systems do not provide a secure
way of exchanging sensitive information between the submitter and the benchmark authority. Furthermore,
there is a lack of any mechanism for the submitters to verify that the final benchmark result contains the
response submitted by them. Hence, people are reluctant to participate actively and share their sensitive information in the benchmarking process. We propose a novel approach to address the security limitations of present benchmarking systems by applying the concepts of electronic voting to benchmarking. Our solution provides secrecy for the submitters' identities and for the benchmark responses. Our approach also ensures that all
the submitted responses have been correctly counted and considered in the final benchmark result.
1 INTRODUCTION
Researchers and experts suggest that the development
and use of sound and repeatable Information Secu-
rity Management (ISM) practices bring organizations
closer to meeting their business objectives. Organiza-
tions can measure the quality of ISM practices, either
by comparing their processes to other organizations or
by measuring compliance according to established se-
curity standards (Whitman and Mattord, 2014). Infor-
mation security is considered to be one of the business
requirements that should be appropriately addressed
by the enterprises. Enterprises hold large volumes of valuable information and are required to comply with regulations and laws concerning information security.
Benchmarking is a well-known process of im-
proving performance by continuously identifying, un-
derstanding, and adapting security practices and pro-
cesses found inside and outside an organization (Hi-
dalgo and Albors, 2008). Benchmarking requires
sharing organization-specific sensitive information to
compare the performance in a specific domain. Typ-
ically, it requires Benchmark Submitters (members
who possess valuable information) to submit the an-
swers to a set of questions to establish a benchmark
standard. However, the most significant barrier to
benchmarking is the fact that many organizations are
not willing to share their organization-specific sensi-
tive data. The submitter may need to share the critical
information, i.e., information related to security inci-
dents that they often face. Information related to any
successful attack is often perceived as a failure and is
kept secret by the organization. The details of these
events can create a bad image for the organization in
the marketplace (Whitman and Mattord, 2014). Any
security incident within the company can jeopardize
the business operation and reputation (Kanno, 2009).
Therefore, it may be considered risky to participate in
the benchmarking process as illegitimate access to the
sensitive information may hamper the business oper-
ation of the organization.
Currently, benchmarking is practiced almost all
over the world (O’Rourke et al., 2012). There is a
variety of methods by which different forms of data
are developed, collected, and transmitted during the
benchmarking event. There may be conflicts of interest and incentives for the benchmark authorities to manipulate the benchmark process (IOSCO, 2013).
The current benchmarking models fail to provide a
secure way to share sensitive information (Kanno,
2009). Current benchmarks do not provide an efficient way for the data submitters to verify that the final benchmark result contains the responses submitted by them (ABB, 2017), (ISF, 2017). Hence, the process lacks a sense
of transparency in collecting the data, analyzing the data, and publishing the final result.
We are establishing ’UnRizkNow’ (Agrawal et al.,
2017), (Agrawal and Snekkenes, 2017), (Agrawal
and Szekeres, 2017) as an open electronic community
of practice (eCoP) (Mathwick et al., 2008), (Wiertz
and de Ruyter, 2007) to allow information security
practitioners (ISP) to share InfoSec knowledge
without violating the information security require-
ments. We are working towards providing a secure
benchmarking service on UnRizkNow eCoP. We aim
to protect the identity of the members who participate
in the benchmark task. We also aim to protect the
sensitive data shared by the members/organization in
the benchmarking process. Therefore, we propose
applying the concepts of electronic voting to the
benchmarking process on eCoP. We formulate the
current benchmarking model based on a literature
review. We also establish the requirements of a
secure benchmarking system. Furthermore, we map
the benchmarking system to electronic voting system
by mapping their protocol, structure, and concepts.
We also demonstrate how a secure benchmark can
be conducted on the UnRizkNow platform using the
electronic voting approach. We have identified the
following research questions in this study.
RQ1 What are the requirements of a secure bench-
marking system?
RQ2 How can a secure benchmarking system be mapped to an electronic voting system?
RQ3 How can a benchmarking system be built using electronic voting concepts?
RQ4 To what extent does the EV approach make
the benchmarking system secure?
The paper is organized as follows: In Section 2,
we give an overview of benchmarking and describe the
benchmarking model that is widely used. In Section
3 the research method used in this study is described.
In Section 4, an overview of the electronic voting
system is presented. The essential phases of an EV
system, the structure of a vote, and the security requirements of EV systems and schemes are also presented.
A mapping of benchmarking concepts to EV concepts
is presented in Section 5. In Section 6, an application
of EV concepts to a benchmarking system is de-
scribed to demonstrate how secure benchmarking can
be conducted using the EV approach. A discussion
based on the security analysis follows in Section 7.
The limitations of the current study and the scope for further improvement are highlighted in Section 8. We
conclude in Section 9.
2 OVERVIEW OF
BENCHMARKING
The aim of this section is to provide a general
overview of benchmarking. We identify major activ-
ities and actors involved in a typical benchmarking
system. The widely used benchmark model is also presented.
2.1 Benchmarking Protocol
Development of benchmarks is an iterative and on-
going process that is likely to involve sharing infor-
mation with other organizations working towards an
agreeable method (Kelessidis, 2000). Benchmarking was pioneered by Xerox Corporation in 1979 to perform better in the international competition in the photocopier market (Kelessidis, 2000). In the beginning, the idea of benchmarking was restricted to very few companies, e.g., AT&T, Motorola, and Xerox. Governmental and non-profit organizations did not begin implementing benchmarking until the early 1990s. The Information Security Forum (ISF) provides benchmarks as part of its premium service (ISF, 2017). We derive the benchmarking protocol from the (ESMA-EBA, 2013) report. A benchmark-
ing system typically comprises the following activi-
ties and actors:
1. Benchmark Administration: It includes all the stages and processes involved in the benchmarking process: the establishment, design, production, and dissemination of a benchmark, from the gathering of the input data and the calculation of the benchmark based on the input data, to the dissemination of the benchmark to users, including any review, adjustment, and modification of this process. The legal person or entity responsible for executing this phase is called the Benchmark Administrator (BA). The BA also takes care of publishing benchmark values, which includes making such values available on the internet or by any other means, whether free of charge or not. According to the (ESMA-EBA, 2013) report, the activity of publishing benchmark values can be carried out by a separate entity, the Benchmark Publisher.
2. Benchmark Submission: The activity of contributing benchmark data submissions to a BA. The benchmark submission is done by a Benchmark Submitter (BS). The data submitted by the BS are used exclusively for the calculation of the benchmark.
3. Benchmark Calculation: The activity of per-
forming the calculation of the Benchmark based
on the methodology provided by a Benchmark
Administrator and the data collected by the entity performing the calculation or by the BA, or submitted by BS. The legal person or entity responsible for performing this phase is called the Benchmark Calculation Agent (BCA).
4. Benchmark Service: The activity of evaluating the performance in a certain domain by fetching benchmark data from the BA. A professional client (as defined by paragraphs 1, 2 and 3 of Section I of Annex II to Directive 2004/39/EC) who is interested in obtaining benchmark data from the BA is called a Benchmark User (BU).
The tasks of the BA and the BCA may be performed by distinct legal entities or may be grouped such that one entity performs more than one role. Figure 1 shows
the information flow among the actors involved in the
benchmarking process.
2.2 Structure of a Benchmark
The structure of a benchmark depends on the overall objective of the benchmark. A benchmark typically consists of a set of questions created to assess the performance of various organizations in a particular domain. Each question has options that indicate the possible answers to the question. The structure containing the answers to the questions is called a response. The following question formats are used:
Yes/No questions: The submitter's answer is either Yes or No. The benchmark result of this question is a histogram showing the frequency distribution of Yes and No, generated from the valid responses.
Multiple-option question: A question consists of various options, but the submitter can submit only one option. The benchmark result of this question is a histogram showing the frequency distribution of all the options, calculated from the valid responses.
Open question (numerical): The submitter can formulate the answer and write it down. However, the answer must be a numeral that satisfies the condition provided in the question. For instance, a question about the age of the submitter can only take numbers in the range 1-100. The benchmark result of this question is an average value calculated over all valid responses submitted by the benchmark submitters.
2.3 Benchmarking Model
In this study, the principles of the benchmarking model are set up according to the guidelines given by the European Securities and Markets Authority (ESMA) and the European Banking Authority (ESMA-EBA, 2013). The same principles are widely followed by many organizations, e.g., ISF (Forum, 2017), ABB (ABB, 2017) and ISM-Benchmark (Kanno, 2009). The benchmarking process is usually conducted in two phases, i.e., benchmark standard establishment and benchmark as a service. An overview of a complete
benchmarking process is shown in Figure 1. The de-
tails of the two phases are given as follows:
Figure 1: The information flow among the benchmarking actors in a benchmarking system. Phase I is carried out among BA, BCA, and BS. Phase II is carried out among BA, BCA, and BU.
2.3.1 Benchmark Standard Establishment
The first phase of the benchmark process is called
Benchmark standard establishment. The aim of per-
forming this phase is to collect data from the relevant organizations to understand how well they perform in a given domain. This phase is usually executed syn-
chronously, i.e., all the participants involved in the
benchmarking task work simultaneously. Typically,
BA hires or establishes a contract with an entity that
can act as a BCA in the process. BA also makes a
list of all the potential entities who can serve as a sub-
mitter. The details of step 1-6 in Phase I are given as
follows:
1. BA sends a formal request to the members to participate in the benchmarking process and asks for response submission. The status of the member is marked as BS when the member agrees to participate.
2. BA sends questions to assess a particular domain
to BS.
3. BA sends the details of the question format and
calculation method to BCA. The methodology is
used by BCA to calculate the benchmark result.
4. BS sends the response to BCA.
5. BCA applies the methodology to the aggregated benchmark data and calculates the result of each question. BCA sends the benchmark result to BA.
6. BA sends the benchmark result to BS or posts it on
a common web portal.
2.3.2 Benchmark as Service
The second phase of the benchmarking is called
Benchmarking as a service. It is often provided by
a private organization as a paid service, and by a pub-
lic organization (government) as a free service. A user (BU) who is interested in knowing the status of its performance usually opts for this type of service. This phase is executed asynchronously, i.e., it is not necessary that all the users contact BA at the same time. However, there is a service-level agreement between BA and BU. The details of steps 1-6 in phase II are given as follows:
1. BU establishes a contract with BA to get the latest
benchmarked data in the given domain. BU sends
the details of the requirements, i.e., the domain of
the benchmark, the format of the outcome, deliv-
ery time to BA.
2. BA chooses the relevant questions from the list
used in phase I and creates a new set of questions
specific to the requirement received from BU.
3. BA sends the details of the question format and
calculation method to BCA. The methodology is
used by BCA to calculate the benchmark result.
4. BU answers the benchmark questions and sends the response to BCA.
5. BCA applies the methodology to the aggregated benchmark data from BU and calculates the result of each question. BCA sends the benchmark result to BA.
6. BA sends the benchmark result to BU. This bench-
mark result contains the response submitted by
BU to the given questions and the values that
have been collected by BA in phase I. In this way,
BU can compare its response with the benchmark
standard and assess its performance.
2.3.3 Requirements of a Secure Benchmarking
System
In this section, we answer RQ1 by establishing the se-
curity requirements of the benchmarking system. As
far as we know, no comprehensive list of benchmarking security requirements has been published. Hav-
ing carefully considered security issues in the context
of benchmarking, we state what we believe are the
key benchmarking security requirements.
1. Completeness: All valid responses should be
counted correctly in the final calculation.
2. Uniqueness: A benchmark submitter can submit a response only once. Restricting submitters to a single response prevents any attempt to manipulate the overall benchmark result by submitting many responses.
3. Universal Verifiability: Anyone can verify that
the published result is correctly computed from
the responses that were correctly submitted. This
is an important requirement as it signifies that the
benchmarked data is calculated using the original
submitted responses and it is not manipulated.
4. Individual Verifiability: Each eligible submitter
can verify that his valid response was counted.
5. Eligibility: Only entitled benchmark submitters can submit a response.
6. Secrecy: Neither benchmark authorities nor any-
one else can find out which submitter submitted
which response.
7. Soundness: Any invalid response should not be
counted in the final calculation.
3 RESEARCH METHOD
We applied the concepts of Design Science Research
(DSR) (Hevner et al., 2004) to develop the scientific
approach in this study. This research aims to solve an
existing practical problem in the domain of information systems by creating an artifact based on the ex-
isting theories of electronic voting and cryptography.
The problem is solved by applying creativity, inno-
vation, and problem-solving capabilities. The created
artifact would then be applied to UnRizkNow eCoP
to enhance information sharing without compromis-
ing the sensitivity of the information. We adopted the
five-step research process (Johannesson and Perjons,
2014) to conduct this study. Figure 2 shows the es-
sential steps in the DSR model.
Figure 2: An overview of the research method in Design Science Research Methodology (Johannesson and Perjons, 2014).
The first step of the DSR process, i.e., explicate
problem is to investigate and analyze the practical
problem. We defined the problems in the present
benchmarking system, i.e., the lack of security. Fur-
ther, define requirements outlines a solution to the ex-
plicated problem in the form of an artifact and elic-
its requirements, which can be seen as a transforma-
tion of the problem into demands on the proposed ar-
tifact. We suggest a novel approach of conducting benchmarking using the concepts of electronic voting. In the phase Design and develop artifact,
an artifact is created to address the explicated problem
and fulfill the defined requirements. Our artifact con-
sists of mapping the structure, protocol, and the con-
cepts of benchmarking to electronic voting. Demon-
strate artifact uses the developed artifact and applies
this to a real-life case or any illustrative case. This
phase aims to show that the artifact can solve an in-
stance of the defined problem. We incorporate the
proposed artifact to the UnRizkNow platform so that
a secure benchmarking can be conducted on the plat-
form. The final step is evaluate artifact, which deter-
mines how well the designed artifact solves the pri-
mary problem. We perform the security analysis on
the developed artifact to show to what extent it fulfills
the security requirements of a secure benchmarking
system.
We also applied the DSR knowledge contribution
framework (Gregor and Hevner, 2013) to highlight
the nature of the contribution of our study. Figure
3 presents a 2X2 matrix of DSR research contribu-
tions. The x-axis, i.e., Application Domain Maturity
(ADM) shows the maturity of the problem from high
to low. The y-axis, i.e., Solution Maturity (SM) repre-
sents the current maturity of the artifacts from high to
low that exist as potential starting points for solutions
to the questions. The 2x2 matrix also identifies four
kinds of design science contribution. A low ADM
and low SM defines a new solution for new prob-
lems, and it is referred as Invention. A high ADM and
Low SM defines new solutions for known problems,
also known as Improvement. A low ADM and High
SM indicates known solutions for new problems, also
known as Exaptation. Finally, A high ADM and high
SM indicates known solution for known problems,
referred as routine design. Unlike other entities of
the matrix, the routine design does not have a major
knowledge contribution.
The idea of using the concepts of electronic voting to conduct benchmarking tasks makes the benchmarking process more secure and trustworthy. The concept of electronic voting has been evolving over the last two decades to facilitate election processes in democratic settings. However, it has never been applied and tested in the setting of benchmarking. Therefore, our approach of solving the security and trust challenges in the present benchmarking process by extending the design knowledge that exists in electronic voting places our contribution in the Exaptation quadrant of the DSR knowledge contribution framework.
Figure 3: Design science contributions, adapted from (Gregor and Hevner, 2013).
4 AN OVERVIEW OF
ELECTRONIC VOTING (EV)
The aim of this section is to present a detailed description of the electronic voting protocol and the structure of electronic voting, along with the security requirements.
4.1 Electronic Voting
Electronic voting (EV) has emerged as an efficient and cost-effective way of conducting a voting process. The term e-voting is used to denote a voting
process which allows voters to cast a secure and se-
cret ballot over a network (Gritzalis, 2002). The first
EV scheme was proposed by David Chaum (Chaum,
1981) in 1981. There have been many other schemes
proposed by researchers since 1981, e.g., EV schemes
with publicly verifiable secret sharing (Schoenmak-
ers, 1999), (Neff, 2001); EV based on homomorphic
encryption (Hirt and Sako, 2000); EV based on secret
sharing techniques with a secure multiparty computa-
tion (Chen et al., 2014). (Gerlach and Gasser, 2009)
describes EV experiences by mentioning how EV sys-
tems worked in Geneva and Zurich in Switzerland.
Similarly, the EV systems of Estonia are studied in
(Madise and Martens, 2006), (Vinkel, 2012). An EV
protocol has many essential phases to carry out a suc-
cessful election. We have compiled a list of phases
that are very common across different EV protocols.
The phases are as follows:
Election administration: The process of setting up
the election, publication of the identities of eligi-
ble voters, the list of candidates and the result of
the election.
Registration: The process of distributing secret
credentials to voters and registering the corre-
sponding public credentials.
Tallying: The process of validating votes and determining the number of votes each party has received.
Voting: The process of casting a vote in an election.
Ballot Processing: The processing of ballots and
storing valid ballots in the bulletin board.
EV protocols involve several parties executing
some specific set of roles (Cortier et al., 2014a). How-
ever, different schemes use different terms to denote
the parties involved in the EV process. Table 1 de-
scribes the actors who are responsible for performing
the EV tasks in five EV schemes.
4.2 Structure of Electronic Voting
The structure of voting depends on the nature of the
election and the expected outcome. An election has
a candidacy which consists of some candidates run-
ning in the election. A structure containing the vote
is called a ballot. We identify the following typical
election types:
Yes/No voting: The voter's answer is yes or no. A typical example is an election where a voter is asked to reply to a question such as "Do you agree with ...?" with a 'Yes' or 'No' answer.
1-out-of-L voting: Voter has L possibilities but
can choose only one. This election format is used
to select a leader (e.g., president) from a list of L
candidates.
K-out-of-L voting: The voter selects K different elements from the set of L possibilities. This type of election is used to choose council members, in which the voter selects K out of L candidates. The candidates who are selected the most times will be appointed as the council members. The order of the selection of the candidates is not important. Here {K ∈ ℕ : 1 ≤ K ≤ L}.
K-out-of-L ordered voting: Voter puts into or-
der K different elements from the set of L pos-
sibilities. This type of election can be used to
choose council members, but the candidate who
is marked by the voter as first will get the most
points.
Write-in Voting: Voter can formulate the answer
and write it down. This type of election is done
when the answers are not fixed at the beginning
and voters are asked to give their opinion on the
given matter.
4.3 Requirements of the Secure
Electronic Voting
Several researchers have proposed schemes for secure electronic voting processes under varying assumptions. Therefore, different schemes fulfill different security requirements. We have compiled a list of requirements from different literature sources to highlight all the useful requirements that have been identified in the existing literature. We made a distinction between schemes and systems while compiling the list. Therefore, we used different criteria for the study selection for schemes and systems.
Electronic Voting Scheme: A scheme refers to a study in which the conceptual model of electronic voting is presented as an algorithm or theory. We used the search terms in Figure 4a to select the primary studies on the security requirements of electronic voting schemes. Additionally, we applied the following criteria to the search results to narrow down the relevant studies.
The literature was published in or after the year 2000.
The literature has over 50 citations in the academic literature.
The literature is published in the English language.
The list is far from complete, but we restricted this study to six schemes: Zu02 (Rjašková, 2002), Le02 (Lee and Kim, 2003), Le00 (Lee and Kim, 2000), Hi10 (Hirt, 2010), Ch05 (Chaum et al., 2005), and Li04 (Liaw, 2004).
Figure 4: Search terms used to find (a) electronic voting schemes: (Electronic OR online OR Virtual OR digital) AND (Voting OR ballot OR electoral) AND (Scheme OR protocol OR methodology); (b) electronic voting systems: (Electronic OR online OR Virtual OR digital) AND (Voting OR ballot OR electoral) AND (system OR open source OR freeware OR implementation).
Electronic Voting System: We defined a system as a study whose source code is available as open source and which has been implemented in real case studies. We used the search terms in Figure 4b to select the primary studies on the security requirements of electronic voting systems. Additionally, we applied the criteria that the source code is available for download on a reliable server (e.g., GitHub) and that it is supported by English documentation or a user manual. The list is far from complete, but we restricted this study to four electronic voting systems: eVote (Pierro, 2017), Belenios (Cortier et al., 2014b), CHVote (Haenni et al., 2017), and IVXV (of Estonia, 2017).
Table 1: Actors involved in the EV process in different schemes.

EV task | LE02 (Lee and Kim, 2003) | Belenios (Cortier et al., 2014b) | IVXV (of Estonia, 2017) | CHVote (Haenni et al., 2017) | eVote (Pierro, 2017)
Election administration | Election Administrator | Election Administrator | Organiser | Election Administrator | Managers
Registration | Certificate Authority | Registrar | Collector | Printing Authority, election authorities | Managers
Tallying | Tallier | Trustee | Tallier | Election Authorities | Managers
Ballot Processing | Tallier & Administrator | Bulletin Board Manager | Processor | Bulletin Board | Managers
Voting | Voter | Voter | Voter | Voter | Voter
The details of the requirements of the EV protocol are as follows:
1. Completeness/ Correctness: All valid ballots
should be counted correctly in the final tally (Lee
and Kim, 2003), (Hirt, 2010).
2. Uniqueness/ Unreusability: Voters can submit
only one single ballot (Hirt, 2010).
3. Universal Verifiability: Anyone can verify that
the published tally is correctly computed from
the ballots that were correctly cast (Hirt, 2010),
(Rjašková, 2002).
4. Individual Verifiability: Each eligible voter can
verify that his ballot was counted. This property
enables the voter to exclude with high probabil-
ity the possibility that the vote has been manip-
ulated by a compromised voting client (Haenni
et al., 2017).
5. Eligibility: Only entitled voters are able to cast a
ballot (Hirt, 2010).
6. Anonymous/Secrecy/Privacy: Neither voting
authorities nor anyone else can find out which
voter submitted which ballot (Liaw, 2004), (Hirt,
2010).
7. Soundness: Any invalid ballot should not be
counted in the final tally (Hirt, 2010).
8. Fairness: No one can get extra information about
the tally result before the publication phase (Liaw,
2004).
9. Receipt-freeness/Incoercibility: The voter can-
not be coerced into casting a particular vote by
a coercer. He must neither obtain nor be able to
construct a receipt proving the content of his vote
(Lee and Kim, 2003), (Liaw, 2004).
10. Non-cheating: Voters can accuse the authority of
cheating without revealing ballots to others (Liaw,
2004).
11. Robustness: The voting system should be suc-
cessful regardless of the partial failure of the sys-
tem (Lee and Kim, 2000).
12. Convenience: Voters can cast their ballots quickly, in one session, and with minimal equipment or special skills (Liaw, 2004).
13. Efficiency: The whole election should be held
promptly, for instance, all computations done in
a reasonable amount of time and voters are not
required to wait for other voters to complete the
process (Liaw, 2004).
14. Mobility: Voters are not restricted by physical lo-
cation from which they can cast their votes (Liaw,
2004).
15. Auditability: The system must be technically simple enough that the widest possible range of specialists can audit it (of Estonia, 2017).
Table 2 shows the list of the EV security require-
ments that are compiled from six EV schemes and
four EV systems. The presence of '+' indicates that the given requirement is addressed. A requirement is considered addressed if the author explicitly defines it in the literature and justifies how the given EV protocol satisfies it. A '-' indicates that the given scheme/system does not address the requirement. It is also important to note that different schemes/systems address a security requirement under different assumptions and adversary models. For instance, the Hi10 (Hirt, 2010) scheme addresses 'soundness' for the K-out-of-L voting structure, and the Zu02 scheme (Rjašková, 2002) addresses 'soundness' for the 1-out-of-L voting structure.
Similarly, the uniqueness requirement is addressed
by LE02 (Lee and Kim, 2003) scheme under the as-
sumption that an adversary cannot access the random-
ness and any internal information saved inside the
tamper-resistant randomizer distributed to the voters.
The Li04 scheme (Liaw, 2004) addresses the uniqueness requirement under the assumption that an adversary
cannot obtain a random number generated by the vot-
ing center.
5 MAPPING OF A
BENCHMARKING TO AN EV
SYSTEM
The aim of this section is to answer RQ2. We demonstrate how a benchmarking system can be mapped to an electronic voting system. To achieve our goal, we first map the benchmark protocol to the EV protocol, then we map the structure of the benchmark to the structure of the EV system. Finally, we map the overall concepts of the benchmark to the EV concepts using an ontology.
5.1 Mapping of the Benchmark
Protocol to EV Protocol
The protocol mapping consists of the mapping of the
benchmark phases and actors to the EV system phases
and the actors. Table 3 shows the mapping of bench-
mark protocol to EV protocol. The main entities in-
volved in the benchmark protocol are: a Benchmark
Administrator BA, N Benchmark Calculating Agents
BCA_j (j = 1,...,N), and M Benchmark Submitters BS_i (i = 1,...,M). The roles of each entity are as follows:
Benchmark Administrator - BA verifies the iden-
tities and the eligibility of M submitters. BA
manages the whole benchmarking process (cre-
ates questions and announces the benchmark re-
sult).
Benchmark Submitter - There are M submitters BS_i (i = 1,...,M). They have their digital signature keys certified by a certification authority (CA).
Benchmark Calculating Agent - There are N calculating agents BCA_j (j = 1,...,N) who cooperatively decrypt the collected responses to open the result of the benchmarking. A threshold t denotes the lower bound on the number of authorities that are guaranteed to remain honest during the protocol.
The main entities involved in the electronic voting protocol are: an Election Administrator EA, N Talliers T_j (j = 1,...,N), and M voters V_i (i = 1,...,M). The roles of each entity are as follows:
Election Administrator - EA verifies the identities
and the eligibility of M voters. EA manages the
whole voting process (creates candidacy and an-
nounces the election result).
Voter - There are M voters V_i (i = 1,...,M). They have their own digital signature keys certified by a certification authority (CA).
Tallier - There are N Talliers T_j (j = 1,...,N) who cooperatively decrypt the collected ballots to open the result of the election. A threshold t denotes the lower bound on the number of authorities that are guaranteed to remain honest during the protocol.
Table 3 shows the mapping of the protocol between the benchmark and the EV system. It is clear from the table that the activity of benchmark calculation can be mapped to tallying, and a benchmark submitter can be mapped to a voter.
5.2 Mapping of the Benchmark
Structure to EV Structure
We map the structure of the benchmarking system to an EV system by mapping ballot, vote, candidacy, and candidates to response, answer, question, and options respectively. A question QU is mapped to a candidacy Cd, an option o is mapped to a candidate C, an answer a is mapped to a vote v, and a response B is mapped to a ballot BT. It is important to note that there is only one candidacy in an election, but a benchmark needs to have more than one question. Therefore, a benchmarking system needs to execute x instances of the EV protocol, where x is the number of questions in the benchmark.
In the electronic voting scheme ω, a candidacy Cd consists of L candidates C_i (where i = 1,...,L) who participate in the election to be elected to some position based on the outcome of the election. A voter can decide to vote for only one candidate (1-out-of-L voting) or more than one candidate (K-out-of-L voting), based on the requirement of the election. A voter casts his ballot in the election. A ballot BT consists of a vector of votes, v = (v_1,...,v_K), where v_i is the vote for the ith candidate in the election. In a K-out-of-L election, the following condition holds: {K ∈ ℕ : 1 ≤ K ≤ L}.
A benchmarking system β consists of a number of questions Q_i (where i = 1,...,x). The idea of having the questions is to collect feedback from the submitters to establish a performance standard. Each question Q_i comes with a list of options o_i (where i = 1,...,L). BS generates a vector of answers, a = (a_1,...,a_L), where a_i is the answer for the ith option and a_i ∈ {0,1}. BS finally generates a response B consisting of the answer vector a. The number of responses is equal to the number of questions available in the benchmark. The final response B_fin contains all the responses B_i (i = 1,...,x). Table 4 shows how the structure of the benchmark can be completely mapped to the structure of EV. The structure of the benchmarking system can be constructed using the K-out-of-L voting structure, where {K ∈ ℕ : 1 ≤ K ≤ L}.
Table 2: The security properties of EV systems. AB: Applicability to Benchmark; '+' indicates that the given security requirement is implemented in the scheme, '-' indicates that it is not.

ID | Property | AB | Zu02 | Le02 | Le00 | Hi10 | Ch05 | Li04 | Ch | Be | eV | IV
1 | Completeness/Correctness | Y | - | + | + | + | + | + | - | - | + | -
2 | Uniqueness/Unreusability | Y | - | + | + | + | - | + | - | - | + | +
3 | Universal Verifiability | Y | + | + | + | + | - | - | + | + | + | -
4 | Individual Verifiability | Y | + | + | - | + | - | + | + | + | + | +
5 | Eligibility | Y | + | + | + | + | + | - | - | - | - | -
6 | Anonymous/Secrecy/Privacy | Y | + | + | + | + | + | + | + | - | + | +
7 | Soundness | Y | - | + | + | + | - | - | - | - | + | -
8 | Fairness | N | + | + | + | + | - | + | - | - | + | -
9 | Receipt-freeness/Incoercibility | N | + | + | + | + | - | + | - | - | - | +
10 | Non-cheating | N | - | - | - | - | - | + | - | - | + | -
11 | Robustness | N | + | + | + | - | - | + | - | - | - | -
12 | Convenience | N | - | - | - | - | - | + | - | - | - | -
13 | Efficiency | N | - | - | - | - | - | + | - | - | - | -
14 | Mobility | N | - | - | - | - | - | + | - | - | - | -
15 | Auditability | N | - | - | - | - | + | - | + | - | - | +
Table 3: Mapping of the protocol.

Phase (Benchmark β) | Phase (EV ω) | Actor (Benchmark β) | Actor (EV ω)
Benchmark Administration [BAdm] | Election Administration [EAdm] | Benchmark Administrator [BA] | Election Administrator [A]
Benchmark calculation [Bcal] | Tallying [ETal] | Benchmark calculating agent [BCA] | Tallier [T]
Benchmark submission [BSub] | Voting [Vo] | Benchmark submitter [BS], user [BU] | Voter [V]
It is important to notice that the 1-out-of-L voting structure is not suitable for the mapping between benchmarking and electronic voting. The 1-out-of-L voting structure expects only one vote in the ballot, unlike K-out-of-L voting, where a ballot contains a vector of votes. Therefore, BCA cannot calculate the frequency of individual options in the benchmark result using the 1-out-of-L voting structure.
The structure of the benchmark for the different question types is as follows:
5.2.1 Yes/No or True/False Questions
For this type of question in the benchmark, L = 2, i.e., there are two options o_1 and o_2 available for the question. The answer vector is a = (a_1, a_2). As the submitter can select only one option in the answer, Σ a_i = 1. Therefore, the structure of the response is B = (a_1, a_2). In total, M responses B are collected for this question (where M is the number of submitters). The total number of Yes answers can be counted by adding up the a_1 components, and the total number of No answers by adding up the a_2 components, over all the submitters.

Result of Q_j = {Frequency of Yes, Frequency of No}
= ( Σ_{i=1}^{M} BS_i[B_j(a_1)], Σ_{i=1}^{M} BS_i[B_j(a_2)] )    (1)

where BS_i[B_j(a_1)] denotes the response B_j submitted by BS_i, and B_j(a_1) denotes the answer component a_1 of response B_j.

This type of question in the benchmark is mapped to a K-out-of-L voting system (where K = 1) according to the mapping presented in Table 4. The Yes and No options are represented by the candidates c_1 and c_2 respectively. The ballot BT contains the vote vector {v_1, v_2} for the candidates c_1 and c_2. The frequency of Yes and No can be counted by adding up the votes cast by the M voters in favor of the candidates. Equation 1 takes the following form in EV:

Result of Cd_j = {votes received by c_1, votes received by c_2}
= ( Σ_{i=1}^{M} V_i[BT_j(v_1)], Σ_{i=1}^{M} V_i[BT_j(v_2)] )    (2)

where V_i[BT_j(v_1)] denotes the ballot BT_j cast by V_i, and BT_j(v_1) denotes the vote component v_1 of ballot BT_j.
Table 4: Mapping of the benchmark structure to the EV structure. There are M voters and submitters, x questions and candidacies, and L options, answers, candidates, and votes.

Benchmark: Question | Option | Answer | Response | EV: Candidacy | Candidate | Vote | Ballot
Q_1 | o_1...o_L | a_1...a_L | B_1 | Cd_1 | c_1...c_L | v_1...v_L | BT_1
Q_2 | o_1...o_L | a_1...a_L | B_2 | Cd_2 | c_1...c_L | v_1...v_L | BT_2
... | ... | ... | ... | ... | ... | ... | ...
Q_x | o_1...o_L | a_1...a_L | B_x | Cd_x | c_1...c_L | v_1...v_L | BT_x
5.2.2 Multiple Option Question
This type of question contains L possible options to choose from, where L > 2. The answer vector is a = (a_1, ..., a_L). As the submitter can select only one valid option out of the L options, Σ a_i = 1. Therefore, the structure of the response is B = (a_1, ..., a_L). In total, M responses B are collected for this question (where M is the number of submitters). The frequency histogram can be generated by adding up the answer vectors from all the submitters.

Result of Q_j = {Frequency of o_1, ..., Frequency of o_L}
= ( Σ_{i=1}^{M} BS_i[B(a_1)], ..., Σ_{i=1}^{M} BS_i[B(a_L)] )    (3)

where BS_i[B_j(a_1)] denotes the response B_j submitted by BS_i, and B_j(a_1) denotes the answer component a_1 of response B_j.

This type of question in the benchmark is mapped to a K-out-of-L voting system according to the mapping presented in Table 4. The L possible options are mapped to L candidates. The ballot BT contains the vote vector {v_1, ..., v_L} for the candidates c_1, ..., c_L. The frequency of the ith option is calculated by adding up the votes received by the ith candidate. Therefore, equation 3 takes the following form in EV:

( Σ_{i=1}^{M} V_i[BT_x(v_1)], ..., Σ_{i=1}^{M} V_i[BT_x(v_L)] )    (4)

where V_i[BT_j(v_1)] denotes the ballot BT_j cast by V_i, and BT_j(v_1) denotes the vote component v_1 of ballot BT_j.
5.2.3 Open Question (Numerical)
This type of question does not provide any predefined options to the submitters. Instead, the submitter can enter a numeric value in the option field. The option field consists of a number of empty bits based on the numerical range provided to the submitter. The value of L in the option field is calculated as the ceiling of log_2 MX, i.e., L = ⌈log_2 MX⌉, where MX is the range. The number entered by the submitter is converted into the equivalent binary string (with the least significant bit first, matching the weights 2^{i-1} in equation 5) and saved in the answer vector a. Let us take the case of question 2 in the appendix, "What percentage of the employees recognize a security issue? [range 0-100]". This question can take 101 valid values. Therefore, the value of L is obtained by applying the ceiling function, ⌈log_2 101⌉, i.e., L = 7. Let us assume that BS submits 50 as the answer to the question. The answer vector is then a = (0100110). In total, M responses B are collected for question Q_j (where M is the number of submitters). For option i (where i = 1,...,L), the ith components of each valid response of the M submitters are summed up, i.e., aa_i = Σ_{w=1}^{M} BS_w[B_j(a_i)], where aa_i is a count of the number of answers that have been received for the ith bit of the binary representation of the question from all the submitters. The mean value of the question Q_j is calculated by combining all aa_i in the following equation:

μ = (1/M) Σ_{i=1}^{L} aa_i 2^{i-1}    (5)

The open numerical question in the benchmark is mapped to a K-out-of-L voting system according to the mapping presented in Table 4. The L possible options are mapped to L candidates. The ballot BT contains the vote vector {v_1, ..., v_L} for the candidates c_1, ..., c_L. The mean of the question Q_x is calculated by first adding up the ith components of each valid ballot into vv_i, and then combining all vv_i and converting them to the decimal value. Equation 5 takes the following form:

vv_i = Σ_{w=1}^{M} V_w[BT_j(v_i)];    μ = (1/M) Σ_{i=1}^{L} vv_i 2^{i-1}    (6)

where vv_i is a count of the number of votes that have been received for the ith bit of the binary representation of the candidates for the candidacy from all the voters.
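A minimal sketch of this encoding and of the mean in equations 5 and 6, using hypothetical submissions for the 0-100 range above (the helper names are ours):

```python
from math import ceil, log2

MX = 101                      # number of valid values for the range 0-100
L = ceil(log2(MX))            # L = 7 bits per answer vector

def encode(value, bits=L):
    """Encode a number as an answer vector a = (a_1,...,a_L), least significant bit first."""
    return [(value >> i) & 1 for i in range(bits)]

# Hypothetical answers from M = 3 submitters.
answers = [50, 70, 30]
responses = [encode(v) for v in answers]
print(responses[0])           # [0, 1, 0, 0, 1, 1, 0]  -> written as (0100110) in the text

# aa_i: component-wise sums over all submitters (equation 5; vv_i plays the same role in equation 6).
M = len(responses)
aa = [sum(col) for col in zip(*responses)]

# Mean value mu = (1/M) * sum_i aa_i * 2^(i-1)  (0-based index i in the code below).
mu = sum(aa_i * 2**i for i, aa_i in enumerate(aa)) / M
print(mu)                     # 50.0, the average of 50, 70 and 30
```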
5.3 Mapping of Overall Concepts
We map the concepts of the benchmarking system to the electronic voting system using an ontology. The idea of using and developing an ontology to explain the concepts is derived from (Agrawal, 2016). Figure 5 presents the ontologies of the benchmarking system and the electronic voting system. In our proposed ontology, there are ten main concepts (circular boxes) and ten
relationships (solid arrow lines). The text above the
horizontal dotted line corresponds to the benchmark-
ing system, while the text below the horizontal dotted
line corresponds to the electronic voting system. The dotted horizontal line also demonstrates how a concept and relationship from the benchmark can be mapped to electronic voting. Thus, Figure 5 helps to understand the relationship between the benchmark and electronic voting clearly. It is evident from the given ontology that the concepts of the benchmark can be mapped to the EV system.
The ontology of the benchmark states that the Benchmark Administrator performs benchmark administration by creating a Benchmark. A Benchmark has some Questions that consist of Options. Submitters from different Organizations participate in the Benchmark by submitting their responses. A response contains the answers to the questions. A response can be considered valid or invalid on the basis of the benchmark rules. The Benchmark Calculating Agent (BCA) counts the responses based on a given methodology, and finally, the BA publishes the Benchmark result.
The ontology also depicts that an Election Administrator (EA) performs administration by creating an Election. The Election has a Candidacy that consists of some Candidates running for a certain post in the election. Voters from different Constituency areas participate in the election by submitting their ballots, which contain the votes for the candidates. A ballot can be valid or invalid based on the election rules. A Tallier collects and counts the valid ballots. The EA finally declares the election result.
6 SECURE BENCHMARK ON
UNRIZKNOW
In this section, we answer research question RQ3 by demonstrating the practical application of an EV scheme to a benchmarking system using the Hi10 scheme (Hirt, 2010). We present the model, set-up, response submission, and benchmark calculation using the EV approach. The aim of this section is to present how we can conduct a secure benchmark on the UnRizkNow platform. The members of UnRizkNow are information security practitioners who possess knowledge about their organization regarding people, process, and technology. We use the cryptographic tools mentioned in (Hirt, 2010) to establish our model.
6.1 Preliminaries
Σ-proofs - A Σ-proof is a three-move special honest-verifier zero-knowledge proof of knowledge. A Σ-proof is called linear if the verifier's test predicate is linear, i.e., the sum of two accepting conversations is accepting as well. The details of Σ-proofs are given in section 2.1 of (Hirt, 2010). BA acts as a verifier in our benchmark model.
Identification Scheme - An identification scheme is
an interactive protocol between two parties, a prover
(benchmark submitter) and a verifier (Benchmark Ad-
ministrator). If the protocol is successful, then at the
end of the protocol the BA is convinced he is interact-
ing with the BS, or more precisely, with someone who
knows the secret key that corresponds to the prover’s
public key. For benchmark submitter identification,
we assume an identification scheme where the identi-
fication protocol can be written as a linear Σ-proof.
It is easy to verify that Schnorr’s identification scheme
(Schnorr, 1991) satisfies this requirement. The secret
key of BS is denoted by z_v, and the corresponding public key by Z_v = g^{z_v} for an appropriate generator g.
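For concreteness, a toy sketch of Schnorr's identification protocol; the parameters are deliberately tiny and insecure, chosen by us purely for illustration:

```python
import secrets

# Toy parameters: p is a safe prime, q = (p-1)/2, g generates the order-q subgroup.
p, q, g = 467, 233, 4

# Prover (benchmark submitter): secret key z_v, public key Z_v = g^z_v mod p.
z_v = secrets.randbelow(q - 1) + 1
Z_v = pow(g, z_v, p)

# 1. Commitment: prover picks random r and sends t = g^r mod p.
r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)

# 2. Challenge: verifier (BA) picks a random challenge c.
c = secrets.randbelow(q)

# 3. Response: prover sends s = r + c * z_v mod q.
s = (r + c * z_v) % q

# Verification: BA accepts iff g^s == t * Z_v^c mod p.
assert pow(g, s, p) == (t * pow(Z_v, c, p)) % p
print("Schnorr identification accepted")
```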
Designated-Verifier Proofs- A designated-verifier
proof is a proof which is convincing for one particu-
lar (designated) verifier, but completely useless when
transferred from this designated verifier to any other
entity (Jakobsson et al., 1996). The requirements of
the encryption function are drawn from (Hirt, 2010).
A semantically-secure probabilistic public-key encryption function E_Z : V × R → E, (a, α) ↦ e, where Z denotes the public key, V denotes the set of answers, R denotes the set of random strings, and E denotes the set of encryptions. The decryption function is D_z : E → V, e ↦ a, where z denotes the secret key. It is also required that E be q-invertible for a given q ∈ ℤ; this implies that for every encryption e, the decryption a and the randomness α of q·e can be efficiently computed. It is also required that there is a number u ≤ q, large enough that 1/u is considered negligible (Hirt, 2001). Furthermore, we use the modified ElGamal and Paillier homomorphic encryption functions.
Modified ElGamal Encryption - A traditional ElGamal system with an encryption function E with the property E(M_1) × E(M_2) = E(M_1 + M_2).
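A minimal sketch (our own, with toy parameters far too small for real use) of an additively homomorphic "exponential" ElGamal variant exhibiting the property E(M_1) × E(M_2) = E(M_1 + M_2); this illustrates the property, not the exact construction used in (Hirt, 2010):

```python
import secrets

# Toy group: p is a safe prime, q = (p-1)/2, g generates the order-q subgroup.
p, q, g = 467, 233, 4

# Key generation: secret key x, public key h = g^x mod p.
x = secrets.randbelow(q - 1) + 1
h = pow(g, x, p)

def encrypt(m):
    """Exponential ElGamal: E(m) = (g^r, g^m * h^r). Messages live in the exponent."""
    r = secrets.randbelow(q - 1) + 1
    return (pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p)

def multiply(c1, c2):
    """Component-wise product of ciphertexts encrypts the sum of the plaintexts."""
    return ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)

def decrypt(c):
    """Recover g^m, then m by brute-force discrete log (fine for small tallies)."""
    gm = (c[1] * pow(pow(c[0], x, p), p - 2, p)) % p
    m = 0
    while pow(g, m, p) != gm:
        m += 1
    return m

e = multiply(encrypt(2), encrypt(3))
print(decrypt(e))  # 5 == 2 + 3
# The same property lets a calculating agent sum one answer component over
# all submitters without ever decrypting an individual response.
```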
Paillier Encryption - As described in section 3.3 of (Hirt, 2010).
Re-encrypting and Proving Re-encryptions - A random re-encryption e' of a given encryption e = E(a, α) is an encryption with the same answer a, but a new (independently chosen) randomness α'. Such a re-encryption can be computed by adding a random encryption of 0 to e. The rest of the details can be obtained from (Hirt, 2010) by substituting the vote v with the answer a.
Figure 5: An ontology of the benchmarking system and the electronic voting system. The diagram shows that the concepts, actors, and phases of the benchmarking system can be mapped to the electronic voting system.
6.2 Details of The Benchmark Protocol
We use the non-receipt free K-out-of-L voting proto-
col of (Hirt, 2010) to establish our benchmark pro-
tocol. Figure 6 shows the various steps involved in carrying out the benchmark on the UnRizkNow platform.
Figure 6: An overview of the benchmark model on the UnRizkNow portal.
Model - We use the benchmark entities as mentioned in section 5.1. The communication among the benchmark entities happens through the UnRizkNow platform. The platform has a bulletin board for posting any announcement. BS post their encrypted responses on the bulletin board with their signatures. This also prevents re-submission of responses on the bulletin board. Anyone can read and verify the responses posted on the bulletin board, but nobody can delete anything from it. The bulletin board can be considered an authenticated public channel with memory. The communication channel between BA and BS is secured using TLS. A threshold t denotes the lower bound on the number of authorities that are guaranteed to remain honest during the protocol.
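A minimal, illustrative sketch (not the actual UnRizkNow implementation; all names are ours) of an append-only bulletin board that rejects double submissions and offers no deletion:

```python
class BulletinBoard:
    """Append-only board: posts can be read and verified by anyone, never deleted."""

    def __init__(self):
        self._posts = []            # list of (submitter_id, payload, signature)

    def post(self, submitter_id, payload, signature):
        # Uniqueness: a submitter may post only one (encrypted) response.
        if any(pid == submitter_id for pid, _, _ in self._posts):
            raise ValueError(f"{submitter_id} has already submitted a response")
        self._posts.append((submitter_id, payload, signature))

    def read_all(self):
        # Anyone may read the full board; there is deliberately no delete method.
        return list(self._posts)

board = BulletinBoard()
board.post("BS1", "<encrypted response>", "<signature of BS1>")
board.post("BS2", "<encrypted response>", "<signature of BS2>")
try:
    board.post("BS1", "<another response>", "<signature of BS1>")
except ValueError as err:
    print(err)                      # double submission is detected and rejected
```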
Benchmark Structure - The structure of the questions follows the structure described in section 2.2. We assume that we have yes/no, multiple-choice, and open (numerical) questions in the benchmark. A sample list of questions is given in the Appendix. The mapping of a question to a candidacy is performed as described in section 5.2. The UnRizkNow platform maintains a double array a[x][y] to save the label for the question format and the number of bits required to generate the options for the question. The labels are yn for a yes/no question, mc for a multiple-choice question, and op for an open numerical question. The submitter sees the questions in the form presented in the Appendix. The platform has a program module m that reads the values from the array a[x][y] and takes care of the translation of the options o into the required bits.
Benchmark Administration - The N calculating agents (BCA_1,...,BCA_N) execute the key generation protocol of the ElGamal encryption scheme. The resulting public key of the benchmarking system is announced to the registered members of the UnRizkNow community, and the corresponding secret key is shared among the BCAs. BA also publishes the questions and the response format on the bulletin board of UnRizkNow.
Benchmark Submission - The benchmark submitter constructs a random encryption e = E(a, α) of his answer vector a with randomness α ∈_R R^K, and posts it onto the bulletin board of UnRizkNow. The submitter also posts a proof of validity. A response B = a = (a_1,...,a_K) is valid if and only if a_i ∈ {0,1} for i = 1,...,K and Σ a_i = K. A validity proof for the encrypted response e = (e_1,...,e_K) is also constructed. The details of the construction of the validity proof are given in section 5.4 of (Hirt, 2010). The encrypted response is submitted by BS to the bulletin board of UnRizkNow.
Benchmark Calculation - BCA collects the encrypted responses from the bulletin board. The benchmark result Π is computed for each question separately. For question Q_i, the ith components of each valid encrypted response from the M submitters are summed up using the homomorphic property of the encryption scheme and decrypted using the verifiable decryption protocol of the encryption scheme.
Benchmark Result - The result of the benchmark is published for the individual questions. The result is calculated according to equations 1, 3, and 5.
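Putting the pieces together, a toy end-to-end sketch of the calculation step under strong simplifications of our own (a single agent holds the whole secret key, no validity proofs, tiny insecure parameters): encrypted answer vectors are aggregated homomorphically and only the column sums are decrypted.

```python
import secrets

# Same toy exponential-ElGamal group as in the earlier sketch.
p, q, g = 467, 233, 4
x = secrets.randbelow(q - 1) + 1     # BCA secret key (threshold sharing omitted)
h = pow(g, x, p)                     # public key announced to submitters

def encrypt(m):
    r = secrets.randbelow(q - 1) + 1
    return (pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p)

def multiply(c1, c2):
    return ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)

def decrypt(c):
    gm = (c[1] * pow(pow(c[0], x, p), p - 2, p)) % p
    m = 0
    while pow(g, m, p) != gm:
        m += 1
    return m

# Hypothetical answer vectors (multiple-option question, L = 3) from 4 submitters.
plain_responses = [(1, 0, 0), (0, 1, 0), (0, 1, 0), (0, 0, 1)]

# Each BS encrypts its answer vector component-wise and posts it.
board = [[encrypt(a) for a in response] for response in plain_responses]

# BCA multiplies the ith ciphertexts of all posted responses and decrypts
# only the aggregate, never an individual response.
tallies = []
for i in range(3):
    agg = board[0][i]
    for response in board[1:]:
        agg = multiply(agg, response[i])
    tallies.append(decrypt(agg))

print(tallies)  # [1, 2, 1]: the frequency of each option
```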
7 DISCUSSION
In this section, we answer the final research question
RQ4. Firstly, we address the adversary model and the
security assumptions that we considered in this study. The adversary model highlights the capabilities of an adversary. Secondly, we perform the
security analysis on the main security requirements
which are typical for benchmarking systems. Finally,
we mention the behavior of the benchmarking system
using EV concepts towards the considered adversary
model.
7.1 Adversary Model and Trust
Assumption
The adversary model depicts the attack potential, which is a measure of the minimum effort to be expended for an attack to be successful (Idrees et al., 2014). The be-
havior of an adversary can change largely according
to the implemented protocols and the capabilities of
the adversary. An internal attacker is equipped with
cryptographic keys and credentials that enable them
to participate in the execution of the processes in the
system. An external attacker does not possess such
keys and credentials. In this section, we provide a
general model of the adversary for the benchmarking system and the EV system and map them.
In our adversary model, BS, BCA, and BA can act
as an internal attacker to break the system secrecy,
but not to influence the election outcome via bribery
or coercion. We assume that all the parties involved in
the benchmarking scenario are polynomially bounded
and thus incapable of solving hard problems or break-
ing cryptographic primitives such as contemporary
hash functions. Adversaries cannot efficiently de-
crypt ElGamal ciphertexts without knowing the pri-
vate keys. For preparing and conducting a benchmark
event, as well as for computing the final result, we
assume that at least one honest benchmark authority does not collude. We take into consideration that dishonest BCAs may collude with the adversary, but not all of them in the same benchmark event. A threshold t denotes the number of BCAs that are required to decrypt the responses and that would also be able to break the secrecy of an answer. BS cannot create an
invalid response that can pass the validity proof. An
external or internal adversary cannot delete any con-
tent from the UnRizkNow bulletin board.
7.2 Fulfillment of the Security
Requirements of Benchmarking
System
In this section, we show how the security require-
ments of the benchmarking system stated in section
2.3.3 can be fulfilled by adopting the EV approach.
We establish the security of our proposed benchmark
model using the established security proofs from the
electronic voting scheme. We utilize the security
proof and concepts given in (Hirt, 2010).
1. Completeness: A dishonest submitter BS_i may create an invalid response, but the probability that such an invalid encrypted response passes the validity proof is negligible. Therefore, the invalidity of the encrypted response is detected by the validity proof of the scheme, and the invalid response will not be counted.
2. Uniqueness: The encrypted response, along with the proof of validity, is posted on the bulletin board of the UnRizkNow platform. Therefore, the submitter can submit only once, and a double submission is detected easily.
3. Universal Verifiability: Anyone can read the encrypted responses posted on the bulletin board. One can check their validity by verifying the K-out-of-L encryption proof. Since the encryption function is homomorphic, anyone can also sum up all valid encrypted responses to obtain an encryption of the sum of the answers. Since the decryption is verifiable, one can also check whether the sum of the answers has been correctly decrypted (Hirt, 2001), (Hirt, 2010).
4. Individual Verifiability: The individual verifia-
bility of the benchmarking system is guaranteed
by the homomorphic property of the encryption
function and the verifiable decryption of the en-
cryption scheme (Hirt, 2001), (Hirt, 2010).
5. Eligibility: The eligibility of the benchmarking system is ensured by the use of Schnorr’s identification scheme. It is essential that each submitter knows his secret key, and this is ensured by the public-key infrastructure. A protocol for ensuring knowledge of the secret key for Schnorr’s identification scheme is provided in (Hirt and Sako, 2000); a minimal sketch of the underlying identification mechanism appears after this list.
6. Secrecy: The secrecy of the benchmarking system is guaranteed under the assumption that no t BCAs maliciously pool their information and the assumption that the encryption scheme is semantically secure.
7. Soundness: The soundness of the benchmarking system can be proved using the proofs for re-encrypting and proving re-encryption given in (Hirt, 2010).
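The following Python sketch illustrates the basic Schnorr identification mechanism referred to in the Eligibility item above, in a non-interactive (hashed-challenge) form. The toy parameters and the hash-based challenge are assumptions made for illustration only and do not reproduce the exact protocol of (Hirt and Sako, 2000).

```python
# Minimal sketch: the submitter proves knowledge of the secret key behind his
# registered public key. Toy group parameters; illustrative only.
import hashlib
import random

p, q, g = 1019, 509, 4          # toy safe-prime group

sk = random.randrange(1, q)     # submitter's secret key
pk = pow(g, sk, p)              # registered public key

r = random.randrange(1, q)
commitment = pow(g, r, p)
challenge = int.from_bytes(
    hashlib.sha256(f"{commitment}:{pk}".encode()).digest(), "big") % q
response = (r + challenge * sk) % q

# Verifier accepts iff g^response == commitment * pk^challenge (mod p).
assert pow(g, response, p) == commitment * pow(pk, challenge, p) % p
```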
8 LIMITATION AND FUTURE
WORK
The security requirements of the benchmarking system are formulated mainly to address the secrecy of the sensitive information shared by the benchmark submitters and the transparency of the benchmark process. There could be an extra requirement of receipt-freeness for an enhanced version of the benchmarking system. The receipt-freeness property ensures that a submitter cannot prove to a third party that they submitted a particular set of responses. A secure electronic voting scheme usually addresses the receipt-freeness requirement because vote selling is a serious problem in elections; it is often initiated by an entity who wants a certain candidate to win. However, in our benchmark model, we do not think this problem is widespread, as there is no candidate involved. Nevertheless, the significance of this requirement needs further investigation by producing a use-case scenario.
The mapping of the benchmark structure to the EV system uses the K-out-of-L voting structure. We constructed a response as an answer vector a with a_i ∈ {0, 1}. The benchmark result is constructed by adding the ith components of each valid response using the homomorphic property of the encryption function. Therefore, it is not possible to obtain the actual number entered by a submitter in an open numerical question, as we cannot combine all the answers in a response and decrypt it; the system can only apply the homomorphic operation to the ith bit of the answer. This property helps to ensure the confidentiality of the submitted answers, but at the same time it does not allow one to recover the actual numbers submitted by the BS. The presence of the actual numbers in the benchmark could help to create a distribution graph of all the submitted values; in other words, it would show how many submissions lie below and above one's own submission. However, in our proposed model, one can only see whether one's performance is below or above the average performance.
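The following toy example (hypothetical values and an assumed 4-bit encoding) illustrates this point: the per-bit sums reveal the total, and hence the average, but not the individual submitted values.

```python
# Illustrative only: per-bit homomorphic sums hide individual numeric answers.
answers = [13, 9, 13]                     # hypothetical submitted values
L = 4                                     # assumed bit-length of the encoding

vectors = [[(v >> i) & 1 for i in range(L)] for v in answers]   # a_i in {0, 1}
per_bit_sums = [sum(vec[i] for vec in vectors) for i in range(L)]

print(per_bit_sums)                       # [3, 0, 2, 3]
total = sum((2 ** i) * s for i, s in enumerate(per_bit_sums))
print(total / len(answers))               # the average (~11.67) is recoverable,
                                          # the individual values 13, 9, 13 are not
```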
Our proposed solution is still prone to conflicts of interest and incentives to manipulate the benchmark process when the benchmark submitters are also market participants with stakes in the level of the benchmarks. These conflicts of interest can create an incentive for abusive conduct of the benchmark process. Benchmark submitters may attempt to manipulate a benchmark by submitting false or misleading data, undermining the credibility of the benchmark result. Our future work will consist of conducting a risk analysis of the benchmarking system. We will adopt the CIRA method (Agrawal and Szekeres, 2017) to conduct the risk analysis exercise. The aim of this exercise will be to assess the conflicts of interest among the stakeholders involved in the benchmarking system and to propose a treatment plan to reduce them.
The EV schemes and systems that we analyzed in this study are far from a complete list. There might be more relevant EV schemes and systems available that are suitable for our benchmark model on the UnRizkNow platform. As future work, we would like to implement different electronic voting schemes on the UnRizkNow platform and test their performance in the benchmark context. We are also interested in conducting similar studies with group signatures and secure multi-party computation to analyze their role in conducting a secure benchmark on UnRizkNow.
The ontology of benchmark and electronic voting presents an overview of the concepts and relationships involved in the system. The ontology needs to be formalized with the Web Ontology Language (OWL). A formal ontology will make it possible for an automated tool to perform the mapping between benchmark and EV.
The future work also includes the assessment of other EV schemes for conducting a secure benchmark on UnRizkNow. For instance, the LE02 scheme also meets all the requirements of a secure benchmark. Therefore, LE02 can also be a good candidate to adopt for a future secure benchmark solution. However, there is a concern with the efficiency of the LE02 scheme. This scheme has an overall performance complexity of O(xL^2 B), where B represents the number of bits used to store one group element, x represents the number of questions, and L is the number of bits in an answer. In other words, every submitter sends his encrypted response using O(xL^2 B) bits. On the other hand, the overall performance complexity of the Hi10 scheme is O(xLB); in other words, every submitter sends his encrypted response using O(xLB) bits.
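A back-of-the-envelope comparison with assumed parameters (the values of x, L, and B below are illustrative, not taken from the schemes themselves) shows the practical effect of the extra factor of L:

```python
# Rough response-size comparison, ignoring constants hidden by the O-notation.
x = 20      # assumed number of benchmark questions
L = 8       # assumed bits per answer
B = 2048    # assumed bits per group element (e.g. a 2048-bit group)

le02_bits = x * L**2 * B    # LE02: O(x * L^2 * B) bits per submitter
hi10_bits = x * L * B       # Hi10: O(x * L * B) bits per submitter

print(le02_bits // 8 // 1024, "KiB for LE02")   # 320 KiB
print(hi10_bits // 8 // 1024, "KiB for Hi10")   # 40 KiB
```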
9 CONCLUSION
We have presented the model of a benchmarking system that is typically used by an organization to establish a benchmark standard and provide benchmark as a service. We highlighted the security challenges that current benchmark models face and thereby justified the need to develop a more secure benchmarking system. The security limitations of current benchmarking systems may hinder the sharing of important information between the submitters and the benchmark authorities. Therefore, the requirements of a secure benchmarking system were established. We proposed a novel approach to solving the security limitations of benchmarking systems by adopting secure cryptographic proofs from the field of electronic voting. We demonstrated how a benchmarking system can be mapped to an electronic voting system by mapping its protocol, structure, and concepts. We also demonstrated how the different formats of benchmark questions can be presented and how the benchmark result can be calculated using the concepts of electronic voting. Our solution is based on an electronic voting protocol that provides secure transmission of the benchmark responses throughout the system. Furthermore, the identity of the response submitter is kept secret by the cryptographic protocols. The members who participate in the benchmark process can verify that their responses have been counted correctly when calculating the benchmark result. Afterward, we demonstrated how a secure benchmark can be designed for the UnRizkNow platform using the concepts of an EV system. We showed that a benchmarking system is more secure if it follows the EV approach, as it can then satisfy the necessary security requirements. We adopted the Hi10 scheme to demonstrate the feasibility of our approach for the UnRizkNow platform, but other relevant EV schemes can also be adapted to perform benchmarking on UnRizkNow.
REFERENCES
ABB (2017). Cyber security benchmark.
Agrawal, V. (2016). Towards the ontology of iso/iec 27005:
2011 risk management standard. In HAISA, pages
101–111.
Agrawal, V. and Snekkenes, E. A. (2017). Factors Affecting
the Willingness to Share Knowledge in the Communi-
ties of Practice, pages 32–39. Springer International
Publishing, Cham.
Agrawal, V. and Szekeres, A. (2017). Cira perspective on
risks within unrizknow - a case study. In 2017 IEEE
4th International Conference on Cyber Security and
Cloud Computing (CSCloud), pages 121–126.
Agrawal, V., Wasnik, P., and Snekkenes, E. A. (2017). Fac-
tors influencing the participation of information secu-
rity professionals in electronic communities of prac-
tice. In Proceedings of the 9th International Joint
Conference on Knowledge Discovery, Knowledge En-
gineering and Knowledge Management, pages 50–60.
Chaum, D., Ryan, P. Y. A., and Schneider, S. (2005). A
practical voter-verifiable election scheme. In Proceed-
ings of the 10th European Conference on Research
in Computer Security, ESORICS’05, pages 118–139,
Berlin, Heidelberg. Springer-Verlag.
Chaum, D. L. (1981). Untraceable electronic mail, return
addresses, and digital pseudonyms. Commun. ACM,
24(2):84–90.
Chen, C.-L., Chen, Y.-Y., Jan, J.-K., and Chen, C.-C.
(2014). A secure anonymous e-voting system based
on discrete logarithm problem. Applied Mathematics
& Information Sciences, 8(5):2571.
Cortier, V., Galindo, D., Glondu, S., and Izabachène, M. (2014a). Election verifiability for Helios under weaker trust assumptions. In European Symposium on Research in Computer Security, pages 327–344. Springer.
Cortier, V., Galindo, D., Glondu, S., and Izabachène, M. (2014b). Election Verifiability for Helios under Weaker Trust Assumptions, pages 327–344. Springer International Publishing, Cham.
ESMA-EBA (2013). Final report: ESMA-EBA principles for benchmark-setting processes in the EU. Technical report.
Forum, I. S. (2017). Benchmark as a service - Information Security Forum. https://www.securityforum.org/products-services/benchmark-as-a-service/. Online; accessed 28 November 2017.
Gerlach, J. and Gasser, U. (2009). Three case studies from
switzerland: E-voting. Berkman Center Research
Publication No, 3:2009.
Gregor, S. and Hevner, A. R. (2013). Positioning and pre-
senting design science research for maximum impact.
MIS Q., 37(2):337–356.
Gritzalis, D. A. (2002). Principles and requirements for a
secure e-voting system. Comput. Secur., 21(6):539–
556.
Haenni, R., Koenig, R. E., Locher, P., and Dubuis, E.
(2017). Chvote system specification. IACR Cryptol-
ogy ePrint Archive, 2017:325.
Hevner, A. R., March, S. T., Park, J., and Ram, S. (2004).
Design science in information systems research. MIS
Q., 28(1):75–105.
Hidalgo, A. and Albors, J. (2008). Innovation management
techniques and tools: a review from theory and prac-
tice. R&D Management, 38(2):113–127.
Hirt, M. (2001). Multi Party Computation: Efficient Pro-
tocols, General Adversaries, and Voting. Hartung-
Gorre.
Hirt, M. (2010). Towards Trustworthy Elections, chapter Receipt-free K-out-of-L Voting Based on ElGamal Encryption, pages 64–82. Springer-Verlag, Berlin, Heidelberg.
Hirt, M. and Sako, K. (2000). Efficient Receipt-Free Voting
Based on Homomorphic Encryption, pages 539–556.
Springer Berlin Heidelberg, Berlin, Heidelberg.
Idrees, M. S., Roudier, Y., and Apvrille, L. (2014). Model
the System from Adversary Viewpoint: Threats Iden-
tification and Modeling. EPTCS 165, 2014, pp. 45-58.
arXiv:1410.4305v1.
IOSCO (2013). Principles for financial benchmarks. Tech-
nical report.
ISF (2017). The isf benchmark and benchmark as a
service. https://www.securityforum.org/tool/the-isf-
benchmark-and-benchmark-as-a-service/. online; ac-
cessed 19 November 2017.
Jakobsson, M., Sako, K., and Impagliazzo, R. (1996).
Designated verifier proofs and their applications. In
Maurer, U., editor, Advances in Cryptology EU-
ROCRYPT ’96, pages 143–154, Berlin, Heidelberg.
Springer Berlin Heidelberg.
Johannesson, P. and Perjons, E. (2014). An introduction to
design science. Springer.
Kanno, Y. (2009). Information security measures benchmark (ISM-Benchmark). Technical report, IT Security Center, Information-technology Promotion Agency (IPA).
Kelessidis, V. (2000). Innoregio: dissemination of innova-
tion management and knowledge techniques.
Lee, B. and Kim, K. (2000). Receipt-free electronic voting
through collaboration of voter and honest verifier. In
Proceeding of JW-ISC2000, pages 101–108.
Lee, B. and Kim, K. (2003). Receipt-free electronic voting
scheme with a tamper-resistant randomizer. In Pro-
ceedings of the 5th International Conference on In-
formation Security and Cryptology, ICISC’02, pages
389–406, Berlin, Heidelberg. Springer-Verlag.
Liaw, H.-T. (2004). A secure electronic voting protocol for
general elections. Comput. Secur., 23(2):107–119.
Madise, Ü. and Martens, T. (2006). E-voting in Estonia 2005. The first practice of country-wide binding internet voting in the world. Electronic Voting, 86(2006).
Mathwick, C., Wiertz, C., and de Ruyter, K. (2008). Social capital production in a virtual P3 community. Journal of Consumer Research, 34(6):832–849.
Neff, C. A. (2001). A verifiable secret shuffle and its ap-
plication to e-voting. In Proceedings of the 8th ACM
Conference on Computer and Communications Secu-
rity, CCS ’01, pages 116–125, New York, NY, USA.
ACM.
State Electoral Office of Estonia (2017). General framework of electronic voting and implementation thereof at national elections in Estonia.
O’Rourke, L., Transportation Research Board, National Cooperative Freight Research Program, and U.S. Department of Transportation Research and Innovative Technology Administration (2012). Handbook on Applying Environmental Benchmarking in Freight Transportation. Transportation Research Board.
Pierro, M. D. (2017). evote tutorials.
Rjašková, Z. (2002). Electronic voting schemes. Diplomová práca, Bratislava.
Schnorr, C. P. (1991). Efficient signature generation by
smart cards. Journal of Cryptology, 4(3):161–174.
Schoenmakers, B. (1999). A Simple Publicly Verifiable Se-
cret Sharing Scheme and Its Application to Electronic
Voting, pages 148–164. Springer Berlin Heidelberg,
Berlin, Heidelberg.
Vinkel, P. (2012). Internet Voting in Estonia, pages 4–12.
Springer Berlin Heidelberg, Berlin, Heidelberg.
Whitman, M. and Mattord, H. (2014). Management of in-
formation security. Cengage learning.
Wiertz, C. and de Ruyter, K. (2007). Beyond the call of
duty: Why customers contribute to firm-hosted com-
mercial online communities. Organization Studies,
28(3):347–376.
APPENDIX
List of Benchmarking Questions
1. Do you perform background checks on all em-
ployees with access to sensitive data, areas, or ac-
cess points?
Yes
No
2. What percentage of the employees recognize a security issue? [range 0-100]
3. Where do you store your sensitive information?
Laptop
Paper document
Data server (internal)
Data server (external)