SAFE RPC
Auditing Mixnets Safely using Randomized Partial Checking
Stefan Popoveniuc and Eugen Leontie
Computer Science Department, The George Washington University, Washington D.C., U.S.A.
Keywords:
Mix net, Randomized partial checking, Anonymity, Privacy sets, Electronic voting.
Abstract:
Secure voting systems like PunchScan and Scantegrity use mixnets which are verified using Randomized
Partial Checking (RPC). This simple and efficient technique can lead to privacy loss and may, in an extreme
case, result in linking all the clear text ballots to the voters who cast them, thus completely destroying the
secrecy of all ballots and circumventing the functionality of the mixnet. We suggest a simple technique,
Secure RPC (SRPC), that uses RPC in a way that guarantees maximal privacy in all possible cases. We prove
that SRPC does not asymptotically reduce the integrity offered by RPC.
1 INTRODUCTION
David Chaum introduced mix networks (mixnets)
(Chaum, 1981), as one of the first constructions
for protecting privacy in the digital world. Since
then, mixnets have been a fertile ground for re-
search in anonymous communications and applica-
tions. Mixnets provide a foundation for schemes in
which privacy is of paramount importance, such as
anonymous message delivery and electronic voting.
The role of a mix is to take a set of messages, or in-
puts, and (1) preserve the information in the messages
while (2) shuffling their order to remove any links
(correspondences between inputs and outputs). Given
the set of inputs and a fixed output, an adversary
should not identify the particular input corresponding
to the given output, with a probability greater than a
uniform random guess. To mitigate the corruption of a single mix, a mixnet, or mix cascade, has
been proposed. More mixes in a sequence improve
the quality of the mixnet, but reduce its performance.
In verifiable electronic voting systems, a mixnet
is used to de-correlate votes from voters; the in-
puts of the mixnet are encrypted ballots (possibly linkable to voters) and the outputs are the plaintext bal-
lots (not linkable to voters). Recently, specialized
mixnets have been proposed, such as Punchscanian
mixnets (Popoveniuc and Hosp, 2006) and pointer-
based mixnets (Chaum et al., 2008).
Checking the correctness of a mixnet means ver-
ifying that the mix did not modify, delete, or inject
messages. Two auditing methods are common: zero
knowledge proofs (ZKPs, e.g. (Neff, 2001)), and
randomized partial checking (RPC (Jakobsson et al.,
2002)). Both methods are based on a challenge re-
sponse mechanism. ZKPs require the mix to produce
new information based on challenges, whereas RPC
utilizes a simple observation: for each mix, some
links can be revealed, as long as there is no full path
of revealed links for the entire mixnet. RPC may be
more efficient than ZKPs, and many of the voting sys-
tems recently proposed use RPC.
1.1 Motivation
We explore the possibility of finding the correspon-
dence between the input and the output for a pair of
mixes that is audited using RPC. We find that the pri-
vacy may be significantly reduced, much more than
was observed by the original RPC paper (Jakobsson
et al., 2002). Many links may be completely revealed.
The primary motivation for this work is the ex-
istence of the specialized mixnets used by Punch-
Scan (Popoveniuc and Hosp, 2006) and Scantegrity
(Chaum et al., 2008), which have some unique char-
acteristics: (1) the number of unique messages in the
output of the last mix is very small (because the out-
puts represent candidates), (2) the mixnet has only two
mixes and (3) the pair of mixes is audited using RPC.
However, this work is general in scope and applies to
any mixnet that satisfies the above properties, regard-
less if the mixnet is used for voting or not.
Our work does not apply to mixnets in which all
the outputs of each mix are unique, or there is a very
large number of such unique outputs. Also our obser-
vations do not apply if the number of mixes is larger
than two. While having four mixes would solve the
observed problem, this would pose a significant per-
formance penalty, essentially doubling the amount of
time it takes to obtain the final tally. Safe RPC shows a constructive way to reduce to zero the probability of breaking privacy because of RPC.
1.2 Contributions
The contributions of this work are: (1) to raise the is-
sue that RPC may reduce the ideal privacy sets, and
may expose the associations between the outputs of a
mixnet and its inputs, thus defeating the very purpose
of the mix, and (2) to propose a simple and provable
solution that prevents this situation from appearing,
while not diluting the integrity assurance.
Our result primarily impacts designers of elec-
tronic voting systems, a number of which are based
on mixnets. To exemplify, assume that the results of an
election say that Alice got 52% of the votes and Bob
got 48%. In the case of classical RPC, the set of votes
that produced the final tally gets split into two sub-
sets, resulting in two partial tallies. While unlikely,
it may happen that one of the partial tallies contains
only votes for Alice. In this extreme case, the privacy
of half of the votes becomes totally compromised.
Our approach is simple: if two consecutive mixes
need to be audited for correctness, instead of doing
the random choices on the output of the first mix (and
thus the input to the second mix), we do the random
choices on the output of the second mix, such that
the resulting subsets maintain the characteristics of
the entire output. We audit the second mix using one
of these sets and the first mix using the complement
of the pre-images of this set.
In the example above, the final tally is divided into
two sub-tallies, each having 52% of the votes for Al-
ice and 48% of the votes for Bob. This is not a com-
pletely random partitioning, but rather an educated
split of the tally that maintains maximal privacy.
2 RELATED WORK
Much of the previous work that addresses privacy
leakage caused by mixnet auditing focuses on the re-
lationship among multiple consecutive mixes. In the
original RPC, two mixes can be paired so that the re-
vealed outputs of the first mix are not the revealed
inputs of the second mix.
Chaum (Chaum, 2004) uses RPC across four con-
secutive mixes, with the first two used as before, but
Figure 1: (a) Two mixes forming a mixnet. (b) Linked Fs.
with the third mix revealing inputs corresponding to
half the outputs of the second mix, and with the fourth
mix revealing only the unrevealed outputs of the third.
This addresses the problem of RPC cutting the pri-
vacy set in half.
Gomulkiewicz et al. (Gomulkiewicz et al., 2003)
provide formal analysis of the information loss in-
duced by Chaum’s scheme with respect to the prob-
ability distribution of an input being linked to an out-
put. They find that the connection between the inputs
and outputs of a mixnet is sufficiently random (assum-
ing a good shuffle at each mix) to have assurance that
the mixnet meets the privacy demands of voting sys-
tems.
While there are other ways to audit mixnets be-
sides RPC, this work is solely focused on this check-
ing technique.
3 USEFUL DEFINITIONS
We model two sequenced mixes by a function $F : Z_m \to Z_m \times V$, where $m$ is the number of inputs to the mixnet, $Z_m$ is the set of integers from zero to $m-1$, and $V$ is the set of clear text messages produced by the mixnet (e.g. votes). Let $n$ be the cardinality of $V$. In a typical voting system, $n$ represents the number of candidates and is between 2 and 10, often much closer to 2 than to 10.
The mixnet $F$ consists of two mixes, $F_1$ and $F_2$, with $F = F_2 \circ F_1$, where $F_1 : Z_m \to Z_m$ and $F_2 : Z_m \to Z_m \times V$. Figure 1(a) portrays the setting.
The current RPC method to audit the mixnet is: an independent auditor flips an unbiased coin for each output $o_1$ of $F_1$. If the coin is heads, the mixnet reveals the pre-image $i$ of $o_1$ through $F_1$ and all the data that allows the public to check that $F_1(i) = o_1$. If the coin is tails, the mixnet reveals the post-image $o$ of $o_1$ through $F_2$, along with all the data needed to check that $F_2(o_1) = o$.
We assume that all the mappings done by $F_1$ and $F_2$ are equally likely. Because no transformation is revealed from the input of $F$ to its output, no outside observer knows the correlation between the input and the permuted output. The probability that the mixnet cheats on $k$ ballots and is not detected is $\frac{1}{2^k}$.
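To make the challenge-response concrete, the following is a minimal simulation of the coin-flip audit, assuming a toy representation in which F1 is a plain permutation of Z_m and F2 is a list of (output position, message) pairs; the names (audit_rpc, F1, F2) are illustrative only, and a real mixnet would publish cryptographic commitments and open them on challenge rather than reveal permutation entries directly.

import secrets

def audit_rpc(F1, F2):
    """For every output o1 of F1, flip a fair coin: heads reveals the F1
    link into o1, tails reveals the F2 link out of o1. No intermediate
    position gets both links revealed, so no full input-to-output path
    becomes public."""
    revealed_f1, revealed_f2 = {}, {}
    for o1 in range(len(F1)):
        if secrets.randbelow(2) == 0:        # heads
            i = F1.index(o1)                 # pre-image of o1 through F1
            assert F1[i] == o1               # the public check F1(i) = o1
            revealed_f1[o1] = i
        else:                                # tails
            revealed_f2[o1] = F2[o1]         # post-image (position, message)
    return revealed_f1, revealed_f2

# Example with m = 4: F1 is a permutation, F2 carries the clear text votes.
F1 = [2, 0, 3, 1]
F2 = [(1, "A"), (3, "B"), (0, "A"), (2, "B")]
left, right = audit_rpc(F1, F2)
print("revealed F1 links:", left)
print("revealed F2 links:", right)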
3.1 Privacy Leakage
We now define privacy leakage. We start by giving a
small example. Let's assume we have an election with
2 candidates, and in the final tally each candidate got
exactly 50% of the votes. Looking at the final tally,
any voter is equally likely to have voted for any of
the two candidates. RPC divides the final tally into
two partial tallies. We assume one of the partial tal-
lies has 60% of the votes for one candidate and 40%
for the other candidate. A voter belonging to the first
partial tally is now more likely to have voted for the
candidate that got 60%, whereas in the initial case the
voter was equally likely to have voted for either of the
candidates. We say that privacy has been breached.
Let's assume we have a winner-takes-all election and the final tally is $\{p_1\%, p_2\%, \ldots, p_n\%\}$. We say that privacy has been breached if there exists a partial tally $\{p'_1\%, p'_2\%, \ldots, p'_n\%\}$ such that, for some $x \in \{1, \ldots, n\}$ and some $\varepsilon > 0$, $\left|\frac{p'_x - p_x}{p_x}\right| \geq \varepsilon$. In other words, the percentages from the final tally are different from the percentages of the partial tallies. The larger $\varepsilon$ is, the larger the privacy leakage.
In the example above, $\left|\frac{50\% - 40\%}{50\%}\right| = 20\%$, thus choosing $\varepsilon = 0.20$ suffices to prove that there is privacy leakage.
An interesting case is when one can prove how a voter did not vote. For example, in a contest with three candidates, it may be possible to prove that a voter did not vote for a particular one of the three candidates ($\exists x \in \{1, \ldots, n\}$ such that $p'_x = 0$, or, equivalently, $\varepsilon = 1$). In an extreme case, if $\exists x \in \{1, \ldots, n\}$ such that $p'_x = 1$, it can be proven how one or more voters voted. This implies that all other $p'_y = 0$, $y \neq x$.
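The leakage measure can be computed directly from a final tally and a partial tally; the snippet below only illustrates the definition, with hypothetical tallies matching the 50%/50% example.

def privacy_leakage(final_tally, partial_tally):
    """Largest relative deviation |p'_x - p_x| / p_x over all candidates x
    with p_x > 0; a value of 0 means the partial tally mirrors the final
    tally and nothing is learned from the split."""
    return max(
        abs(partial_tally[x] - final_tally[x]) / final_tally[x]
        for x in final_tally
        if final_tally[x] > 0
    )

final = {"Alice": 0.50, "Bob": 0.50}       # final tally: 50% / 50%
partial = {"Alice": 0.60, "Bob": 0.40}     # one RPC partition: 60% / 40%
print(privacy_leakage(final, partial))     # 0.2, i.e. epsilon = 0.20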
3.2 Linked Fs
It may be that there is more than one function F that
links the same inputs to the same outputs. We call this
technique linked Fs.
Let $F^i : Z_m \to Z_m \times V$ be a family of functions (with $f$ members), such that $F^i(x) = F^j(x)$, $\forall i, j \in Z_f$, $\forall x \in Z_m$. Each function $F^i$ is a composition of two functions, $F^i = F^i_2 \circ F^i_1$ (see Figure 1(b)). It is not necessary that $F^i_1 = F^j_1$ or $F^i_2 = F^j_2$ for $i \neq j$; the two can be different or the same, as long as $F^i_2 \circ F^i_1 = F^j_2 \circ F^j_1$.
Figure 2: (a) Unlinked Fs. (b) A situation that can occur under RPC.
This construction can be used to increase the integrity assurance during the audit phase, or to recover from a failed audit. The probability of cheating on $k$ ballots without being detected is $\frac{1}{2^{k \times f}}$.
3.3 Unlinked Fs
If we relax the previous requirement $F^i(x) = F^j(x)$, $\forall i, j$, $\forall x \in Z_m$, to the requirement that, when fixing any element in $V$, the number of such elements in $F^i(I)$ is equal to the one in $F^j(I)$, $\forall i, j$, we obtain a different flavor of mixnet, which we call unlinked Fs. Intuitively, this transforms the given input into a set of output messages that are all equivalent (when considering the messages in $V$), but the output messages are not necessarily associated with the same output indexes.
The tally is the same, but each $F$ provides a different order of the votes. Figure 2(a) has an example. As in the previous case, the probability of cheating on $k$ ballots and not being detected is one in $2^{k \times f}$.
4 IDENTIFYING THE PROBLEM
The problem we have identified is caused by the small
number of unique messages that are transmitted via
the mixnet. In the case of a voting system, the number
of messages is usually under 10, each one correspond-
ing to a candidate running in a given race. But the
number of total non-unique messages is very large,
equal to the number of cast ballots.
It may happen that the RPC splits the output set
into two subsets, in a way such that all the messages
that are equal to one another (say to A) end up in the
same subset. To better portray the problem, we give an example below.
Assume $m = 8$ (eight votes) and $n = 2$ (two candidates, A and B). Assume the output of $F$ is the set $O = \{(0,A), (1,A), (2,A), (3,B), (4,B), (5,B), (6,A), (7,B)\}$. Figure 2(b) describes the setup.
For our small example, the chance of breaching
privacy is reasonably high if RPC is used. In practice,
the output of a mixnet will have a large number of
ballots and the chance that an unfortunate partition-
ing is performed by the auditors drops exponentially.
However, this probability never reaches zero.
Let's consider a single function $F$. Let $F_1 : Z_8 \to P$, where $P = Z_8$ ($P$ stands for partially decrypted), and assume the coin flips divided the set $P$ into $P_{left} = \{0,2,3,7\}$ and $P_{right} = \{1,4,5,6\}$. While RPC reveals the actual one-to-one mappings, for this exemplification we are only interested in the overall sets. Assume that the pre-image of the set $P_{left}$ is $I_{left} = F_1^{-1}(P_{left}) = \{1,4,5,7\}$ and the post-image of the set $P_{right}$ is $O_{right} = F_2(P_{right}) = \{(0,A), (1,A), (2,A), (6,A)\}$ (see Figure 2(b)).
Because $F_1$ is a bijection and $F_2$ is one-to-one, it can be easily inferred that $I_{right} = \{0,2,3,6\}$ and $F(I_{right}) = O_{right} = \{(0,A), (1,A), (2,A), (6,A)\}$, and thus that all the inputs $\{0,2,3,6\}$ correspond to votes for the same candidate, A. While no one knows to which particular element from $O_{right}$ any element from $I_{right}$ goes, this is irrelevant, since all of them represent the same message (a vote for candidate A). All the other inputs thus correspond to candidate B.
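The observer's reasoning in this example is mechanical, as the following sketch (with illustrative names) shows: it recovers I_right from the revealed data and notices that every element of O_right carries the same message.

# The observer sees only the revealed halves, never the full permutations.
all_inputs = set(range(8))
I_left = {1, 4, 5, 7}                               # pre-image of P_left through F1
O_right = {(0, "A"), (1, "A"), (2, "A"), (6, "A")}  # post-image of P_right through F2

# F1 is a bijection, so the unrevealed inputs are exactly those feeding P_right.
I_right = all_inputs - I_left                       # {0, 2, 3, 6}

messages = {msg for (_pos, msg) in O_right}
if len(messages) == 1:
    # Every element of I_right maps to the same message; the exact
    # permutation is irrelevant, and the privacy of these inputs is gone.
    print(f"inputs {sorted(I_right)} all voted for {messages.pop()}")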
4.1 Problems with Linked Fs
Assume we have the output $O = \{(0,B), (1,A), (2,B), (3,A), (4,B), (5,B), (6,A), (7,A)\}$. In the case of linked Fs the output is the same for any $F^i$. Assume we have two linked $F^i$s. Following the same audit procedure for each of the $F^i$s, assume we obtain $I^1_{left} = \{0,1,2,3\}$, $O^1_{left} = \{(0,B), (1,A), (2,B), (5,B)\}$; $I^1_{right} = \{4,5,6,7\}$, $O^1_{right} = \{(3,A), (4,B), (6,A), (7,A)\}$; $I^2_{left} = \{0,1,4,5\}$, $O^2_{left} = \{(2,B), (3,A), (4,B), (5,B)\}$; $I^2_{right} = \{2,3,6,7\}$, $O^2_{right} = \{(0,B), (1,A), (6,A), (7,A)\}$.
When analyzing only $F^1$, we can see that the inputs in $I^1_{left} = \{0,1,2,3\}$ are more likely to correspond to Bs, because $O^1_{left} = \{(0,B), (1,A), (2,B), (5,B)\}$ (a 75% chance as opposed to a 50% chance). If we intersect $I^1_{left}$ with $I^2_{left}$ and $O^1_{left}$ with $O^2_{left}$, we can extract further information: $I^{12}_{left} = I^1_{left} \cap I^2_{left} = \{0,1\}$; $O^1_{left} \cap O^2_{left} = \{(2,B), (5,B)\}$, and therefore we know that the inputs $\{0,1\}$ correspond to the same message, B. Applying the same logic, $I^{12}_{right} = I^1_{right} \cap I^2_{right} = \{6,7\}$; $O^1_{right} \cap O^2_{right} = \{(6,A), (7,A)\}$, and thus we have found another two inputs that correspond to the same message, A.
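This intersection attack can be replayed in a few lines (the data is copied from the example above): any input that belongs to both left input sets must map into both left output sets, so when that output intersection carries a single message the inputs are exposed.

I1_left = {0, 1, 2, 3}
O1_left = {(0, "B"), (1, "A"), (2, "B"), (5, "B")}
I2_left = {0, 1, 4, 5}
O2_left = {(2, "B"), (3, "A"), (4, "B"), (5, "B")}

inputs = I1_left & I2_left                     # {0, 1}
outputs = O1_left & O2_left                    # {(2, "B"), (5, "B")}
messages = {msg for (_pos, msg) in outputs}

if len(messages) == 1:
    # Both candidate outputs carry the same message, so the inputs are pinned.
    print(f"inputs {sorted(inputs)} correspond to message {messages.pop()}")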
4.2 Problems with Unlinked Fs
Assume we have the same output as in the previous case, $O = \{(0,B), (1,A), (2,B), (3,A), (4,B), (5,B), (6,A), (7,A)\}$. In the case of unlinked Fs the messages carried by the output are the same (and in the same proportion), but the order of the messages is different for each $F^i$. The main difference from the previous example is that we cannot intersect the outputs of the $F^i$s, as the first element in the output pair may not represent the same output (each $F^i$ performs a different shuffle, but produces the same unordered set of messages). For simplification, we drop the first element of the output from our analysis and prove that there may be situations in which privacy is still lost.
Like in the previous case, assume we only have two $F^i$s. Following the same audit procedure for each of the $F^i$s, assume we obtain $I^1_{left} = \{0,1,2,3\}$, $O^1_{left} = \{B, A, B, B\}$; $I^1_{right} = \{4,5,6,7\}$, $O^1_{right} = \{A, B, A, A\}$; $I^2_{left} = \{0,1,4,5\}$, $O^2_{left} = \{B, A, B, B\}$; $I^2_{right} = \{2,3,6,7\}$, $O^2_{right} = \{B, A, A, A\}$.
We compute $I^{12}_{left,right} = I^1_{left} \cap I^2_{right} = \{2,3\}$ and run through the possible messages of these two inputs $\{2,3\}$. It cannot be that both have As corresponding to them, since $O^1_{left}$ does not contain two As; similarly, it cannot be that both are Bs, since $O^2_{right}$ does not have two Bs. So it must be that one is A and one is B. But if we remove one A and one B from $O^1_{left}$ we get two Bs, thus it must be that inputs $\{0,1\} = I^1_{left} \setminus \{2,3\}$ both correspond to Bs. Following the same logic, inputs $\{6,7\} = I^2_{right} \setminus \{2,3\}$ both correspond to As. Thus we have completely broken the privacy of four messages.
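The same conclusion can be verified by brute force: enumerate every assignment of messages to the eight inputs that is consistent with the four revealed multisets, and report the inputs that take the same value in every consistent assignment. A small verification sketch, assuming the data above:

from itertools import product
from collections import Counter

# Revealed data for the two unlinked Fs (output positions dropped, as in the text).
constraints = [
    ({0, 1, 2, 3}, Counter("BABB")),   # I1_left  -> one A, three Bs
    ({4, 5, 6, 7}, Counter("ABAA")),   # I1_right -> three As, one B
    ({0, 1, 4, 5}, Counter("BABB")),   # I2_left  -> one A, three Bs
    ({2, 3, 6, 7}, Counter("BAAA")),   # I2_right -> three As, one B
]

consistent = [
    assignment
    for assignment in product("AB", repeat=8)        # every possible vote pattern
    if all(Counter(assignment[i] for i in inputs) == tally
           for inputs, tally in constraints)
]

# Inputs whose message is identical in every consistent assignment are exposed.
for i in range(8):
    values = {a[i] for a in consistent}
    if len(values) == 1:
        print(f"input {i} is forced to {values.pop()}")
# Prints that inputs 0 and 1 are forced to B, and inputs 6 and 7 to A.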
5 DESCRIBING SRPC
We present a technique, Safe RPC, that ensures there is no privacy leakage (as per the definition from Section 3) whenever possible. SRPC ensures that $\forall x \in \{1, 2, \ldots, n\}$, $p'_x = p_x$, and therefore $p'_x - p_x = 0$, resulting in no $\varepsilon$ strictly greater than zero such that $\left|\frac{p'_x - p_x}{p_x}\right| = 0 \geq \varepsilon$.
Our technique is based on the observation that the random choices can be made on the output of the mixnet, as opposed to the output of the first mix. We suggest dividing the output of the mixnet into two sets, where each set has the same distribution of messages (votes) as the distribution of all the messages (final tally).
Figure 3: Instead of doing the random choices in the middle, the output set is partitioned such that the distribution of messages in the resulting sets is the same as in the entire output set.
Instead of making the random choices on the $P$ set (the output of $F_1$), our technique makes the choices on the $O = Z_m \times V$ set, such that the resulting two partitions ($O_{left}$ and $O_{right}$) each have the same distribution of elements from $V$ as their union $O = O_{left} \cup O_{right}$. This way it is guaranteed that any of the inputs from $I$ can end up in any of the messages of the outputs, with the same probability. Figure 3 has an example.
Formally, let $O = F(I)$. $O$ is a set of pairs (number, message), where the numbers are all the numbers from zero to $m-1$, but the messages are limited to a small set of values $V$. Let the distribution of messages in $O$ be $\{p_1\%, p_2\%, \ldots, p_n\%\}$, where $0 \leq p_i \leq 1$ represents the number of occurrences of a certain unique message $i$ divided by $m$. Our technique divides $O$ into $O_{left}$ and $O_{right}$ such that the distribution of messages in both $O_{left}$ and $O_{right}$ is also $\{p_1\%, p_2\%, \ldots, p_n\%\}$.
We group all the elements in the output of the
mixnet, O, such that each group contains only one
unique message (e.g. group one has {A,A,A,A} and
group two has {B,B,B,B}). We then break each
group in half (e.g. {A,A},{A,A},{B,B},{B,B}), and
combine the halves from the multiple groups (e.g.
{A,A,B,B} and {A,A,B,B}).
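A minimal sketch of this partitioning step, assuming the output is given as (position, message) pairs; srpc_partition is an illustrative name, and in a deployed system the shuffling randomness would come from the auditors' public coins rather than a local generator.

import random
from collections import defaultdict

def srpc_partition(outputs, rng=None):
    """Group the output pairs by message, shuffle each group, and send half
    of every group to each side, so both halves keep (as closely as
    possible) the message distribution of the full output."""
    rng = rng or random.Random()
    groups = defaultdict(list)
    for pair in outputs:                  # pair = (position, message)
        groups[pair[1]].append(pair)

    left, right = [], []
    for group in groups.values():
        rng.shuffle(group)                # which ballots land on each side stays unpredictable
        half = len(group) // 2
        left.extend(group[:half])
        right.extend(group[half:])
    return left, right

# The eight-ballot example from Section 4: four As and four Bs.
O = [(0, "A"), (1, "A"), (2, "A"), (3, "B"),
     (4, "B"), (5, "B"), (6, "A"), (7, "B")]
O_left, O_right = srpc_partition(O)
print(sorted(O_left), sorted(O_right))    # each side holds two As and two Bs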
While this is not the most general way of breaking
O into two sets such that the distribution of messages
remains the same in the resulting two sets, it is easy
to see that it guarantees our distribution requirement.
The next section proves that, even in this particular
case, the integrity assurance of our technique is at the
same level as the original RPC method.
Note that, in some cases, our technique may be un-
able to keep the exact same distribution of messages.
If the number of outputs carrying the same message
is odd, one cannot divide it exactly in half. From this
point of view, our technique is best effort: whenever
possible, it provides the same distribution, but when
not possible, it provides the closest possible distribu-
tion. In particular, if there is a single output carrying
a unique message (a single vote for one of the candi-
dates), our technique will result in revealing that half
of the inputs do not correspond to that message. The
original RPC technique suffers from the same prob-
lem in these extreme cases.
6 PROVING THAT THE
INTEGRITY ASSURANCE IS
MAINTAINED
By design, SRPC guarantees maximal privacy offered
by a mixnet audited with RPC. We now prove that
SRPC offers essentially the same level of integrity.
Using Stirling's approximation, it can be easily derived that $\binom{x}{x/2} \approx \frac{2^{x-1}}{\sqrt{x}}$.
Assume we have $n$ candidates and each candidate received $m_i$ votes, $\sum_{i=1}^{n} m_i = m$. The number of ways to divide $m_i$ identical votes into two equal sets is approximately $\frac{2^{m_i - 1}}{\sqrt{m_i}}$. If we aggregate this result for all candidates, we get $\prod_{i=1}^{n} \frac{2^{m_i - 1}}{\sqrt{m_i}} = \frac{2^{\sum_{i=1}^{n}(m_i - 1)}}{\sqrt{\prod_{i=1}^{n} m_i}} = \frac{2^{m-n}}{\sqrt{\prod_{i=1}^{n} m_i}}$.
To correct for all the possible ways these half parts can be associated, we have to multiply by $2^n$. The final number of possible combinations is $\frac{2^m}{\sqrt{\prod_{i=1}^{n} m_i}}$.
For all practical cases, $\frac{2^m}{\sqrt{\prod_{i=1}^{n} m_i}}$ is as good as $2^m$, the number of possibilities without the technique presented in this paper. If RPC divides the set $P$ into two sets with the same cardinality, the number of possibilities is $\binom{m}{m/2} \approx \frac{2^{m-1}}{\sqrt{m}}$, a number even closer to $\frac{2^m}{\sqrt{\prod_{i=1}^{n} m_i}}$.
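As a sanity check of the counting argument, the exact number of SRPC-admissible splits can be compared with the approximation above and with plain RPC; the vote counts below are hypothetical.

import math

m_votes = [500, 480]                       # hypothetical votes per candidate
m = sum(m_votes)

# Exact number of SRPC-admissible partitions: each candidate's votes are
# split in half independently.
srpc_splits = math.prod(math.comb(mi, mi // 2) for mi in m_votes)

# Approximation used in the text: 2^m / sqrt(prod m_i).
approx = 2 ** m / math.sqrt(math.prod(m_votes))

# Plain RPC with equal halves: C(m, m/2); unrestricted coin flips: 2^m.
rpc_splits = math.comb(m, m // 2)

print(f"SRPC splits   ~ 2^{math.log2(srpc_splits):.1f}")
print(f"approximation ~ 2^{math.log2(approx):.1f}")
print(f"RPC splits    ~ 2^{math.log2(rpc_splits):.1f}, unrestricted 2^{m}")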
6.1 Cheating on k Ballots
We now analyze a rational case, when the mixnet does
not cheat on all ballots, but only on k of them.
Assume there are $n$ candidates and the voters gave $m_i$ votes to candidate $i$, $i \in Z_n$. Without loss of generality we assume that $m_{i-1} > m_i$, $\forall i \in Z_n$, $i \neq 0$, such that candidate 0 gets the most votes. We also assume that the mixnet favors the runner-up, candidate 1, and wants to modify the transformations such that the published ballots at the output of the mixnet indicate that candidate 1 won.
One possibility is that the mixnet switches votes only from candidates $2, 3, \ldots, n-1$ in favor of candidate 1. In this case, the margin plus one, $m_0 - m_1 + 1$, votes need to be switched. Another possibility is that the mixnet switches votes only from candidate 0 (the true winner). In this case, half the margin plus one, $\frac{m_0 - m_1}{2} + 1$, votes need to be switched. Without demonstration, we claim that the second possibility is less risky for the mixnet. An intuitive explanation is that in the second case only about half the number of ballots need to be cheated on.
We need to compute the probability that the mixnet cheats on $k = \frac{m_0 - m_1}{2} + 1$ ballots and is not detected. With regular RPC this probability is $\frac{1}{2^k}$. We now calculate the probability for Safe RPC.
Let $m'_i$ be the number of votes reported by the mixnet at its output and $m_i$ the number of voters that voted for candidate $i$. Then, for candidate 1 to be declared the winner, $m'_0$ should be at most equal to $m_0 - k$, and thus $m'_1 = m_1 + k$. There are two possible cases: either $k < \frac{m'_1}{2}$ or $k \geq \frac{m'_1}{2}$.
In the first case, when less than half of the reported votes for the winner are fraudulent, the mixnet has to correctly guess in which of the two partitions each of the $k$ ballots goes, thus requiring $k$ correct guesses. The guesses are independent, because $k < \frac{m'_1}{2}$. Thus the probability of cheating and not getting caught is $\frac{1}{2^k}$ in the case of Safe RPC, exactly the same as RPC.
In the second case, $k \geq \frac{m'_1}{2}$, the mixnet has to correctly guess in which of the two partitions the $k$ votes are going to be, but the guesses are not independent anymore. Let's assume now that for each of the first $\frac{m'_1}{2}$ votes, the mixnet correctly guesses that they are going to be in the same partition. This is the worst-case scenario, since, once the first half of the votes are correctly guessed to be in the first partition, the other $\frac{m'_1}{2}$ ballots are in the opposite partition (no guessing is needed for the second half of the $m'_1$ ballots). The probability of making this correct guess is lower than one in $2^{m'_1/2}$. Thus, in this case, when $k \geq \frac{m'_1}{2}$, the probability of cheating is lower than $\frac{1}{2^{m'_1/2}}$, where $m'_1$ is the number of votes received by the winner declared by the mixnet. This probability is higher than what RPC offers ($\frac{1}{2^k}$), but still a number very close to zero, if the declared winner got a decent number of votes (e.g. $m'_1 = 40$ implies a probability lower than 0.00000095).
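The two bounds can be evaluated for concrete tallies; the snippet below simply applies the formulas from this section to hypothetical numbers.

# Detection-escape bounds for a mixnet that cheats on k ballots.
m0, m1 = 520, 480                 # hypothetical honest tallies; candidate 0 wins
k = (m0 - m1) // 2 + 1            # minimal number of switched ballots
m1_reported = m1 + k              # votes reported for the declared winner

p_rpc = 1 / 2 ** k                            # plain RPC
if k < m1_reported / 2:
    p_safe = 1 / 2 ** k                       # first case: identical to RPC
else:
    p_safe = 1 / 2 ** (m1_reported // 2)      # second case: weaker but still tiny
print(f"k = {k}, RPC bound = {p_rpc:.3g}, Safe RPC bound = {p_safe:.3g}")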
Also, we can prove that if $k \geq \frac{m'_1}{2}$, then the voters prefer candidate 0 almost three times more than candidate 1 in a fair election. Having candidate 1 win may trigger other alarms. Proof: $k \geq \frac{m'_1}{2} \Rightarrow k \geq \frac{m_1 + k}{2} \Rightarrow k \geq m_1 \Rightarrow \frac{m_0 - m_1}{2} + 1 \geq m_1 \Rightarrow m_0 - m_1 + 2 \geq 2 \times m_1 \Rightarrow m_0 \geq 3 \times m_1 - 2$.
7 CONCLUSIONS
In the traditional way in which Randomized Partial Checking is used to audit paired mixes, when the number of distinct messages at the output of the second mix is small, situations may arise in which the privacy offered by the two mixes is partially or completely lost.
We described Safe RPC, a technique that performs a better audit partitioning, using the output of the second mix. We suggest dividing the output messages into two sets, such that the distribution of each unique message in each of the two subsets is the same as
the distribution of the entire set. This way, maximal
privacy is guaranteed.
We further prove that our technique does not degrade the integrity assurance that traditional RPC brings; the order of magnitude of the integrity assurance remains essentially unchanged.
ACKNOWLEDGEMENTS
This work was made possible in part by grants NSF
CNS-0831149, NSF CNS-0934725, and AFOSR
FA9550-09-1-0194.
REFERENCES
Chaum, D. (2004). Secret-ballot receipts: True voter-
verifiable elections. IEEE Security and Privacy, pages
38–47.
Chaum, D., Essex, A., Carback, R., Clark, J., Popoveniuc,
S., Sherman, A. T., and Vora, P. (2008). Scantegrity:
End-to-end voter verifiable optical-scan voting. IEEE
Security and Privacy.
Chaum, D. L. (1981). Untraceable electronic mail, return addresses, and digital pseudonyms. Communications of the ACM, pages 84–90.
Gomulkiewicz, M., Klonowski, M., and Kutylowski, M.
(2003). Rapid mixing and security of Chaum's visual electronic voting. In Proceedings of ESORICS
2003, pages 132–145. Springer-Verlag.
Jakobsson, M., Juels, A., and Rivest, R. L. (2002). Making
mix nets robust for electronic voting by randomized
partial checking. In Proceedings of the 11th USENIX
Security Symposium, pages 339–353, Berkeley, CA,
USA. USENIX Association.
Neff, C. A. (2001). A verifiable secret shuffle and its appli-
cation to e-voting. In 8th ACM Conference on Com-
puter and Communications Security, pages 116–125.
Popoveniuc, S. and Hosp, B. (2006). An introduction
to PunchScan. In IAVoSS Workshop On Trustwor-
thy Elections (WOTE 2006), Robinson College, Cam-
bridge UK.