5 DISCUSSION
By employing our proposed protocol, the readers
(verifiers) can get strong guarantees about the in-
tegrity and correctness of the computed result. To
provide strong privacy guarantees, we need to make
sure that the random noise used to compute the dif-
ferentially private function g is generated with a good
source of randomness. To address this, we incorporate a zero-knowledge protocol into VDPC_Pub. The latter allows us to guarantee that the random noise is generated from a randomness u on which the reader and the curator mutually agree; thus, neither party can control the random noise term on their own. Of course, the DP level achieved via our
solution depends on the differentially private mech-
anism employed and can be tuned based on the se-
lected ε parameter in order to achieve a good balance
of utility and privacy. Curious readers who want to recover private data cannot obtain any information beyond the result of the computation. Moreover, our protocol is secure against malicious analysts, since analysts have access only to encoded information about the dataset m and obtain neither the dataset itself nor the randomness u. Our security requirement is that the computations performed by the analysts are correct and that the analysts cannot learn any information beyond what is required to perform the computation. This holds assuming that the employed publicly verifiable computation (VC) scheme is secure. Contrary to our approach, in the VerDP system (Narayan et al., 2015) the analyst can access the dataset and thus poses a privacy risk. In our approach this is addressed by allowing the analysts to access only an encoded form of the data.
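To make the role of the mutually agreed randomness u concrete, the sketch below illustrates one simple way such an agreement could look in Python. It is only an illustration under our own simplifying assumptions (hash commitments, an XOR-combined seed, and a Laplace mechanism), not the zero-knowledge protocol used in VDPC_Pub: the reader and the curator each commit to a random share, the shares are revealed and checked, the seed u is their XOR, and the ε-DP noise is derived deterministically from u, so neither party alone controls the noise term.

import hashlib
import os
import random

def commit(share: bytes) -> bytes:
    # Hash-based commitment: binds a party to its share before the other share is revealed.
    return hashlib.sha256(share).digest()

def check_opening(share: bytes, commitment: bytes) -> bool:
    # Verify that the revealed share matches the earlier commitment.
    return hashlib.sha256(share).digest() == commitment

def joint_seed(reader_share: bytes, curator_share: bytes) -> bytes:
    # Mutually agreed randomness u: XOR of the two shares, so neither party
    # can fix u on its own.
    return bytes(a ^ b for a, b in zip(reader_share, curator_share))

def dp_answer(f_value: float, sensitivity: float, epsilon: float, u: bytes) -> float:
    # epsilon-DP answer: add Laplace noise with scale sensitivity/epsilon,
    # sampled deterministically from the agreed seed u (the difference of two
    # i.i.d. exponentials is Laplace-distributed).
    rng = random.Random(u)
    scale = sensitivity / epsilon
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return f_value + noise

# Toy run: both parties contribute randomness, then the noisy value of f(m) is released.
reader_share, curator_share = os.urandom(16), os.urandom(16)
c_r, c_c = commit(reader_share), commit(curator_share)
assert check_opening(reader_share, c_r) and check_opening(curator_share, c_c)
u = joint_seed(reader_share, curator_share)
print(dp_answer(f_value=42.0, sensitivity=1.0, epsilon=0.5, u=u))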
6 CONCLUSION
Often, when receiving an output, we want to confirm
that it is computed correctly and that it does not leak
any sensitive information. Thus, we require not only confidentiality of the data used but also differential privacy guarantees on the computed result, while at the same time being able to verify its correctness. In this paper, we formally define the notion of verifiable differentially private computations (VDPC) and we present a protocol (VDPC_Pub) that can be employed to compute the value of a differentially private function g (i.e., the ε-DP computation of a function f), as well as to check the correctness of the computation.
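As one standard example of such a function g (given only for illustration, since the achievable DP level depends on the mechanism employed), consider the Laplace mechanism of Dwork et al. (2006): for a dataset m,

    g(m) = f(m) + Lap(Δf / ε),   where   Δf = max over neighbouring datasets m, m' of |f(m) - f(m')|,

i.e., the exact value f(m) is perturbed with Laplace noise whose scale is calibrated to the sensitivity Δf of f and to the chosen privacy parameter ε.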
VDPC is an important security notion that has re-
ceived limited attention in the literature. We believe
that verifiable differentially private computation can
have an important impact in a broad range of application scenarios in the cloud-assisted setting that require strong privacy and integrity guarantees. However, a challenging point for any verifiable differentially private computation protocol is enabling the end user to verify that a VDPC scheme is indeed used and to understand the role of the ε parameter. More precisely, we are
interested in further investigating how a link can be
made between the formal verification process and a
human verification of the whole process, especially at
the user level.
ACKNOWLEDGEMENTS
This work was partially supported by the Wallen-
berg AI, Autonomous Systems and Software Program
(WASP) funded by the Knut and Alice Wallenberg
Foundation.
REFERENCES
Braun, B., Feldman, A. J., Ren, Z., Setty, S., Blumberg,
A. J., and Walfish, M. (2013). Verifying computations
with state. In Proceedings of SOSP, pages 341–357.
Chung, K.-M., Kalai, Y. T., and Vadhan, S. P. (2010). Im-
proved delegation of computation using fully homo-
morphic encryption. In CRYPTO, volume 6223, pages
483–501.
Dwork, C., McSherry, F., Nissim, K., and Smith, A. D.
(2006). Calibrating noise to sensitivity in private data
analysis. In Proceedings of TCC, pages 265–284.
Dwork, C. and Roth, A. (2014). The algorithmic founda-
tions of differential privacy. Foundations and Trends
in Theoretical Computer Science, 9(3-4):211–407.
Fiore, D. and Gennaro, R. (2012). Publicly verifiable dele-
gation of large polynomials and matrix computations,
with applications. In Proceedings of CCS, pages 501–
512.
Fiore, D., Gennaro, R., and Pastro, V. (2014). Efficiently
verifiable computation on encrypted data. In Proceed-
ings of the 2014 ACM SIGSAC Conference on Com-
puter and Communications Security, pages 844–855.
Gennaro, R., Gentry, C., and Parno, B. (2010). Non-
interactive verifiable computing: Outsourcing com-
putation to untrusted workers. In Advances in
Cryptology–CRYPTO 2010, pages 465–482.
Narayan, A., Feldman, A., Papadimitriou, A., and Hae-
berlen, A. (2015). Verifiable differential privacy. In
Proceedings of the Tenth European Conference on
Computer Systems, page 28.
Parno, B., Raykova, M., and Vaikuntanathan, V. (2012).
How to delegate and verify in public: Verifiable com-
putation from attribute-based encryption. In Proceed-
ings of TCC, pages 422–439.