The operator, after collecting all the reported counters $\langle s_1, v_1\rangle, \ldots, \langle s_n, v_n\rangle$, is able to compute an estimated impression counter as follows:
$$\bar{s}_i = \sum_j v^j_i s_j = \sum_j \Big( v^j_i \sum_t v^j_t \cdot c^j_t \Big) = \sum_j \sum_t \big( v^j_i v^j_t c^j_t \big)$$
$$= \sum_j \Bigg( \sum_{t=i} \big( v^j_i v^j_t c^j_t \big) + \sum_{t \neq i} \big( v^j_i v^j_t c^j_t \big) \Bigg) = \sum_j c^j_i + \mathrm{random}$$
Since the $v^j_i$ are chosen at random, for $t \neq i$ the summation acts as a random value; thus we get a noisy sum of $c^j_i$.
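To make the recovery step concrete, the following sketch simulates this aggregation, assuming (as the cancellation of the $t = i$ terms requires) that the entries of each masking vector $v^j$ are drawn uniformly from $\{-1, +1\}$; all variable names and parameter values are illustrative, not taken from the scheme's specification.

```python
import numpy as np

# Minimal sketch of the masked-report aggregation described above.
# Assumption: entries of each v^j are uniform in {-1, +1}, so (v^j_i)^2 = 1.
rng = np.random.default_rng(0)

num_users = 10_000   # n set-top boxes
num_ads = 20         # number of ads / counter entries per user

# True per-user impression counters c^j (never sent in the clear).
c = rng.integers(0, 5, size=(num_users, num_ads))

# Each user picks a random +/-1 vector v^j and reports <s_j, v^j>,
# where s_j = sum_t v^j_t * c^j_t is a single masked scalar.
v = rng.choice([-1, 1], size=(num_users, num_ads))
s = np.sum(v * c, axis=1)

# Operator-side estimate: s_bar_i = sum_j v^j_i * s_j.
# The t = i terms contribute sum_j c^j_i exactly; the t != i cross terms
# have zero mean and act as noise around the true total.
s_bar = v.T @ s

true_totals = c.sum(axis=0)
print("true totals     :", true_totals[:5])
print("noisy estimates :", s_bar[:5])
```

Running the sketch shows each estimated counter clustering around the true total, with the spread coming entirely from the random cross terms.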
5 SECURITY ANALYSIS
In this section, we analyze the security of our scheme and emphasize its advantages in the context of the two adversarial models. We start by analyzing the first scheme, which hides the exact number of impressions for each ad. Clearly, this scheme achieves privacy, since adding a report of an additional user to the server's database does not change the overall distribution of impressions in the database.
By adding Gaussian noise with distribution $\mathcal{N}(\mu, \sigma^2)$ we get that
$$\sum_j \bar{c}^j_i \sim \mathcal{N}\Big(\mu, \big(\tfrac{\sigma}{\sqrt{n}}\big)^2\Big)$$
The larger $\sigma$ is, the better privacy protection we get for the user. The smaller $\frac{\sigma}{\sqrt{n}}$ is, the more accurate the estimate of the total number of impressions. So by carefully choosing $\sigma$, one can set the tradeoff between the accuracy of the result and the level of privacy.
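As a rough illustration of this tradeoff, the following sketch assumes each set-top box perturbs its true counter with independent Gaussian noise $\mathcal{N}(\mu, \sigma^2)$ before reporting; the parameter values and variable names are hypothetical. Any single report is hidden behind noise of scale $\sigma$, while the error of the per-user average shrinks roughly as $\sigma/\sqrt{n}$.

```python
import numpy as np

# Sketch of the first scheme's noise addition (assumed per-user Gaussian
# perturbation; parameters are illustrative, not from the paper).
rng = np.random.default_rng(1)

n = 10_000                  # number of reporting users
mu, sigma = 0.0, 50.0       # noise parameters chosen by the operator

# True impressions of a single ad i, one counter per user.
true_counts = rng.integers(0, 5, size=n)

# Each user reports a noisy counter c_bar^j_i = c^j_i + N(mu, sigma^2).
noisy_reports = true_counts + rng.normal(mu, sigma, size=n)

# Individual reports are dominated by noise of scale sigma, but the error of
# the per-user average is only about sigma / sqrt(n).
print("true total         :", true_counts.sum())
print("estimated total    :", noisy_reports.sum())
print("avg estimate error :", abs(noisy_reports.mean() - true_counts.mean()))
print("sigma / sqrt(n)    :", sigma / np.sqrt(n))
```

Increasing `sigma` in this sketch makes each individual report less informative while degrading the aggregate estimate only by the factor $1/\sqrt{n}$, which is the tradeoff described above.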
However, there is one more issue we need to address: the potential privacy leakage over time. Consider the following attack: the operator keeps a log of all the reports he gets from a single user. If he notices that for a specific category, such as sports, the user's report is often above the average impression count, then he can deduce that the user is a sports fan. The way to cope with this is to periodically delete the logs after they are processed. We claim that it is reasonable to do so because we do not consider the operator to be malicious, but semi-honest. If the operator were malicious, he could simply have the set-top box report back the exact values (the operator controls the set-top box software). While the operator is assumed to act in good faith and to follow the privacy regulations, he does not want to retain private information any longer than necessary (and potentially have it exposed to insiders).
Our second scheme avoids the possibility of long-term learning even if the operator does not delete the logs. However, there is a different potential weakness in this approach. Consider the following attack: assuming the number of possible ads is small and that an adversary has auxiliary information about a particular user (e.g., via the package he has purchased), the adversary can make a good estimate of the distribution of the different impressions for that user. Here again we claim that the operator is semi-honest and would not store such information about the user. Therefore, it is unlikely that insiders would have such a priori knowledge about individual users, and outsiders with such potential knowledge would not have access to the logs.
6 CONCLUSIONS
In this paper, we have presented the first practical scheme for achieving targeted advertising in TV systems while preserving user privacy. We showed how to build a household profile, how the set-top box accordingly decides which ad is most appropriate to display, and how to report the impressions back to the operator in a privacy-preserving manner.