enough to cover this distance. Therefore we can construct our first set element using the two GPS positions [50, -10] and [50 + x, -10] (latitude and longitude in degrees), where x is the value corresponding to distance d. This gives a track that starts at a fixed position and leads straight north. For the next element we start slightly to the west: [50, -10 - 1/y] and [50 + x, -10 - 1/y]. Again this track leads straight north, but it is disjoint from the first track and is tied to the number 1 (by the term 1/y). The next element is [50, -10 - 2/y] and [50 + x, -10 - 2/y], and so on. The value y must be chosen so that we obtain enough set elements with valid coordinates. Each track has the correct distance d, has valid coordinates, and is disjoint from all other tracks. Hence we have constructed our set and can prove our property. The actual proof of the set construction uses induction on the size n of the set. In the induction step a new element with coordinates [50, -10 - (n+1)/y] and [50 + x, -10 - (n+1)/y] is added. We know that this element is new because we require all elements in the set to have values ≤ n. Therefore the actual induction hypothesis is
∀ i, str. n ≤ 10000000 ∧ 0 ≤ i ∧ i < 100000 →
  ∃ set. size(set) ≥ n ∧ disjoint(set) ∧
    (∀ t. t ∈ set → t.nickname = str ∧
      validGPSCoordinates(t) ∧
      (∃ m. 0 ≤ m ∧ m ≤ n ∧ t.positions =
        [[50, -10 - m/y], [50 + x, -10 - m/y]]))
with suitable values for x and y as explained above (x = d/111000, y = 10^5). Even though this formula has a rather complex nesting of quantifiers, the proof succeeds smoothly. For other properties and other filter functions a different construction of the set elements is needed, but the induction hypothesis follows the same schema and the proof is similar.
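To make the construction concrete, the following sketch (hypothetical Python, not the paper's KIV specification) builds such a set of tracks and checks the three required properties:

```python
# Sketch of the set construction described above (illustration only;
# the actual proof is carried out in a theorem prover).
d = 1000.0          # required distance in metres (example value)
x = d / 111000.0    # degrees of latitude corresponding to d
y = 10 ** 5         # divisor chosen so all longitudes stay valid
n = 100             # number of additional set elements

# Track m runs straight north, shifted west by m/y degrees.
tracks = [[[50.0, -10.0 - m / y], [50.0 + x, -10.0 - m / y]]
          for m in range(n + 1)]

# All tracks are pairwise disjoint: their longitudes differ.
assert len({start[1] for start, _ in tracks}) == n + 1

# All coordinates are valid GPS positions.
assert all(-90 <= lat <= 90 and -180 <= lon <= 180
           for track in tracks for lat, lon in track)

# Each track spans (up to rounding) the latitude difference x,
# i.e. approximately the required distance d.
assert all(abs((end[0] - start[0]) * 111000.0 - d) < 1e-6
           for start, end in tracks)
```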
The specification and proofs can be found on our website: https://swt.informatik.uni-augsburg.de/swt/projects/iflow/DistanceTrackerComplexFilterSite/index.html
7 CONCLUSIONS
Information flow control frameworks often support the controlled release (or declassification) of confidential information. Qualitative and quantitative approaches exist to reason about what or how much information is released. We described a technique that is usable for complex filter functions that are difficult to analyse. As an example we used a DistanceTracker app where the covered distance is computed from a sequence of confidential GPS positions. This
function uses complex data types and trigonometric operations. We showed, with an example containing a serious leak, that it is not enough to reason about the number of inputs that map to the same output. We described a scheme to obtain meaningful guarantees together with a generic proof technique. The results are fully integrated into our framework for information flow control.
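The distance computation mentioned above is typically based on the haversine formula; the following is a minimal sketch under that assumption (the paper's actual filter function is not reproduced here):

```python
import math

def haversine(p1, p2):
    """Great-circle distance in metres between two [lat, lon] positions."""
    R = 6371000.0  # mean Earth radius in metres
    lat1, lon1 = map(math.radians, p1)
    lat2, lon2 = map(math.radians, p2)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def covered_distance(positions):
    """Sum of leg distances over a sequence of GPS positions."""
    return sum(haversine(p, q) for p, q in zip(positions, positions[1:]))
```

With x = d/111000, the tracks constructed in the previous section cover roughly d metres, since one degree of latitude corresponds to about 111 km on the Earth's surface.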
Future work includes support for a graphical language to model this kind of property.
ICISSP 2016 - 2nd International Conference on Information Systems Security and Privacy