1.1 Time and Sensitivity
The traditional model of multi-level security (MLS) associates security clearances with subjects and security classifications with objects, and provides a clear decision mechanism as to whether an access request should be granted. Thus, for example, the “no read-up” rule of the Bell and La Padula (BLP) model dictates that a read request should be granted only if the subject clearance dominates the object classification.
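As a minimal sketch of this check (illustrative only: the level names below are hypothetical, and real MLS labels also carry compartment sets that the dominance relation must take into account), the no-read-up decision can be written as:

    # Minimal sketch of the BLP "no read-up" check. Hypothetical level
    # names; real labels also include compartments/categories.
    LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def dominates(clearance: str, classification: str) -> bool:
        # Clearance dominates classification if its level is at least as high.
        return LEVELS[clearance] >= LEVELS[classification]

    def may_read(subject_clearance: str, object_classification: str) -> bool:
        # No read-up: grant the read only if clearance dominates classification.
        return dominates(subject_clearance, object_classification)

    assert may_read("SECRET", "CONFIDENTIAL")      # read-down: granted
    assert not may_read("CONFIDENTIAL", "SECRET")  # read-up: denied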
The intuition behind this (and behind the corresponding “no write-down” rule) is sound. However, such rules encode a pre-determined calculation of risks and benefits, and in many modern networking situations will preclude effective operations that could be justified on a risk basis when the specifics of the context are taken into account. Some situations demand that higher risks be taken for the sake of operational benefit. In a policy statement of 15 September 2008, US Director of National Intelligence Mike McConnell said that the principal goal for risk management of any intelligence agency such as the CIA or the NSA should be to protect the agency’s ability to perform its mission, not just to protect its information assets. One practice that certainly impedes the ability of an organisation to discharge its responsibilities is inappropriate classification of data. The perils of underclassification are obvious; overclassification is therefore a readily explicable outcome. But overclassification does not actually solve the problem it is intended to; rather, it leads to a variety of ‘workarounds’ and informal practices that simply take risk-based decision making outside procedural control (MITRE, 2004), effectively sweeping the issue under the carpet. Assessment of risk is an input into the decision-making process, and it should not define the outcome under all circumstances. Closer examination of modern applications reveals further assumptions that underpin traditional MLS-based access control. We shall address these in turn.
In implementations of traditional MLS models the default assumption is that the sensitivity of an object does not change over time. This principle is generally known as tranquility and was introduced in the BLP model to formally ensure that certain security properties hold over time.¹ For many application scenarios this clearly does not hold. In a military scenario the identified terrorist target of an air-strike is vastly more sensitive an hour before the strike than it is one hour after the strike (when the fact that it has been bombed will generally be apparent to all). In contrast, the name of any pilot involved in the strike may remain sensitive for a considerable period of time. Similarly, in a commercial environment, treasury decisions on setting interest rates must be released in a controlled fashion at pre-specified times to avoid unfair market advantages. In a highly mobile tactical situation a soldier’s current location may be highly sensitive, but his location yesterday will usually be less so. Similar arguments hold for subject clearances. Thus, for example, a subject entering enemy territory may have his or her clearance temporarily downgraded until returning to a safer location.

¹ To be precise, the tranquility principle states that neither a subject clearance nor an object label may change while they are being referenced. Strong tranquility interprets this as meaning that security levels do not change at all during normal operation, whilst weak tranquility allows changes whenever the rules of a given security policy are not violated (Bishop, 2002).
Modern collaborative operations will generate significant amounts of classified data, and there would appear to be a need to prevent a general drift towards overclassification. More sophisticated practices will need to be adopted to ensure appropriate information usage. Overclassification will make appropriate information sharing harder in almost any plausible access control scheme. Innovative approaches to handling the risk-benefit tradeoff have been proposed to address the inflexibility of traditional MLS, such as budget-based schemes (e.g. as suggested by MITRE (2004)). In such a scheme the price a requester pays for an access increases with the estimate of the consequent risk, and that estimate will be inflated if the sensitivity label is too conservative. Thus, to give such innovative schemes the best chance of supporting rational risk-based decision making, we must ensure that the underlying labelling accurately reflects the current sensitivity.
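To make the budget idea concrete, the following is a hedged sketch only: the names, prices, and the simple budget-deduction rule are hypothetical illustrations rather than the actual MITRE design.

    # Illustrative budget-based access decision (hypothetical pricing).
    # The price charged for an access grows with the estimated risk, so a
    # too-conservative sensitivity label inflates the price and exhausts
    # the requester's budget sooner than necessary.
    RISK_PRICE = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 10, "TOP SECRET": 100}

    class Requester:
        def __init__(self, budget: int):
            self.budget = budget

        def request_access(self, sensitivity: str) -> bool:
            price = RISK_PRICE[sensitivity]   # estimated consequent risk
            if price > self.budget:
                return False                  # cannot afford the risk: denied
            self.budget -= price              # pay for the risk taken
            return True

    alice = Requester(budget=15)
    assert alice.request_access("SECRET")      # costs 10; budget now 5
    assert not alice.request_access("SECRET")  # only 5 left: denied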
We also clearly need to take the time-varying nature of sensitivity into account. Traditionally this would be achieved by trusted subjects downgrading information at an appropriate time. This is a plausible approach for small numbers of documents, where manual consideration can be given to each. However, the emergence of data-rich MANET environments forces us to reconsider this approach and ask: can we usefully model the time-varying nature of sensitivity in a principled yet practical way? In this paper we suggest some means by which this can be achieved.
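By way of illustration only (the step function and the timings below are hypothetical, not the model developed later in this paper), an object can carry a classification that is a function of time rather than a constant, echoing the air-strike example above:

    from datetime import datetime, timedelta

    # Illustrative sketch: classification as a function of time. A strike
    # target is SECRET before the strike and, once the bombing is apparent
    # to all, effectively UNCLASSIFIED (hypothetical levels and timings).
    def target_sensitivity(query_time: datetime, strike_time: datetime) -> str:
        if query_time < strike_time + timedelta(hours=1):
            return "SECRET"        # still sensitive around strike time
        return "UNCLASSIFIED"      # fact of the strike is public knowledge

    strike = datetime(2010, 7, 26, 12, 0)
    assert target_sensitivity(strike - timedelta(hours=1), strike) == "SECRET"
    assert target_sensitivity(strike + timedelta(hours=2), strike) == "UNCLASSIFIED"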
1.2 Uncertain Security Levels
The traditional MLS model simply assumes that objects can be classified with an appropriate label reflecting the damage that may result from their falling into the wrong hands. There is general acceptance that such assignments are best guesses, and typically reflect the order of magnitude of the damage that might