with input vectors to locations corresponding to information flow sinks. IBs are classified into two groups of sets, as follows:
- the Severity sets, which quantify the danger level of the execution paths (the impact that an exploit would have, if it were manifested in the path);
- the Vulnerability sets, which quantify detected vulnerabilities based on a variable-usage analysis (tainted object propagation and validation of sanitization checks, in which the data context of variables is examined).
The method consists of the following components:
i. A static analysis, based on the BCEL library (BCEL, 2003; Dahm et al., 2003), creates the control flow graph, which is then parsed to obtain information about variable usages. It is thus possible to detect input data vectors, control-flow locations and instructions that enforce context checks on variable data. Entroine then maps the execution paths for the variables of the application under test (AUT) and, more specifically, only those locations where program execution can follow different paths (execution-flow branching points); a minimal sketch of this step is given after this list.
ii. A classification approach that combines output from (i) to create IBs. Each IB is classified using statistical Entropy Loss and the two fuzzy membership sets, namely Severity and Vulnerability.
iii. A Fuzzy Logic system for quantifying the overall Risk of each IB, based on linguistic variables and the Severity and Vulnerability classification ratings; steps ii and iii are illustrated in the second sketch after this list.
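To make step i concrete, the following is a minimal sketch, not Entroine's actual implementation, of how execution-flow branching points can be enumerated with BCEL; the class name SimpleBranchScanner and the command-line class-file argument are illustrative assumptions.

```java
import org.apache.bcel.classfile.ClassParser;
import org.apache.bcel.classfile.JavaClass;
import org.apache.bcel.classfile.Method;
import org.apache.bcel.generic.ConstantPoolGen;
import org.apache.bcel.generic.IfInstruction;
import org.apache.bcel.generic.Instruction;
import org.apache.bcel.generic.InstructionHandle;
import org.apache.bcel.generic.MethodGen;
import org.apache.bcel.generic.Select;

// Illustrative sketch: enumerate the execution-flow branching points
// (conditional jumps and switches) of every method in a compiled class.
public class SimpleBranchScanner {
    public static void main(String[] args) throws Exception {
        JavaClass clazz = new ClassParser(args[0]).parse(); // path to a .class file
        ConstantPoolGen cp = new ConstantPoolGen(clazz.getConstantPool());
        for (Method m : clazz.getMethods()) {
            if (m.getCode() == null) continue; // skip abstract/native methods
            MethodGen mg = new MethodGen(m, clazz.getClassName(), cp);
            for (InstructionHandle ih : mg.getInstructionList().getInstructionHandles()) {
                Instruction insn = ih.getInstruction();
                // Conditional branches and switches are the locations where
                // program execution can follow different paths.
                if (insn instanceof IfInstruction || insn instanceof Select) {
                    System.out.printf("%s: branching point at bytecode offset %d%n",
                            m.getName(), ih.getPosition());
                }
            }
        }
    }
}
```

BCEL's verifier.structurals package also provides a ready-made ControlFlowGraph class upon which such a scan can be built.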
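For steps ii and iii, the second sketch below computes Entropy Loss as standard Shannon information gain and combines hypothetical Severity and Vulnerability ratings through a single Mamdani-style rule. All numeric breakpoints, the example distributions and the rule itself are assumed placeholders standing in for Entroine's actual linguistic variables and rule base.

```java
// Illustrative sketch of steps ii and iii: Shannon entropy for the
// Entropy Loss rating, fuzzy memberships for Severity/Vulnerability,
// and a single Mamdani-style rule combining them into a Risk score.
// All numeric values below are assumed, not Entroine's.
public class FuzzyRiskSketch {

    // Shannon entropy H(p) = -sum p_i * log2(p_i) of a distribution.
    static double entropy(double[] p) {
        double h = 0.0;
        for (double pi : p) {
            if (pi > 0.0) h -= pi * (Math.log(pi) / Math.log(2.0));
        }
        return h;
    }

    // Entropy Loss modelled as information gain: H(prior) - H(posterior).
    static double entropyLoss(double[] prior, double[] posterior) {
        return entropy(prior) - entropy(posterior);
    }

    // Trapezoidal membership function with breakpoints a <= b <= c <= d.
    static double trapezoid(double x, double a, double b, double c, double d) {
        if (x <= a || x >= d) return 0.0;
        if (x < b) return (x - a) / (b - a);
        if (x <= c) return 1.0;
        return (d - x) / (d - c);
    }

    public static void main(String[] args) {
        // Severity rating derived from the Entropy Loss along a path.
        double severity = entropyLoss(new double[]{0.5, 0.5},
                                      new double[]{0.9, 0.1}); // ~0.53 bits
        double vulnerability = 0.8; // e.g. a tainted value reaches a sink unchecked

        // Membership in the "High" linguistic terms (assumed breakpoints).
        double sevHigh = trapezoid(severity, 0.3, 0.6, 1.0, 1.0);
        double vulHigh = trapezoid(vulnerability, 0.4, 0.7, 1.0, 1.0);

        // Rule: IF Severity is High AND Vulnerability is High THEN Risk is High.
        // Mamdani min-conjunction gives the rule's firing strength.
        double riskHigh = Math.min(sevHigh, vulHigh);
        System.out.printf("Risk(High) activation = %.2f%n", riskHigh);
    }
}
```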
The main contributions of this paper are summarized as follows:
1. We introduce a program analysis technique for our classification system. Based on the control flow graph and our Fuzzy Logic ranking system, only a limited number of execution paths and statements have to be analysed.
2. We propose a Risk classification of program locations using two membership functions, one for the identified Severity (Entropy Loss) and another for the Vulnerability level.
3. We present our prototype tool. By using the Vulnerability and Severity classifications, we found that the number of false positives produced by our detection technique is reduced. In addition, Entroine warned of elevated danger levels in program locations where a false negative could otherwise have occurred.
4. We provide test results from applying Entroine to the Juliet Test Suite (Boland and Black, 2012), which has been proposed by NIST for studying the effectiveness of code analysis tools (National Security Agency (NSA), 2011). Entroine detected all of the common weaknesses it was tested against, without reporting any false positives.
In Section 2, we review recent results in related research. In Section 3, we present the theoretical underpinnings of our method. Section 4 provides technical details of the implementation of our method in Entroine, and Section 5 presents our experiments and reports metrics and detection coverage for all tests.
2 RELATED WORK
Previously proposed analysis techniques based on tainted object propagation, such as the one in (Livshits and Lam, 2005), mostly focus on how to formulate various classes of security vulnerabilities as instances of the general taint analysis problem. These approaches do not explicitly model the program's control flow, and it is therefore possible to mis-flag sanitized input, thus resulting in false positives. Furthermore, there is no easy general approach to avoid the possibility of false negatives. This type of analysis does not suffer from a potential state space explosion, but its scalability is directly connected to the analysis sensitivity characteristics (path and context sensitivity), and there is an inherent trade-off between the analysis scalability and the resulting precision/recall.
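As an illustration of the class of defects such analyses target, consider the hypothetical fragment below: the first call is a genuine OS command injection, while a purely taint-based analysis that ignores control flow may also flag the sanitized second call, producing the false positives discussed above.

```java
// Hypothetical example of the source-to-sink flows that tainted object
// propagation tracks: args[0] is the taint source, Runtime.exec() the sink.
public class TaintExample {
    public static void main(String[] args) throws Exception {
        String userInput = args[0];                      // source: attacker-controlled

        // Unsanitized flow: classic OS command injection (should be flagged).
        Runtime.getRuntime().exec("ping " + userInput);  // sink

        // Sanitized flow: a control-flow-insensitive taint analysis may
        // still flag this call, even though the check above makes it safe.
        if (userInput.matches("[A-Za-z0-9.\\-]+")) {
            Runtime.getRuntime().exec("ping " + userInput);
        }
    }
}
```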
Regarding well-known static analysis tools, it is worth mentioning FindBugs (Hovemeyer and Pugh, 2004), which is used to detect more than 300 code defects that are usually classified into diverse categories, including those analysed by tainted object propagation. The principle of most of FindBugs' bug detectors is to identify low-hanging fruit, i.e. to cheaply detect likely defects or program points where the programmer's attention should be focused (Ayewah et al., 2008).
Other tools, such as (CodePro, 2015), (UCDetector, 2015), (Pmd, 2015) and (Coverity, 2015), are well known for their capability to detect numerous bugs, but related research in (Tripathi, 2014) has shown that their main focus centres on specific bug types like null pointer exceptions and explicit import-export, rather than those for which a taint analysis is required (XSS, OS command executions, etc.). In (Tripathi and Gupta, 2014), a relatively low detection rate is reported for many of the above-mentioned tools on some variants of important bug types (null pointer exceptions, user injections and non-blank final instances). To the best of our knowledge, none of the above-mentioned tools implements a mechanism to cope with the possibility of false negatives.