utilization of JPF by creating an automated configuration mechanism to "tweak" JPF per test case and break testing into multiple executions.
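As a minimal sketch of what such a mechanism might look like, the following generates one JPF properties file per test case, so that each run is a separate, independently tuned execution. The property keys used (target, classpath, search.depth_limit, cg.enumerate_random, listener) are standard JPF options, but the class name, paths, test names, and depth limit are illustrative assumptions, not PLATO's actual implementation:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class PerTestJpfConfig {

        // Writes one .jpf properties file for a single test case.
        static Path writeConfig(Path outDir, String testClass, int depthLimit)
                throws IOException {
            String config = String.join(System.lineSeparator(),
                    "target=" + testClass,               // entry point of this test case
                    "classpath=build/classes",           // AUT bytecode (illustrative path)
                    "search.depth_limit=" + depthLimit,  // bound the state space per run
                    "cg.enumerate_random=true",          // explore all random choices
                    "listener=gov.nasa.jpf.listener.PreciseRaceDetector");
            Path file = outDir.resolve(testClass + ".jpf");
            Files.writeString(file, config);
            return file;
        }

        public static void main(String[] args) throws IOException {
            Path outDir = Path.of("jpf-configs");
            Files.createDirectories(outDir);
            // One configuration, and hence one separate JPF execution, per test case.
            for (String test : List.of("TicketStoreTest1", "TicketStoreTest2")) {
                System.out.println("wrote " + writeConfig(outDir, test, 500));
            }
        }
    }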
Although empirical methods are often criticized in software engineering for lacking sound theoretical foundations, it is clear that, for a tool to detect flaws in the logic of applications, it must model knowledge that reflects intended functionality.
Moreover, provided that it is executed correctly and that its test suite covers the entire functionality of an AUT, Daikon's output does reflect the AUT's intended functionality, since its dynamic invariants are properties that held over the observed executions (Ernst et al., 2007). PLATO's results reinforce this notion.
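To illustrate, consider a hypothetical TicketStore class (not the actual AUT) run through Daikon's Chicory front end; the comments show the kind of likely invariants Daikon reports at a method's exit point, which directly mirror the intended business rule:

    // Hypothetical class used only to illustrate Daikon's output.
    // Running the test suite through Daikon (Ernst et al., 2007) might
    // report, at TicketStore.sell():::EXIT, likely invariants such as:
    //
    //   this.sold >= 0
    //   this.sold <= this.seats
    //   this.sold == orig(this.sold) || this.sold == orig(this.sold) + 1
    //
    // These properties held over all observed executions and encode the
    // intended rule "never sell more tickets than seats".
    public class TicketStore {
        private final int seats;  // capacity of the flight
        private int sold;         // tickets sold so far

        public TicketStore(int seats) { this.seats = seats; }

        public boolean sell() {
            if (sold < seats) {   // intended business rule
                sold++;
                return true;
            }
            return false;
        }
    }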
Based on this notion and on our tests, we drew the following conclusions:
• PLATO can indeed detect logical errors in applications, within reasonable limits on the size and complexity of AUTs; no other tool could make this claim at the time this article was written.
• Results have shown that this method goes beyond logical error detection and can provide valid detections of other types of flaws. The unexpected detection of race conditions in one of our experiments, although an unintended side effect, proved this to be the case. As shown in the previous results, limiting a variable's value in an airplane ticket store not only led to a logical error that was essentially a race condition flaw, but also to a logical vulnerability that could lead the airline to sell more tickets than it has seats (a minimal sketch of this flaw class follows this list).
• Logical errors must be detected using deductive rather than inductive reasoning, because logical errors can manifest in widely different contexts. For example, a race condition can lead to a logical vulnerability and is indeed a subtype of logical programming error, but it can also lead to other types of errors (null pointer exceptions, division by zero, etc.) or to no error at all. PLATO's deductive approach, in contrast, not only detects different types of logical errors but also provides insight into the impact of each error.
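The sketch below illustrates the flaw class behind the ticket-store detection discussed above: an unsynchronized check-then-act on the seat counter. The class, seat count, and thread setup are hypothetical, not the actual AUT; the point is that the same defect is at once a race condition and a logical vulnerability (overselling):

    import java.util.concurrent.CountDownLatch;

    // Sketch of the flaw class: two buyers can both pass the 'sold < SEATS'
    // check before either increments, so the store can oversell.
    public class OversellDemo {
        static final int SEATS = 1;   // hypothetical capacity
        static int sold = 0;          // shared, unsynchronized state

        static void sell() {
            if (sold < SEATS) {       // check ...
                Thread.yield();       // widen the race window for the demo
                sold++;               // ... then act, non-atomically
            }
        }

        public static void main(String[] args) throws InterruptedException {
            CountDownLatch start = new CountDownLatch(1);
            Runnable buyer = () -> {
                try { start.await(); } catch (InterruptedException ignored) { }
                sell();
            };
            Thread t1 = new Thread(buyer);
            Thread t2 = new Thread(buyer);
            t1.start();
            t2.start();
            start.countDown();        // release both buyers at once
            t1.join();
            t2.join();
            // May print sold=2 with SEATS=1: more tickets sold than seats.
            System.out.println("sold=" + sold + ", seats=" + SEATS);
        }
    }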
REFERENCES
(2015). Common Vulnerabilities and Exposures, US-CERT/MITRE. CVE-ID CVE-2014-0160.
(2015). The Daikon Invariant Detector Manual.
(2015). The Java PathFinder Tool.
(2015). Java Platform, Standard Edition 7 API Specification.
(2015). National Vulnerability Database. [online] http://nvd.nist.gov.
(2015). Using Code Quality Metrics in Management of Outsourced Development and Maintenance.
(2016). CWE-840: Business Logic Errors.
Abramson, N. A. (1964). Introduction to Information The-
ory and Coding. McGraw Hill.
National Security Agency (2011). On Analyzing Static Analysis Tools.
National Security Agency (2012). Static Analysis Tool Study - Methodology.
Albaum, G. (1997). The Likert scale revisited. Journal of the Market Research Society, 39:331–348.
Baah, G. K. (2012). Statistical causal analysis for fault lo-
calization.
Balzarotti, D., Cova, M., Felmetsger, V. V., and Vigna, G.
(2007). Multi-module vulnerability analysis of web-
based applications. In Proceedings of the 14th ACM
conference on Computer and communications secu-
rity, pages 25–35. ACM.
Barr, E. T., Harman, M., McMinn, P., Shahbaz, M., and Yoo, S. (2015). The oracle problem in software testing: A survey. IEEE Transactions on Software Engineering, 41(5):507–525.
Bastani, O., Anand, S., and Aiken, A. (2015). Interactively
verifying absence of explicit information flows in an-
droid apps. In Proceedings of the 2015 ACM SIG-
PLAN International Conference on Object-Oriented
Programming, Systems, Languages, and Applications,
pages 299–315. ACM.
Boland, T. and Black, P. E. (2012). Juliet 1.1 C/C++ and Java test suite. Computer, 45(10):88–90.
Bray, M., Brune, K., Fisher, D. A., Foreman, J., and Gerken, M. (1997). C4 software technology reference guide - a prototype. Technical report, DTIC Document.
Chhabra, P. and Bansal, L. (2014). An effective implemen-
tation of improved halstead metrics for software pa-
rameters analysis.
Cingolani, P. and Alcala-Fdez, J. (2012). jFuzzyLogic: a robust and flexible fuzzy-logic inference system language implementation. In FUZZ-IEEE, pages 1–8. IEEE.
Do, H., Elbaum, S., and Rothermel, G. (2005). Supporting
controlled experimentation with testing techniques:
An infrastructure and its potential impact. Empirical
Software Engineering, 10(4):405–435.
Doupé, A., Boe, B., Kruegel, C., and Vigna, G. (2011).
Fear the ear: discovering and mitigating execution af-
ter redirect vulnerabilities. In Proceedings of the 18th
ACM conference on Computer and communications
security, pages 251–262. ACM.
Ernst, M. D., Perkins, J. H., Guo, P. J., McCamant,
S., Pacheco, C., Tschantz, M. S., and Xiao, C.
(2007). The daikon system for dynamic detection of
likely invariants. Science of Computer Programming,
69(1):35–45.
Etzkorn, L. H. and Davis, C. G. (1997). Automatically identifying reusable OO legacy code. Computer, 30(10):66–71.
Felmetsger, V., Cavedon, L., Kruegel, C., and Vigna, G.
(2010). Toward automated detection of logic vulnera-