Combining Invariant Violation with Execution Path Classification for Detecting Multiple Types of Logical Errors and Race Conditions

George Stergiopoulos, Panagiotis Katsaros, Dimitris Gritzalis, Theodore Apostolopoulos


Context: Modern automated source code analysis techniques can be very successful in detecting a priori defined defect patterns and security vulnerabilities. Yet, they cannot detect flaws that manifest due to an erroneous translation of the software's functional requirements into the source code. The automated detection of logical errors attributed to a faulty implementation of an application's functionality is relatively uncharted territory. In previous research, we proposed a combination of automated analyses for logical error detection. In this paper, we develop a novel business-logic-oriented method that filters mathematical representations of software logic in order to improve logical error detection, eliminate limitations of the previous analysis, and provide a formally tested classification of logical errors without subjective discrepancies. As a proof of concept, the method has been implemented in a prototype tool called PLATO that can detect various types of logical errors. Potential logical errors are detected and then ranked using a fuzzy logic system with two scales characterizing their impact: (i) a Severity scale, based on the execution paths' characteristics and Information Gain, and (ii) a Reliability scale, based on the measured program's Computational Density. The method's effectiveness is demonstrated through diverse experiments. Albeit not without restrictions, the proposed automated analysis is able to detect a wide variety of logical errors while limiting false positives.
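The abstract describes ranking each potential logical error on two fuzzy scales, Severity and Reliability, and combining them into an overall impact rating. The sketch below is only an illustration of that idea, not the paper's actual rule base (PLATO builds on jFuzzyLogic): the triangular membership functions, thresholds, and the min-rule combination are all assumptions chosen for clarity.

```python
def triangular(x, a, b, c):
    """Triangular membership function over [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rank(severity, reliability):
    """Illustrative Mamdani-style combination of a Severity score
    (execution-path / Information Gain based) and a Reliability score
    (Computational Density based), both normalized to [0, 1].

    Rule 1: IF severity is high AND reliability is high THEN critical.
    Rule 2: IF severity is low OR reliability is low THEN ignorable.
    """
    sev_high = triangular(severity, 0.5, 1.0, 1.5)
    sev_low = triangular(severity, -0.5, 0.0, 0.5)
    rel_high = triangular(reliability, 0.5, 1.0, 1.5)

    critical = min(sev_high, rel_high)          # AND -> min
    ignorable = max(sev_low, 1.0 - rel_high)    # OR  -> max

    # Crude defuzzification: centroid of two output singletons at 1.0 and 0.0.
    denom = critical + ignorable
    return critical / denom if denom else 0.5
```

In this sketch, a violation found on a severe execution path by a highly reliable analysis (e.g. `rank(0.9, 0.8)`) scores above one with low severity (e.g. `rank(0.1, 0.9)`), mirroring the intended triage order; a real rule base would use more linguistic terms and a proper centroid defuzzifier.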



Paper Citation

in Harvard Style

Stergiopoulos G., Katsaros P., Gritzalis D. and Apostolopoulos T. (2016). Combining Invariant Violation with Execution Path Classification for Detecting Multiple Types of Logical Errors and Race Conditions. In Proceedings of the 13th International Joint Conference on e-Business and Telecommunications - Volume 4: SECRYPT, (ICETE 2016) ISBN 978-989-758-196-0, pages 28-40. DOI: 10.5220/0005947200280040

in Bibtex Style

@conference{stergiopoulos2016combining,
author={George Stergiopoulos and Panagiotis Katsaros and Dimitris Gritzalis and Theodore Apostolopoulos},
title={Combining Invariant Violation with Execution Path Classification for Detecting Multiple Types of Logical Errors and Race Conditions},
booktitle={Proceedings of the 13th International Joint Conference on e-Business and Telecommunications - Volume 4: SECRYPT, (ICETE 2016)},
year={2016},
pages={28-40},
doi={10.5220/0005947200280040},
isbn={978-989-758-196-0},
}

in EndNote Style

JO - Proceedings of the 13th International Joint Conference on e-Business and Telecommunications - Volume 4: SECRYPT, (ICETE 2016)
TI - Combining Invariant Violation with Execution Path Classification for Detecting Multiple Types of Logical Errors and Race Conditions
SN - 978-989-758-196-0
AU - Stergiopoulos G.
AU - Katsaros P.
AU - Gritzalis D.
AU - Apostolopoulos T.
PY - 2016
SP - 28
EP - 40
DO - 10.5220/0005947200280040