Automated Exploit Detection using Path Profiling - The Disposition Should Matter, Not the Position

George Stergiopoulos, Panagiotis Petsanas, Panagiotis Katsaros, Dimitris Gritzalis

2015

Abstract

Recent advances in static and dynamic program analysis have resulted in tools capable of detecting various types of security bugs in Applications under Test (AUTs). However, any such analysis is designed for a priori specified types of bugs and is characterized by some rate of false positives or even false negatives, as well as certain scalability limitations. We present a new analysis and source code classification technique, and a prototype tool, aiming to aid code reviews in the detection of general information-flow-dependent bugs. Our approach classifies the criticality of likely exploits in the source code using two measuring functions, namely Severity and Vulnerability. For an AUT, we analyse every pair of input vector and program sink in an execution path, which we call an Information Block (IB). A classification technique is introduced for quantifying the Severity (danger level) of an IB by static analysis and computation of its Entropy Loss. An IB's Vulnerability is quantified using a tainted object propagation analysis together with a Fuzzy Logic system. Possible exploits are then characterized with respect to their Risk by combining the computed Severity and Vulnerability measurements through an aggregation operation over two fuzzy sets. An IB is characterized as high risk when both its Severity and Vulnerability rankings are above the low zone; in this case, a detected code exploit is reported by our prototype tool, called Entroine. The effectiveness of the approach has been tested by analysing 45 Java programs of NIST's Juliet Test Suite, which implement 3 different common weakness exploits. All existing code exploits were detected without any false positives.
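To illustrate the risk-aggregation idea described in the abstract, the following Python sketch combines a Severity and a Vulnerability ranking for an Information Block. This is not the authors' actual implementation (which uses a tainted-object propagation analysis and a full Fuzzy Logic inference system); the zone thresholds and the min-based aggregation are assumptions chosen for illustration only:

```python
def fuzzy_zone(score):
    """Map a 0-1 ranking into a coarse zone label (illustrative thresholds)."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "medium"
    return "high"

def risk(severity, vulnerability):
    """Aggregate the two rankings for an Information Block (IB).
    An IB is flagged only when BOTH Severity and Vulnerability
    lie above the low zone, as in the paper's high-risk criterion."""
    if fuzzy_zone(severity) == "low" or fuzzy_zone(vulnerability) == "low":
        return 0.0  # not reported as a likely exploit
    # Conservative t-norm (min) as a stand-in for the fuzzy aggregation.
    return min(severity, vulnerability)

# Example: a sink with moderate Severity but a highly tainted input vector.
print(risk(0.5, 0.8))  # -> 0.5, reported as a likely exploit
print(risk(0.2, 0.9))  # -> 0.0, Severity in the low zone, not reported
```

The requirement that both rankings exceed the low zone mirrors the paper's rationale: a sink that is dangerous but unreachable by tainted input, or reachable but harmless, should not be reported.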



Paper Citation


in Harvard Style

Stergiopoulos G., Petsanas P., Katsaros P. and Gritzalis D. (2015). Automated Exploit Detection using Path Profiling - The Disposition Should Matter, Not the Position. In Proceedings of the 12th International Conference on Security and Cryptography - Volume 1: SECRYPT, (ICETE 2015), ISBN 978-989-758-117-5, pages 100-111. DOI: 10.5220/0005561101000111


in Bibtex Style

@conference{secrypt15,
author={George Stergiopoulos and Panagiotis Petsanas and Panagiotis Katsaros and Dimitris Gritzalis},
title={Automated Exploit Detection using Path Profiling - The Disposition Should Matter, Not the Position},
booktitle={Proceedings of the 12th International Conference on Security and Cryptography - Volume 1: SECRYPT, (ICETE 2015)},
year={2015},
pages={100-111},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005561101000111},
isbn={978-989-758-117-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 12th International Conference on Security and Cryptography - Volume 1: SECRYPT, (ICETE 2015)
TI - Automated Exploit Detection using Path Profiling - The Disposition Should Matter, Not the Position
SN - 978-989-758-117-5
AU - Stergiopoulos G.
AU - Petsanas P.
AU - Katsaros P.
AU - Gritzalis D.
PY - 2015
SP - 100
EP - 111
DO - 10.5220/0005561101000111