Zimmermann, 1989) dealing with divide-and-conquer algorithms) or specific parts of the code (such as loops in (Demontiê et al., 2015)). Our approach is more general: it is not constrained by the type of algorithm used in the analyzed method. Focusing on an entire method also has the benefit that the instrumented code used to collect runtime data is less costly, since our only concern is the execution time of the method as a whole.
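The whole-method measurement described above can be sketched as follows. This is only an illustrative assumption about how such instrumentation might look, not the paper's actual tooling; the names `measure_runtime` and `make_input` and the choice of sizes are hypothetical:

```python
import time

def measure_runtime(method, make_input, sizes, repeats=5):
    # Build the input outside the timed region so that only the call to
    # the method under analysis contributes to the measured time; keep
    # the best of several repeats to dampen measurement noise.
    samples = []
    for n in sizes:
        data = make_input(n)
        timings = []
        for _ in range(repeats):
            start = time.perf_counter()
            method(data)
            timings.append(time.perf_counter() - start)
        samples.append((n, min(timings)))
    return samples

# Example: timing Python's built-in sort on reversed lists.
points = measure_runtime(sorted, lambda n: list(range(n, 0, -1)),
                         sizes=[1000, 2000, 4000, 8000])
```

Because only the two timer calls surround the method invocation, the instrumentation overhead is fixed and independent of what the method does internally.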
6 CONCLUSIONS AND FUTURE WORK
In this paper we have introduced an approach for automatically determining the algorithmic complexity of a method from a software system using runtime measurements. The results of the experimental evaluation show the potential of our approach. Although the approach already achieves good accuracy, further improvement is possible by analyzing the particularities of the misclassified examples.
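To illustrate how runtime samples can be mapped to a complexity class, the sketch below fits each candidate class as t ≈ c·f(n) by a closed-form least-squares fit and picks the best candidate. This is a simplified stand-in, not the paper's exact procedure (the paper references scipy.optimize for fitting); the candidate set and the single-coefficient model are assumptions made for the sake of a self-contained example:

```python
import math

# Candidate complexity classes, each as a function of input size n.
CANDIDATES = {
    "O(1)":       lambda n: 1.0,
    "O(log n)":   lambda n: math.log(n),
    "O(n)":       lambda n: float(n),
    "O(n log n)": lambda n: n * math.log(n),
    "O(n^2)":     lambda n: float(n) ** 2,
}

def classify(samples):
    # Fit t ~ c * f(n) for every candidate f by closed-form least
    # squares, then return the class with the smallest relative
    # residual over the measured (n, t) pairs.
    best_label, best_err = None, float("inf")
    ts = [t for _, t in samples]
    for label, f in CANDIDATES.items():
        xs = [f(n) for n, _ in samples]
        c = sum(x * t for x, t in zip(xs, ts)) / sum(x * x for x in xs)
        err = sum((t - c * x) ** 2 for x, t in zip(xs, ts)) / sum(t * t for t in ts)
        if err < best_err:
            best_label, best_err = label, err
    return best_label

# Synthetic samples following t = 3e-9 * n * log n:
samples = [(n, 3e-9 * n * math.log(n)) for n in (1000, 2000, 4000, 8000)]
label = classify(samples)   # "O(n log n)" on these samples
```

On noisy real measurements the residuals of neighboring classes (e.g. O(n) versus O(n log n)) can be close, which is one source of the misclassifications mentioned above.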
The next step is to automate this whole process and make it readily available to developers. The research and experiments described in this paper serve as solid groundwork for building a tool that allows real-time verification of algorithmic complexity. The need for this functionality becomes clear when one considers the benefits gained, such as the ability to easily identify potential performance or security misconceptions developers might have when writing code. The exact manner in which such a tool would operate still requires analysis and experimentation, but one possible form is akin to unit tests executed within a continuous integration environment.
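One possible shape for such a unit-test-style check is sketched below. All names here (`timed`, `count_pairs`, `check_doubling_ratio`) and the doubling-ratio heuristic are hypothetical illustrations, not a design the paper commits to: the gate compares runtimes at sizes n and 2n against a growth budget and reports failure when the ratio exceeds it.

```python
import time

def timed(fn, arg):
    # Wall-clock time of a single call to the method under test.
    start = time.perf_counter()
    fn(arg)
    return time.perf_counter() - start

def count_pairs(xs):
    # A deliberately quadratic method standing in for production code.
    return sum(1 for a in xs for b in xs if a < b)

def check_doubling_ratio(fn, make_input, n, limit):
    # CI-style gate: doubling the input size must not increase the
    # runtime by more than `limit` (roughly 2 for linear growth,
    # 4 for quadratic). Best-of-three repeats dampens noise.
    small = min(timed(fn, make_input(n)) for _ in range(3))
    large = min(timed(fn, make_input(2 * n)) for _ in range(3))
    return large / small <= limit

# The quadratic helper should fail a tight linear budget
# but fit a generous quadratic one.
linear_ok = check_doubling_ratio(count_pairs, lambda m: list(range(m)), 1200, limit=2.2)
quadratic_ok = check_doubling_ratio(count_pairs, lambda m: list(range(m)), 1200, limit=8.0)
```

In a continuous integration environment, such a check would run alongside ordinary unit tests and turn an unexpected complexity regression into a failed build.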
REFERENCES
Bachmann, P. (1894). Die Analytische Zahlentheorie.
Benchmark (2016). Benchmark library.
https://github.com/google/benchmark.
Brockschmidt, M., Emmes, F., Falke, S., Fuhs, C., and
Giesl, J. (2014). Alternating runtime and size com-
plexity analysis of integer programs. In International
Conference on Tools and Algorithms for the Con-
struction and Analysis of Systems, pages 140–155.
Springer.
Chapin, N., Hale, J. E., Kham, K. M., Ramil, J. F., and Tan,
W.-G. (2001). Types of software evolution and soft-
ware maintenance. Journal of Software Maintenance,
13(1):3–30.
Chen, Z., Chen, B., Xiao, L., Wang, X., Chen, L., Liu, Y.,
and Xu, B. (2018). Speedoo: prioritizing performance
optimization opportunities. In 2018 IEEE/ACM 40th
International Conference on Software Engineering
(ICSE), pages 811–821. IEEE.
Cormen, T. H., Stein, C., Rivest, R. L., and Leiserson, C. E.
(2001). Introduction to Algorithms. McGraw-Hill
Higher Education, 2nd edition.
Crosby, S. A. and Wallach, D. S. (2003). Denial of ser-
vice via algorithmic complexity attacks. In Proceed-
ings of the 12th Conference on USENIX Security Sym-
posium - Volume 12, SSYM’03, pages 3–3, Berkeley,
CA, USA. USENIX Association.
Demontiê, F., Cezar, J., Bigonha, M., Campos, F., and Magno Quintão Pereira, F. (2015). Automatic inference of loop complexity through polynomial interpolation. In Pardo, A. and Swierstra, S. D., editors, Programming Languages, pages 1–15, Cham. Springer International Publishing.
Goldsmith, S. F., Aiken, A. S., and Wilkerson, D. S. (2007).
Measuring empirical computational complexity. In
Proceedings of the the 6th Joint Meeting of the Euro-
pean Software Engineering Conference and the ACM
SIGSOFT Symposium on The Foundations of Software
Engineering, ESEC-FSE ’07, pages 395–404, New
York, NY, USA. ACM.
Hansen, P., Pereyra, V., and Scherer, G. (2012). Least
Squares Data Fitting with Applications. Least Squares
Data Fitting with Applications. Johns Hopkins Uni-
versity Press.
Knuth, D. E. (1998). The Art of Computer Programming, Volume 3: Sorting and Searching (2nd ed.). Addison Wesley Longman Publishing Co., Inc., Redwood City, CA, USA.
Le Métayer, D. (1988). ACE: An automatic complexity evaluator. ACM Trans. Program. Lang. Syst., 10:248–266.
Luo, Q., Nair, A., Grechanik, M., and Poshyvanyk, D.
(2017). Forepost: Finding performance problems au-
tomatically with feedback-directed learning software
testing. Empirical Software Engineering, 22(1):6–56.
McCall, J. A., Herndon, M. A., and Osborne, W. M. (1985). Software Maintenance Management. U.S. Dept. of Commerce, National Bureau of Standards.
Olivo, O., Dillig, I., and Lin, C. (2015). Static detection of
asymptotic performance bugs in collection traversals.
In ACM SIGPLAN Notices, volume 50, pages 369–
378. ACM.
OpenJDK (2017). Hashmap implementation change.
https://openjdk.java.net/jeps/180.
Rosendahl, M. (1989). Automatic complexity analysis. In FPCA, volume 89, pages 144–156. Citeseer.
Scipy (2019). Python scipy.optimize documentation.
https://docs.scipy.org/doc/scipy/reference/optimize.html.
Kernighan, B. W. and Van Wyk, C. J. (1998). Timing trials, or, the trials of timing: Experiments with scripting and user-interface languages. Software: Practice and Experience, 28.
Wegbreit, B. (1975). Mechanical program analysis. Com-
munications of the ACM, 18(9):528–539.
Weiss, M. A. (2012). Data Structures and Algorithm Anal-
ysis in Java. Pearson Education, Inc.
Zimmermann, P. and Zimmermann, W. (1989). The auto-
matic complexity analysis of divide-and-conquer al-
gorithms. Research Report RR-1149, INRIA. Projet
EURECA.
Automatic Algorithmic Complexity Determination Using Dynamic Program Analysis