Table 1: Response time measurements for user actions when ramping up from 0 to 300 users. For each user type, the time of breach is given in seconds into the test, with the number of concurrent users at that point in parentheses; the first value refers to breaching the average target, the second to breaching the max target.

Action | Target avg (sec) | Target max (sec) | Non-Bidders (22%) avg / max breach | Passive Users (33%) avg / max breach | Aggressive Users (45%) avg / max breach | Verdict (pass/fail)
browse() | 4.0 | 8.0 | 279 (78 users) / 394 (110 users) | 323 (90 users) / 394 (110 users) | 279 (78 users) / 394 (110 users) | Failed
search(string) | 3.0 | 6.0 | 279 (78 users) / 394 (110 users) | 279 (78 users) / 394 (110 users) | 229 (64 users) / 327 (92 users) | Failed
get_auction(id) | 2.0 | 4.0 | 280 (79 users) / 325 (91 users) | 279 (78 users) / 279 (78 users) | 276 (77 users) / 325 (91 users) | Failed
get_bids(id) | 3.0 | 6.0 | 279 (78 users) / 446 (130 users) | 325 (91 users) / 394 (110 users) | 327 (92 users) / 394 (110 users) | Failed
bid(id, price, username, password) | 5.0 | 10.0 | - / - | 327 (92 users) / 474 (132 users) | 328 (92 users) / 468 (131 users) | Failed
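The verdicts in Table 1 are obtained by comparing measured response times against the average and maximum targets for each action. The following simplified sketch shows one way such bookkeeping could be done; the sample format (elapsed seconds, concurrent users, response time) and the threshold check are assumptions made for illustration, not the MBPeT implementation.

# Illustrative only: derive a "time of breach" and a verdict from samples.
def first_breach(samples, target):
    """Return (elapsed_seconds, concurrent_users) of the first sample whose
    response time exceeds the target, or None if it is never breached."""
    for elapsed, users, response_time in samples:
        if response_time > target:
            return elapsed, users
    return None

def verdict(samples, target_avg, target_max):
    """An action fails as soon as either the average or the max target is breached."""
    breach_avg = first_breach(samples, target_avg)
    breach_max = first_breach(samples, target_max)
    status = "Failed" if (breach_avg or breach_max) else "Passed"
    return status, breach_avg, breach_max

# Hypothetical samples for browse() under the Non-Bidders load (targets 4.0 s / 8.0 s).
browse_samples = [(270, 75, 3.2), (279, 78, 4.6), (394, 110, 8.3)]
print(verdict(browse_samples, target_avg=4.0, target_max=8.0))
# -> ('Failed', (279, 78), (394, 110))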
models to generate synthetic load in real-time. The models are based on Probabilistic Timed Automata (PTA) and include statistical information that describes the distribution between different actions and the corresponding think times. With the help of probability values, we can make a certain action more likely to be chosen than another whenever the virtual user encounters a choice in the PTA. We believe that PTA models are well suited for performance testing and that their probabilistic aspect is good for describing dynamic user behavior, allowing us to include a certain level of randomness in the load generation process. This is important because we wanted the virtual users to mimic real user behavior as closely as possible and to minimize the effect of caches on the performance evaluation.
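To make the probabilistic choice concrete, the following minimal sketch walks a small user model in which every outgoing edge carries a probability and a think time. The states, actions, probabilities, and think times below are invented for illustration and do not reflect the actual PTA models or the MBPeT model format.

import random
import time

# Each state maps to its outgoing edges:
# (probability, think_time_seconds, action_name, next_state).
USER_MODEL = {
    "start":    [(0.7, 1.0, "browse", "browsing"),
                 (0.3, 2.0, "search", "browsing")],
    "browsing": [(0.5, 2.0, "get_auction", "viewing"),
                 (0.5, 1.0, "browse", "browsing")],
    "viewing":  [(0.4, 3.0, "bid", "start"),
                 (0.6, 1.0, "get_bids", "start")],
}

def run_virtual_user(steps=10):
    """Walk the model: at each state, pick an outgoing edge according to its
    probability, wait for the associated think time, then fire the action."""
    state = "start"
    for _ in range(steps):
        edges = USER_MODEL[state]
        _, think_time, action, next_state = random.choices(
            edges, weights=[edge[0] for edge in edges])[0]
        time.sleep(think_time)       # simulated user think time
        print("executing", action)   # a real generator would issue an HTTP request here
        state = next_state

run_virtual_user()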
The approach is supported by a set of tools, including the MBPeT load generator. MBPeT has a scalable, distributed architecture that can easily be deployed to cloud environments. The tool has a ramping feature that specifies the rate at which new users are added to the system, and it also supports specifying think times. When the test duration has ended, the MBPeT tool gathers the measured data, processes it, and creates a test report.
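As a rough illustration of what such a ramp specification amounts to, the sketch below maps elapsed test time to a target number of concurrent virtual users. The breakpoints, the 1800-second duration, and the linear interpolation are assumptions chosen only to mirror a 0-to-300-user ramp-up; this is not MBPeT's actual configuration syntax.

# Illustrative ramp: (elapsed_seconds, target_users) breakpoints.
RAMP = [(0, 0), (1800, 300)]

def target_users(elapsed):
    """Linearly interpolate the target user count at a given elapsed time."""
    for (t0, u0), (t1, u1) in zip(RAMP, RAMP[1:]):
        if t0 <= elapsed <= t1:
            return round(u0 + (u1 - u0) * (elapsed - t0) / (t1 - t0))
    return RAMP[-1][1]  # hold the final level after the last breakpoint

# A load generator would periodically compare the number of running virtual
# users against target_users(now) and spawn or retire users accordingly.
print(target_users(900))  # -> 150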
In the future, we will investigate whether parts of the model creation, which is currently done manually, can be automated. There are indications that certain parts of creating the models can be automated, e.g., by automatically analyzing log data and applying different clustering algorithms.
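One hypothetical way to do this would be to represent each logged user session as an action-frequency vector and to group sessions with an off-the-shelf clustering algorithm, so that each cluster seeds one user-type model. The log parsing, action names, and choice of k-means in the sketch below are assumptions made purely for illustration.

from collections import Counter
from sklearn.cluster import KMeans

ACTIONS = ["browse", "search", "get_auction", "get_bids", "bid"]

# Each session is the sequence of actions one user performed, parsed from logs.
sessions = [
    ["browse", "search", "browse"],                       # looks like a non-bidder
    ["browse", "get_auction", "get_bids"],                # looks like a passive user
    ["search", "get_auction", "bid", "bid", "get_bids"],  # looks like an aggressive user
    # ... many more sessions in practice
]

def to_frequency_vector(session):
    """Relative frequency of each known action within a session."""
    counts = Counter(session)
    return [counts[action] / len(session) for action in ACTIONS]

vectors = [to_frequency_vector(s) for s in sessions]
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # cluster index per session; each cluster could seed one PTA model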
REFERENCES
Abbors, F., Ahmad, T., Truscan, D., and Porres, I. (2012).
MBPeT: A Model-Based Performance Testing Tool.
2012 Fourth International Conference on Advances in
System Testing and Validation Lifecycle.
Ahmad, T., Abbors, F., Truscan, D., and Porres, I. (2013).
Model-Based Performance Testing Using the MBPeT
Tool. Technical Report 1066, Turku Centre for Com-
puter Science (TUCS).
Alur, R. and Dill, D. L. (1994). A theory of timed automata.
Theor. Comput. Sci., 126(2):183–235.
Barna, C., Litoiu, M., and Ghanbari, H. (2011). Model-
based performance testing (NIER track). In Proceed-
ings of the 33rd International Conference on Software
Engineering, ICSE ’11, pages 872–875, New York,
NY, USA. ACM.
Calzarossa, M., Massari, L., and Tessera, D. (2000). Work-
load Characterization Issues and Methodologies. In
Performance Evaluation: Origins and Directions,
pages 459–481, London, UK, UK. Springer-Verlag.
Denaro, G., Polini, A., and Emmerich, W. (2004). Early
performance testing of distributed software applica-
tions. In Proceedings of the 4th international work-
shop on Software and performance, WOSP ’04, pages
94–103, New York, NY, USA. ACM.
Django (2012). Online at https://www.djangoproject.com/.
Ferrari, D. (1984). On the foundations of artificial work-
load design. In Proceedings of the 1984 ACM SIG-
METRICS conference on Measurement and modeling
of computer systems, SIGMETRICS ’84, pages 8–14,
New York, NY, USA. ACM.
Kwiatkowska, M., Norman, G., Parker, D., and Sproston,
J. (2006). Performance analysis of probabilistic timed
automata using digital clocks. Formal Methods in Sys-
tem Design, 29:33–78.
Menasce, D. A. (2002). Load Testing of Web Sites. IEEE
Internet Computing, 6:70–74.
Menasce, D. A. and Almeida, V. (2001). Capacity Plan-
ning for Web Services: metrics, models, and methods.
Prentice Hall PTR, Upper Saddle River, NJ, USA, 1st
edition.
Mosberger, D. and Jin, T. (1998). httperf: a tool for measuring web server performance. SIGMETRICS Perform. Eval. Rev., 26(3):31–37.
Petriu, D. C. and Shen, H. (2002). Applying the UML
Performance Profile: Graph Grammar-based Deriva-
tion of LQN Models from UML Specifications. pages
159–177. Springer-Verlag.
Python (2012). Python programming language. Online at
http://www.python.org/.
Richardson, L. and Ruby, S. (2007). RESTful Web Services. O'Reilly, first edition.
Ruffo, G., Schifanella, R., Sereno, M., and Politi, R. (2004). WALTy: A User Behavior Tailored Tool for Evaluating Web Application Performance. In IEEE International Symposium on Network Computing and Applications, pages 77–86.
SeleniumHQ (2012). Online at http://seleniumhq.org/.
Shams, M., Krishnamurthy, D., and Far, B. (2006). A
model-based approach for testing the performance of
web applications. In SOQUA ’06: Proceedings of the
3rd international workshop on Software quality assur-
ance, pages 54–61, New York, NY, USA. ACM.
Shaw, J. (2000). Web Application Performance Testing –
a Case Study of an On-line Learning Application. BT
Technology Journal, 18(2):79–86.