6 LIMITATIONS AND FURTHER RESEARCH
While these evaluation results demonstrate the feasibility of the approach in principle, several open issues remain to be addressed before its general applicability and benefits in practice can be assessed:
First, its adaptability to different OSN backends needs to be demonstrated by evaluating the tool in various scenarios, in lab tests as well as in practice. As indicated in section 4, sub-classing GenericUser provides a means to adapt the present PoC implementation to new OSN backends with reasonable effort.
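As a minimal sketch of this adaptation mechanism, a backend-specific subclass could override the operations that differ between OSNs. The class shape, method name and endpoint scheme below are assumptions for illustration, not the actual PoC API:

```java
// Hypothetical sketch of adapting the PoC to a new OSN backend via
// sub-classing. GenericUser's real interface may differ; here only the
// backend-specific message endpoint is overridden.
abstract class GenericUser {
    protected final String userId;

    GenericUser(String userId) {
        this.userId = userId;
    }

    // Backend-specific: where a simulated "post message" action is sent.
    abstract String messageEndpoint();
}

class ExampleBackendUser extends GenericUser {
    private final String baseUrl;

    ExampleBackendUser(String userId, String baseUrl) {
        super(userId);
        this.baseUrl = baseUrl;
    }

    @Override
    String messageEndpoint() {
        // Assumed REST-style URL scheme of the example backend.
        return baseUrl + "/users/" + userId + "/messages";
    }
}
```

The simulation core would then only depend on the abstract GenericUser type, so supporting a further backend amounts to adding one such subclass.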
Second, as described in section 4, the current PoC implementation is limited to a single JVM instance and is therefore only vertically scalable. While this suffices to evaluate the proposed model and the basic feasibility of the approach, creating more realistic, spatially distributed load tests requires horizontal scalability, i.e., an implementation distributable over multiple servers. The presented implementation could in principle be extended to run on multiple server nodes in a network by implementing a synchronisation and replication mechanism between the nodes for the two global arrays described, e.g., based on a message-passing communication model. However, further research is needed to implement and evaluate this extension.
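The message-passing extension sketched above could, under simplifying assumptions, look as follows: each node holds a replica of a global array, and local writes are broadcast as update messages that peer nodes apply. Class and field names are illustrative, and real inter-server transport (e.g., sockets) is abstracted away by in-memory queues:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch (an assumption, not the PoC code) of replicating one
// global array across nodes via message passing.
class ArrayUpdate {
    final int index;
    final double value;

    ArrayUpdate(int index, double value) {
        this.index = index;
        this.value = value;
    }
}

class Node {
    final double[] replica;
    // Stands in for a network channel carrying update messages.
    final BlockingQueue<ArrayUpdate> inbox = new LinkedBlockingQueue<>();

    Node(int size) {
        replica = new double[size];
    }

    // Apply a write locally and broadcast it to the given peer nodes.
    void write(int i, double v, Node... peers) {
        replica[i] = v;
        for (Node p : peers) {
            p.inbox.add(new ArrayUpdate(i, v));
        }
    }

    // Drain pending updates received from other nodes.
    void sync() {
        ArrayUpdate u;
        while ((u = inbox.poll()) != null) {
            replica[u.index] = u.value;
        }
    }
}
```

This asynchronous replication trades strict consistency for scalability; how much replica lag a realistic load test can tolerate is precisely one of the questions the proposed further research would need to answer.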
7 CONCLUSION
In this paper, a probabilistic model for simulating interacting users of a social app has been proposed and evaluated by implementing it in a prototype load testing tool. The proposed approach not only simulates the users' activity depending on their interests, external events in their area and their friends' activity, but also takes into account the spatial distribution of users and external events, which are localized and have a certain spatial impact range within a virtual world.
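The kind of combination the model performs can be illustrated schematically. The weights and functional form below are assumptions for illustration only, not the paper's actual formula; they merely show activity probability rising with personal interest and friends' activity, with an event contributing only inside its spatial impact range:

```java
// Illustrative (assumed) combination of the three influence factors
// into an activity probability; the paper's real model differs.
class ActivityModel {
    static double activityProbability(double interest,
                                      double friendActivity,
                                      double distanceToEvent,
                                      double eventRange) {
        // An external event contributes only if the user is within
        // its spatial impact range.
        double eventEffect = distanceToEvent <= eventRange ? 1.0 : 0.0;
        double score = 0.5 * interest
                     + 0.3 * friendActivity
                     + 0.2 * eventEffect;
        // Clamp to a valid probability.
        return Math.min(1.0, Math.max(0.0, score));
    }
}
```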
While the evaluation is still preliminary, the results obtained so far are promising: even with the prototype tool, several serious design flaws were detected in the service backend of a new, real-world social app currently under development. However, further research is needed to continue the evaluation of the proposed approach in lab and field tests with respect to its possible applications, ease of use, performance and adaptability.
Simulating User Interactions: A Model and Tool for Semi-realistic Load Testing of Social App Backend Web Services