deployment environment needs to be considered. This was difficult in our setup because it would require a flexibly configurable deployment infrastructure, which was not available in our case.
The complexity of microservices-based applications, also demonstrated by the need to consider the actual deployment of an application, represents a general challenge. By limiting the experiments to singular architectural changes, introducing CPU limits for the Kubernetes pods, and focusing on one handling at a time, the influence of other parts of the application can be minimized, but not entirely excluded.
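As an illustration, such CPU limits can be declared directly on the container specification of a pod. The following minimal sketch uses the official Kubernetes Python client; the service name, image, and limit values are assumptions chosen for the example, not the values used in our experiments.

```python
# Minimal sketch (illustrative, not the paper's deployment code): attaching a
# CPU limit to a container spec with the official Kubernetes Python client.
from kubernetes import client

container = client.V1Container(
    name="order-service",                              # hypothetical service name
    image="registry.example.com/order-service:1.0",    # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},   # guaranteed share for scheduling
        limits={"cpu": "500m", "memory": "512Mi"},     # hard cap enforced at runtime
    ),
)

pod_spec = client.V1PodSpec(containers=[container])
```

Pinning each pod to such a cap reduces the risk that co-located services distort the throughput measured for the service under test.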
For further experiments, these effects would have to be reduced further or controlled for. Extending an experimental system in this way is an interesting possibility for future work. Because the experiments in this paper focused only on throughput, future work could also examine additional runtime metrics. Another promising category from the literature survey section is Reliability. In particular, the fault tolerance of a microservices-based application depends strongly on its architecture and should thus be worth exploring. The presented generation system could be extended for this purpose by adding the possibility to configure failover strategies for communication links or to run services in a replicated way. Architectural metrics about failover and replication could then be put in relation to the reliability of the application.
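As a rough illustration of such an extension, the following sketch shows how a configuration model could express failover strategies for communication links and replica counts for services. All class and field names are hypothetical and only indicate the kind of information the generation system would need to capture; they are not part of the presented system.

```python
# Hypothetical configuration-model extension for reliability experiments.
from dataclasses import dataclass
from enum import Enum


class FailoverStrategy(Enum):
    NONE = "none"          # fail the call immediately
    RETRY = "retry"        # retry the same target service
    FALLBACK = "fallback"  # redirect the call to a replicated instance


@dataclass
class LinkConfig:
    source: str                                       # calling service
    target: str                                       # called service
    strategy: FailoverStrategy = FailoverStrategy.NONE
    max_retries: int = 0


@dataclass
class ServiceConfig:
    name: str
    replicas: int = 1                                 # number of replicated instances


# Example: replicate a hypothetical "payment" service and retry calls to it.
services = [ServiceConfig(name="payment", replicas=3)]
links = [LinkConfig(source="checkout", target="payment",
                    strategy=FailoverStrategy.RETRY, max_retries=2)]
```

From such a configuration, architectural metrics (e.g., the share of replicated services or of links with a failover strategy) could be derived and related to measured fault tolerance.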
7 CONCLUSION
To satisfy runtime quality requirements of applications, it is often necessary to make trade-offs within the design of an application. If the architecture of the application does not fit the requirements at runtime, costly redevelopment might be necessary. With known relations between architectural metrics of microservices-based applications and runtime metrics, design-time evaluations would become possible. Based on a model-driven generation system, we performed multiple experiments to investigate expected relations between the two types of metrics for a microservices-based application. The results strengthen our understanding of the relations between architectural and runtime metrics. Thus, these relations can be used in a more informed way in scenarios where architectural decisions have to be made.