Function-as-a-Service (FaaS) is a relatively young
technology in the field of Cloud Computing that
provides a platform for Cloud Service Customers
(CSCs) to develop and run applications without
managing servers or provisioning low-level
infrastructure resources. In a FaaS environment,
applications are broken down into granular function
units of scale, which are invoked through network
protocols with explicitly specified or externally
triggered messages (Spillner, 2017). All functions
are executed in a fully managed environment in which
a Cloud Service Provider (CSP) dynamically handles
the underlying infrastructure resources and manages
the runtime environment for code execution. Software
development and application management remain with
the CSC, whereas the responsibility for running and
maintaining the environment shifts from the CSC to
the CSP.
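To make this concrete, the following minimal sketch
shows what such a granular function unit can look
like. It follows the handler style of AWS Lambda's
Python runtime; the payload field and greeting logic
are illustrative assumptions, not part of any
platform discussed here.

    import json

    def handler(event, context):
        # The platform invokes this entry point for each message;
        # 'event' carries the explicitly specified or externally
        # triggered payload, 'context' carries runtime metadata.
        name = event.get("name", "world")  # hypothetical payload field
        return {
            "statusCode": 200,
            "body": json.dumps({"message": "Hello, " + name}),
        }

The CSP decides when and where this code runs; the
CSC only supplies the handler and its configuration.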
In recent years, a large number of commercial and
open-source FaaS platforms have been developed and
deployed. Under the hood, these platforms are
heterogeneous in nature: they use different hardware
and software components, runtime systems and
programming languages, routing gateways, resource
management tools, and monitoring systems. As an
example, Malawski et al. (2017) point out that most
platforms use Linux, but Azure Functions runs on
Windows. With the rising popularity of FaaS, research
and industry have recognized an increasing need to
benchmark different aspects of FaaS platforms
(Kuhlenkamp and Werner, 2018). However, the authors
add that it remains challenging to efficiently
identify a suitable benchmarking approach, and they
point out the need to efficiently identify the
current state of the art of experiments that validate
FaaS platforms.
In this paper, we present a prototype of a framework
for benchmarking FaaS that takes Cloud Functions and
the underlying environment into account. We describe
the architectural design, the components, and how
they interact with each other. Typically, benchmark
tools are used to evaluate the performance of
hardware or software components in terms of speed or
bandwidth. However, we show how this benchmarking
framework can also be used to identify limitations
and restrictions on top of the FaaS infrastructure.
This, in turn, helps CSCs to determine whether the
overall performance of Cloud Functions and the FaaS
platform meets their business requirements.
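As a simple illustration of the client-side
measurement style that such benchmarking builds on
(a sketch, not our actual framework), the following
Python snippet invokes an already deployed function
over HTTP several times and summarizes the observed
round-trip latencies; the endpoint URL is a
placeholder.

    import time
    import statistics
    import urllib.request

    # Placeholder endpoint of a deployed Cloud Function.
    FUNCTION_URL = "https://example.com/my-function"

    def invoke_once():
        # Measure one HTTP round trip to the function.
        start = time.perf_counter()
        with urllib.request.urlopen(FUNCTION_URL) as response:
            response.read()
        return time.perf_counter() - start

    # A real benchmark would additionally vary payload size,
    # concurrency, and warm versus cold invocations.
    latencies = [invoke_once() for _ in range(20)]
    print("median: %.3fs, max: %.3fs"
          % (statistics.median(latencies), max(latencies)))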
The remainder of this paper is structured as follows:
we provide a summary of the related work in Section
2, followed by a detailed description of the
benchmark framework architecture in Section 3.
Afterwards, we present our test scenario and the
results in Section 4. Finally, we conclude our work
with a short summary and future work in Section 5.
2 RELATED WORK
An initial introduction and guideline are provided
by Bermbach et al. (2017), who are mainly interested
in client-observable characteristics of Cloud
services. The authors cover all aspects of Cloud
service benchmarking, including motivation, benchmark
design and execution, and the use of the results.
They point out that Cloud benchmarking is important
as applications depend more and more on Cloud
services. However, this work paints a general picture
of Cloud benchmarking and does not focus on FaaS in
particular or on the specific characteristics of this
new Cloud service.
Kuhlenkamp and Werner (2018) identify the need to
benchmark different qualities and features of FaaS
platforms and present a set of benchmarking
approaches. They present preliminary results of a
systematic literature review in support of
benchmarking FaaS platforms. The results show that no
standardized, industry-wide benchmark suite exists
for measuring the performance and capabilities of
FaaS implementations. Their results also indicate a
lack of benchmarks that observe functions not in
isolation but in the shared environment of a Cloud
service.
Spillner et al. (2017) analyze several
resource-intensive tasks, comparing FaaS models with
conventional monolithic algorithms. The authors
conduct several experiments and compare the
performance and other resource-related
characteristics. The results demonstrate that
solutions for scientific and high-performance
computing can be realized with Cloud Functions. The
authors mainly focus on compute-intensive tasks (e.g.
face detection, calculation of π) in the domain of
scientific and high-performance computing, but they
do not take other FaaS qualities of interest into
account. For example, FaaS environments offer a
timeout parameter, which specifies the maximum time
allowed for code execution. After this time, code
execution stops and the FaaS platform returns an
error status. The timeout can become crucial for
Cloud Function pricing, since the CSC is charged for
the entire time that the Cloud Function executes. A
lack of precision can therefore lead to early and
misleading timeouts during code execution, which, in
turn, can result in uncompleted tasks and high costs.
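To illustrate how the timeout manifests, the
following sketch shows a function body that sleeps
for a requested duration; it assumes a Lambda-style
Python handler and a hypothetical payload field. If
the requested sleep exceeds the platform's configured
timeout, the platform aborts execution and returns an
error status instead of the normal response.

    import json
    import time

    def handler(event, context):
        # Sleep for the requested duration, then report how long
        # the function actually ran. Requests longer than the
        # configured timeout never reach the return statement.
        requested = float(event.get("sleep_seconds", 0))  # hypothetical field
        start = time.time()
        time.sleep(requested)
        return {
            "statusCode": 200,
            "body": json.dumps({"ran_for": time.time() - start}),
        }

Invoking such a function with increasing
sleep_seconds values reveals how precisely the
platform enforces the configured limit.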
McGrath and Brenner (2017) present a
performance-oriented serverless computing platform to
study serverless implementation considerations. Their
prototype provides important insights into how
functions are managed and executed in serverless
environments. The authors also discuss several
implementation challenges, such as function scaling.