
late the desired application from external programs
and processes that run on the machine’s host operat-
ing system. Thus, it can be defined as an application
packaging mechanism (Siddiqui et al., 2019).
In order to be used, an application image must first
be created with its specifications. Then, a container is
built on top of that image, like a self-contained ma-
chine that runs the application. This container image
with the specifications can be shared among different
hosts, and each machine can build their own running
container with the same configurations, regardless of
hardware and operating system (OS). This is possi-
ble because containers are a form of lightweight vir-
tualization, which can include its own OS (Scheepers,
2014).
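As a minimal illustration of such an image specification, a Dockerfile for a hypothetical Python application (the file names `app.py` and `requirements.txt` are illustrative assumptions, not taken from the cited works) might look like:

```dockerfile
# Base image providing the runtime layer (hypothetical choice)
FROM python:3.12-slim

# Copy the application and its dependency list into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .

# Command executed when a container is started from this image
CMD ["python", "app.py"]
```

Any host with a container tool installed can then build the image (`docker build -t myapp .`) and start an identically configured container (`docker run myapp`), regardless of the underlying hardware or OS.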
The use of containers has become widely popular
due to their portability: applications can migrate
among different environments without the problems
caused by differing configurations on distinct
machines. For this portability, the host machine's
operating system must be prepared to run containers;
OSs that do not offer this out of the box generally
require the installation and configuration of container
manager software.
2.3 AWS Services
One of the largest cloud providers today is Amazon
Web Services (AWS), which offers hundreds of
services over the internet. This work uses the
following services: AWS Lambda and AWS ECR
(Elastic Container Registry). Amazon's serverless
service is called AWS Lambda (AWS, 2024g); it
allows creating functions to run applications without
having to provision and manage servers.
The Lambda service manages most of the computing
configuration, provisioning the memory, CPU,
network, and other resources necessary to run the
application code (the function). Functions are
instantiated on demand based on user requests,
prompting the cloud provider to dynamically allocate
computing resources from its infrastructure to meet
the demand. When a function remains idle for a few
minutes, its allocated computing resources are
deactivated. Customers therefore pay only for the
time the application is active and being invoked, with
the resources allocated to the serverless function
(AWS, 2024g).
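To make the function model concrete, a minimal Lambda handler in Python might look like the sketch below (the `name` event field and the greeting logic are illustrative assumptions, not part of the cited service documentation):

```python
import json


def lambda_handler(event, context):
    """Entry point invoked by the Lambda runtime on each request.

    `event` carries the request payload; `context` exposes runtime
    metadata (remaining time, memory limit, etc.). Both are supplied
    by the platform; no server is provisioned by the developer.
    """
    # `name` is a hypothetical payload field used only for this sketch
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The platform instantiates this handler on demand and reclaims its resources after a few idle minutes, so billing covers only the invocations themselves.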
AWS provides several ways to deploy (create)
serverless functions through its website in a visual
and intuitive manner. As this work's object of study,
the analysed models are the compressed folder in ZIP
format and the container image. There is a third
model, the IDE (Integrated Development
Environment) integrated into the AWS website.
However, it was not analysed, since browser IDEs are
not as intuitive and do not facilitate development as
much as traditional IDEs, and are therefore not
frequently used by developers and companies in the
software development sector. Another factor behind
this choice is that the integrated IDE is only available
for a few programming languages on the platform
(NodeJS, Python, and Ruby), which limits its scope
of use.
The hardware architecture on which the serverless
function will execute can be chosen when creating the
function: either arm64 or x86_64. By default, x86_64
is pre-selected, though it can be changed to arm64,
which stands out for its lower execution cost and
good performance results (AWS, 2024c).
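These deployment and architecture choices map to parameters of the function-creation call. The sketch below builds the parameter sets for the two analysed models as plain dictionaries; the function names, role ARN, and image URI are hypothetical placeholders, and in practice the dictionaries would be passed to the AWS SDK's `create_function` operation:

```python
def zip_function_params(zip_bytes: bytes) -> dict:
    """Parameters for the ZIP deployment model (placeholder values)."""
    return {
        "FunctionName": "demo-zip-fn",                   # placeholder
        "Role": "arn:aws:iam::123456789012:role/demo",   # placeholder
        "PackageType": "Zip",
        "Runtime": "python3.12",
        "Handler": "app.lambda_handler",
        "Code": {"ZipFile": zip_bytes},
        "Architectures": ["x86_64"],  # pre-selected default
    }


def image_function_params(image_uri: str) -> dict:
    """Parameters for the container-image deployment model."""
    return {
        "FunctionName": "demo-image-fn",                 # placeholder
        "Role": "arn:aws:iam::123456789012:role/demo",   # placeholder
        "PackageType": "Image",
        "Code": {"ImageUri": image_uri},
        "Architectures": ["arm64"],  # lower-cost alternative
    }
```

Note that the ZIP model also fixes a runtime and handler, whereas the image model delegates both to the container image itself.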
AWS ECR is the repository service for storing
container images. Container tools that can be used for
this purpose include Podman (AWS, 2024e) and
Docker (AWS, 2024d); given Docker's great
popularity, it was chosen as the container tool for the
experiments. On their local machine, the developer
must build the application image from a Dockerfile
and publish it to AWS ECR (AWS, 2024f) so that it
becomes available on AWS. With the image available
in the AWS ECR repository, it can be used in several
of the provider's services and, specifically in this
work, to create functions in AWS Lambda.
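The build-and-publish workflow just described can be sketched with the following commands; the account ID, region, and repository name are hypothetical placeholders:

```shell
# Build the application image locally from its Dockerfile
docker build -t myapp .

# Authenticate Docker against the (hypothetical) ECR registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin \
  123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image with the ECR repository URI and push it
docker tag myapp:latest \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
```

After the push, the image URI can be supplied when creating a Lambda function with the container-image deployment model.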
3 RELATED WORKS
FaaS models are not particularly new, and though they
bring ease to the implementation of applications,
there is still room for studies analysing areas for
improvement.
The work of Dantas et al. (2022) addresses strategies
to reduce cold start and compares the impact on
instantiation time of deploying a function through a
compressed file versus a container image. Despite
proposing solutions to minimize the initialization
time, the cited work does not address cost.
Elsakhawy and Bauer (2021) evaluate the factors that
affect the performance of serverless functions,
including the results of container options, different
programming languages, and compilation
alternatives. However, the authors do not take into
account the cost of executing the function or the
initialization time of the cold start.
Villamizar et al. (2017) compared the costs of
running applications in monolithic, microser-
ICEIS 2025 - 27th International Conference on Enterprise Information Systems