FaaS support were created, making it possible to de-
ploy FaaS-based services on private and public cloud
computing infrastructures.
FaaS architectures usually rely on a central component, the API Gateway, which works as the entry point for the entire platform: it receives requests to deploy, update, or remove functions, and it executes functions when an event is triggered or an execution request is received. The API Gateway is also responsible for managing the computational infrastructure where the functions are executed, increasing or decreasing the resources allocated to each function according to demand.
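The sketch below illustrates, in Python, the kind of dispatch logic such a gateway performs: a single entry point that handles management requests (deploy, remove) and invocation requests, and adjusts capacity on demand. All names and the scaling placeholder are illustrative assumptions; they do not come from any particular FaaS platform.

    # Minimal sketch of an API-Gateway-style dispatcher (illustrative names only).
    from dataclasses import dataclass, field
    from typing import Callable, Dict


    @dataclass
    class ApiGateway:
        functions: Dict[str, Callable] = field(default_factory=dict)
        replicas: Dict[str, int] = field(default_factory=dict)

        def deploy(self, name: str, handler: Callable) -> None:
            self.functions[name] = handler
            self.replicas[name] = 1          # start with a single replica

        def remove(self, name: str) -> None:
            self.functions.pop(name, None)
            self.replicas.pop(name, None)

        def invoke(self, name: str, payload):
            # Placeholder for demand-based scaling: a real gateway would
            # inspect load metrics before growing or shrinking replicas.
            self.replicas[name] = max(self.replicas.get(name, 1), 1)
            return self.functions[name](payload)


    gateway = ApiGateway()
    gateway.deploy("echo", lambda payload: payload)
    print(gateway.invoke("echo", {"msg": "hello"}))   # {'msg': 'hello'}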
Considering that the user base of a FaaS-based service may be spread around the globe, it makes sense for the infrastructure supporting such a service to be geographically distributed as well, so that requests made by users can be handled by instances of the service hosted closer to them.
In this work we analyse the feasibility of implementing distributed FaaS platforms, in both multi-cloud and federated cloud scenarios, that can effectively balance the load between multiple instances of the service and automatically manage the computational capacity available for executing each registered function (auto-scaling), at the level of both virtual machines and containers.
The rest of the paper is structured as follows: in Section 2, we present work related to this research. In Section 3, we propose an architecture for deploying FaaS-based services in geographically distributed multi-clouds. In Section 4, we describe a reference implementation of the proposed architecture. Finally, in Section 6, we present the insights gained while developing this work.
2 RELATED WORK
Over the years, a substantial body of research on cloud computing deployment and service models has been developed. This research aims at providing cloud-based services that offer, among other characteristics, high availability and scalability, while maintaining low operational costs.
Considering the deployment and management
of hybrid multi-clouds, Brasileiro et al. pro-
posed Fogbow – a middleware to federate IaaS
clouds (Brasileiro et al., 2016). This middleware
eases the interoperability of hybrid multi-clouds and
the management of federated cloud resources.
Multiple open source FaaS platforms have been developed, but they usually support only the auto-scaling of function replicas (containers), not the auto-scaling of the underlying infrastructure (Spillner, 2017).
To the best of our knowledge, this is the first time that the aforementioned technologies, or others of their kind, have been combined to enable the deployment of a FaaS platform in a multi-cloud environment with auto-scaling of nodes (virtual machines).
3 ARCHITECTURE
In this paper we propose an architecture for deploying and executing serverless applications in federated clouds. The architecture comprises three main components: (i) the FaaS Cluster, (ii) the FaaS Proxy, and (iii) the Multi-Cloud Resource Allocator (MCRA). It is depicted in Figure 1.
The FaaS Cluster, as its name suggests, is a cluster where the actual FaaS platform is deployed. Multiple instances of the FaaS Cluster are deployed on geographically distributed cloud infrastructures. This component receives and processes requests coming from the FaaS Proxy. It is also responsible for monitoring the resources used in each individual cloud, requesting additional resources from the MCRA when needed, and releasing resources when they are no longer necessary.
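A minimal sketch of the monitoring loop a FaaS Cluster could run is shown below: compare current utilisation against thresholds and ask the MCRA for more (or fewer) nodes. The MCRA interface, the threshold values, and the stub class are assumptions for illustration, not part of an existing implementation.

    # One reconciliation step of a hypothetical cluster-level capacity monitor.
    SCALE_UP_THRESHOLD = 0.8    # assumed utilisation above which the cluster grows
    SCALE_DOWN_THRESHOLD = 0.2  # assumed utilisation below which it shrinks


    def reconcile_capacity(utilisation: float, node_count: int, mcra) -> int:
        """Return the new node count after one reconciliation step."""
        if utilisation > SCALE_UP_THRESHOLD:
            mcra.allocate(nodes=1)            # request one extra VM
            return node_count + 1
        if utilisation < SCALE_DOWN_THRESHOLD and node_count > 1:
            mcra.release(nodes=1)             # give one VM back to the cloud
            return node_count - 1
        return node_count


    class McraStub:
        def allocate(self, nodes): print(f"allocate {nodes} node(s)")
        def release(self, nodes): print(f"release {nodes} node(s)")


    print(reconcile_capacity(0.9, 2, McraStub()))   # allocate 1 node(s); prints 3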
The FaaS Proxy provides the same interface that a regular FaaS gateway would. However, instead of processing requests directly, it forwards each request either to a load balancing service or to a multicasting service, depending on the type of the request. Requests related to registering or removing functions are sent to the multicasting service, which ensures they are replicated across all deployed FaaS Clusters. Requests for the execution of a function, on the other hand, are forwarded to the load balancing service, which chooses where to execute the function based on the geographic location of the user who made the request and that of the available FaaS Clusters.
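The following sketch shows the routing decision made by the FaaS Proxy: management requests are multicast to every cluster, while execution requests go to the geographically closest one. The cluster names, their coordinates, and the use of plain Euclidean distance over latitude/longitude are illustrative assumptions.

    # Hypothetical proxy routing: multicast management requests, load-balance invocations.
    import math
    from typing import Dict, List, Tuple

    CLUSTERS: Dict[str, Tuple[float, float]] = {
        "eu-cluster": (52.5, 13.4),    # (latitude, longitude), assumed values
        "us-cluster": (40.7, -74.0),
        "sa-cluster": (-7.2, -35.9),
    }


    def route(request_kind: str, user_location: Tuple[float, float]) -> List[str]:
        if request_kind in ("register", "remove"):
            # Multicast: the request must reach every FaaS Cluster.
            return list(CLUSTERS)
        # Load balancing: pick the cluster closest to the user. A simple
        # Euclidean distance over lat/lon is enough for a sketch.
        closest = min(
            CLUSTERS,
            key=lambda name: math.dist(CLUSTERS[name], user_location),
        )
        return [closest]


    print(route("register", (0.0, 0.0)))   # every cluster
    print(route("invoke", (48.8, 2.3)))    # ['eu-cluster']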
Last but not least, the Multi-Cloud Resource Allocator (MCRA) is the component responsible for allocating and releasing resources in any of the clouds that provide infrastructure to the FaaS Clusters. These clouds may be operated by different providers and run different orchestration middleware. The MCRA implements auto-scaling at the virtual machine level.
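One way to realise this, sketched below, is to hide provider differences behind a common adapter interface, with one adapter per orchestration middleware exposing the same allocate/release operations. The adapter names and their stubbed behaviour are hypothetical; a real adapter would call the corresponding provider's compute API.

    # Hypothetical adapter-based MCRA: a uniform interface over heterogeneous clouds.
    from abc import ABC, abstractmethod
    from typing import Dict


    class CloudAdapter(ABC):
        @abstractmethod
        def allocate_vm(self) -> str: ...

        @abstractmethod
        def release_vm(self, vm_id: str) -> None: ...


    class OpenStackAdapter(CloudAdapter):
        def allocate_vm(self) -> str:
            # A real adapter would call the provider's compute API here.
            return "openstack-vm-1"

        def release_vm(self, vm_id: str) -> None:
            pass


    class MultiCloudResourceAllocator:
        def __init__(self, adapters: Dict[str, CloudAdapter]):
            self.adapters = adapters

        def allocate(self, cloud: str) -> str:
            return self.adapters[cloud].allocate_vm()

        def release(self, cloud: str, vm_id: str) -> None:
            self.adapters[cloud].release_vm(vm_id)


    mcra = MultiCloudResourceAllocator({"private-cloud": OpenStackAdapter()})
    print(mcra.allocate("private-cloud"))   # 'openstack-vm-1'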