data that must be sent to the final destination. Therefore, edge computing allows many applications to be executed on intermediary nodes, instead of only at a central location. Fog and edge computing are interchangeable terms according to some authors (Gonzalez et al., 2016; Dastjerdi and Buyya, 2016; Hu et al., 2016).
Given the computing capability offered by fog computing, it is necessary to investigate how to leverage it to enhance network performance. In this context, we propose FACE, a framework whose functionalities can reduce or optimize network consumption. The functions provided are (F)iltering, (A)ggregation, (C)ompression and (E)xtraction. In this paper we explain the definition and objectives of each function and present potential scenarios where FACE can be applied. To test the capabilities of our solution, we performed an experiment using the FACE framework in a video surveillance application. This context was selected due to the large amount of data generated by video cameras and the need of such applications for low latency. Results show that the FACE framework provides a significant reduction of data traffic and can be applied in the network infrastructure in a transparent way.
The remainder of this paper is organized as follows. Section 2 presents a contextualization of fog computing, describing its definition and its distinctions from cloud computing. Section 3 describes the FACE framework in detail. Section 4 describes our experiment with a video surveillance use case. The tests, results and analysis are presented in Section 5, and finally, Section 6 presents the conclusions and future work.
2 FOG COMPUTING
The current wireless network infrastructure connects end users to the data centers of application providers. A mobile phone user performing a search in a browser relies on the wireless network infrastructure to upload his/her request and deliver it to its destination; the response is then brought back to the device over the same infrastructure. Therefore, the request from the user has to reach the central server in order for the response to be produced. The data generated by an end user at the edge of the wireless network has to travel through the network until it reaches its destination. The problems with this centralized processing (conventional scenario) are:
• Added delay in the response time. This delay is the time it takes for the request to be transferred from the user device to the central server and for the response to travel back to the end user. It can compromise the performance of latency-sensitive applications such as video streaming and gaming. Therefore, the faster a response can be issued to the end user, the better the network meets the application's response time requirements.
• Added traffic in the network. Since all end-user requests have to be transferred through the network until they reach the central server, more traffic is added to the network. With the growing number of smartphone subscriptions, telecom operators will have to provide networks that support more traffic and more users.
From these two observations, we can conclude that telecom operators need to find innovative approaches to support the continuous growth of data traffic. Fog computing emerges as an alternative to meet this requirement (Margulius, 2002; Pang and Tan, 2004). Considered an extension of the cloud computing paradigm, the concept was first defined in 2012 by Flavio Bonomi et al. (Bonomi et al., 2012). According to the authors, fog computing is a virtualized platform that brings cloud computing services (compute, storage, and network) closer to the end user, at the edge of the network. By offering these services, the platform can better meet the requirements of time-sensitive applications. It implies that cloud computing resources are geographically distributed to locations closer to the end users and/or IoT endpoints. This distributed infrastructure allows bandwidth savings by applying data processing at intermediary processing nodes, rather than only in the cloud.
3 FACE FRAMEWORK
The FACE framework consists of a set of specific functions that aim at improving response time and avoiding sending unnecessary traffic through the network. It was designed to perform data processing at the edge or at intermediary nodes of the network. As depicted in Fig. 1, the functions can be hosted in a box composed of compute and storage resources. This box may be placed either at the base station level (setup 1) or right after it (setup 2). In both cases the box is added to the network topology and is responsible for intercepting the data, applying the data processing functions, and delivering the results to the next component of the network.
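The four functions can be understood as simple data transformations applied by an intermediary node before forwarding traffic upstream. The following Python sketch is illustrative only: the function names, the sensor-reading schema, and the use of zlib for compression are our assumptions for the example, not the framework's actual implementation.

```python
import json
import zlib

def face_filter(readings, threshold):
    """(F)iltering: drop readings considered irrelevant (here, below a threshold)."""
    return [r for r in readings if r["value"] >= threshold]

def face_aggregate(readings):
    """(A)ggregation: summarize many readings into a single, smaller record."""
    values = [r["value"] for r in readings]
    return {"count": len(values), "mean": sum(values) / len(values)}

def face_compress(record):
    """(C)ompression: shrink the payload before sending it upstream."""
    return zlib.compress(json.dumps(record).encode("utf-8"))

def face_extract(readings, key):
    """(E)xtraction: forward only the fields of interest from each reading."""
    return [{key: r[key]} for r in readings]

# An intermediary node intercepts raw data and forwards a reduced payload,
# rather than relaying every reading to the central server.
raw = [{"id": i, "value": v} for i, v in enumerate([1.0, 7.5, 9.2, 0.3])]
relevant = face_filter(raw, threshold=5.0)   # keeps the readings >= 5.0
summary = face_aggregate(relevant)           # one record instead of many
payload = face_compress(summary)             # compact bytes sent upstream
```

In a deployment such as the one in Fig. 1, only `payload` would cross the network toward the cloud, which is the source of the traffic reduction the framework targets.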
CLOSER 2017 - 7th International Conference on Cloud Computing and Services Science