use of State Machine Replication (SMR), where replicated state machines keep a cache of requests that have already been executed. Replicas consult this cache when a request is received; if the request has already been executed, the replica sends the previously computed reply back to the client. A service manager processes events in sequence, executing them on the service and sending a reply back to the client. Replies are cached to ensure at-most-once execution of requests. A ConcurrentHashMap is used to reduce contention and improve multi-threaded performance under high load from many threads.
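A minimal sketch of such a reply cache, keyed by client and request identifiers, might look as follows (in Python rather than Java; the names are illustrative, and a single lock stands in for the ConcurrentHashMap's finer-grained concurrency):

```python
import threading

class ReplyCache:
    """Reply cache ensuring at-most-once execution of requests
    (illustrative sketch; keys and names are hypothetical)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._replies = {}  # (client_id, request_id) -> cached reply

    def execute(self, client_id, request_id, operation):
        with self._lock:
            key = (client_id, request_id)
            if key in self._replies:       # duplicate request:
                return self._replies[key]  # resend the cached reply
            reply = operation()            # execute at most once
            self._replies[key] = reply
            return reply
```

A retransmitted request with the same (client_id, request_id) pair thus receives the cached reply instead of triggering a second execution.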
For Scalable State Machine Replication (S-SMR) in (Bezerra et al., 2014), when a command is executed, the executing client multicasts the command to all partitions that contain variables the command reads or updates. An oracle is assumed to exist that tells the client which partitions the command should be sent to. A server in a partition can cache variables from other partitions in multiple ways; the authors consider conservative caching and speculative caching.
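The client-side dispatch step can be sketched roughly as follows (the oracle mapping, the `send` callback, and the command representation are our illustrative assumptions; the real system delivers commands via atomic multicast):

```python
def partitions_for(command, oracle):
    """Ask the oracle which partitions hold the variables
    that the command reads or updates."""
    variables = command["reads"] | command["writes"]
    return {oracle[v] for v in variables}

def multicast(command, oracle, send):
    """Multicast the command to every involved partition."""
    for partition in sorted(partitions_for(command, oracle)):
        send(partition, command)
```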
With conservative caching, before executing a read operation on a cached variable, a server waits for a message from the variable's partition stating whether the cached value is still valid. If the value has been invalidated, the server discards the cached copy and requests the current value from the partition. With speculative caching, cached values are assumed to be up to date: a server serves a read operation from the cache immediately, without any validation messages. If a message from a server in the variable's partition later invalidates the value, the server holding the stale value is rolled back to before the operation, the cached value is updated to the new value, and execution resumes from that earlier state.
In (Le et al., 2016), Dynamic Scalable State Machine Replication (DS-SMR) improves on the S-SMR of (Bezerra et al., 2014) by giving each client a local cache of the oracle's partition information. If the cache contains information about the variables involved in a particular command, the oracle is not consulted, which improves scalability.
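DS-SMR's client-side cache of oracle answers might be sketched as follows (names are hypothetical, and a real implementation must also invalidate entries when variables move between partitions):

```python
class PartitionCache:
    """Client-side cache of the oracle's variable-to-partition
    mapping (illustrative sketch)."""

    def __init__(self, oracle):
        self._oracle = oracle  # callable: variable -> partition
        self._cache = {}       # variable -> cached partition
        self.oracle_queries = 0

    def lookup(self, variables):
        """Return the partitions holding the given variables,
        consulting the oracle only on a cache miss."""
        partitions = set()
        for v in variables:
            if v not in self._cache:
                self.oracle_queries += 1
                self._cache[v] = self._oracle(v)
            partitions.add(self._cache[v])
        return partitions
```

Repeated commands over the same variables then proceed without any round trip to the oracle, which is the source of the scalability gain.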
We may also mention the work of Robbert van Renesse et al. (Marian et al., 2008), which uses a middle-tier, state machine-like model to ease the replication of services, possibly including caching. Although this study is outside the scope of ours, we mention it as a possible approach to handling caching services in the cloud.
For Mobile Edge Caching, (Yao et al., 2019) covers caching in the context of mobile edge networks, including the issues of where, how, and what to cache. The authors discuss caching locations, including caching at user equipment, base stations, and relays, as well as caching criteria such as cache hit probability, network throughput, content retrieval delay, and energy efficiency. Several caching schemes are mentioned, for example distributed caching, in which nodes use information from neighboring nodes to increase the cache size perceived by the user.
In (Le et al., 2019), Le Hoang et al. propose DynaStar, based on the DS-SMR of (Le et al., 2016). Their improvements over DS-SMR involve multicasting of commands, caching of the oracle's location information at the client, and re-partitioning a command's variables to a target partition, which executes the command, replies to the client, and returns the updated variables to their home partitions.
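As a toy, single-process illustration of that flow (the names are hypothetical, and in DynaStar the variable movement happens via the multicast protocol across replicated partitions):

```python
def execute_at_target(partitions, target, variable_homes, command):
    """Temporarily relocate the command's variables to the target
    partition, execute the command there, then return the updated
    variables to their home partitions (illustrative sketch)."""
    for var, home in variable_homes.items():
        partitions[target][var] = partitions[home].pop(var)
    reply = command(partitions[target])  # runs on a single partition
    for var, home in variable_homes.items():
        partitions[home][var] = partitions[target].pop(var)
    return reply
```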
Last but not least, we should also mention recent studies by Changchuan Yin et al. (Chen et al., 2017; Chen et al., 2019), which use a liquid state machine learning model to manage caching in an LTE-U unmanned aerial vehicle network. Compared to our approach, this work uses network-level information, and the state machines serve as a learning model rather than as the executed model itself.
3 STATE MACHINE MODEL AND EXPERIMENTAL DATA
3.1 Caching Approaches Used in Experiments
Figure 1 presents the overall architecture we used to perform our experiments. As our state machine instance receives events, a cache-managing component fetches data from the database and caches it locally. The cache manager also maintains a collection of statistics used to estimate which event is likely to arrive next. These statistics consist of subsets of the full state machine history, with each subset path recording the frequencies of the potential next events. Based on these frequencies of previous events, the most likely upcoming event can be predicted, and the data that would be used by the resulting state can be cached in advance.
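A minimal sketch of such frequency statistics over recent event paths (the fixed path depth and all names are our illustrative assumptions):

```python
from collections import Counter, defaultdict

class EventPredictor:
    """Predict the next event from frequencies of what followed
    each observed path of recent events (illustrative sketch)."""

    def __init__(self, depth=2):
        self._depth = depth
        self._freq = defaultdict(Counter)  # path -> next-event counts

    def record(self, history, next_event):
        """Record that `next_event` followed the given history."""
        path = tuple(history[-self._depth:])
        self._freq[path][next_event] += 1

    def predict(self, history):
        """Return the most frequent next event for the current path,
        or None if this path has never been observed."""
        counts = self._freq.get(tuple(history[-self._depth:]))
        return counts.most_common(1)[0][0] if counts else None
```

The data needed by the state that the predicted event leads to can then be prefetched before the event actually arrives.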
We propose two approaches for predicting the next set of objects to be fetched into the cache, using the notation in Table 1 in our definitions.
• Path-based prediction: Based on the current state of the state machine and the possible predecessor paths, we choose the next most likely path and the
ICSOFT 2021 - 16th International Conference on Software Technologies