3.2 Agent-Based Volunteer Solutions
Agent-based platforms that solve large-scale computational problems are collectively called "agent grids" (Manola and Thompson, 1999). In a grid, an agent controls computer resources, manipulates data, manages code execution and handles its results; or it provides access to its peripheral devices for collective use. A set of such agents constitutes a multi-agent system that solves problems using the principles of autonomy and collaboration (i.e. individual agents do not execute jobs by themselves, but do so in cooperation with others).
An agent grid typically has two levels: application and functional (Manola and Thompson, 1999). The application level is a set of requirements that defines platform characteristics, such as scalability and adaptability. The functional level, on the other hand, consists of requirements for the computing environment and its components (i.e. how available resources are linked to each other).
In (Gangeshwari et al., 2012) the authors organize multiple agent-supervised data centers into a hypercube grid structure. Every data center has pre-installed execution software, while agents optimize inter-data-center workload distribution and the load on communication channels when distributing jobs. However, at the level of a single data center, nodes are controlled by pre-defined algorithms and thus are not autonomous.
Another platform of this kind is presented in (Marozzo et al., 2011). Here, every node is managed by an agent that is assigned a master or slave role. Master nodes cooperate to organize and manage job execution. One of them acts as the user interface, whilst the others monitor its performance and voluntarily take over control if it fails. Slaves, on the other hand, receive commands from master nodes, execute them and return the results. Node autonomy, in this case, is utilized at the master level, whilst slaves are directly managed. In (Dang et al., 2012) the authors present a similar solution that extends the Gnutella protocol to facilitate peer-to-peer execution of jobs. In particular, software components called super agents organize themselves into groups that cooperate to facilitate the execution. The task initiator becomes the master node and the other peers become slaves.
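The failover behaviour described above can be sketched in a few lines: master agents monitor the acting front-end master, and one of the peers voluntarily takes over when it fails. This is an illustrative sketch only; the function and variable names are hypothetical and not taken from the cited work.

```python
# Hypothetical sketch of the master/slave failover scheme: the first
# alive master acts as the user interface; when it fails, the next
# monitoring peer "takes on control".

def elect_front_end(masters, alive):
    """Return the first alive master in priority order, or None."""
    for m in masters:
        if alive.get(m, False):
            return m
    return None

masters = ["m1", "m2", "m3"]
alive = {"m1": True, "m2": True, "m3": True}
print(elect_front_end(masters, alive))   # m1 acts as the user interface

alive["m1"] = False                      # m1 fails
print(elect_front_end(masters, alive))   # m2 voluntarily takes over
```

The sketch deliberately keeps election trivial (priority order); a real platform would use a distributed election protocol among the monitoring masters.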
AGrIP (Luo and Shi, 2007), on the other hand, is a FIPA-compliant platform based on the MAGE project. It satisfies two main requirements: it creates and manages a pool of computing machines, and it provides standardized built-in grid services. To do so, AGrIP creates agent roles that target particular functionality on both the application and functional levels.
We develop a platform that is also FIPA-compliant and extends the Jade framework (Bellifemine et al., 2007). The following rules apply with respect to agent autonomy:
1. No node has direct control over others, but it may indirectly influence the execution flow. We refer to this mechanism as supervision, and it includes the reducer-mapper and supervisor-reducer relationships as part of the system architecture.
2. Agents store data in a distributed fashion, so that there is no central storage that would create a bottleneck.
3. Supervision includes state duplication on peer devices, which allows restarting processes at intermediate stages rather than from the start if needed.
4. Agents may independently change roles and/or take on several roles (e.g. reducer and supervisor) at the same time.
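Rules 1 and 4 can be illustrated with a minimal sketch in which agents hold a mutable set of roles and influence peers only through advisory supervision messages, never through direct control. All class and method names below are hypothetical, not the platform's actual API.

```python
# Sketch of rules 1 and 4: agents may switch or combine roles at run
# time, and "supervision" is only an advisory message that the peer is
# free to act upon. Names are illustrative.

class Agent:
    def __init__(self, name, roles):
        self.name = name
        self.roles = set(roles)   # rule 4: several roles at once
        self.inbox = []           # supervision messages (rule 1)

    def take_role(self, role):
        self.roles.add(role)      # rule 4: roles change independently

    def drop_role(self, role):
        self.roles.discard(role)

    def supervise(self, peer, hint):
        # Rule 1: no direct control over the peer -- only a hint
        # appended to its inbox, which the peer handles autonomously.
        peer.inbox.append((self.name, hint))


mapper = Agent("a1", ["mapper"])
sup = Agent("s1", ["reducer"])
sup.take_role("supervisor")        # reducer and supervisor simultaneously
sup.supervise(mapper, "rebalance sub-domain 2")

print(sorted(sup.roles))   # ['reducer', 'supervisor']
print(mapper.inbox)        # [('s1', 'rebalance sub-domain 2')]
```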
Finally, we narrow down the agent-grid definition by merging it with the notion of a "scale out" solution (Lin and Dyer, 2010). Scale out is an architecture type that offers cluster computing with machines connected over the network. The difference is that an agent grid does not necessarily imply changes in execution efficiency when the number of agents changes, whilst scale out does, but lacks machine autonomy (freedom to self-organize at execution time). Thus, an agent grid should hold the following property: an increase in the number of agents should result in an increase in computing speedup and vice versa, while all machines remain autonomous at all levels.
4 PLATFORM MODEL AND
ARCHITECTURE
First, we extend the formal description of volunteer job execution in the light of the case study requirements in Section 4.1. Then we introduce an algorithm for ad hoc mobile cloud composition in Section 4.2.
4.1 Workload Distribution Function
To solve the problem we construct an iterative scheme that employs the domain decomposition method from (Barry et al., 2004) (figure 1). The general computing domain is divided into an arbitrary number of sub-domains (three in the figure) to be distributed between the nodes and computed in parallel.
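A minimal sketch of this decomposition, assuming a one-dimensional domain for simplicity: the domain is split into an arbitrary number of contiguous sub-domains that can then be assigned to nodes. The helper name is illustrative, not part of the platform.

```python
# Split the index range [0, domain_size) into n contiguous sub-domains
# of near-equal size; any remainder is spread over the first sub-domains.
# Each (start, end) pair is a sub-domain to be computed by one node.

def decompose(domain_size, n_subdomains):
    base, rem = divmod(domain_size, n_subdomains)
    bounds, start = [], 0
    for i in range(n_subdomains):
        end = start + base + (1 if i < rem else 0)
        bounds.append((start, end))
        start = end
    return bounds

print(decompose(10, 3))   # [(0, 4), (4, 7), (7, 10)]
```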
We denote the entire computation by J and its steps by k, such that J = {k_1, k_2, ..., k_n}. Here, k represents a sub-domain to be computed. All steps are performed
SIMULTECH 2014 - 4th International Conference on Simulation and Modeling Methodologies, Technologies and Applications