degrees, they complete more dependent tasks without
compromising the dropout degree. When the
dependencies become more complex, due to an increase in tasks that require assistance, their utility degrades. In the latter case it seems reasonable to act more selfishly and rely more on oneself (Figure 7a). On the other hand, if an agent can afford to assist, it can adapt its behavior to that end (Figure
7b). A dynamic willingness to cooperate captures
these shifts in behavior. As shown by the results in
Section 4.2 (Figure 6g), even a single agent with a dynamic degree of willingness to help can positively impact the whole population.
In the simulations, the dropout degree served as a regulator. Each agent continuously kept track of how many of its own tasks it was completing, and adapted its behavior based on that value. Consequently, dependency relations were established with agents in need, based on the current circumstances.
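To make this regulating mechanism concrete, the following minimal sketch shows an agent tracking its own task completion and adjusting its willingness to help accordingly; the class layout, names, and the linear adaptation rule are our assumptions, not the actual simulation code:

```python
# Minimal sketch (hypothetical names): the dropout degree acts as a
# regulator of the agent's willingness to help others.

class Agent:
    def __init__(self, willingness_to_help=0.5, dropout_threshold=0.3):
        self.willingness_to_help = willingness_to_help  # in [0, 1]
        self.dropout_threshold = dropout_threshold      # tolerated dropout degree
        self.tasks_assigned = 0
        self.tasks_completed = 0

    def record_task(self, completed: bool):
        # Each agent tracks only its own tasks.
        self.tasks_assigned += 1
        if completed:
            self.tasks_completed += 1

    @property
    def dropout_degree(self) -> float:
        # Fraction of the agent's own tasks that were not completed.
        if self.tasks_assigned == 0:
            return 0.0
        return 1.0 - self.tasks_completed / self.tasks_assigned

    def adapt(self, step=0.1):
        # Act more selfishly when too many own tasks are dropped;
        # afford more assistance when performing well.
        if self.dropout_degree > self.dropout_threshold:
            self.willingness_to_help = max(0.0, self.willingness_to_help - step)
        else:
            self.willingness_to_help = min(1.0, self.willingness_to_help + step)
```

Acting more selfishly here simply means lowering the willingness to help whenever the agent's own dropout degree exceeds the tolerated threshold.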
In other research areas, this kind of parameter is
used to model risk tolerance (Cardoso and Oliveira,
2009). Agents representing business entities are spawned with different degrees of willingness to sign contracts with other entities; such contracts might be subject to fines, which serve as punishment for undesired behavior. The higher the fines, the higher the risk of signing a contract with an agent.
On a different note, the dependency degree was
kept fixed during a single run of the simulations.
Therefore, it can be assumed that the dependencies
are known in advance. However, this might not
always be the case, because dependencies could also
arise during the agent’s lifespan. In principle, the model presented in this work does not impose any restrictions on how dependencies arise or evolve.
Future research will be concerned with the further development of the agent model and the establishment of an agent framework.
Firstly, the model will be expanded to include a willingness to ask for assistance, which changes depending on the agent’s chance of success were it to attempt the task by itself. As a result, autonomy will be shaped by both the willingness to cooperate and the willingness to ask for assistance.
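As a hypothetical illustration of this extension (the function name, the base value, and the linear form are assumptions on our part), the willingness to ask could grow as the estimated chance of success shrinks:

```python
# Hypothetical sketch of the proposed extension: willingness to ask for
# assistance increases as the agent's estimated chance of completing the
# task by itself decreases.

def willingness_to_ask(chance_of_success: float, base: float = 0.2) -> float:
    """Map an estimated chance of success in [0, 1] to a willingness
    to ask for assistance in [0, 1]."""
    assert 0.0 <= chance_of_success <= 1.0
    return min(1.0, base + (1.0 - chance_of_success))

# An agent almost certain to succeed rarely asks; an agent likely
# to fail asks almost always.
print(willingness_to_ask(1.0))   # 0.2
print(willingness_to_ask(0.1))   # 1.0 (capped)
```

Any monotonically decreasing mapping would serve equally well; the essential property is that a low chance of success translates into a high willingness to seek assistance.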
Secondly, the factors that should influence these parameters, such as health, reward, hierarchy, and trust, need to be taken into account. A general definition considers trust in terms of how much an agent is willing to depend on another (Jøsang et al., 2007). Integrating this dimension into the current model will help the agents make better choices about whom to assist and whom to ask for assistance. The
presence of a hierarchy also creates interesting scenarios. For example, in which cases should an agent obey its superior? The case in which the superior continuously sends wrong information is tackled by Vecht et al. (2009), resulting in the agent taking more initiative. Additional scenarios could include a superior that is in conflict with agents of higher rank, or a superior that assigns the agent tasks with low reward, thus not exploiting the agent’s full capacity.
Lastly, the model will also be expanded to include two more auxiliary states: regenerative and out_of_order. The agent can go to out_of_order from any other state. If the agent attempts to recover by itself, it changes its state to regenerative. If it does indeed recover, it goes to idle and continues normal operation; otherwise it returns to out_of_order.
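A minimal sketch of this extended state machine, assuming the existing model includes an idle state and abstracting recovery success as a probability, could look as follows:

```python
# Sketch of the extended state machine with the two auxiliary states;
# only the transitions described above are modeled.

from enum import Enum, auto
import random

class State(Enum):
    IDLE = auto()
    OUT_OF_ORDER = auto()
    REGENERATIVE = auto()
    # ... other operational states of the original model

class Agent:
    def __init__(self):
        self.state = State.IDLE

    def fail(self):
        # out_of_order is reachable from any other state.
        self.state = State.OUT_OF_ORDER

    def attempt_recovery(self, success_probability=0.5):
        if self.state is not State.OUT_OF_ORDER:
            return
        self.state = State.REGENERATIVE
        if random.random() < success_probability:
            self.state = State.IDLE          # recovered: resume normal operation
        else:
            self.state = State.OUT_OF_ORDER  # recovery failed
```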
ACKNOWLEDGEMENTS
The research leading to the presented results has been
undertaken within the research profile DPAC –
Dependable Platforms for Autonomous Systems and
Control project, funded by the Swedish Knowledge
Foundation (the second and the third authors). In part
it is also funded by the Erasmus Mundus scheme
EUROWEB+ (the first author).
REFERENCES
Barber, S. K., Goel, A., Martin, C. E. (2000). Dynamic
adaptive autonomy in multi-agent systems. Journal of
Experimental & Theoretical Artificial Intelligence,
12(2), 129-147.
Barnes, M. J., Chen, J. Y., Jentsch, F. (2015). Designing for
Mixed-Initiative Interactions between Human and
Autonomous Systems in Complex Environments. IEEE
International Conference on Systems, Man, and
Cybernetics (SMC).
Bradshaw, J. M., Jung, H., Kulkarni, S., Johnson, M., Feltovich, P., Allen, J., Bunch, L., et al. (2005). Kaa: policy-based explorations of a richer model for adjustable autonomy. Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems. ACM.
Brookshire, J., Singh, S., Simmons, R. (2004). Preliminary
results in sliding autonomy for coordinated teams.
Proceedings of the 2004 Spring Symposium Series.
Cardoso, H. L., Oliveira, E. (2009). Adaptive deterrence
sanctions in a normative framework. Proceedings of the
2009 IEEE/WIC/ACM International Joint Conference
on Web Intelligence and Intelligent Agent Technology-
Volume 02.
Castelfranchi, C. (2000). Founding agents' "autonomy" on
dependence theory. ECAI.