Authors:
Stefan Rudolph, Sven Tomforde and Jörg Hähner
Affiliation:
University of Augsburg, Germany
Keyword(s):
Mutual Influence, Q-Learning, Distributed W-Learning, Smart Cameras, Adaptive Control.
Related Ontology Subjects/Areas/Topics:
Agents; Artificial Intelligence; Artificial Intelligence and Decision Support Systems; Autonomous Systems; Computational Intelligence; Cooperation and Coordination; Distributed and Mobile Software Systems; Enterprise Information Systems; Evolutionary Computing; Knowledge Discovery and Information Retrieval; Knowledge Engineering and Ontology Development; Knowledge-Based Systems; Machine Learning; Multi-Agent Systems; Self-Organizing Systems; Soft Computing; Software Engineering; Symbolic Systems
Abstract:
Robust and optimized agent behavior can be achieved by incorporating learning mechanisms into the underlying adaptive control strategies. Typically, a classic feedback-loop concept is used that chooses the best action for an observed situation and learns its success by analyzing the achieved performance. This usually reflects only the local scope of an agent and neglects the existence of other agents whose behavior affects the reward calculation. However, there are significant mutual influences within the agent population. For instance, the success of a Smart Camera's control strategy (in terms of person detection or 3D reconstruction) depends largely on the strategies currently performed by its spatial neighbors. In this paper, we compare two concepts for considering such influences within the adaptive control strategy: Distributed W-Learning, and Q-Learning combined with mutual influence detection. We demonstrate that performance can be improved significantly when detected influences are taken into account.
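To make the feedback-loop idea concrete, the following is a minimal sketch of tabular Q-Learning in which the agent's state includes the last action of an influencing neighbor, so a detected mutual influence can enter the learning loop. This is an illustrative toy example, not the authors' exact model; the state encoding, reward function, and all parameter values here are assumptions.

```python
import random
from collections import defaultdict


class QLearningAgent:
    """Minimal tabular Q-learning agent. The state may encode the last
    action of a neighboring camera, so detected mutual influence can be
    taken into account (illustrative sketch, not the paper's model)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)      # (state, action) -> estimated value
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # epsilon-greedy selection of the next control action
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # standard one-step Q-learning update
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])


# Toy scenario: this camera's reward is 1 only when its action matches
# the (fixed) strategy of an influencing neighbor, mimicking a setting
# where cooperation with a spatial neighbor pays off.
random.seed(0)
agent = QLearningAgent(actions=[0, 1])
state = ("local_situation", "neighbor_action_1")  # hypothetical state encoding
for _ in range(500):
    a = agent.choose(state)
    reward = 1.0 if a == 1 else 0.0
    agent.update(state, a, reward, state)

agent.epsilon = 0.0  # act greedily once training is done
```

Because the neighbor's action is part of the state, the learned policy conditions on it; dropping that component would collapse both situations into one state and hide the influence, which is the local-scope limitation the abstract describes.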