example of ML connected to adaptability is ML in
dynamically changing environments where the
adaptation is performed through learning and
identification of context changes.
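As a concrete illustration (our sketch, not a method taken from the cited literature; the class and parameter names below, e.g., ContextChangeDetector, window_size, and threshold, are hypothetical), such identification of context changes can be as simple as comparing a recent window of observations against a reference window and flagging a drift:

import statistics
from collections import deque

class ContextChangeDetector:
    def __init__(self, window_size=50, threshold=2.0):
        self.reference = deque(maxlen=window_size)  # behavior observed in the initial context
        self.recent = deque(maxlen=window_size)     # most recent observations
        self.threshold = threshold                  # drift sensitivity (in reference std deviations)

    def observe(self, value):
        """Feed one sensor/feature value; return True when a context change is suspected."""
        if len(self.reference) < self.reference.maxlen:
            self.reference.append(value)
            return False
        self.recent.append(value)
        if len(self.recent) < self.recent.maxlen:
            return False
        # Flag a change when the recent mean deviates strongly from the reference mean.
        ref_mean = statistics.mean(self.reference)
        ref_std = statistics.pstdev(self.reference) or 1e-9
        drift = abs(statistics.mean(self.recent) - ref_mean) / ref_std
        return drift > self.threshold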
But how does ML differ from human learning?
Currently, ML is only possible at a small scale, where self-adaptation is usually related to formalisms used in Fuzzy Systems, Artificial Neural Networks, and Evolutionary Computation (Baier and Katoen, 2008). ML covers both symbolic methods (decision trees, rules, etc.) and sub-symbolic methods (neural networks, Bayesian networks, etc.) (Berger, 1985), and has several connections with traditional statistics (discriminant analysis, regression analysis, cluster analysis, etc.).
Still, mainly due to its mathematical underpinning, ML is not biased in the way humans can be: humans may be biased in their decisions, whereas ML follows strictly mathematical procedures. Human learning, by contrast, involves different methods and different sources: a human’s “knowledge base” contains both commonly shared facts and strictly personal ones, and experience can be transferred.
4 TRUST IN AUTONOMOUS SYSTEMS
In today’s technologies, the term autonomous plays a major role. It denotes systems that perform their tasks without human intervention, such as automatic lawn mowers, smart home equipment, driverless train systems, or autonomous cars. The most challenging question that arises when following the life cycle of the term “autonomy” is whether we can construct a system that behaves and operates similarly to, or even better than, a human being. Hence, it is reasonable to discuss how far we can push the boundary towards such behavior, provide autonomic operations, at least in a certain context, with the highest safety guarantees, and finally establish trust in their innocuous operation.
But will robots ever be able to fully replace humans?
Our answer is “probably not”. Much depends on the overall impact, not only technological but also political, that should be expected from a universal autonomous system that successfully replaces human beings. Will autonomous systems take over traffic control and transportation entirely? Yes, because this will eliminate hazards related to human error, e.g., fatigue. But it is far less likely that robots will ever replace humans in decision making related to, for example, social organization.
The means to establish trust in autonomous systems can be roughly described as a twofold objective, combining explicit boundaries with a stepwise technical approach:
1. Establish boundaries, or a range of adaptation: certain properties (e.g., safety) must always hold, and thus unforeseen adaptations that may compromise such properties should not be allowed without a human-controlled change to the established adaptation range.
2. Pursue autonomy in a stepwise manner, so that it can be introduced gradually: no autonomy, partial autonomy, controlled autonomy, and full autonomy. Hence, in the earlier stages, autonomy should be used in less risky domains. A minimal code sketch after this list illustrates both points.
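To make the two points concrete, the following sketch (our illustration, not a prescribed design; all names such as AutonomyLevel, AdaptationGuard, and speed_range are hypothetical) shows an adaptation request being accepted only inside an explicitly established safety range, gated by the current autonomy level:

from enum import Enum

class AutonomyLevel(Enum):
    NONE = 0        # every adaptation requires human approval
    PARTIAL = 1     # low-risk adaptations may be performed autonomously
    CONTROLLED = 2  # autonomous within the established range, humans can override
    FULL = 3        # fully autonomous within the established range

class AdaptationGuard:
    def __init__(self, level, speed_range=(0.0, 30.0)):
        self.level = level
        self.speed_range = speed_range  # example safety property: allowed speed range

    def request_adaptation(self, new_speed, human_approved=False):
        lo, hi = self.speed_range
        # Safety boundary: adaptations outside the established range are rejected
        # regardless of autonomy level; widening the range itself is a human decision.
        if not (lo <= new_speed <= hi):
            return False
        # Autonomy level: without autonomy, every adaptation needs explicit approval.
        if self.level is AutonomyLevel.NONE and not human_approved:
            return False
        return True

guard = AdaptationGuard(AutonomyLevel.PARTIAL)
print(guard.request_adaptation(25.0))   # inside the safety range -> accepted
print(guard.request_adaptation(120.0))  # outside the safety range -> rejected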
4.1 Dominant Role of Autonomy
It is hard to imagine a system constructed by a human that adapts itself to all situations, and especially to all unforeseen ones, since the term unforeseen describes precisely those circumstances that the human designer has not foreseen.
If we restrict ourselves to those “unforeseen” behaviors that we can in fact foresee and might be able to handle, we have to consider a problem of completeness. Did we cover the whole set of behaviors, or did we omit some of them?
This, of course, raises questions of complexity, as the number of such situations might be close to infinite and thus not foreseeable at all. In order to handle such complexity, we have to restrict the adaptability of our systems to a certain context in which we are able to capture all the different behaviors, or which at least enables us to classify and cluster such situations. Home environments with only a few sensors might be such a context, as might autonomous transportation systems, e.g., smart trains.
Other contexts in which autonomy could play a dominant role are those where systems operate in environments that change dynamically, e.g., space, the ocean, or weather stations. In such cases, it is impossible to identify and predefine all possible behaviors. A solution could be to introduce granularity into behavior modeling, i.e., a self-adaptive system should not handle all possible behaviors individually, but rather categories (or classes, or clusters) of possible behaviors. Known behaviors should then be classified into such categories, and the learning process left to cope with them.
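As an illustration (our sketch, not the paper’s method; the data and names such as classify and max_distance are hypothetical), behaviors encoded as feature vectors can be grouped into a small number of clusters, new observations mapped to the nearest cluster, and anything too far from every known cluster flagged as an unknown situation to be handled by further learning or human control:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical behavior log: each row is a feature vector describing one observed behavior.
observed = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(50, 2)),  # e.g., "calm conditions" behaviors
    rng.normal(loc=3.0, scale=0.3, size=(50, 2)),  # e.g., "storm conditions" behaviors
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(observed)

def classify(behavior, max_distance=1.5):
    """Return the behavior category, or None when it is too far from every known cluster."""
    distances = np.linalg.norm(model.cluster_centers_ - behavior, axis=1)
    nearest = int(np.argmin(distances))
    return nearest if distances[nearest] <= max_distance else None

print(classify(np.array([0.1, -0.2])))   # falls into a known category
print(classify(np.array([10.0, 10.0])))  # unknown situation -> None, escalate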