MODELLING ADAPTIVE CONTROLLERS WITH
EVOLVING LOGIC PROGRAMS
Pierangelo Dell’Acqua, Anna Lombardi
Department of Science and Technology (ITN) - Linköping University
601 74 Norrköping, Sweden

Luís Moniz Pereira
Centro de Inteligência Artificial (CENTRIA) - Departamento de Informática, Universidade Nova de Lisboa
2829-516 Caparica, Portugal
Keywords: Adaptive controllers, non-monotonic reasoning, evolving logic programming.

Abstract: The paper presents the use of Evolving Logic Programming to model adaptive controllers. The advantage of using well-defined, self-evolving logic-based controllers is that it is possible to model dynamic environments, and to formally prove systems' requirements.
1 INTRODUCTION
Intuitively, an adaptive controller can change its behaviour in response to changes in the dynamics of the process and the disturbances (Åström and Wittenmark, 1990).
One of the first approaches proposed for adap-
tive control is gain scheduling, which implements
an open-loop compensation. Other approaches have
been proposed based on model-reference adaptive
system (MRAS) or self-tuning regulator (STR). They
both can be seen as composed of two loops. These
classical approaches to deterministic adaptive control
have some limitations when unknown parameters en-
ter the process model in complicated ways. In this
case, it may be difficult to construct a continuously
parameterized family of candidate controllers.
An alternative approach to control uncertain sys-
tems has been proposed (Hespanha et al., 2003). The
main feature which distinguishes it from conventional
adaptive control is that controller selection is carried
out by means of logic-based switching rather than
continuous tuning. Switching among candidate con-
trollers is performed by a high-level decision maker
called a supervisor, hence the name supervisory con-
trol. The supervisor updates controller parameters
when a new estimate of the process parameters be-
comes available, similarly to the adaptive control par-
adigm, but these events occur at discrete instants of
time. This results in a hybrid closed-loop system.
Another class of adaptive controllers is that of fuzzy logic controllers (Mamdani and Baaklini, 1975). Like classical adaptive controllers, they rely on an adaptation algorithm to update the parameters of the controller, but they are also capable of incorporating linguistic information from human operators or experts. This characteristic is particularly important for systems with a high degree of uncertainty, i.e. systems that are difficult to control from a control-theoretical point of view but are often successfully controlled by human operators.
Evolving Logic Programming (EVOLP) is an ex-
tension of logic programming (Alferes et al., 2002). It
allows one to model the dynamics of knowledge bases
expressed by programs, as well as specifications that
dynamically change. In this paper EVOLP is used to model adaptive controllers. A case study is illustrated in which the controller is implemented using EVOLP.
The paper is structured as follows: Section 2 introduces the notion of adaptive control, with particular focus on model reference adaptive control and adaptive fuzzy control; Section 3 presents the language and semantics of Evolving Logic Programming; Section 4 describes a case study modelling adaptive control systems using evolving logic programming; finally, Section 5 discusses some future work.
2 ADAPTIVE CONTROL
A widely accepted definition of adaptive control (Åström and Wittenmark, 1990) is: a controller with adjustable parameters and a mechanism for adjusting the parameters. An adaptive controller has a distinct architecture, consisting of two loops: a control loop and a parameter adjustment loop.
Figure 1: Scheme of a model reference adaptive system (blocks: Model, Adjustment rule, Controller, Process; signals: r, u, y, $y_m$, e).
2.1 Model Reference Adaptive Control

In the model reference adaptive system (MRAS) (Åström and Wittenmark, 1990) the control is specified in terms of a reference model which tells how the process output ideally should respond to the command signal. A block diagram of the system is shown in Fig. 1.
The regulator system consists of two loops: an inner loop, which is an ordinary feedback loop composed of the process and a controller, and an outer loop that adjusts the parameters of the regulator in such a way that the error e between the model output $y_m$ and the process output y becomes small. This scheme is the so-called direct approach, because the adjustment rules tell directly how the regulator parameters should be updated.
The key problem is to determine the adjustment mechanism so that the process output becomes close to the model output, driving the error to zero.
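To make the two-loop structure concrete, the following sketch simulates a first-order process with an adjustable feedforward gain and adapts that gain with the classical MIT rule from the adaptive-control literature; the plant, the reference model and the adaptation gain gamma are illustrative assumptions, not values or methods prescribed by this paper.

```python
# A minimal sketch of the MRAS adjustment loop of Fig. 1, assuming a
# first-order process y' = -a*y + k*u, a feedforward controller u = theta*r,
# and the classical MIT rule d(theta)/dt = -gamma*e*ym (all illustrative).

a, k = 1.0, 2.0            # (unknown) process parameters
am, km = 1.0, 1.0          # reference model: ym' = -am*ym + km*r
gamma = 0.5                # adaptation gain of the outer loop
dt, T = 0.01, 40.0

y = ym = 0.0
theta = 0.0                # adjustable controller parameter
for step in range(int(T / dt)):
    r = 1.0 if (step * dt) % 20.0 < 10.0 else -1.0   # square-wave command signal
    u = theta * r                                    # inner (control) loop
    y += dt * (-a * y + k * u)                       # process
    ym += dt * (-am * ym + km * r)                   # reference model output
    e = y - ym                                       # error to be driven to zero
    theta += dt * (-gamma * e * ym)                  # outer (parameter adjustment) loop

print(f"final gain theta = {theta:.3f} (ideal value km/k = {km/k:.3f})")
```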
2.2 Fuzzy Control Systems
Fuzzy logic control (Feng et al., 1997) has proved to
be a successful approach for complex nonlinear sys-
tems. In many cases it has been suggested as an alter-
native approach to conventional control techniques.
Fuzzy logic control techniques represent a means of
both collecting human knowledge and expertise and
dealing with uncertainties in the process of control.
Fuzzy control usually decomposes the complex system into several subsystems according to the human expert's understanding of the system, and uses a simple control law to emulate the human control strategy in each local operating region. The global control law is then constructed by combining all the local control actions through fuzzy membership functions.
Many physical systems are very complex in prac-
tice so that rigorous mathematical models can be very
difficult if not impossible to obtain. However many
physical systems can be expressed in some form of
mathematical model locally, or as an aggregation of
a set of mathematical models. The fuzzy dynamic
model, proposed by Takagi and Sugeno (Takagi and
Sugeno, 1985) is described by fuzzy IF-THEN rules
which locally represent nonlinear systems. The fol-
lowing fuzzy model represents a complex single-
input-single-output system and it includes rules and
local analytic linear models:
$$
R^i:\ \text{IF } z_1 \text{ is } F^i_1 \text{ AND } \ldots \text{ AND } z_s \text{ is } F^i_s \text{ THEN }
\begin{cases}
\dot{x}(t) = A_i x(t) + B_i u(t) \\
y_i(t) = C_i x(t)
\end{cases}
\qquad (1)
$$

where $i = 1, 2, \ldots, m$, $R^i$ denotes the $i$th fuzzy inference rule, $m$ the number of inference rules, $F^i_j$ ($j = 1, 2, \ldots, s$) are fuzzy sets, $x(t) \in \mathbb{R}^n$ the system state variables, $u(t) \in \mathbb{R}^p$ the system input variables, $y_i(t)$ and $(A_i, B_i, C_i)$ the output and the matrix triple of the $i$th subsystem, and $z(t) = [z_1, z_2, \ldots, z_s]$ some measurable system variables.

Let $\mu_i(x(t))$ be the normalized membership function of the inferred fuzzy set $F^i$, where

$$
F^i = \prod_{j=1}^{s} F^i_j, \qquad \sum_{i=1}^{m} \mu_i = 1 \qquad (2)
$$

then the final output $y(t)$ of the system is inferred by taking the weighted average of the outputs $y_i(t)$ of each subsystem, that is

$$
y(t) = \sum_{i=1}^{m} \mu_i\, y_i(t) \qquad (3)
$$
It should be noted that the global fuzzy model is non-
linear time-varying since the membership functions
are nonlinear and time-varying in general. The de-
veloped fuzzy model includes two kinds of knowl-
edge: one is the qualitative knowledge represented by
the fuzzy IF-THEN rules, and the other is the quan-
titative knowledge represented by the local dynamic
models. The model has the structure of a two-level control system, with the lower level providing basic feedback control and the higher level providing supervisory control or scheduling. A basic idea of control
is to design local feedback controllers based on local
models and then construct the global controller from
the local controllers.
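As a concrete reading of (1)–(3), the sketch below evaluates a two-rule Takagi–Sugeno model at a single operating point: each rule contributes a local linear model, and the global state derivative and output are the membership-weighted combinations of the local ones. The membership functions and the matrices $(A_i, B_i, C_i)$ are made-up illustrative values, not data from the cited works.

```python
import numpy as np

# A two-rule Takagi-Sugeno model in the sense of (1)-(3), with a scalar
# premise variable z = x[0]; all numbers are illustrative.
A = [np.array([[-1.0, 0.0], [0.0, -2.0]]),
     np.array([[-3.0, 1.0], [0.0, -1.0]])]
B = [np.array([1.0, 0.5]), np.array([0.2, 1.0])]
C = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def memberships(z):
    """Normalized memberships mu_i(z), so that sum_i mu_i = 1 as in (2)."""
    small = np.exp(-z ** 2)          # "z is small"
    large = 1.0 - small              # "z is large"
    mu = np.array([small, large])
    return mu / mu.sum()

def ts_model(x, u):
    """Blend the local linear models with the memberships, cf. (3)."""
    mu = memberships(x[0])
    xdot = sum(m * (Ai @ x + Bi * u) for m, Ai, Bi in zip(mu, A, B))
    y = sum(m * (Ci @ x) for m, Ci in zip(mu, C))
    return xdot, y

xdot, y = ts_model(np.array([0.5, -0.2]), u=1.0)
print("xdot =", xdot, "  y =", y)
```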
In (Feng, 2002) an adaptive control design method
for a class of fuzzy dynamic models has been pro-
posed. The basic idea is to design an adaptive con-
troller in each local region and then construct the
global adaptive controller by suitably integrating the
local adaptive controllers together in such a way
that the global closed-loop adaptive control system
is stable. Adaptive fuzzy control, often called self-
organizing fuzzy control (SOC) (Mamdani and Baak-
lini, 1975), can be classified as an MRAS. It has a hierarchical structure in which the inner loop is a table-based controller and the outer loop is the adjustment mechanism. The idea behind self-organization is to let the adjustment mechanism update the values in the control table on the basis of the current performance of the controller.
3 EVOLVING LOGIC
PROGRAMMING
In this section we recap the paradigm of Evolving Logic Programming (EVOLP), a simple though quite powerful extension of logic programming (Alferes et al., 2002); an implementation of EVOLP is available from http://centria.fct.unl.pt/~jja/updates. EVOLP allows one to model the dynamics of knowledge bases expressed by programs, as well as specifications that dynamically change.
3.1 Language
To make a logic program evolve one needs some
mechanism for letting older rules be supervened by
more recent ones. That is, one must include a mech-
anism for deletion of previous knowledge. This can
be achieved by permitting negation not just in bod-
ies of rules, but in their heads as well
2
. Moreover,
one needs a means to state that, under some condi-
tions, some new rule is to be added to the program. In
EVOLP this is achieved by augmenting the language
with a reserved predicate assert/1, whose argument
is itself a rule, so that arbitrary nesting becomes pos-
sible. This predicate can appear both as rule head (to
impose internal assertions of rules) as well as in rule
bodies (to test for assertion of rules).
In the following we let $L$ be any propositional language not containing the predicate assert/1. Given $L$, the extended language $L^+$ is defined inductively as follows:

- All propositional atoms in $L$ are propositional atoms in $L^+$.
- If every $L_0, \ldots, L_n$ ($n \geq 0$) is a literal in $L^+$ (i.e. a propositional atom $A$ or its default negation not $A$), then $L_0 \leftarrow L_1, \ldots, L_n$ is a rule over $L^+$. (Rules of this form, where each $L_i$ is a literal, are typically called generalized logic programming rules.)
- If $R$ is a rule over $L^+$, then assert($R$) is a propositional atom of $L^+$.
- Nothing else is a propositional atom in $L^+$.
Given a rule $L_0 \leftarrow L_1, \ldots, L_n$, $L_0$ is the head of the rule, and $L_1, \ldots, L_n$ is the body. Rules with an empty body (that is, $n = 0$) are written as $L_0 \leftarrow$ and are called facts.

An evolving logic program over $L$ is a (possibly infinite) set of rules over $L^+$. Consider the two rules:

    assert(not a ← b) ← not c
    a ← assert(b ←)

Intuitively, the first rule states that, if c is false, then the rule not a ← b must be asserted. The second rule states that, if the fact b is going to be asserted, then a is true.
The language $L^+$ allows one to model the self-evolution of the knowledge base. What is needed now is a
way to make the system aware of events that happen
outside it, e.g., the observation of facts (or rules) that
are perceived at some state or assertion commands
imparting the assertion of new rules on the evolving
program. Both observations and assertion commands
can be represented as EVOLP rules: the former by
rules without the assert predicate in the head, and the
latter by rules with it. Thus, in EVOLP outside in-
fluence can be represented as a sequence of sets of
EVOLP rules. This leads us to the following notion.
An event sequence E over an evolving logic program
P is a sequence of evolving logic programs over the
language L of P .
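Before turning to the semantics, it may help to picture rules, nested assert atoms, programs and event sequences as plain data structures. The sketch below is only an illustration of the syntax of $L^+$ (the class names are ours and do not come from the EVOLP implementation); it encodes the two example rules given above.

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Assert:
    rule: "Rule"                   # assert(R): the argument is itself a rule

Atom = Union[str, Assert]          # propositional atom of L+

@dataclass(frozen=True)
class Lit:
    atom: Atom
    negated: bool = False          # True encodes default negation: "not atom"

@dataclass(frozen=True)
class Rule:
    head: Lit                      # negation is allowed in heads (generalized rules)
    body: Tuple[Lit, ...] = ()     # empty body => the rule is a fact

Program = frozenset                # an evolving logic program: a set of rules over L+
EventSequence = tuple              # outside influence: a sequence of programs

# assert(not a <- b) <- not c
r1 = Rule(head=Lit(Assert(Rule(Lit("a", negated=True), (Lit("b"),)))),
          body=(Lit("c", negated=True),))
# a <- assert(b <-)
r2 = Rule(head=Lit("a"), body=(Lit(Assert(Rule(Lit("b")))),))

P: Program = frozenset({r1, r2})
E: EventSequence = (frozenset(), frozenset())   # two states with empty events
print(len(P), "rules,", len(E), "events")
```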
3.2 Semantics
The meaning of a sequence of EVOLP programs is
given by a set of evolution stable models, each of
which is a sequence of interpretations. The basic idea
is that each evolution stable model describes some
possible evolution of one initial program after a given
number n of evolution steps with respect to an event
sequence E.
The construction of these program sequences is as
follows: whenever the atom assert(Rule) belongs
to an interpretation in a sequence, i.e. belongs to a
model according to the stable model semantics of the
current program, then Rule must belong to the pro-
gram in the next state; asserts in bodies are treated as
any other predicate literals.
Program sequences are treated as in the framework of dynamic logic programming (DLP). A dynamic logic program $P_1 \oplus \cdots \oplus P_n$ is a sequence of generalized logic programs ($P_n$ being the most recent one). The idea of DLP is that the most recent rules (i.e., the ones belonging to the most recent programs in the sequence) are set in force, and previous rules are valid (by inertia) insofar as possible, i.e. they are kept for as long as they do not conflict with more recent ones. For the formal definition of the declarative and procedural semantics of DLP see (Alferes et al., 2000).
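The sketch below illustrates, in a deliberately simplified form, the rejection idea behind DLP: a rule of an earlier program is overridden when a more recent program contains a rule with a conflicting head whose body holds in the candidate model. It is only an intuition pump under these assumptions; the precise declarative and procedural semantics are those of (Alferes et al., 2000).

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Lit:
    atom: str
    negated: bool = False

@dataclass(frozen=True)
class Rule:
    head: Lit
    body: Tuple[Lit, ...] = ()

def holds(lit, model):
    # default negation: "not A" holds iff A is not in the model
    return (lit.atom not in model) if lit.negated else (lit.atom in model)

def conflict(h1, h2):
    # two heads conflict when they are the same atom with opposite signs
    return h1.atom == h2.atom and h1.negated != h2.negated

def rejected(programs, model):
    """Rules of earlier programs overridden by applicable, conflicting newer rules."""
    out = set()
    for i, Pi in enumerate(programs):
        for r in Pi:
            if any(conflict(r.head, r2.head) and all(holds(b, model) for b in r2.body)
                   for Pj in programs[i + 1:] for r2 in Pj):
                out.add(r)
    return out

# The old fact "a <-" is rejected once a more recent program contains "not a <-":
old, new = {Rule(Lit("a"))}, {Rule(Lit("a", negated=True))}
print(rejected([old, new], model=frozenset()))
```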
Consider the program P:

    a ←
    assert(b ← a) ← not c
    c ← assert(not a ←)
    assert(not a ←) ← b

For simplicity suppose that all events in E are empty. The (only) stable model of P is I = {a, assert(b ← a)} and it conveys the information that program P is ready to evolve into a new program $P \oplus P_2$ by adding rule (b ← a) at the next step, i.e. in $P_2$. In the only stable model $I_2$ of the new program $P \oplus P_2$, atom b is true as well as atom assert(not a ←) and also c, meaning that $P \oplus P_2$ is ready to evolve into a new program $P \oplus P_2 \oplus P_3$ by adding rule (not a ←) at the next step, i.e. in $P_3$. Now, the (negative) fact not a in $P_3$ conflicts with the fact a in P, and so this older fact is rejected. The rule added in $P_2$ remains valid, but is no longer useful to conclude b, since a is no longer valid. So, assert(not a ←) and c are also no longer true. In the only stable model of the last sequence, a, b, and c are all false.
This example simplifies the problem of defining
the semantics in that it does not consider the influ-
ence of events from the outside. In fact, as stated
above, all those events are empty. To take into con-
sideration outside events, the rules that came in the
i-th event are added to the program of state i. Sup-
pose that at state 2 there is an event from the outside $E_2$ = {assert(d ← b) ← a; e ←}. Since the only stable model of P is I = {a, assert(b ← a)} and there is an outside event at state 2, the program should evolve into the new program obtained by updating P not only with the rule b ← a but also with the rules in $E_2$, i.e. $P \oplus$ {b ← a; assert(d ← b) ← a; e ←}. The only stable model $I_2$ of this program is now {a, b, e, assert(not a ←), assert(d ← b), assert(b ← a)}.
In EVOLP the rules coming from the outside, be
they observations or assertion commands, are under-
stood as events given at a certain state, but which are
not to persist by inertia. That is, if a rule R belongs
to an event $E_i$ of an event sequence E, then R was perceived after $i-1$ evolution steps of the program, and this perception event is not to be assumed by inertia from then onward. Thus, in the previous example,
when constructing subsequent states, the rules com-
ing from events in state 2 should no longer be avail-
able and considered. This understanding is formal-
ized as follows.
An evolution interpretation of length n of an evolving logic program P over $L$ is a finite sequence $I = \langle I_1, I_2, \ldots, I_n \rangle$ of sets of propositional atoms of $L^+$. The evolution trace associated with an evolution interpretation $I$ is the sequence of programs $\langle P_1, P_2, \ldots, P_n \rangle$ where $P_1 = P$ and $P_i = \{R \mid assert(R) \in I_{i-1}\}$ for each $2 \leq i \leq n$. An evolution interpretation of length n, $\langle I_1, I_2, \ldots, I_n \rangle$, with evolution trace $\langle P_1, P_2, \ldots, P_n \rangle$, is an evolution stable model of P given $E = \langle E_1, E_2, \ldots, E_n \rangle$ iff for every i ($1 \leq i \leq n$), $I_i$ is a stable model at state i of $P_1 \oplus P_2 \oplus \cdots \oplus (P_i \cup E_i)$.

Notice that the rules coming from the outside indeed do not persist by inertia. At any given state i, the rules from $E_i$ are added to $P_i$ and the (possibly various) stable models $I_i$ are calculated. This determines the programs $P_{i+1}$ of the trace, which are then added to $E_{i+1}$ to determine the stable models $I_{i+1}$.
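Read operationally, the definition builds the trace state by state. The sketch below mirrors it for the deterministic case, assuming a helper stable_model() (for instance wrapping an answer-set solver plus the DLP machinery) that returns the unique stable model of $P_1 \oplus \cdots \oplus (P_i \cup E_i)$; atoms of the form ("assert", R) stand for assert(R). Both the helper and the atom encoding are assumptions made for illustration.

```python
def evolution_trace(P, events, stable_model):
    """A sketch of the trace construction for a deterministic evolving program.

    P            -- the initial evolving logic program (a set of rules)
    events       -- the event sequence <E_1, ..., E_n>, one set of rules per state
    stable_model -- assumed helper mapping the list [P_1, ..., P_i U E_i] to the
                    unique stable model at state i (DLP semantics)
    """
    programs, interps = [set(P)], []
    for E_i in events:
        dlp = programs[:-1] + [programs[-1] | set(E_i)]   # P_1 (+) ... (+) (P_i U E_i)
        I_i = stable_model(dlp)
        interps.append(I_i)
        # P_{i+1} = { R | assert(R) in I_i }: asserted rules form the next program
        programs.append({a[1] for a in I_i
                         if isinstance(a, tuple) and a[0] == "assert"})
    return interps, programs[:len(events)]
```

With several stable models per state the construction branches instead of yielding a single trace, which is exactly what evolution stable models capture.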
Being based on stable models, evolving logic pro-
grams may have various evolution stable models, as
well as no evolution stable models at all. Consider
the following program:

    assert(a ←) ← not assert(b ←), not b
    assert(b ←) ← not assert(a ←), not a

It has two evolution stable models of length 3 wrt. the event sequence E = ⟨∅, ∅, ∅⟩ of empty events. Each model represents one possible evolution of the program:

    ⟨{assert(a ←)}, {a, assert(a ←)}, {a, assert(a ←)}⟩
    ⟨{assert(b ←)}, {b, assert(b ←)}, {b, assert(b ←)}⟩
Given an evolution trace $\langle P_1, \ldots, P_n \rangle$ and an event sequence $E = \langle E_1, \ldots, E_n \rangle$, a state i ($1 \leq i \leq n$) is deterministic iff $P_1 \oplus P_2 \oplus \cdots \oplus (P_i \cup E_i)$ has a unique stable model. P is deterministic wrt. E iff every state i is deterministic.
Since various evolutions may exist for a given
length, evolution stable models alone do not deter-
mine a truth relation. But one such truth relation can
be defined, as usual, based on the intersection of mod-
els. One important case is when the program is strat-
ified, for then there is a single stable model, and a
deterministic evolution.
Given a program P and an event sequence E of length n, a propositional atom A is: true iff $A \in I_n$ for every evolution stable model $\langle I_1, I_2, \ldots, I_n \rangle$ of P and E; false iff $A \notin I_n$ for every evolution stable model $\langle I_1, I_2, \ldots, I_n \rangle$ of P and E; unknown otherwise.
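The truth relation can then be read off the final interpretations of all evolution stable models, as in this small sketch (the string atoms such as "assert_a" are shorthand for the corresponding assert atoms):

```python
def truth_value(atom, evolution_stable_models):
    """true/false/unknown for an atom after n steps, by intersecting the final
    interpretations I_n of all evolution stable models <I_1, ..., I_n>."""
    finals = [model[-1] for model in evolution_stable_models]
    if all(atom in I_n for I_n in finals):
        return "true"
    if all(atom not in I_n for I_n in finals):
        return "false"
    return "unknown"

# The two models of the assert(a <-)/assert(b <-) example above (length 3):
M1 = [{"assert_a"}, {"a", "assert_a"}, {"a", "assert_a"}]
M2 = [{"assert_b"}, {"b", "assert_b"}, {"b", "assert_b"}]
print(truth_value("a", [M1, M2]))   # -> "unknown"
```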
4 MODELLING ADAPTIVE
CONTROLLERS WITH EVOLP
The simplest way to model an adaptive logic-based
controller is to employ EVOLP as the language of
the controller as illustrated in Fig. 2. In this case,
we have a discrete, adaptive controller governed by
logic-based rules. This scheme can be seen as a sim-
plified version of MRAS (see Fig. 1), where the adjustment rule block is composed of the logic rules defining the predicate assert. This makes it possible for old rules of the controller to be replaced by new ones.

Figure 2: Adaptive logic-based controller (blocks: EVOLP controller, Process; signals: E, do(.), assert(.)).

The theory of the controller is formalized by
an (initial) evolving logic program that can evolve
via its (internal) assertion commands, and/or via the
input to the controller received as events. The lan-
guage of the controller contains distinguished atoms
expressing the actions the controller wants to execute
on the process/environment through its actuators. We
assume that such atoms take the form do(action). The
intuition is that whenever an atom do(action) is true
at the current state, the corresponding action is per-
formed by the actuator on the process. Then, the out-
put of the process is received by the controller in the
form of facts (or more generally rules) represented as
events. At each state n, the controller considers all the stable models of $P_1 \oplus P_2 \oplus \cdots \oplus (P_n \cup E_n)$, and (i) sends to the actuators the actions corresponding to all the atoms do(action) that are true at n, and (ii) self-evolves by considering all the atoms assert(R) that are true.
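The cycle just described can be phrased as a small read–decide–act loop. The sketch below is one possible reading of it, assuming helper functions read_events() (sensors), stable_models() (the EVOLP/DLP machinery) and execute() (actuators), and atoms encoded as ("do", action) and ("assert", rule); none of these names come from the paper.

```python
def control_loop(P, read_events, stable_models, execute, steps):
    """Sketch of the adaptive logic-based controller cycle of Fig. 2."""
    programs = [set(P)]
    for n in range(steps):
        E_n = set(read_events())                       # rules perceived from outside
        dlp = programs[:-1] + [programs[-1] | E_n]     # P_1 (+) ... (+) (P_n U E_n)
        models = stable_models(dlp)
        # (i) send to the actuators every do(action) atom that is true
        for I in models:
            for atom in I:
                if isinstance(atom, tuple) and atom[0] == "do":
                    execute(atom[1])
        # (ii) self-evolve: asserted rules become the program of the next state
        # (pooled across models here for simplicity; in general the evolution branches)
        programs.append({a[1] for I in models for a in I
                         if isinstance(a, tuple) and a[0] == "assert"})
```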
Example: a controller for a lift
Consider the controller of a lift that receives from out-
side signals of the form push(N), when somebody
pushes the button for going to floor N, or floor, when
the lift reaches a new floor. Upon receipt of a push(N)
signal, the lift records that a request for going to floor
N is pending:
    assert(request(F)) ← push(F)

(Rules with variables stand for all their ground instances.) Mark the difference between this rule and the rule request(F) ← push(F). When the button F is pushed, with the latter rule request(F) is true only at that moment, while with the former request(F) is asserted into the evolving program so that it remains inertially true (until its truth is possibly deleted afterwards).
Based on the pending requests at each moment, the controller must prefer where to go:

    going(F) ← request(F), not unpref(F), not fire_alarm
    unpref(F) ← request(F2), better(F2, F)
    better(F1, F2) ← at(F), |F1 − F| < |F2 − F|
    better(F1, F2) ← at(F), |F1 − F| = |F2 − F|, F2 < F
Notice that the body of the first rule above contains the non-monotonic condition not fire_alarm. This is needed to stop the lift (i.e., it makes going(F) false) in situations where the event fire_alarm is received from the outside. Predicate at/1 stores, at each moment, the number of the floor where the lift is located. Thus, if a floor signal is received, depending on where the lift is going, at(F) must be incremented (resp. decremented):

    assert(at(F + 1)) ← floor, at(F), going(G), G > F
    assert(not at(F)) ← floor, at(F), going(G), G > F
When the lift reaches the floor to which it was going,
it must open the door. After that, it must remove the
pending request for going to that floor:
    do(open(F)) ← going(F), at(F)
    assert(not request(F)) ← going(F), at(F)
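The going/unpref/better rules encode a preference policy: serve the pending request closest to the current floor, break ties in favour of the floor above, and serve nothing while fire_alarm holds. A procedural sketch of that policy (the function name and interface are ours, not part of the EVOLP program):

```python
def choose_floor(at, requests, fire_alarm=False):
    """Mirror of the going/unpref/better rules as a selection function."""
    if fire_alarm or not requests:
        return None

    def better(f1, f2):
        # better(F1,F2) <- at(F), |F1-F| < |F2-F|
        # better(F1,F2) <- at(F), |F1-F| = |F2-F|, F2 < F
        d1, d2 = abs(f1 - at), abs(f2 - at)
        return d1 < d2 or (d1 == d2 and f2 < at)

    # going(F): F is requested and no other requested floor is better than F
    candidates = [f for f in requests
                  if not any(better(g, f) for g in requests if g != f)]
    return candidates[0] if candidates else None

print(choose_floor(at=3, requests={1, 5, 6}))   # 1 and 5 are equally close -> 5 (floor above)
```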
Modelling uncertainty
Consider a scenario where one wants to formalize
a controller that monitors the room temperature and
consequently activates a heating device to maintain
the temperature within some specified range. Assume
that the controller receives contradictory/uncertain
data from its sensors, e.g., it simultaneously receives two distinct temperatures x and y whose difference is greater than a specified value. The behaviour of the controller in such a situation can be defined by a simple rule of the form:

    do(action) ← tempA(x), tempB(y), x > y + 5
In other situations, the controller may have incom-
plete knowledge of the outside environment. Suppose
that in the lift example above, the controller receives
a signal that may (or may not) be a floor signal. This
can be coded as an event consisting of two rules:
    floor ← not no_signal
    no_signal ← not floor
At this state there are two stable models: one corresponding to the evolution in case the floor signal is considered; the other, in case it is not. The truth relation can here be used to determine what is certain (i.e., true in every stable model) despite the undefinedness of the events received, e.g., what may be triggered if fire_alarm is also received: do(stop) ← fire_alarm.
Properties of the system
Since the controller is axiomatized by logic rules, it
is possible to formally prove a number of properties.
Typically, such properties take one of the following
two forms. Let P be the (initial) evolving logic pro-
gram that axiomatizes the theory of the controller, and
E a (finite) event sequence representing the input to
the controller.
    ∀E ∃n. Property (weak)
    ∀E ∀n. Property (strong)
Reconsider the example of the lift controller. Then,
one can guarantee: (i) the safety condition that the lift
will never open its door if it is not at some floor by
proving that:
    ∀E ∀n ∀x. not (open(x) ∧ not at(x))
or (ii) the fairness condition that if the button of a cer-
tain floor has been pushed, then the lift will eventually
go to that floor by proving that:
    ∀E ∀x ∃n. push(x) ⇒ at(x)
It is easy to prove that the above property does not
hold if the policy to handle the pending requests is
the one axiomatized by the rules for going/1.
5 CONCLUSION
In this paper we have addressed the problem of mod-
elling adaptive logic-based controllers by means of
Evolving Logic Programs. One advantage of using
a well-defined, logic-based approach is that it is pos-
sible to formally prove properties of the controller.
Moreover, various forms of logic reasoning (e.g., ab-
duction, hypothetical reasoning, rule mining) can be
integrated into the logic framework and employed to
enhance the controller’s performance in cases where
there is uncertainty due to the complexity of the envi-
ronment.
The use of abduction, a well-developed technique
in the Logic Programming paradigm, will enable us
to diagnose erroneous controller behaviour, by auto-
matically hypothesizing possible faults. Furthermore,
abduction can be employed to prove correctness of a
controller specification, by showing that no physically
meaningful hypothesized sequence of events can re-
sult in some integrity violation by the controller, as in
the approach in (de Castro and Pereira, 2004).
Since the semantics of EVOLP is stable model
based, it is possible to characterize uncertainty by
having at a certain state several stable models. This
corresponds to the case where there exist branches in
the evolution stable model of the program. Clearly, it
is possible to guarantee that the program will evolve
into a unique branch by enforcing syntactic restric-
tions on programs. In fact, if the program is stratified
then there will be only one stable model, and therefore
no branching can occur. This hypothesis is however
unrealistic in most cases. A better solution to the problem would be to exploit preference reasoning in order to choose among alternatives when a branching situation occurs. This is possible since preference
reasoning can be employed to prefer among alterna-
tive stable models (Alferes and Pereira, 2000). More-
over, preferences themselves are updatable, and this
empowers a form of meta-control.
Additionally, EVOLP can be used to simulate pos-
sible futures, and then preferences may be used to
choose desired futures or to avoid undesirable ones.
This means one can have lookahead proactive control.
REFERENCES
Alferes, J. J., Brogi, A., Leite, J. A., and Pereira, L. M.
(2002). Evolving logic programs. In Procs. 8th
European Conf. on Logics in Artificial Intelligence
(JELIA’02), vol. 242 of LNAI, pp. 50–61.
Alferes, J. J., Leite, J. A., Pereira, L. M., Przymusinska,
H., and Przymusinski, T. C. (2000). Dynamic updates
of non-monotonic knowledge bases. The J. of Logic
Programming, 45(1-3):43–70.
Alferes, J. J. and Pereira, L. M. (2000). Updates plus preferences. In Aciego, M. O., de Guzmán, I. P., Brewka, G., and Pereira, L. M., editors, Logics in AI, Procs. JELIA'00, LNAI 1919, pp. 345–360.
Åström, K. J. and Wittenmark, B. (1990). Computer-Controlled Systems: Theory and Design. Prentice Hall International Inc.
Castro, J. F. and Pereira, L. M. (2004). Abductive validation
of a power-grid expert system diagnoser. In Procs.
17th Int. Conf. on Industrial and Engineering Appli-
cations of Artificial Intelligence and Expert Systems
(IEA-AIE’04), vol. 3029 of LNAI, pp. 838–847.
Feng, G. (2002). An approach to adaptive control of fuzzy
dynamic systems. IEEE Transactions on Fuzzy Sys-
tems, 10(2):268–275.
Feng, G., Cao, S., Rees, N., and Cheng, C. (1997). Analysis
and design of model based fuzzy control systems. In
Proc. Sixth IEEE Int. Conf. on Fuzzy Systems, vol. 2,
pp. 901 – 906.
Hespanha, J. P., Liberzon, D., and Morse, A. S. (2003). Overcoming the limitations of adaptive control by means of logic-based switching. Systems and Control Letters, 49(1):49–65.
Lifschitz, V. and Woo, T. (1992). Answer sets in gen-
eral non-monotonic reasoning (preliminary report). In
Nebel, B., Rich, C., and Swartout, W., editors, KR’92.
Morgan-Kaufmann.
Mamdani, E. and Baaklini, N. (1975). Prescriptive method for deriving control policy in a fuzzy-logic controller. Electronics Letters, 11(25/26):625–626.
Takagi, T. and Sugeno, M. (1985). Fuzzy identification of
systems and its applications to modeling and control,
IEEE Trans. Syst. Man. & Cybern., 15(1):116–132.