index defined on the state trajectory of the system.
This leads to two possible sub-problems, the time op-
timization problem and the optimal mode-scheduling
problem (Ding, 2009). The former consists in finding the optimal placement of the switching times assuming
a fixed switching sequence; the latter is the problem
of determining the optimal switching sequence of a
switched system.
The presence of (white) noise perturbations can also be considered, as in (Liu et al., 2005); an interesting aspect is that the control weights are indefinite and the switching regime is described via a continuous-time Markov chain. There, a near-optimal control strategy aiming at reducing complexity is proposed.
Numerical problems arising when dealing with
optimal switching control are considered in (Luus and
Chen, 2004), where a direct search optimization procedure is discussed.
In this paper, the problem of optimal resource allocation is related to the real time behavior of the system: the total amount of resources, i.e. the input constraint, is kept fixed, while the cost function, and in particular the weight of the input, is acted on in order to change the total cost according to the operating conditions. The idea is to replicate a planning scheme in which the designer fixes the relevance of the control action according to the conditions and, consequently, changes the intervention policy, making the control effort more or less relevant. For example, in an economic context, within a prefixed total amount of resources (the input constraint), the decision to invest a larger or smaller budget for the solution of some problem can be driven by social indicators, such as unemployment being below or above a prefixed critical percentage, the gross domestic product being below or above a prefixed threshold that guarantees economic growth, the level of taxation, and so on.
Then, a cost index in which the control action is weighted by a piecewise constant function of the state is introduced, so that its value changes depending on the current state. The effect is to obtain different cost functions, one over each state space region, which weight the control differently depending on the region in which the system operates, in order to implement, within the classical optimal control formulation, a state dependent strategy. Changing the weight of the control for each distinct state space region corresponds to giving a different relevance to the amplitude of the control action with respect to the other contributions, mainly the errors, in the cost function. As a result, planning different constant weights for the control amounts to allowing the control to use different amplitudes, clearly higher in the regions with lower weights and lower in those with higher weights.
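As a minimal sketch, assuming for illustration a quadratic term on the state with weight matrix $Q$, a scalar control $u$ and a partition of the state space into regions $\Omega_1,\dots,\Omega_N$ (all the symbols here are introduced only for illustration), such a cost index takes the form
$$
J = \int_{t_0}^{t_f} \Big( x^T(t)\, Q\, x(t) + r\big(x(t)\big)\, u^2(t) \Big)\, dt,
\qquad
r(x) = r_i \quad \text{if } x \in \Omega_i, \ i = 1,\dots,N,
$$
where the constants $r_i > 0$ are the weights planned by the designer, one for each region.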
As long as the system evolves within the same state space region, the solution of the corresponding optimal control problem provides the optimal control action. When, during the state evolution, the trajectory crosses from one region to another, a switch of the cost function occurs at the time instant in which the state reaches the boundary separating the regions. From that time on, a different optimal control problem is formulated, equal to the previous one except for the input weight in the cost function.
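With the same illustrative notation, the index evaluated along a trajectory that visits a sequence of regions splits into a sum of sub-costs, one for each time interval spent inside a single region:
$$
J = \sum_{k=0}^{K} \int_{t_k}^{t_{k+1}} \Big( x^T(t)\, Q\, x(t) + r_{i_k}\, u^2(t) \Big)\, dt,
\qquad
x(t) \in \Omega_{i_k} \ \text{for } t \in [t_k, t_{k+1}),
$$
where $t_1,\dots,t_K$ are the instants at which the trajectory reaches a boundary between two regions and $i_k$ denotes the region visited during the $k$-th interval.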
This procedure is iterated until the final state conditions are reached. The overall control turns out to be a switching one, whose switching time instants are not known in advance but are part of the solution of the optimal control problem, since they depend on the optimal state evolution within each region. This kind of approach differs from the ones previously recalled: here, the discontinuous switching solution does not arise from the presence of switching dynamics or from control saturation, but comes from the particular choice of the cost index. The control strategy changes because, in the cost index, the control is weighted differently depending on the actual state value, leading to different strategies. This can be referred to as a real time state dependent weight.
A first use of a switching formulation for an op-
timal control problem is proposed in (Di Giamber-
ardino and Iacoviello, 2017), applied to a classical
SIR epidemic diffusion. The effectiveness of the proposed approach is shown here by means of a biomedical example, the control of an epidemic disease, namely the human immunodeficiency virus (HIV). The HIV
model proposed in (Wodarz, 2001) and modified in
(Chang and Astolfi, 2009) is adopted. This example is chosen because the medical and social interest in an epidemic spread usually depends on the level of diffusion of the infection: it is considered in some sense natural if it stays below a physiological level and becomes more and more relevant as the intensity of the infection increases. Then, according to the present approach, a state dependent coefficient that weights the control differently depending on the number of infected cells is introduced, taking as state space regions the sets that correspond to a physiological level, a high but not serious level and a very high risk level. This corresponds to changing the intervention strategy as the conditions vary; as already noted, the possible switching instants are not known in advance but are determined on the basis of the evolution of the dynamic variables and of the optimization process.
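As an illustrative sketch only, denoting by $x_I$ the number of infected cells and by $\theta_1 < \theta_2$ two hypothetical thresholds separating the three levels (neither the symbols nor the ordering of the weights below are necessarily the ones adopted in the sequel), such a state dependent weight could take the form
$$
r(x_I) =
\begin{cases}
r_1, & x_I < \theta_1 \quad \text{(physiological level)},\\
r_2, & \theta_1 \le x_I < \theta_2 \quad \text{(high but not serious level)},\\
r_3, & x_I \ge \theta_2 \quad \text{(very high risk level)},
\end{cases}
$$
with, plausibly, $r_1 > r_2 > r_3$, so that larger control amplitudes are allowed as the infection level grows.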
In general, the introduction of a continuous state