ACCEPTING NETWORKS OF EVOLUTIONARY PROCESSORS:
COMPLEXITY ASPECTS
Recent Results and New Challenges
Florin Manea and Victor Mitrana
Faculty of Mathematics and Computer Science, University of Bucharest, Academiei 14, 010014, Bucharest, Romania
Keywords:
Theory of computation, Computational Complexity, Complexity classes, Evolutionary processor.
Abstract:
In this paper we survey some results reported so far on the computational and descriptional complexity of the new computational model of Accepting Networks of Evolutionary Processors (ANEPs). First we give the definitions of the computational model and its variants, then we define several ANEP complexity classes and show how some classical complexity classes, defined for Turing machines, can be characterized in this framework. After this, we briefly show how ANEPs can be used to solve NP-complete problems efficiently. Finally, we discuss a list of open problems and further directions of research which appear interesting to us.
1 INTRODUCTION
The origin of the networks of evolutionary proces-
sors (NEPs for short) is twofold. A basic architecture
for parallel and distributed symbolic processing, re-
lated to the Connection Machine (Hillis, 1985) as well
as the Logic Flow paradigm (Errico and Jesshope,
1994), consists of several processors, each of them
being placed in a node of a virtual complete graph,
which are able to handle data associated with the re-
spective node. Each node processor acts on the local
data in accordance with some predefined rules, and
then local data becomes a mobile agent which can
navigate in the network following a given protocol.
Only that data which is able to pass a filtering process
can be communicated. This filtering process may require the data to satisfy some conditions imposed by the sending processor, by the receiving processor, or by both of them. All the nodes send their data simultaneously and the receiving nodes also handle all the arriving messages simultaneously, according to some strategies, see (Fahlman et al., 1983; Hillis, 1985).
On the other hand, in (Csuhaj-Varjú and Mitrana,
2000) we consider a computing model inspired by
the evolution of cell populations, which might model
some properties of evolving cell communities at the
syntactical level. Cells are represented by strings
which describe their DNA sequences. Informally, at
any moment of time, the evolutionary system is de-
scribed by a collection of strings, where each string
represents one cell. Cells belong to species and their
community evolves according to mutations and divi-
sion which are defined by operations on strings. Only
those cells are accepted as surviving (correct) ones
which are represented by a string in a given set of
strings, called the genotype space of the species. This feature parallels the natural process of evolution. Similar ideas may be found in other bio-inspired models like membrane systems (Păun, 2000), evolutionary systems (Csuhaj-Varjú and Mitrana, 2000), or models from the distributed computing area like parallel communicating grammar systems (Păun and Sântean, 1989) and networks of parallel language processors (Csuhaj-Varjú and Salomaa, 1997).
In (Castellanos et al., 2001) we modify this con-
cept (considered from a formal language theory point
of view in (Csuhaj-Varjú and Salomaa, 1997)) in the following way, inspired by cell biology. Each pro-
cessor placed in a node is a very simple processor, an
evolutionary processor. By an evolutionary processor
we mean a processor which is able to perform very
simple operations, namely point mutations in a DNA
sequence (insertion, deletion or substitution of a pair
of nucleotides). More generally, each node may be
viewed as a cell having genetic information encoded
in DNA sequences which may evolve by local evo-
lutionary events, that is point mutations. Each node
is specialized just for one of these evolutionary oper-
ations. Furthermore, the data in each node is orga-
nized in the form of multisets of strings (each string
appears in an arbitrarily large number of copies), and
all copies are processed in parallel such that all the
possible events that can take place do actually take
place. The work (Martín-Vide and Mitrana, 2005) is
an early survey.
2 BASIC DEFINITIONS
We start by summarizing the notions used throughout
the paper. An alphabet is a finite and nonempty set
of symbols. The cardinality of a finite set A is written
card(A). Any sequence of symbols from an alphabet V is called a string (word) over V. The set of all strings over V is denoted by V^* and the empty string is denoted by ε. The length of a string x is denoted by |x|, while alph(x) denotes the minimal alphabet W such that x ∈ W^*. For the basic details regarding Turing
machines and complexity classes we refer to (Garey
and Johnson, 1979).
In the course of its evolution, the genome of an or-
ganism mutates by different processes. At the level of
individual genes the evolution proceeds by local op-
erations (point mutations) which substitute, insert and
delete nucleotides of the DNA sequence. In what fol-
lows, we define some rewriting operations that will be referred to as evolutionary operations since they may be viewed as linguistic formulations of local gene mutations. We say that a rule a → b, with a, b ∈ V ∪ {ε}, is a substitution rule if both a and b are not ε; it is a deletion rule if a ≠ ε and b = ε; it is an insertion rule if a = ε and b ≠ ε. The sets of all substitution, deletion, and insertion rules over an alphabet V are denoted by Sub_V, Del_V, and Ins_V, respectively.
Given a rule σ as above and a string w ∈ V^*, we define the following actions of σ on w:
If σ ≡ a → b ∈ Sub_V, then σ^*(w) = {ubv : ∃u, v ∈ V^* (w = uav)} if a occurs in w, and σ^*(w) = {w} otherwise.
Note that a rule as above is applied to all occur-
rences of the letter a in different copies of the
word w. An implicit assumption is that arbitrar-
ily many copies of w are available.
If σ ≡ a → ε ∈ Del_V, then
σ^*(w) = {uv : ∃u, v ∈ V^* (w = uav)} if a occurs in w, and σ^*(w) = {w} otherwise;
σ^r(w) = {u : w = ua} if w ends with a, and σ^r(w) = {w} otherwise;
σ^l(w) = {v : w = av} if w starts with a, and σ^l(w) = {w} otherwise.
If σ ≡ ε → a ∈ Ins_V, then
σ^*(w) = {uav : u, v ∈ V^* (w = uv)}, σ^r(w) = {wa}, σ^l(w) = {aw}.
α ∈ {∗, l, r} expresses the way of applying a deletion or insertion rule to a string, namely at any position (α = ∗), in the left (α = l), or in the right (α = r) end of the string, respectively. The note for the substitution operation mentioned above remains valid for insertion and deletion at any position. For every rule σ, action α ∈ {∗, l, r}, and L ⊆ V^*, we define the α-action of σ on L by σ^α(L) = ⋃_{w∈L} σ^α(w). Given a finite set of rules M, we define the α-action of M on the string w and the language L by
M^α(w) = ⋃_{σ∈M} σ^α(w) and M^α(L) = ⋃_{w∈L} M^α(w),
respectively.
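For concreteness, the following Python sketch mirrors the evolutionary operations defined above, assuming single-character symbols; the function names and the encoding of rules as (kind, a, b) triples are ours, chosen only for illustration.

```python
# Illustrative transcription of the evolutionary operations above, assuming
# single-character symbols; function names and the (kind, a, b) rule encoding
# are ours, e.g. ("sub", "a", "b"), ("del", "a", ""), ("ins", "", "a").

def sub_action(a, b, w):
    """sigma^*(w) for a substitution rule a -> b: rewrite one occurrence of a."""
    results = {w[:i] + b + w[i + 1:] for i, c in enumerate(w) if c == a}
    return results if results else {w}

def del_action(a, w, alpha="*"):
    """Deletion rule a -> epsilon applied anywhere (*), at the right or left end."""
    if alpha == "*":
        results = {w[:i] + w[i + 1:] for i, c in enumerate(w) if c == a}
        return results if results else {w}
    if alpha == "r":
        return {w[:-1]} if w.endswith(a) else {w}
    return {w[1:]} if w.startswith(a) else {w}           # alpha == "l"

def ins_action(a, w, alpha="*"):
    """Insertion rule epsilon -> a at any position, or at the right/left end."""
    if alpha == "*":
        return {w[:i] + a + w[i:] for i in range(len(w) + 1)}
    return {w + a} if alpha == "r" else {a + w}

def rules_action(rules, language, alpha="*"):
    """M^alpha(L): apply every rule of M, in mode alpha, to every word of L."""
    out = set()
    for w in language:
        for kind, a, b in rules:
            if kind == "sub":
                out |= sub_action(a, b, w)
            elif kind == "del":
                out |= del_action(a, w, alpha)
            else:                                        # kind == "ins"
                out |= ins_action(b, w, alpha)
    return out
```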
For two disjoint and nonempty subsets P and F of an alphabet V and a string w over V, we define the following two predicates:
rc_s(w; P, F) ≡ P ⊆ alph(w) ∧ F ∩ alph(w) = ∅,
rc_w(w; P, F) ≡ alph(w) ∩ P ≠ ∅ ∧ F ∩ alph(w) = ∅.
The construction of these predicates is based on context conditions defined by the two sets P (permitting contexts/symbols) and F (forbidding contexts/symbols). Informally, both conditions require that no forbidding symbol is present in w; furthermore, the first condition requires all permitting symbols to appear in w, while the second one requires at least one permitting symbol to appear in w. It is plain that the first condition is stronger than the second one.
For every language L ⊆ V^* and β ∈ {s, w}, we define:
rc_β(L, P, F) = {w ∈ L | rc_β(w; P, F)}.
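A direct transcription of the two predicates and of rc_β(L, P, F) may help fix the intuition; the sketch below is purely illustrative and the names are ours.

```python
# Direct transcription of the filtering predicates (names ours); P and F are
# disjoint sets of symbols, as above.

def alph(w):
    """Minimal alphabet of the word w: the set of symbols occurring in it."""
    return set(w)

def rc_s(w, P, F):
    """Strong condition: every permitting symbol occurs in w, no forbidding one does."""
    return P <= alph(w) and not (F & alph(w))

def rc_w(w, P, F):
    """Weak condition: some permitting symbol occurs in w, no forbidding one does."""
    return bool(P & alph(w)) and not (F & alph(w))

def rc(beta, L, P, F):
    """rc_beta(L, P, F): the words of L satisfying the chosen condition."""
    pred = rc_s if beta == "s" else rc_w
    return {w for w in L if pred(w, P, F)}

# Example: with P = {"a"} and F = {"c"}, the word "ab" passes both conditions,
# while "abc" passes neither, since it contains the forbidding symbol "c".
```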
An evolutionary processor over V is a 5-tuple (M, PI, FI, PO, FO), where:
Either M ⊆ Sub_V or M ⊆ Del_V or M ⊆ Ins_V. The set M represents the set of evolutionary rules of the processor. As one can see, a processor is “specialized” in one evolutionary operation only.
PI, FI ⊆ V are the input permitting/forbidding contexts of the processor, while PO, FO ⊆ V are the output permitting/forbidding contexts of the processor (with PI ∩ FI = ∅ and PO ∩ FO = ∅).
We denote the set of evolutionary processors over V by EP_V. Clearly, the evolutionary processor described here is a mathematical concept similar to that of an evolutionary algorithm, both being inspired by Darwinian evolution. As we mentioned
above, the rewriting operations we have considered
might be interpreted as mutations and the filtering
process described above might be viewed as a selec-
tion process. Recombination is missing but it was as-
serted that evolutionary and functional relationships
between genes can be captured by taking only local
mutations into consideration (Sankoff et al., 1992).
However, another type of processor based on recombination only, called a splicing processor, has been considered as well in a series of works; see (Manea et al., 2007a; Loos et al., 2008) and the references therein.
An accepting network of evolutionary processors (ANEP for short) is an 8-tuple Γ = (V, U, G, N, α, β, x_I, x_O), where:
V and U are the input and the network alphabet, respectively, with V ⊆ U.
G = (X_G, E_G) is an undirected graph without loops, with the set of vertices X_G and the set of edges E_G. G is called the underlying graph of the network.
N : X_G → EP_U is a mapping which associates with each node x ∈ X_G the evolutionary processor N(x) = (M_x, PI_x, FI_x, PO_x, FO_x).
α : X_G → {∗, l, r}; α(x) gives the action mode of the rules of node x on the strings existing in that node.
β : X_G → {s, w} defines the type of the input/output filters of a node. More precisely, for every node x ∈ X_G, the following filters are defined:
input filter: ρ_x(·) = rc_{β(x)}(·; PI_x, FI_x),
output filter: τ_x(·) = rc_{β(x)}(·; PO_x, FO_x).
That is, ρ_x(w) (resp. τ_x(w)) indicates whether or not the string w can pass the input (resp. output) filter of x. Moreover, ρ_x(L) (resp. τ_x(L)) is the set of strings of L that can pass the input (resp. output) filter of x.
x_I, x_O ∈ X_G are the input and the output node of Γ, respectively.
We say that card(X_G) is the size of Γ. If α and β are constant functions, then the network is said to be homogeneous. In the theory of networks some types of underlying graphs are common, like rings, stars, grids, etc. We focus here on complete ANEPs, i.e., ANEPs having a complete underlying graph.
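Purely as an illustration of the 8-tuple above (and not as an implementation used in the cited works), the components of an ANEP can be collected in a structure such as the following; the field names are ours, and since only complete underlying graphs are considered here, the edge set E_G is left implicit.

```python
# Purely illustrative container for the 8-tuple defining an ANEP (field names
# are ours; the complete underlying graph is left implicit).
from dataclasses import dataclass

@dataclass
class EvolutionaryProcessor:
    rules: set        # all of one kind: substitution, deletion or insertion
    PI: set           # input permitting symbols
    FI: set           # input forbidding symbols
    PO: set           # output permitting symbols
    FO: set           # output forbidding symbols

@dataclass
class ANEP:
    V: set            # input alphabet
    U: set            # network alphabet, with V a subset of U
    N: dict           # node name -> EvolutionaryProcessor (the mapping N)
    alpha: dict       # node name -> '*', 'l' or 'r' (action mode)
    beta: dict        # node name -> 's' or 'w' (filter type)
    x_I: str          # input node
    x_O: str          # output node

    @property
    def size(self):   # card(X_G)
        return len(self.N)
```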
A model closely related to that of ANEPs, introduced in (Drăgoi et al., 2007) and further studied in (Drăgoi and Manea, 2008), is that of Accepting Networks of Evolutionary Processors with Filtered Connections (ANEPFCs for short). An ANEPFC may be viewed as an ANEP where the filters are shifted from the nodes onto the edges. Therefore, instead of having a filter at both ends of an edge, one for each direction, there is only one filter, regardless of the direction.
Note that every ANEPFC can be immediately transformed into an equivalent ANEPFC with a complete underlying graph by adding the missing edges and associating with them filters that do not allow any string to pass. Therefore, for the sake of simplicity, the ANEPFCs we discuss in this paper have underlying graphs with useful edges only (note that such a simplification is not always possible for ANEPs).
A configuration of an ANEP/ANEPFC Γ as above is a mapping C : X_G → 2^{V^*} which associates a set of strings with every node of the graph. A configuration may be understood as the sets of strings which are present in any node at a given moment. Given a string w ∈ V^*, the initial configuration of Γ on w is defined by C_0^(w)(x_I) = {w} and C_0^(w)(x) = ∅ for all x ∈ X_G \ {x_I}.
When changing by an evolutionary step, for both ANEPs and ANEPFCs, each component C(x) of the configuration C is changed in accordance with the set of evolutionary rules M_x associated with the node x and the way of applying these rules, α(x). Formally, we say that the configuration C′ is obtained in one evolutionary step from the configuration C, written as C ⇒ C′, iff C′(x) = M_x^{α(x)}(C(x)) for all x ∈ X_G.
When changing by a communication step, in the case of ANEPs, each node processor x ∈ X_G sends one copy of each word it has, which is able to pass the output filter of x, to all the node processors connected to x, and receives all the words sent by any node processor connected with x, provided that they can pass its input filter. Formally, we say that the configuration C′ is obtained in one communication step from configuration C, written as C ⊢ C′, iff
C′(x) = (C(x) − τ_x(C(x))) ∪ ⋃_{{x,y}∈E_G} (τ_y(C(y)) ∩ ρ_x(C(y))) for all x ∈ X_G. Note that
words which leave a node are eliminated from that
node. If they cannot pass the input filter of any node,
they are lost.
Differently, when changing by a communication step in an ANEPFC, each node processor x ∈ X_G sends one copy of each word it contains to every node processor y connected to x, provided they can pass the filter of the edge between x and y. It keeps no copy of these words, but receives all the words sent by any node processor z connected with x, provided that they can pass the filter of the edge between x and z. In this case, no string is lost.
Let Γ be an ANEP (ANEPFC); the computation of Γ on the input word w ∈ V^* is a sequence of configurations C_0^(w), C_1^(w), C_2^(w), . . ., where C_0^(w) is the initial configuration of Γ defined by C_0^(w)(x_I) = {w} and C_0^(w)(x) = ∅ for all x ∈ X_G, x ≠ x_I, C_{2i}^(w) ⇒ C_{2i+1}^(w) and C_{2i+1}^(w) ⊢ C_{2i+2}^(w), for all i ≥ 0. Note that the configurations are changed by alternating evolutionary and communication steps. By the previous definitions, each configuration C_i^(w) is uniquely determined by the configuration C_{i−1}^(w). A computation halts (and it is said
to be halting) if one of the following two conditions
holds:
(i) There exists a configuration in which the set of strings existing in the output node x_O is non-empty. In this case, the computation is said to be an accepting computation.
(ii) There exist two identical configurations obtained
either in consecutive evolutionary steps or in consec-
utive communication steps.
The language accepted by the ANEP/ANEPFC Γ is L_a(Γ) = {w ∈ V^* | the computation of Γ on w is an accepting one}. We say that an ANEP/ANEPFC Γ decides the language L ⊆ V^*, and write L(Γ) = L, iff L_a(Γ) = L and the computation of Γ on every x ∈ V^* halts.
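The overall dynamics just described (evolutionary and communication steps alternating until one of the two halting conditions is met) can be summarized by the following schematic Python sketch. All names are ours; the node actions and filters are passed in as callables, a complete underlying graph is assumed, and the step bound is an artificial safeguard that is not part of the model.

```python
# Schematic sketch of one ANEP computation, following the definitions above.
# `evolve(x, words)` is expected to compute M_x^{alpha(x)} on a set of words,
# and `in_filter(x, w)` / `out_filter(x, w)` the predicates rho_x / tau_x.

def run_anep(nodes, x_I, x_O, w, evolve, in_filter, out_filter, max_steps=10_000):
    config = {x: set() for x in nodes}
    config[x_I] = {w}                                  # initial configuration C_0^(w)
    prev_evo = prev_comm = None
    for _ in range(max_steps):
        # evolutionary step: C => C'
        config = {x: evolve(x, config[x]) for x in nodes}
        if config[x_O]:
            return True                                # accepting computation
        if config == prev_evo:
            return False                               # identical consecutive evolutionary configs
        prev_evo = config
        # communication step: C |- C'.  Every word passing tau_x leaves x and
        # enters each other node y whose input filter rho_y it passes.
        new_config = {x: {u for u in config[x] if not out_filter(x, u)} for x in nodes}
        for x in nodes:
            for u in config[x]:
                if out_filter(x, u):
                    for y in nodes:
                        if y != x and in_filter(y, u):
                            new_config[y].add(u)
        config = new_config
        if config[x_O]:
            return True
        if config == prev_comm:
            return False                               # identical consecutive communication configs
        prev_comm = config
    raise RuntimeError("no decision within the artificial step bound")
```

For an ANEPFC the communication step would instead use a single filter per edge and keep no copy of the sent words in the emitting node, as described above.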
The ANEP computing model was modified in (Manea, 2005) to obtain Timed Accepting Networks of Evolutionary Processors (TANEPs for short). Such a TANEP is a triple T = (Γ, f, b), where Γ = (V, U, G, N, α, β, x_I, x_O) is an ANEP, f : V^* → N is a Turing-computable function, called clock, and b ∈ {0, 1} is a bit called the accepting-mode bit.
In this setting, the computation of a TANEP T = (Γ, f, b) on the input word w is the (finite) sequence of configurations of the ANEP Γ: C_0^(w), C_1^(w), . . . , C_{f(w)}^(w). The language accepted by T is defined as:
if b = 1 then L(T) = {w ∈ V^* | C_{f(w)}^(w)(x_O) ≠ ∅};
if b = 0 then L(T) = {w ∈ V^* | C_{f(w)}^(w)(x_O) = ∅}.
Intuitively, we may think of a TANEP T = (Γ, f, b) as a triple that consists of an ANEP, a Turing machine, and a bit. For an input string w we first compute f(w) on the tape of the Turing machine (by this we mean that on the tape there will be f(w) occurrences of the symbol 1, while the rest are blanks). Then we begin to use the ANEP Γ, and at each evolutionary or communication step of the network we delete a 1 from the tape of the Turing machine. We stop when no 1 is found on the tape. Finally, we check the accepting-mode bit, and, according to its value and the emptiness of C_{f(w)}^(w)(x_O), we decide whether w is accepted or not.
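Under the same caveats (hypothetical names, a `step` callable standing for one evolutionary or communication step of Γ), the TANEP acceptance condition can be sketched as follows.

```python
# Schematic TANEP acceptance (hypothetical names).  `step` stands for one
# evolutionary or communication step of the underlying ANEP, `initial_config`
# builds C_0^(w), `f` is the Turing-computable clock and `b` the accepting bit.

def run_tanep(step, initial_config, x_O, f, b, w):
    config = initial_config(w)
    for _ in range(f(w)):              # exactly f(w) steps, no early halting
        config = step(config)
    nonempty = bool(config[x_O])       # is C_{f(w)}^(w)(x_O) non-empty?
    return nonempty if b == 1 else not nonempty
```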
Further, we define some computational complexity measures by using ANEPs/ANEPFCs as the computing model. To this aim we consider an ANEP/ANEPFC Γ with the input alphabet V that halts on every input. The time complexity of the halting computation C_0^(x), C_1^(x), C_2^(x), . . . , C_m^(x) of Γ on x ∈ V^* is denoted by Time_Γ(x) and equals m. The time complexity of Γ is the function from N to N,
Time_Γ(n) = max{Time_Γ(x) | x ∈ V^*, |x| = n}.
In other words, Time_Γ(n) delivers the maximal number of computational steps done by Γ on an input word of length n.
For a function f : N → N and X ∈ {ANEP, ANEPFC} we define:
Time_X(f(n)) = {L | there exists a network of type X, Γ, which decides L, and n_0 such that ∀n ≥ n_0 (Time_Γ(n) ≤ f(n))}.
Moreover, we write PTime_X = ⋃_{k≥0} Time_X(n^k).
The space complexity of the halting computation C_0^(x), C_1^(x), C_2^(x), . . . , C_m^(x) of Γ on x ∈ V^* is denoted by Space_Γ(x) and is defined by the relation:
Space_Γ(x) = max_{i∈{1,...,m}} (max_{z∈X_G} card(C_i^(x)(z))).
The space complexity of Γ is the function from N to N,
Space_Γ(n) = max{Space_Γ(x) | x ∈ V^*, |x| = n}.
Thus Space_Γ(n) returns the maximal number of distinct words existing in a node of Γ during a computation on an input word of length n.
For a function f : N → N and X ∈ {ANEP, ANEPFC} we define
Space_X(f(n)) = {L | there exists a network of type X, Γ, which decides L, and n_0 such that ∀n ≥ n_0 (Space_Γ(n) ≤ f(n))}.
Moreover, we write PSpace_X = ⋃_{k≥0} Space_X(n^k).
The length complexity of the halting computation C_0^(x), C_1^(x), C_2^(x), . . . , C_m^(x) of Γ on x ∈ V^* is denoted by Length_Γ(x) and is defined by the relation:
Length_Γ(x) = max{|w| : w ∈ C_i^(x)(z), i ∈ {1, . . . , m}, z ∈ X_G}.
The length complexity of Γ is the function from N to N,
Length_Γ(n) = max{Length_Γ(x) | x ∈ V^*, |x| = n}.
Unlike the Space measure, Length_Γ(n) computes the length of the longest word existing in a node of Γ during a computation on an input word of length n.
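Given a recorded trace of a halting computation, the three measures can be read off directly, as the following illustrative helper (names ours) shows.

```python
# Illustrative helper (names ours): read Time, Space and Length off a recorded
# trace C_0, ..., C_m of a halting computation, each C_i mapping nodes to sets
# of words, exactly as in the three definitions above.

def measures(trace):
    time = len(trace) - 1                                             # Time_Gamma(x) = m
    space = max((len(words) for C in trace[1:] for words in C.values()),
                default=0)                                            # max card(C_i(z))
    length = max((len(w) for C in trace[1:] for words in C.values() for w in words),
                 default=0)                                           # longest word seen
    return time, space, length
```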
For a function f : N → N and X ∈ {ANEP, ANEPFC} we define Length_X(f(n)) = {L | there exists a network of type X, Γ, which decides L, and n_0 such that ∀n ≥ n_0 (Length_Γ(n) ≤ f(n))}. Moreover, we write PLength_X = ⋃_{k≥0} Length_X(n^k).
In the case of a TANEP T = (Γ, f, b), the time complexity definitions are the following: for the word x ∈ V^* we define the time complexity of the computation on x as the number of steps that the TANEP makes having the word x as input, Time_T(x) = f(x). Consequently, we define the time complexity of T as a partial function from N to N that verifies Time_T(n) = max{f(x) | x ∈ L(T), |x| = n}. For a function g : N → N we define:
Time_TANEP(g(n)) = {L | L = L(T) for a TANEP T = (Γ, f, 1) with Time_T(n) ≤ g(n) for all n ≥ n_0, for some n_0}.
Moreover, we write PTime_TANEP = ⋃_{k≥0} Time_TANEP(n^k).
Note that the above definitions were given for TANEPs with the accepting-mode bit set to 1. Similar definitions are given for the case when the accepting-mode bit is set to 0. For a function g : N → N we define, as in the former case:
CoTime_TANEP(g(n)) = {L | L = L(T) for a TANEP T = (Γ, f, 0) with Time_T(n) ≤ g(n) for all n ≥ n_0, for some n_0}.
We define CoPTime_TANEP = ⋃_{k≥0} CoTime_TANEP(n^k).
3 COMPLEXITY RESULTS
The main result obtained so far states that nondeterministic Turing machines can be simulated efficiently by ANEPs:
Theorem 1.
1. (Manea et al., 2008; Manea et al., 2007b) For every nondeterministic single-tape Turing machine M, with working alphabet U, deciding a language L, there exists an ANEP Γ, of size 5|U| + 8, deciding the same language L. Moreover, if M works within f(n) time, then Time_Γ(n) ∈ O(f(n)), and if M works within f(n) space, then Space_Γ(n) ∈ O(max{n, f(n)}).
2. (Drăgoi and Manea, 2008) For every nondeterministic single-tape Turing machine M, with working alphabet U, deciding a language L, there exists an ANEPFC Γ, of size 2|U| + 12, deciding the same language L. Moreover, if M works within f(n) time, then Time_Γ(n) ∈ O(f(n)), and if M works within f(n) space, then Length_Γ(n) ∈ O(max{n, f(n)}).
Basically, both results stated in this theorem are based on the following approach: we construct an ANEP/ANEPFC Γ that simulates the computation of the Turing machine M on an input word w such that each move made by the Turing machine M is simulated by Γ in a constant number of steps of the ANEP/ANEPFC; moreover, Γ halts and accepts w if and only if M does so. More precisely, Γ obtains in parallel all the IDs that M may reach in one step from its previous ID, in a constant number of evolutionary and communication steps. Once M reaches a final ID, a word enters the output node of Γ. In the case when all computations of M on w stop but M does not accept, Γ passes through two identical consecutive configurations, hence it halts without accepting. Otherwise, both M and Γ continue their computations forever. Thus, if L ∈ NTIME(f(n)), then Time_Γ(n) ∈ O(f(n)). Since all the strings processed by the network have their length bounded by the length of an ID of M plus a constant number of symbols, it also follows that if L ∈ NSPACE(f(n)), then Length_Γ(n) ∈ O(f(n)). Note that, in the case of Turing machines, the complexity classes are those defined for single-tape machines.
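The key ingredient of this simulation is the branching on instantaneous descriptions (IDs). The sketch below, only an illustration with names and an ID encoding of our own choosing, shows the set of IDs reachable in one move of a nondeterministic single-tape machine; the network computes exactly such sets, but in parallel, on string encodings of the IDs.

```python
# Illustrative sketch (names and ID encoding ours): all instantaneous
# descriptions reachable in one move of a nondeterministic single-tape machine.
# An ID is (left, state, right), the head scanning the first symbol of `right`;
# `delta` maps (state, symbol) to a set of (new_state, written_symbol, move).

BLANK = "_"

def successors(ID, delta):
    left, state, right = ID
    symbol = right[0] if right else BLANK
    rest = right[1:] if right else ""
    result = set()
    for new_state, written, move in delta.get((state, symbol), set()):
        if move == "R":
            result.add((left + written, new_state, rest))
        else:                                   # move == "L"
            head = left[-1] if left else BLANK
            result.add((left[:-1], new_state, head + written + rest))
    return result
```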
The converse of Theorem 1 holds as well:
Theorem 2. (Manea et al., 2008; Drăgoi et al., 2007) For any ANEP/ANEPFC Γ accepting the language L, there exists a single-tape Turing machine M accepting L. Moreover, M can be constructed such that it either accepts in O((Time_Γ(n))^2) computational time or in O(Length_Γ(n)) space.
The proof of this Theorem is quite straightfor-
ward: the Turing Machine chooses and simulates
(non-deterministically) a possible succession of pro-
cessing and communication steps of Γ on the input
word. If this succession of steps leads to a string that
enters the output node, then the input word is accepted.
A consequence of Theorems 1 and 2 is the follow-
ing:
Theorem 3. (Manea et al., 2008; Drăgoi et al., 2007)
1. NP = PTime_ANEP = PTime_ANEPFC.
2. PSPACE = PLength_ANEP = PLength_ANEPFC.
These results were improved from the size complexity point of view: NP equals the class of languages accepted in polynomial time by ANEPs with 24 nodes, as well as the class of languages accepted in polynomial time by ANEPFCs with 26 nodes (see (Manea and Mitrana, 2007; Drăgoi and Manea, 2008)).
Finally one can obtain a characterization of P, also
based on the result of Theorem 1:
Theorem 4. (Manea et al., 2008) A language L ∈ P iff L is decided by an ANEP/ANEPFC Γ such that there exist two polynomials P, Q with Space_Γ(n) ≤ P(n) and Time_Γ(n) ≤ Q(n).
It is worth mentioning that the last theorem does not say that the inclusion PSpace_X ∩ PTime_X ⊆ P
holds, for some X ∈ {ANEP, ANEPFC}. The following facts are not hard to see: we proved in Theorem 3 that every NP language, hence the NP-complete language 3-CNF-SAT, is in PTime_X; but it is easy to see that 3-CNF-SAT can also be decided by a deterministic Turing machine working in exponential time and polynomial space. By Theorem 1, such a machine can be simulated by an ANEP/ANEPFC that uses polynomial space (but exponential time as well). This shows that 3-CNF-SAT is in PTime_X ∩ PSpace_X, but it is not in P, unless P = NP.
TANEPs offer us the possibility to characterize
uniformly both NP and CoNP:
Theorem 5. (Manea, 2005) PTime_TANEP = NP and CoPTime_TANEP = CoNP.
As explained already, we can choose and simulate non-deterministically with a Turing machine M each one of the possible successions of processing and communication steps applied on the input string by the ANEP component of a TANEP T = (Γ, f, 1). The difference is that in this case we are interested only in the first f(x) steps of the ANEP, and there exists a polynomial g such that f(x) ≤ g(|x|) for every possible input string x. From this it follows that M works in polynomial time, and PTime_TANEP ⊆ NP. To prove that NP ⊆ PTime_TANEP we also make use of Theorem 1: for a language L ∈ NP there exists an ANEP Γ and a polynomial g such that x ∈ L if and only if x ∈ L(Γ) and Time_Γ(x) ≤ g(|x|). From this it follows that the TANEP T = (Γ, f, 1), where f(x) = g(|x|), accepts L. A similar argument proves the second part of the theorem, for TANEPs with accepting bit 0.
Theorem 5 provides a common framework for solving both problems from NP and problems from CoNP. For example, suppose that we want to solve the membership problem for a language L.
If L ∈ NP, using the proof of Theorem 1, we can construct a polynomial TANEP T = (Γ, f, 1) that accepts L.
If L ∈ CoNP, it follows that CoL ∈ NP, and using the proof of Theorem 1, we can construct a polynomial TANEP T = (Γ, f, 1) that accepts CoL. We obtain that (Γ, f, 0) accepts L.
Thus, Theorem 5 proves that the languages (the decision problems) that are efficiently recognized (solved) by TANEPs (with both 0 and 1 as possible values for the accepting-mode bit) are those from NP ∪ CoNP.
4 PROBLEM SOLVING
Recall that a possible correspondence between decision problems and languages can be established via an en-
coding function which transforms an instance of a
given decision problem into a word, see, e.g., (Garey
and Johnson, 1979). We say that a decision problem
P is solved in time O( f(n)) by ANEPs/ANEPFCs if
there exists a family G of ANEPs/ANEPFCs such that
the following conditions are satisfied:
1. The encoding function of any instance p of P hav-
ing size n can be computed by a deterministic Tur-
ing machine in time O( f(n)).
2. For each instance p of size n of the problem one can effectively construct, in time O(f(n)), an ANEP/ANEPFC Γ(p) ∈ G which decides, again in time O(f(n)), the word encoding the given instance. This means that the word is accepted if and only if the solution to the given instance of the problem is “YES”. This effective construction is called an O(f(n)) time solution to the considered problem.
If an ANEP/ANEPFC Γ ∈ G constructed above decides the language of words encoding all instances of the same size n, then the construction of Γ is called a uniform solution. Intuitively, a solution is uniform if, for problem size n, we can construct a unique ANEP/ANEPFC solving all instances of size n, taking the (reasonable) encoding of the instance as “input”.
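Condition 1 above only asks for an efficiently computable encoding of instances into words. As a purely hypothetical example (the concrete alphabet and encoding are ours and are not taken from the cited solutions), a 3-CNF formula could be flattened into a word as follows.

```python
# Hypothetical encoding function in the spirit of condition 1 above: a 3-CNF
# formula, given as a list of clauses of signed variable indices, is flattened
# into a word over the alphabet {+, -, x, 0, 1, |, &, (, )}.  The concrete
# encoding is ours, chosen only to show that it is computable in linear time.

def encode_3cnf(clauses):
    parts = []
    for clause in clauses:
        literals = []
        for lit in clause:
            sign = "-" if lit < 0 else "+"
            literals.append(sign + "x" + bin(abs(lit))[2:])   # variable index in binary
        parts.append("(" + "|".join(literals) + ")")
    return "&".join(parts)

# Example: (x1 or not x2 or x3) and (not x1 or x2 or x4)
# encode_3cnf([[1, -2, 3], [-1, 2, 4]]) == "(+x1|-x10|+x11)&(-x1|+x10|+x100)"
```

Such an encoding is clearly computable by a deterministic Turing machine in time linear in the size of the formula, so condition 1 is easily met.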
In (Manea et al., 2005) we propose a linear time solution for the 3-CNF-SAT and Hamiltonian Path problems, using ANEPs; also, in (Manea et al., 2007b) we propose a linear time solution for the Vertex-Cover problem. In (Drăgoi et al., 2007) we propose another linear time solution for the Vertex-Cover problem, solved this time by ANEPFCs.
5 CHALLENGES
We presented new characterizations of some well-
known complexity classes like P, NP, co-NP,
PSPACE based on ANEPs and ANEPFCs. We also
obtained upper bounds on the size of these networks.
However, we do not know how close to the optimal
size these bounds are. In our view, a comparison
with other computational models might lead to better
bounds.
Although we presented a characterization of
PSPACE in terms of a complexity measure, namely
Length, defined for ANEPs and ANEPFCs, this mea-
sure is rather artificial as it can never be smaller than
the length of the input word. We consider that another
measure, able to better capture the similarity to the space measure defined for Turing machines, is needed. Such a measure might shed new light on the characterizations reported here.
On the other hand, the measure Space counts the
maximum number of words existing in a node at a
given step of a computation. This measure might also
be useful though it seems to be less important from
a biological point of view as an exponential number
of DNA molecules can be produced by a linear num-
ber of Polymerase Chain Reaction (PCR) steps. One
may remark that a limitation on the Space complexity
of a computation may be translated as a limitation of
the intrinsic power of this computing model to simu-
late by massive parallelism the nondeterminism of se-
quential machines. Another direction of research that appears to be of interest is the exact role that filters, evolutionary operations, and underlying structures play with respect to the computational power of ANEPs, as well as their complexity. A first step was made in (Dassow and Mitrana, 2008), where ANEPs without insertion nodes were considered. An exhaustive study in this direction is under way.
A very preliminary work regarding the role of filters is (Dassow et al., 2006), where generating NEPs without filters are investigated. However, this work, which reports only partial results, is devoted to an extreme case of the generating model. Several variants in between might also be considered.
All the results presented here are essentially based
on simulations of Turing machines. This is actu-
ally valid for almost all bio-inspired computational
models. Even the universal ANEPs are obtained
via simulations of Turing machines. In some sense,
these simulations are not quite natural as all the
bio-inspired models are mainly based on a possibly huge parallelism, while the Turing machine is a sequential model. Therefore, direct simulations of parallel models, as well as universal ANEPs derived directly from ANEPs, are of definite interest.
Last but not least, our presentation was not concerned with practical matters regarding the possible biological or electronic implementation of these networks. Some simulations on different computers, under different software, have been reported; see, e.g., (Gómez, 2008). Also, some preliminary works on designing electronic components that could implement some aspects of ANEPs are under way.
REFERENCES
Castellanos, J., Martín-Vide, C., Mitrana, V., Sempere, J.
(2001) Solving NP-complete Problems with Networks
of Evolutionary Processors, In International Work-
Conference on Artificial and Natural Neural Networks
(IWANN 2001), LNCS 2084, 621–628. Springer.
Csuhaj-Varjú, E. and Salomaa, A. (1997) Networks of Par-
allel Language Processors. In New Trends in Formal
Languages, LNCS 1218, 299 - 318. Springer.
Csuhaj-Varjú, E., Mitrana, V. (2000). Evolutionary Sys-
tems: A Language Generating Device Inspired by
Evolving Communities of Cells. Acta Informatica, 36,
913 – 926. Springer.
Dassow, J. and Mitrana, V. (2008). Accepting Networks of
Non-Inserting Evolutionary Processors, In Proceed-
ings of NCGT 2008: Workshop on Natural Computing
and Graph Transformations, 29–42.
Dassow, J., Martín-Vide, C., Mitrana, V. (2006). Free Gen-
erating Hybrid Networks of Evolutionary Processors,
In Formal Models, Languages and Applications Series
in Machine Perception and Artificial Intelligence 66,
65–78. World Scientific.
Drăgoi, C. and Manea, F. (2008). On the Descriptional
Complexity of Accepting Networks of Evolutionary
Processors with Filtered Connections. International
Journal of Foundations of Computer Science, 19:5,
1113 – 1132. World Scientific.
Drăgoi, C., Manea, F., Mitrana, V. (2007). Accepting Net-
works of Evolutionary Processors With Filtered Con-
nections. Journal of Universal Computer Science,
13:11, 1598 – 1614. Springer.
Errico, L., and Jesshope, C. (1994). Towards a New Ar-
chitecture for Symbolic Processing, In Artificial In-
telligence and Information-Control Systems of Robots
’94, 31–40. World Scientific.
Fahlman, S. E., Hinton, G. E., Sejnowski, T. J. (1983) Mas-
sively Parallel Architectures for AI: NETL, THISTLE
and Boltzmann Machines, In Proc. of the National
Conference on Artificial Intelligence, 109–113. AAAI
Press.
Garey, M., and Johnson, D. (1979). Computers and
Intractability: A Guide to the Theory of NP-
completeness, San Francisco, CA: W. H. Freeman.
Gómez Blas, N. (2008). Redes de Procesadores Evolutivos: Autoaprendizaje de Filtros en las Conexiones (Networks of Evolutionary Processors: Self-Learning of Filters in the Connections), PhD Thesis, Polytechnic University of Madrid (in Spanish).
Hillis, W. D. (1985). The Connection Machine. MIT Press,
Cambridge.
Loos, R., Manea, F., Mitrana, V. (2008) On Small, Reduced,
and Fast Universal Accepting Networks of Splicing
Processors, in press Theoretical Computer Science,
doi:10.1016/j.tcs.2008.09.048. Elsevier.
Manea, F. (2005). Timed Accepting Hybrid Networks of
Evolutionary Processors, In Artificial Intelligence and
Knowledge Engineering Applications: A Bioinspired
Approach, LNCS 3562, 122 – 132. Springer.
Manea, F., Martín-Vide, C., Mitrana, V. (2005). Solving
3CNF-SAT and HPP in Linear Time Using WWW,
In Machines, Computations and Universality, LNCS
3354, 269 – 280. Springer.
Manea, F., Martín-Vide, C., Mitrana, V. (2007a) Accept-
ing Networks of Splicing Processors: Complexity Re-
sults, Theoretical Computer Science, 371:1-2, 72–82.
Elsevier.
Manea, F., Martín-Vide, C., Mitrana, V. (2007b). On the Size
Complexity of Universal Accepting Hybrid Networks
of Evolutionary Processors, Mathematical Structures
in Computer Science, 17:4, 753 – 771. Cambridge
University Press.
Manea, F., and Mitrana, V. (2007). All NP-problems Can Be
Solved in Polynomial Time by Accepting Hybrid Net-
works of Evolutionary Processors of Constant Size,
Information Processing Letters, 103:3, 112 – 118. El-
sevier.
Manea, F., Margenstern, M., Mitrana, V., Perez-Jimenez,
M. J. (2008). A New Characterization of NP, P, and
PSPACE With Accepting Hybrid Networks of Evo-
lutionary Processors, in press Theory of Computing
Systems, doi:10.1007/s00224-008-9124-z. Springer.
Martín-Vide, C. and Mitrana, V. (2005) Networks of Evo-
lutionary Processors: Results and Perspectives, In
Molecular Computational Models: Unconventional
Approaches, 78-114. Idea Group Publishing.
Păun, G. and Sântean, L. (1989) Parallel Communicating
Grammar Systems: The Regular Case, Annals of Uni-
versity of Bucharest, Ser. Matematica-Informatica 38,
55 - 63.
Păun, G. (2000) Computing with Membranes, Journal of Computer and System Sciences 61, 108 - 143. Elsevier.
Sankoff, D. et al. (1992) Gene Order Comparisons for Phy-
logenetic Inference: Evolution of the Mitochondrial
Genome, In Proceedings of the National Academy of
Sciences of the United States of America 89, 6575–
6579.