NETWORKS OF EVOLUTIONARY PROCESSORS AS NATURAL
LANGUAGE PARSERS
Gemma Bel-Enguix, M. Dolores Jiménez-López, Robert Mercaş and Alexander Perekrestenko
GRLMC, Rovira i Virgili University, Pl. Imperial Tàrraco 1, Tarragona, Spain
Keywords: Networks of Evolutionary Processors, Natural Language Processing, Parsing.
Abstract:
Networks of Evolutionary Processors (NEPs), introduced in Castellanos et al. (2001), are a new computing mechanism directly inspired by the behaviour of cell populations. In this paper, we explore the possibility of using NEPs for modelling natural language – an entity generated in parallel by a modular architecture – and especially syntax – a modular device of specialized processors inside the modular construct of language. An implementation of NEPs for the parsing of simple structures is suggested. Moreover, we introduce the concepts of parallel processing and linearity in the formalization of NEPs as accepting devices, and suggest a new line of research by applying these networks to natural language.
1 INTRODUCTION
Networks of Evolutionary Processors (NEPs) are a new computing mechanism directly inspired by the behaviour of cell populations. Every cell is described by a set of words (DNA) evolving by mutations, which are represented by operations on these words. At the end of the process, only the cells with correct strings will survive. In spite of the biological inspiration, the architecture of the system is directly related to the Connection Machine (Hillis, 1985) and the Logic Flow paradigm (Errico and Jessope, 1994). Moreover, the global framework for the development of NEPs has to be completed with the biological background of DNA computing (Păun et al., 1998), membrane computing (Păun, 2000) – which also focuses on the behaviour of cells – and especially with the theory of grammar systems (Csuhaj-Varjú et al., 1994), which shares with NEPs the idea of several devices working together and exchanging results.
The first precedents of NEPs as generating devices can be found in (Csuhaj-Varjú and Salomaa, 1997) and (Csuhaj-Varjú and Mitrana, 2000). The topic was introduced in (Castellanos et al., 2001) and further developed in (Martín-Vide et al., 2003; Castellanos et al., 2003; Castellanos et al., 2005).
NEPs can be defined as a graph whose nodes are processors performing some very simple operations on strings and sending the resulting strings to other nodes. Every node has filters that block some strings from being sent and/or received. This functioning allows the specialization of each processor, which is an interesting feature for natural language processing.
In this paper, we propose to use Networks of Evolutionary Processors (NEPs) as a modular description device in linguistics. The idea of using HNEPs as recognizers is not original: a preliminary approach to accepting NEPs was already introduced in (Margenstern et al., 2004) and developed in a series of contributions (Castellanos et al.; Manea, 2004; Manea and Mitrana, 2007). Nevertheless, this is the first attempt to deal with natural language processing issues from a NEPs perspective.
As was shown in (Castellanos et al., 2003), HNEPs with regular filters are Turing-equivalent. This fact suggests that they can be used as a formal base for a "programming language" for processing natural language information on all levels. In this context, (H)NEPs have the following advantages:

- the basic structures of NEPs – nodes, filters and the graph-based structure – can be used in a very natural way for modelling different kinds of linguistic issues;
- a HNEP simulator can be easily implemented as a software environment for constructing language-processing modules, and in particular, syntactic parsers;
- we can specify the output format of such a parser according to the needs of the setting for which it is being developed.
[Bel-Enguix G., Jiménez-López M., Mercaş R. and Perekrestenko A. (2009). Networks of Evolutionary Processors as Natural Language Parsers. In Proceedings of the International Conference on Agents and Artificial Intelligence, pages 619-625. DOI: 10.5220/0001809306190625. Copyright © SciTePress.]

In this paper, we introduce a formalization of NEPs suitable for natural language purposes. We illustrate the applicability of this framework for linguistic processing with an example of a NEP that recognizes simple syntactic structures using linear input and output.
The paper is organized as follows. Section 2 presents the general definition of NEPs. In Section 3, some of the main features of NEPs are discussed. In Section 4, a new variant of NEPs for sentence recognition is introduced. An example of the functioning of this new model is shown in Section 5. Finally, Section 6 is devoted to conclusions and directions for future work.
2 NEPs: DEFINITION
Following (Castellanos et al., 2003), we introduce the basic definition of NEPs.
Definition 1. A Network of Evolutionary Processors of size n is a construct:

Γ = (V, N_1, N_2, ..., N_n, G),

where:

- V is an alphabet and, for each 1 ≤ i ≤ n, N_i = (M_i, A_i, PI_i, PO_i) is the i-th evolutionary node processor of the network. The parameters of every processor are:
  - M_i is a finite set of evolution rules of one of the following forms only:
    i. a → b, where a, b ∈ V (substitution rules),
    ii. a → ε, where a ∈ V (deletion rules),
    iii. ε → a, where a ∈ V (insertion rules).
  - A_i is a finite set of strings over V. The set A_i is the set of initial strings in the i-th node.
  - PI_i and PO_i are subsets of V* representing the input and the output filter, respectively. These filters are defined by the membership condition, namely a string w ∈ V* can pass the input filter (the output filter) if w ∈ PI_i (w ∈ PO_i).
- G = ({N_1, N_2, ..., N_n}, E) is an undirected graph called the underlying graph of the network. The edges of G, that is, the elements of E, are given in the form of sets of two nodes. The complete graph with n vertices is denoted by K_n.
A configuration of a NEP is an n-tuple C = (L_1, L_2, ..., L_n), with L_i ⊆ V* for all 1 ≤ i ≤ n. It represents the sets of strings which are present in each node at a given moment.

A given configuration of a NEP can change either by an evolutionary step or by a communication step. When changing by an evolutionary step, each component L_i of the configuration is changed in accordance with the evolutionary rules associated with the node i. The change in the configuration by an evolutionary step is written as C_1 ⇒ C_2.

When changing by a communication step, each node processor N_i sends all copies of the strings it has that are able to pass its output filter to all the node processors connected to N_i, and receives all copies of the strings sent by any node processor connected with N_i, if they can pass its input filter. The change in the configuration by a communication step is written as C_1 ⊢ C_2.
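The two kinds of steps are easy to prototype. The sketch below is our own minimal Python (the class and function names are not from the paper), with filters modelled as membership predicates over V*; as a simplification, a node keeps the original string alongside the results of rule application:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Node:
    rules: list[tuple[str, str]]   # (a, b): substitution; b == '' deletes a; a == '' inserts b
    strings: set[str]
    pi: Callable[[str], bool]      # input filter PI: w may enter if pi(w)
    po: Callable[[str], bool]      # output filter PO: w may leave if po(w)

def apply_rule(word: str, a: str, b: str) -> set[str]:
    """All strings obtainable by one application of a -> b at any position."""
    if a == "":                    # insertion rule: put b anywhere
        return {word[:i] + b + word[i:] for i in range(len(word) + 1)}
    return {word[:i] + b + word[i + len(a):]
            for i in range(len(word)) if word.startswith(a, i)}

def evolutionary_step(nodes: list[Node]) -> None:
    for n in nodes:
        new = set(n.strings)       # simplification: originals are kept as well
        for w in n.strings:
            for a, b in n.rules:
                new |= apply_rule(w, a, b)
        n.strings = new

def communication_step(nodes: list[Node], edges: set[frozenset[int]]) -> None:
    outgoing = {i: {w for w in n.strings if n.po(w)} for i, n in enumerate(nodes)}
    for i, n in enumerate(nodes):
        n.strings -= outgoing[i]   # strings passing PO leave their node...
    for i, n in enumerate(nodes):
        for j in range(len(nodes)):
            if i != j and frozenset({i, j}) in edges:
                # ...and enter every connected node whose PI they pass
                n.strings |= {w for w in outgoing[j] if n.pi(w)}
```

For instance, a node holding "ab" with the substitution rule a → c produces "cb" in one evolutionary step; if its PO accepts words containing "c" and a connected node's PI accepts everything, "cb" migrates there in the following communication step.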
3 NEPs FOR MODELLING NATURAL LANGUAGE
This formal construct can provide a good framework for attempting a new description and formalization of natural language. Three features of NEPs are crucial for their application to language processing and, especially, to parsing technologies: NEPs are specialized, modular communicating systems that work in parallel.

NEPs are modular devices because they can be described as distributed systems of contributing nodes, each of them carrying out just one type of operation. Every node should be defined depending on the specific domain we aim to tackle. Moreover, the processors of the network are specialized, since each one is designed for a specific task, in such a way that the final success of the system depends on the correct working of every agent and on the correct interaction between them.
It is a commonplace belief in cognitive science that complex computational systems are at least weakly decomposable into components (Fodor, 1983). In general, modular theories in cognitive science propose a number of independent but interacting cognitive 'modules' that are responsible for each cognitive domain.
The theory of modularity is also present in linguistic approaches. In fact, the modular approach to grammar has been shown to have important consequences for the study of language (cf. (Sadock, 1991)). This has led many grammatical theories to use modular models. The idea of having a system made up of several independent components (syntax, semantics, phonology, morphology, etc.) seems to be a good choice to account for linguistic issues.
Several authors have also defended internal modularity in the different dimensions of grammar (Everaert et al., 1988; Harnish and Farmer, 1984; Weinberg, 1987). In (Crocker, 1991), for example, a highly modular organization of syntax is suggested, where modules are determined by the representations they recover.
Communication is the feature that accounts for the social competences of modules. By means of communication, agents can interact to achieve common goals, work for their own interests, or even isolate themselves. Although different processes and operations are done in parallel, NEPs also need to define some type of coordination between nodes, since the alternating steps of evolution and communication have to be synchronized. Node communication is said to be:

- graph-supported, and
- filter-regulated.
We say that the communication among nodes is graph-supported because the edges of the graph govern its social interactions. Therefore, if there are non-connected nodes in a NEP, these nodes do not communicate with other processors.

The fact that the communication is filter-regulated means that it is driven by means of input and output filters.
- The goal of the input filter is to control the information or structures entering the node. This helps the specialization, by selecting the strings/structures the node can process, and protects the module from possibly harmful items.
- With its output filter, the node selects the information it wants to share and also when it wants to share it.
As regards the functioning of NEPs, the main feature to be highlighted is parallelism. By means of parallelism, different tasks can be performed at the same time by different processors. Some linguistic processes, as well as language generation in general, are considered to be parallel. For apparently sequential interactions (i.e., dialogue), parallelism allows working with multi-modal exchanges.
Therefore, taking the four main modules usually considered in linguistics – syntax, semantics, phonology, morphology – and considering the edges of the underlying graph as a way for communicating or not communicating, we can draw the simple scheme of a "language generation NEP" that is shown in Figure 1. Ph stands for phonetics, M represents morphology, Sy is syntax and Se refers to semantics. The semantic node only communicates with the morphological and the syntactic ones, as there does not seem to be any interaction between semantics and phonetics.
Figure 1: Modular Linguistic NEP. (The original figure shows the nodes Ph, M, Sy and Se connected as an undirected graph.)
4 NEPs FOR PARSING
In the sequel we will try to make some modifications to the computational definition of NEPs so that they can work as parsers of natural language. For the sake of simplicity, we establish a methodological restriction: we will focus on simple sentences with the shape [S V O], where S → [NP] and O → [NP], that is, sentences with the form [[NP] V [NP]]. Our purpose is to check whether this mechanism is powerful and simple enough to be used as a model for parsing complex linguistic strings.
As has already been highlighted, the model presented here takes advantage of the main features of NEPs: modularity, specialisation and parallelism. Adopting these characteristics in the modelling of our device, one can improve its efficiency and decrease its complexity.
Modularity and specialisation can be useful because they allow designing processors which only accept, recognize and label a single type of syntactic phrase or function. Such a strategy eases the initial task of classifying the lexical pieces according to their grammatical category. By parallelism, all lexical items will be taken and analyzed at the same time, and afterwards, grammatical units can be packed in different processors.
In general, the system has to perform two main tasks: a) to recognize correct strings, as an automaton could do, and b) to provide an output with labelled elements that account for the syntactic structure of the input sentence.
To be able to accept and analyze a sentence, the NEP has to perform the following steps: 1) to recognize every word, 2) to make a map of its linguistic features, 3) to gather the non-terminal units in phrases, establishing their beginning and end, and finally 4) to give a linear structural version of the whole sentence.
In order to implement such a NEP, we propose several types of specialized nodes:

- nodes specialized in accepting lexical items, recognizing their grammatical categories;
- nodes specialized in accepting and operating with different types of phrases, i.e., nominal phrases (NP), prepositional phrases (PP);
- nodes for accepting sentences.
The structure of the graph is given by the class of sentences to be processed. Many different NEPs could be designed for the parsing of the same type of syntactic structures, but we are looking for the best NEP for every structure in terms of number of nodes and computational efficiency.
Moreover, since the structure we are working with has just two types of sub-structures, namely NP and V, at least three specialized nodes are needed: one for the sub-components of NP (just one element in a minimal NP), one for the recognition and labelling of NP, and one for the recognition and analysis of V. The sub-components of NP include nodes for lexical units: N for a noun and ART for an article. Since a node for packing the final output is also necessary, a graph with at least four nodes has to be designed.
Our device will consist of the following nodes: a) a node for recognizing articles, ART; b) three processors for labelling nouns, N_c, N_p, N_v; c) a node for recognizing and analyzing nominal phrases, NP; d) a node for analyzing verbal structures, V; and e) a node for labelling sentences, P. Besides these nodes, we will also need an output node.
In the input filter of the specialized nodes, the only elements accepted will be those that can be part of the phrases they can recognize. In the output filter of these nodes, only labelled phrases will be allowed to pass and be sent to the other nodes.
To perform the recognition process, two types of alphabets are necessary: V, the alphabet of the input symbols, which are terminal strings, i.e., lexical items, and Σ, the alphabet of grammatical type symbols – which correspond to grammatical categories – plus feature symbols. For the simple sentences we are dealing with, we recognize several symbols belonging to Σ* which are needed to process the sentence: [ ]_N, [ ]_V, [ ]_ART, [ ]_NP.
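The labelled forms over V ∪ Σ can be represented as plain strings; a sketch of a hypothetical labelling helper (the function name is ours):

```python
def label(item: str, tag: str) -> str:
    """Wrap a lexical item or an already-labelled phrase in a tagged
    bracket pair, yielding a linear string over V and Sigma."""
    return f"[{item}]{tag}"
```

Labels nest naturally, so a phrase label can wrap previously labelled words, e.g. label(label("the", "a3") + label("boy", "nc"), "fs") gives "[[the]a3[boy]nc]fs".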
In order to model the subject-verb agreement, some of these symbols will be provided with morphological markers. First of all, two different marks will be established for the category [V] in order to distinguish between the two different forms of the English verb in the present tense: s stands for the third person singular form, and p for the general form. In this way, when the node receives a lexical item x, it analyzes it and inserts the grammatical type symbols, giving us [x]_vs or [x]_vp.

In order to fulfill the agreement with the verb, [N] has to be recognized with the same parameters as the verb, and moreover with some that give us the agreement with the article, {c, p, v}. On the other hand, in order to model agreement inside a phrase, we introduce separate nodes for the same phrase with different morphological characteristics.
For distinguishing the article "a" from the articles "an" and "the", the feature [ART] will be [ ]_a1 for "a", [ ]_a2 for "an" and [ ]_a3 for "the", where the absence of any symbol means the article works for both singular and plural. If the agreement is not accomplished inside an NP, or between the NP at the left of the verb and the verb itself, then the sentence will not be recognized.
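These agreement conditions can be stated compactly in code. The helper functions below are our own (hypothetical) names; the feature letters are the ones just introduced (a1/a2/a3 for articles, c/v/p for nouns, s/p for verbs):

```python
# Sketch of the agreement markers of this section.
# Articles: a1 = "a", a2 = "an", a3 = "the"; nouns: c = 3rd-sg consonant-initial,
# v = 3rd-sg vowel-initial, p = plural or non-3rd-person; verbs: s = 3rd-sg, p = general.

def np_agrees(art_feat: str, n_feat: str) -> bool:
    """Determiner-noun agreement: "a" only before c-nouns, "an" only before
    v-nouns, "the" before anything (what the NP output filter enforces)."""
    if art_feat == "a1":
        return n_feat == "c"
    if art_feat == "a2":
        return n_feat == "v"
    return True                        # a3 ("the") is unrestricted

def sv_agrees(n_feat: str, v_feat: str) -> bool:
    """Subject-verb agreement: third person singular subjects (c, v) require
    the s-form of the verb; p-subjects require the p-form."""
    return (n_feat in ("c", "v")) == (v_feat == "s")
```

So np_agrees("a2", "c") is False (no *"an boy"), and sv_agrees("p", "s") is False (no *"apples eats").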
For delimiting the phrases as a group of several elements belonging to Σ, and sentences as a group of phrases, we introduce in our NEP a rule that will isolate these symbols from the ones belonging to the alphabet V.
With the elements we have just explained, a NEP
for sentence analysis can be defined as follows:
Definition 2. A NEP for the analysis of simple sentences [[NP] V [NP]] is a general structure:

Γ = (V, Σ, {ART, N_v, N_c, N_p, V, NP, P, Out}, G)

where:

- V is the input vocabulary,
- Σ is the grammatical type vocabulary,
- {ART, N_v, N_c, N_p, V, NP, P, Out} are the node processors N_1, N_2, ..., N_7, Out of the network, with the following definition. For every node N_i = (M_i, A_i, PI_i, PO_i):
  - M_i is the finite set of evolution rules of the form:
    i. a → [a]_n, where a ∈ {V ∪ Σ}* (insertion rule, with n indicating the indexes of the node),
    ii. a → ε, where a ∈ V (deletion rule, in which all elements of V are erased), or
    iii. a, b → [ab]_n, where a, b ∈ V ∪ Σ (adjunction rule, in which two elements coming from different nodes are wrapped together);
  - A_i is the set of strings over V in the initial configuration,
  - PI_i are the input filters over {V ∪ Σ}*, and
  - PO_i are the output filters over {V ∪ Σ}*.
- Out is the output node; it has a special input filter that compares the initial phrase shuffled with Σ* with the words that are trying to enter the node (Inp ⧢ Σ* = inputword, where ⧢ denotes the shuffle operation).
- G = (V, Ev) is the network graph, where:
  - V = {N_1, N_2, ..., N_7, Out} are its nodes, and
  - Ev = (N_1 N_6, N_2 N_6, N_3 N_6, N_4 N_6, N_5 N_6, N_6 N_7, N_7 Out) are the arcs.
The computation works almost like in a regular NEP, combining evolutionary steps and communication steps. Moreover, the system is totally parallel, even in the input mechanism, and every node applies, during evolutionary steps, as many rules as it can.

The system stops when no operation can be performed by the NEP or when the Out node receives an input. In the latter case, the recognition process ends, the initial phrase is accepted as correct, and its structure is displayed.
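This halting condition is straightforward to express as a driver loop. A sketch under our own (hypothetical) conventions: the configuration is a dict from node names to string sets, and each step function mutates it in place:

```python
# Run until the Out node receives a string (accept) or a full round of steps
# changes nothing (reject). max_steps guards against non-halting programs.
def run(step_fns, config, out_key="Out", max_steps=1000) -> bool:
    for _ in range(max_steps):
        before = {k: frozenset(v) for k, v in config.items()}
        for step in step_fns:          # e.g. an evolutionary then a communication step
            step(config)
        if config[out_key]:            # Out received an input: string accepted
            return True
        if {k: frozenset(v) for k, v in config.items()} == before:
            return False               # nothing changed: no operation applicable
    return False
```

Freezing the configuration before each round makes "no operation can be performed" a simple equality test afterwards.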
5 AN EXAMPLE
In this section, a NEP will be implemented for the recognition of sentences [[NP] V [NP]]. The example we use is the sentence "The boy eats an apple".

Our NEP contains eight nodes. The alphabet Σ consists of the symbols {[, ], 1, 2, 3, a, c, f, n, p, s, v} that indicate the type of the structure.
The general definition of the system is as follows:

Γ = (V, Σ, {ART, N_v, N_c, N_p, V, NP, P, Out}, G)

where:

V = {apple, apples, boy, boys, eat, eats, a, an, the}

Σ = {[, ], 1, 2, 3, a, c, f, n, p, s, v}

Out = {{(Inp ⧢ x) → x, where x ∈ Σ*}, ∅, {The boy eats an apple} ⧢ Σ*, Σ*}

ART = {(a → [a]_a1, an → [an]_a2, the → [the]_a3), {a, an, the}, ∅, {[a]_a1, [an]_a2, [the]_a3}}

N_v = {(apple → [apple]_nv), {apple}, {[(V ∪ Σ)*]_nv}, {[(V ∪ Σ)*]_nv}}, that is, all third person singular nouns starting with a vowel.

N_c = {(boy → [boy]_nc), {boy}, {[(V ∪ Σ)*]_nc}, {[(V ∪ Σ)*]_nc}}, that is, all third person singular nouns starting with a consonant.

N_p = {(apples → [apples]_np, boys → [boys]_np, I → [I]_np, you → [you]_np), {apples, boys}, {[(V ∪ Σ)*]_np}, {[(V ∪ Σ)*]_np, [I]_np, [you]_np}}, that is, all plural nouns and singular ones that are not third person.

V = {(eats → [eats]_vs, eat → [eat]_vp), {eat, eats}, {[(V ∪ Σ)*]_vs, [(V ∪ Σ)*]_vp}, {[(V ∪ Σ)*]_vs, [(V ∪ Σ)*]_vp}}

NP = {([x]_ax1 [y]_ny1 → [[x]_ax1 [y]_ny1]_fy1, [y]_np → [[y]_np]_fp), ∅, (V ∪ Σ)*, (V ∪ Σ)* \ {[[V ∪ Σ]_a2 [V ∪ Σ]_n{c,p}]_Σ*, [[V ∪ Σ]_a1 [V ∪ Σ]_n{v,p}]_Σ*}}

P = {([x]_nx1 [y]_vx1 [z]_nz1 → [[x]_nx1 [y]_vx1 [z]_nz1]_x1), ∅, [(V ∪ Σ)*]_fx [(V ∪ Σ)*]_vx [(V ∪ Σ)*]_fy, (V ∪ Σ)*}

G = (V, Ev), where V = {ART, N_v, N_c, N_p, V, NP, P, Out} and Ev = (ART NP, N_v NP, N_c NP, N_p NP, V P, NP P, P Out).

Here x, y, z, x_1, y_1, z_1 ∈ Σ ∪ V.
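The graph G of this example is small enough to write out directly as an adjacency structure; a sketch (node names as plain strings, our own representation):

```python
# The arcs Ev of the example system, written as unordered edges.
EDGES = {
    frozenset(e) for e in [
        ("ART", "NP"), ("Nv", "NP"), ("Nc", "NP"), ("Np", "NP"),
        ("V", "P"), ("NP", "P"), ("P", "Out"),
    ]
}

def neighbours(node: str) -> set[str]:
    """Nodes a given node exchanges strings with during communication steps."""
    return {other for e in EDGES if node in e for other in e - {node}}
```

NP is the hub for phrase building: its neighbours are the four lexical nodes plus P, while Out is reachable only from P.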
In order to understand how our example program works, we will "run" it on two different sentences. The first one will be the phrase "The boy eats an apple". Before the first computation step, ART will contain the words the and an, N_c will contain the word boy, N_v the word apple, and V the verb eats. The second step is a communication one. The third step, which is a computation step, will produce in NP the combinations {[the]_a3[boy]_nc, [an]_a2[boy]_nc, [the]_a3[apple]_nv, [an]_a2[apple]_nv}. Using the output filter, NP is able to exclude the second and the third combinations from the next computation. Hence, after the fifth computation step we will have in P the string set {[[the]_a3[boy]_nc]_fs [eats]_vs [[an]_a2[apple]_nv]_fs, [[an]_a2[apple]_nv]_fs [eats]_vs [[the]_a3[boy]_nc]_fs}. The last computation step, the seventh one, is going to output [[ ]_a3[ ]_nc]_fs [ ]_vs [[ ]_a2[ ]_nv]_fs, which is the structure of the input phrase. This implies that the phrase was correct.
The second phrase we will look at is "The apples eats a boy". It is easy to observe that this phrase is wrong. Before the first computation step, ART will contain the words the and a, N_c the word boy, N_p the word apples, and V the verb eats. The third computation step will produce in NP the string set {[[the]_a3[apples]_np]_fp, [[the]_a3[boy]_nc]_fs, [[a]_a1[boy]_nc]_fs, [[apples]_np]_fp}. By looking at the verb, we observe that in the fifth computation step we will get the string set {[[the]_a3[boy]_nc]_fs [eats]_vs [[the]_a3[apples]_np]_fp, [[a]_a1[boy]_nc]_fs [eats]_vs [[the]_a3[apples]_np]_fp, [[the]_a3[boy]_nc]_fs [eats]_vs [[apples]_np]_fp, [[a]_a1[boy]_nc]_fs [eats]_vs [[apples]_np]_fp}. As no more computation steps are possible and no output is produced, the automaton stops and concludes that the input phrase was wrong.
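The two runs above can be reproduced with a toy recogniser that collapses the NEP's node pipeline into ordinary functions (a sketch with our own names and a lexicon covering exactly the vocabulary V of this example; it checks the same agreement filters and produces a linear labelled structure):

```python
# Toy recogniser mimicking the example NEP: label words, build agreeing NPs,
# then match [NP V NP] with subject-verb agreement.
ART = {"a": "a1", "an": "a2", "the": "a3"}
NOUN = {"boy": "c", "apple": "v", "boys": "p", "apples": "p"}
VERB = {"eats": "s", "eat": "p"}

def np_label(art: str, noun: str):
    """Feature of the phrase (fs/fp) if article and noun agree, else None."""
    a, n = ART[art], NOUN[noun]
    if a == "a3" or (a, n) in {("a1", "c"), ("a2", "v")}:
        return "fp" if n == "p" else "fs"
    return None

def recognise(sentence: str):
    """Accept ART N V ART N sentences with full agreement; return the labelled
    structure, or None (the NEP halting without producing output)."""
    w = sentence.lower().split()
    if len(w) != 5 or w[0] not in ART or w[1] not in NOUN \
            or w[2] not in VERB or w[3] not in ART or w[4] not in NOUN:
        return None
    subj, obj = np_label(w[0], w[1]), np_label(w[3], w[4])
    if subj is None or obj is None:            # the NP filter rejects the phrase
        return None
    if (subj == "fs") != (VERB[w[2]] == "s"):  # subject-verb agreement in P
        return None
    return (f"[[{w[0]}]{ART[w[0]]}[{w[1]}]n{NOUN[w[1]]}]{subj} "
            f"[{w[2]}]v{VERB[w[2]]} "
            f"[[{w[3]}]{ART[w[3]]}[{w[4]}]n{NOUN[w[4]]}]{obj}")
```

Here recognise("The boy eats an apple") yields a labelled structure, while recognise("The apples eats a boy") returns None, mirroring the two runs.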
6 DISCUSSION AND FUTURE WORK
In this paper we presented an application of NEPs to the analysis and recognition of sentences of natural language. In the NEPs we have modelled, each processor is specialized in the processing of different syntactic patterns: NP, VP, S. An important feature of the system is that both input and output are linear: the lexical items are the input and syntactic structures are the output.
It is clear that this is not even a preliminary approach, but rather the suggestion of a new research line. We have already highlighted the advantages of such constructs because of their characteristics and functionality. For the future, a more precise analysis of the components of the NEPs for the parsing of natural language would be necessary, as well as a detailed study of their connection with real neuronal capabilities.

Figure 2: Example. (The original figure depicts the example NEP: the nodes ART, N_v, N_c, N_p, V, NP, P and Output with their rules and input/output filters, connected as in Ev.)
There also exist some problems regarding the proposed NEP-based approach to language processing. As has been mentioned before, the accepting power of NEPs is that of a Turing machine, which, on the one hand, allows us to specify a program without any restriction regarding the linguistic framework we want to simulate and the linguistic facts we want to deal with. On the other hand, however, it excludes any possibility of automatic validation of the properties of a program and makes checking the correctness of a large program very difficult and time-consuming. This concerns in particular the polynomial-time (Turing) complexity of the running time of a program with respect to the length of the input, which is generally considered crucially important for automatic language processing.
At the same time, the specific structure of the elementary objects used in NEPs makes this formalism a very convenient tool for modelling natural language. From the practical perspective, this means that NEPs can be used as a formal base for specialized task-oriented programming environments for natural language processing.
In this light, a possible future line of research could be restrictions on NEPs that, on the one hand, guarantee that any program executes in polynomial time with respect to the length of the input and, on the other hand, do not hinder too much the framework's expressivity for linguistic purposes.
We claim that NEPs are a very convenient system, not only for explaining natural language processing, but also for simulating knowledge representation and cognitive mechanisms. Moreover, NEPs provide a consistent theoretical framework for the formalization of human-computer interfaces. In this model, the human capacity of transforming the world by means of the word and knowledge can be approached by a computational device that is rather simple in terms of its size, structure and implementation.
REFERENCES
Castellanos, J., Leupold, P., and Mitrana, V. (2005). On the size complexity of hybrid networks of evolutionary processors. Theoretical Computer Science, 330(2):205–220.

Castellanos, J., Manea, F., de Mingo López, L. F., and Mitrana, V. Accepting networks of splicing processors with filtered connections. In MCU 2007, pages 218–229.

Castellanos, J., Martín-Vide, C., Mitrana, V., and Sempere, J. M. (2001). Solving NP-complete problems with networks of evolutionary processors. Lecture Notes in Computer Science, IWANN 2001(2048):621–628.

Castellanos, J., Martín-Vide, C., Mitrana, V., and Sempere, J. M. (2003). Networks of evolutionary processors. Acta Informatica, (39):517–529.

Crocker, M. (1991). Multiple meta-interpreters in a logical model of sentence processing. In Brown, C. and Koch, G., editors, Natural Language Understanding and Logic Programming, III, pages 127–145. Elsevier Science Publishers B.V., North-Holland.

Csuhaj-Varjú, E., Dassow, J., Kelemen, J., and Păun, G. (1994). Grammar Systems. A Grammatical Approach to Distribution and Cooperation. Gordon and Breach, London.

Csuhaj-Varjú, E. and Mitrana, V. (2000). Evolutionary systems: a language generating device inspired by evolving communities of cells. Acta Informatica, 36.

Csuhaj-Varjú, E. and Salomaa, A. (1997). Networks of parallel language processors. Lecture Notes in Computer Science, 1218.

Errico, L. and Jessope, C. (1994). Towards a new architecture of symbolic processing. In Plander, I., editor, Artificial Intelligence and Information-Control Systems of Robots '94, pages 31–40, Singapore. World Scientific.

Everaert, M., Evers, A., Huybregts, R., and Trommelen, M., editors (1988). Morphology and Modularity: In Honour of Henk Schultink. Publications in Language Sciences 29, Foris.

Fodor, J. (1983). The Modularity of Mind. The MIT Press, Cambridge, MA.

Harnish, R. and Farmer, A. (1984). Pragmatics and the modularity of the linguistic system. Lingua, 63:255–277.

Hillis, W. (1985). The Connection Machine. MIT Press, Cambridge, MA.

Manea, F. (2004). Using AHNEPs in the recognition of context-free languages. In Proceedings of the Workshop on Symbolic Networks, ECAI 2004.

Manea, F. and Mitrana, V. (2007). All NP-problems can be solved in polynomial time by accepting hybrid networks of evolutionary processors of constant size. Information Processing Letters, 103(3):112–118.

Margenstern, M., Mitrana, V., and Jiménez, M. P. (2004). Accepting hybrid networks of evolutionary processors. In Pre-proceedings of DNA 10, pages 107–117.

Martín-Vide, C., Mitrana, V., Pérez-Jiménez, M., and Sancho-Caparrini, F. (2003). Hybrid networks of evolutionary processors. In Proceedings of GECCO 2003, Lecture Notes in Computer Science 2723, pages 401–412, Berlin. Springer.

Păun, G. (2000). Computing with membranes. Journal of Computer and System Sciences, 61(1):108–143.

Păun, G., Rozenberg, G., and Salomaa, A. (1998). DNA Computing. New Computing Paradigms. Springer, Berlin.

Sadock, J. (1991). Autolexical Syntax. A Theory of Parallel Grammatical Representations. University of Chicago Press, Chicago.

Weinberg, A. (1987). Modularity in the syntactic parser. In Garfield, J., editor, Modularity in Knowledge Representation and Natural Language Understanding, pages 259–276. The MIT Press, Cambridge, MA.