THE LINGUISTIC RELEVANCE OF LINDENMAYER SYSTEMS
Leonor Becerra-Bonache
GRLMC, Rovira i Virgili University, Av. Catalunya, 35, 43002 Tarragona, Spain
Suna Bensch
Institutionen för Datavetenskap, Umeå Universitet, 90187 Umeå, Sweden
M. Dolores Jiménez-López
GRLMC, Rovira i Virgili University, Av. Catalunya 35, 43002 Tarragona, Spain
Keywords:
Lindenmayer systems, Natural language, Parallelism, Non-context freeness, Bio-inspired models.
Abstract:
In this paper, we investigate the linguistic relevance of Lindenmayer systems (L systems). L systems were introduced in the late sixties by Aristid Lindenmayer as a mathematical theory of biological development; they can thus be considered one of the first bio-inspired models in the theory of formal languages. Two main properties of L systems are 1) the parallelism of the rewriting process and 2) their expressive power to describe non-context-free structures that can be found in natural languages. The linguistic relevance of this formalism is therefore based on three main features: bio-inspiration, parallelism and the generation of non-context-free languages. Despite these interesting properties, L systems have not been investigated from a linguistic point of view. With this paper we point out the interest of applying these bio-inspired systems to the description and processing of natural language.
1 INTRODUCTION
Most current natural language approaches show several facts that invite the search for new formalisms able to account for natural languages in a simpler and more natural way. In the last decades, biology has become a rich source of models for other sciences. Knowledge of the behavior of nature has influenced a number of areas such as artificial intelligence, mathematics and theoretical computer science, giving rise to new perspectives in research. Natural computing, neural networks, genetic algorithms and L systems are just some examples of this emergence of new bio-inspired computational paradigms. The correspondences between several structures of natural language and biology allow us, in the field of linguistics, to think that we may take advantage of the bio-inspired formal models of theoretical computer science. In fact, one of the goals in this research area is to offer simple bio-inspired theories for describing natural languages in order to ease their manipulation and their implementation in Natural Language Processing (NLP) systems.
The question of whether the grammatical sentences of natural languages form regular, context-free, context-sensitive or recursively enumerable sets has been the subject of much discussion since it was posed by Chomsky in 1957. There seems to be little agreement among linguists concerning the position of natural languages in the Chomsky hierarchy. It seems that neither the family of regular (REG) languages nor that of context-free (CF) languages has enough expressive power to describe the basic context-sensitive syntactic constructions found in natural languages. Several attempts have been made to prove the non-context-freeness of natural languages (Bresnan et al., 1985; Culy and Reidel, 1987; Shieber, 1987). Although the non-context-freeness of natural language has become the standardly accepted view, there are linguists, such as Pullum and Gazdar, who, after reviewing the various attempts to establish that natural languages are not context-free, come to the conclusion that every published argument purporting to demonstrate the non-context-freeness of some natural language is invalid, either formally or empirically or both (Pullum and Gazdar, 1987). Despite these arguments,
it seems untenable to hold that all syntactic aspects of natural languages can be captured by context-free grammars, even though the overwhelming bulk of natural language syntax is context-free.
Traditional linguistics presents a hierarchical view of grammar, in the sense of taking the organizational dimensions of language to be 'levels' obtainable one from another in a certain fixed order. In this hierarchical view, the output of one component serves as the input to the next. This conception of grammar deprives the components of any autonomy, since each of them has to wait for information from the previous one in order to start its task. The initial grammar models investigated in computational linguistics for natural language processing used to be sequential grammar formalisms. However, hierarchicality and sequentiality have proved to be inappropriate features for accounting for natural languages. Therefore, in natural language processing researchers have turned from the initial serial models to parallel ones, and in theoretical linguistics the problems with the hierarchical view of grammar have led to linguistic concepts and theories based on parallel and autonomous components (Jackendoff, 1997; Sadock, 1991).
It follows from what we have said so far that models in linguistics demand bio-inspired devices that avoid sequential rewriting and that have enough expressive power to describe natural languages. In this paper we propose to investigate the linguistic relevance of Lindenmayer systems and languages, in particular of ET0L systems, which have the desired properties. In fact, L systems are a parallel, non-sequential grammatical formalism, they are biologically inspired, and they can generate the non-context-free structures present in natural language.
The paper is organized as follows. Section 2 presents the basic idea and definition of Lindenmayer systems and offers examples. Section 3 discusses some of the advantages and the linguistic relevance of this framework, and Section 4 presents some conclusions and directions for future work.
2 L SYSTEMS
2.1 Preliminaries
We assume the reader to be familiar with basic notions in the theory of formal languages. With our notation we mainly follow (Dassow and Păun, 1989). In general, we have the following conventions: ⊆ denotes inclusion, while ⊂ denotes strict inclusion. By V⁺ we denote the set of nonempty words over the alphabet V; if the empty word λ is included, then we use the notation V∗. Let Σ and ∆ be two alphabets (which are not necessarily different), and let σ be a mapping from Σ into 2^{∆∗}. This mapping is extended to a mapping from Σ∗ into 2^{∆∗} by

σ(λ) = {λ} and σ(w₁w₂) = σ(w₁)σ(w₂), where w₁, w₂ ∈ Σ∗.

Moreover, for a language L ⊆ Σ∗, let

σ(L) = {u | u ∈ σ(w) for some w ∈ L}.

Such a mapping σ is called a substitution from Σ into ∆.
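As a small illustration of these definitions, the following sketch (ours, not from the paper) encodes a finite substitution σ as a Python dictionary mapping each symbol to a finite set of words and extends it to whole words via σ(w₁w₂) = σ(w₁)σ(w₂); the alphabets and the concrete sets are invented for the example.

```python
# A small sketch (ours): a finite substitution sigma from Sigma = {a, b} into
# subsets of Delta*, extended to words via sigma(w1 w2) = sigma(w1) sigma(w2).
from itertools import product

sigma = {"a": {"x", "xy"}, "b": {"z"}}    # invented example substitution

def extend(word):
    """Apply sigma to a whole word: pick one image per symbol, in all possible ways."""
    if word == "":
        return {""}                       # sigma(lambda) = {lambda}
    choices = [sigma[symbol] for symbol in word]
    return {"".join(parts) for parts in product(*choices)}

print(extend("ab"))                       # -> {'xz', 'xyz'}, i.e. sigma(a)sigma(b)
```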
2.2 L Systems
Aristid Lindenmayer introduced, in 1968 (Lindenmayer, 1968), specific rewriting systems as models of developmental biology, which today are called Lindenmayer systems or L systems. L systems model biological growth, and because growth happens in multiple areas of an organism at the same time, the rewriting is parallel. This parallelism is the main difference from the sequential rewriting systems of the Chomsky hierarchy. The investigation of L systems is an important and wide area in the theory of formal languages. The modelling of different environmental influences, for example growth during day versus night, leads to different L systems and thus to different L languages. The study of L languages has resulted in a language hierarchy, namely the L system hierarchy. Lindenmayer systems are well-investigated parallel rewriting systems. For an overview see (Kari et al., 1997), and for the mathematical theory of L systems see (Rozenberg and Salomaa, 1980).
Definition 2.1. An extended tabled Lindenmayer system without interaction (ET0L system, for short) is a quadruple G = (Σ, H, ω, ∆), where Σ is the alphabet, ∆ is the terminal alphabet with ∆ ⊆ Σ, H is a finite set of finite substitutions from Σ into Σ∗, and ω ∈ Σ∗ is the axiom.

Definition 2.2. For x and y in Σ∗ and h ∈ H, we write x ⇒_h y if and only if y ∈ h(x). A substitution h in H is called a table.

Definition 2.3. The language generated by G is defined as:

L(G) = {w ∈ ∆∗ | ω ⇒_{h_{i_1}} w₁ ⇒_{h_{i_2}} ... ⇒_{h_{i_m}} w_m = w for some m ≥ 0 and h_{i_j} ∈ H with 1 ≤ j ≤ m}.

By L(ET0L) we denote the family of ET0L languages.
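The definitions can be made concrete with a minimal sketch (ours, not from the paper). It simplifies matters by assuming that every table is a homomorphism, i.e. maps each symbol to exactly one replacement word, which is all that the examples below require; general ET0L tables map a symbol to a finite set of words, which makes each derivation step nondeterministic. Function names are our own.

```python
# A minimal sketch (ours) of ET0L derivation, simplified so that every table is
# a homomorphism: each symbol has exactly one replacement word.

def apply_table(word, table):
    """One derivation step: rewrite all symbols of `word` simultaneously."""
    return "".join(table[symbol] for symbol in word)

def derive(axiom, tables):
    """Apply the given tables to the axiom, one parallel step per table."""
    word = axiom
    for table in tables:
        word = apply_table(word, table)
    return word
```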
2.3 Examples
Example 2.1. Let

G₁ = ({A, B, C, a, b, c}, {h₁, h₂}, ABC, {a, b, c})

be an ET0L system, where h₁ and h₂ are given as follows:

h₁ = {A → aA, B → bB, C → cC, a → a, b → b, c → c},
h₂ = {A → a, B → b, C → c, a → a, b → b, c → c}.

The axiom ABC can be rewritten using the first table h₁ or the second table h₂. Using the first three productions A → aA, B → bB, C → cC in h₁ adds a symbol a, b, and c, respectively, in every derivation step. Using table h₂ terminates the derivation process.

Consider the derivation of the word a²b²c²: ABC ⇒_{h₁} aAbBcC ⇒_{h₂} aabbcc.

The language generated is L(G₁) = {aⁿbⁿcⁿ | n ≥ 1}.
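For illustration, a hypothetical Python encoding of G₁ (dictionary and function names are ours) reproduces this derivation; each table maps every symbol to its single replacement word, and one derivation step rewrites all symbols simultaneously.

```python
# Hypothetical encoding of G1 from Example 2.1 (names are ours).
h1 = {"A": "aA", "B": "bB", "C": "cC", "a": "a", "b": "b", "c": "c"}
h2 = {"A": "a",  "B": "b",  "C": "c",  "a": "a", "b": "b", "c": "c"}

def step(word, table):
    # rewrite every symbol of the word in parallel
    return "".join(table[s] for s in word)

word = step("ABC", h1)      # aAbBcC
word = step(word, h2)       # aabbcc
print(word)                 # n-1 applications of h1 followed by h2 give a^n b^n c^n
```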
Example 2.2. Let

G₂ = ({A, B, a, b}, {h₁, h₂, h₃, h₄}, AA, {a, b})

be an ET0L system, where the tables are given as follows:

h₁ = {A → aA, B → B, a → a, b → b},
h₂ = {B → A, A → B, a → a, b → b},
h₃ = {B → bB, A → A, a → a, b → b},
h₄ = {A → a, B → b, a → a, b → b}.

Table h₁ introduces a symbol a in every derivation step and table h₃ introduces a symbol b in every derivation step. Table h₂ serves as a switch between the symbols A and B, and table h₄ terminates the derivation process.

Consider the derivation for the word baba:

AA ⇒_{h₂} BB ⇒_{h₃} bBbB ⇒_{h₂} bAbA ⇒_{h₄} baba.

The language generated is L(G₂) = {ww | w ∈ {a, b}⁺}.
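A hypothetical encoding of G₂ in the same style (names are ours) reproduces the derivation of baba step by step.

```python
# Hypothetical encoding of G2 from Example 2.2 (names are ours).
h1 = {"A": "aA", "B": "B",  "a": "a", "b": "b"}
h2 = {"A": "B",  "B": "A",  "a": "a", "b": "b"}
h3 = {"A": "A",  "B": "bB", "a": "a", "b": "b"}
h4 = {"A": "a",  "B": "b",  "a": "a", "b": "b"}

def step(word, table):
    return "".join(table[s] for s in word)

word = "AA"                                 # the axiom
for table in (h2, h3, h2, h4):              # AA => BB => bBbB => bAbA => baba
    word = step(word, table)
print(word)                                 # -> baba; both halves are always equal
```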
Example 2.3. Let

G₃ = ({A, B, C, D, a, b, c, d}, {h₁, h₂, h₃, h₄}, ABCD, {a, b, c, d})

be an ET0L system, where the tables are given as follows:

h₁ = {A → aA, C → cC, B → B, D → D, a → a, b → b, c → c, d → d},
h₂ = {A → a, C → c, B → B, D → D, a → a, b → b, c → c, d → d},
h₃ = {B → bB, D → dD, A → A, C → C, a → a, b → b, c → c, d → d},
h₄ = {B → b, D → d, A → A, C → C, a → a, b → b, c → c, d → d}.

Using table h₁ or h₃ introduces in every derivation step the symbols a and c or the symbols b and d, respectively. The tables h₂ and h₄ are used in order to terminate the derivation process.

The language generated is L(G₃) = {aⁿbᵐcⁿdᵐ | n, m ≥ 1}.
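In the same style, a hypothetical encoding of G₃ (names, including the helper generate, are ours) shows how the number of applications of h₁ and h₃ controls the exponents n and m independently.

```python
# Hypothetical encoding of G3 from Example 2.3 (names are ours).
h1 = {"A": "aA", "C": "cC", "B": "B", "D": "D",
      "a": "a", "b": "b", "c": "c", "d": "d"}
h2 = {"A": "a", "C": "c", "B": "B", "D": "D",
      "a": "a", "b": "b", "c": "c", "d": "d"}
h3 = {"B": "bB", "D": "dD", "A": "A", "C": "C",
      "a": "a", "b": "b", "c": "c", "d": "d"}
h4 = {"B": "b", "D": "d", "A": "A", "C": "C",
      "a": "a", "b": "b", "c": "c", "d": "d"}

def step(word, table):
    return "".join(table[s] for s in word)

def generate(n, m):
    """Derive a^n b^m c^n d^m: (n-1) uses of h1, (m-1) uses of h3, then h2 and h4."""
    word = "ABCD"
    for table in [h1] * (n - 1) + [h3] * (m - 1) + [h2, h4]:
        word = step(word, table)
    return word

print(generate(3, 2))       # -> aaabbcccdd
```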
3 LINGUISTIC RELEVANCE OF
L SYSTEMS
The linguistic relevance of the above formalism is based on three main features: parallelism, generation of the non-context-free structures present in natural language, and bio-inspiration.
3.1 Parallelism
Language has traditionally been studied from a linear and sequential point of view. Sound is indeed produced in a sequential way, but many studies lead to the conclusion that its production cannot be a purely sequential process. Moreover, the multimodal approach to communication, in which not only sound production but also gestures, vision and supra-segmental features of sounds have to be tackled, points to a parallel way of processing.
In general, formal and computational approaches to natural language demand non-hierarchical, parallel, distributed models in order to explain the complexity of linguistic structures as the result of the interaction of a number of independent but cooperative modules. According to (Smith, 1991), if we want to realize how important parallelism is in our world, we just need to look at the 'most successful computing device', namely the human mind. Many cognitive processes exhibit degrees of parallelism. In computer science, many procedures, in particular those that try to emulate human cognitive processes, call for parallel processing, either by requiring concurrently executable subtasks or by relying on collective decision making. Computer networks, distributed databases, highly parallel computers and parallel logic programming languages present a philosophy of computing very different from the traditional Turing-von Neumann sequential one. Parallel computational models have been applied to several domains; language, vision, motor control and knowledge representation are some examples.
In natural language processing, researchers have turned from the initial serial models to highly parallel ones. Serial models used to adopt a 'syntax-first' approach, where syntactic processing of the sentence had to be done before semantic processing began,
which in turn preceded pragmatic processing. In such models, information from lower levels cannot be used to correct decisions at higher levels, with the consequent explosion of syntactic possibilities. That situation led to a preference for parallel and interactive models, in which a system is capable of using any type of knowledge at any moment, without being constrained by a serial and hierarchical structure. Marslen-Wilson reported experiments that gave psychological evidence that processing at each level of natural language description can constrain and guide simultaneous processing at other levels, thereby supporting parallelism in language processing.
In theoretical linguistics, the hierarchical view of grammar has proved problematic, and there has been a search for systems with parallel and autonomous components that cooperate in order to generate natural language. The advantages of parallel models of grammar have been pointed out over the last twenty-five years by the rise of theories like Autolexical Grammar (Sadock, 1991), Jackendoff's Parallel Architecture (Jackendoff, 1997) and Head-Driven Phrase Structure Grammar (Pollard and Sag, 1994). All of them can be seen as parallel-architecture models. Moreover, parallel formalisms from the theory of formal languages have recently been applied to the description of natural languages. One example is the application of grammar systems (Csuhaj-Varjú et al., 1994) to natural language description by defining Linguistic Grammar Systems (LGS) (Jiménez-López, 2006). An LGS is a framework in which the various dimensions of linguistic representation are arranged in a parallel, distributed fashion and in which the language of the system is the result of the interaction of the independent cooperative modules that form the LGS. Another example of a parallel model from formal languages applied to the description of natural languages is the use of Networks of Evolutionary Processors (NEPs) (Castellanos et al., 2001), a computing mechanism directly inspired by the behavior of cell populations; (Bel-Enguix et al., 2009) suggests an implementation of NEPs for the parsing of simple structures.
As mentioned earlier, Lindenmayer systems were introduced in connection with biological development. The new feature of Lindenmayer systems, in contrast to the sequential rewriting systems of the Chomsky hierarchy, was the parallel rewriting process. In Lindenmayer systems, in every derivation step all symbols of the current word are rewritten simultaneously, that is, in parallel, whereas in the sequential rewriting systems of the Chomsky hierarchy only one symbol occurrence is rewritten in every derivation step. Therefore, L systems satisfy the parallelism demanded in the research fields of natural language processing and theoretical linguistics.
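The contrast can be made concrete with a toy sketch (ours, not from the paper): with the single rule A → AA, one parallel step rewrites every occurrence at once, whereas one sequential step of a Chomsky-style grammar rewrites only a single chosen occurrence.

```python
# A toy contrast (ours) between parallel and sequential rewriting with A -> AA.
rules = {"A": "AA"}

def parallel_step(word):
    """L-system style: rewrite every symbol occurrence at once."""
    return "".join(rules.get(s, s) for s in word)

def sequential_step(word, position):
    """Chomsky-grammar style: rewrite only the occurrence at `position`."""
    s = word[position]
    return word[:position] + rules.get(s, s) + word[position + 1:]

print(parallel_step("AAA"))        # -> AAAAAA (all three A's rewritten together)
print(sequential_step("AAA", 0))   # -> AAAA   (only the first A rewritten)
```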
3.2 Generation of Non-context Free
Structures
A grammatical formalism that attempts to model natural language syntax should have the same expressive power as natural languages. But how much power is necessary to describe natural languages? This question has been a subject of discussion since it was posed by Chomsky in 1957, and the debate was first focused on whether natural languages are CF or not. CF grammars are well investigated in formal language theory due to their wide applicability and their mathematical properties. They are simple devices and offer some advantages: they are powerful enough to describe most of the structures in natural languages and, at the same time, restricted enough so that efficient parsers can be built. However, context-free grammars are not always enough to account for natural languages. As a demonstration of the non-context-freeness of natural languages, many authors have brought forward examples of structures that are present in some natural languages and that cannot be described by a context-free grammar. Among those arguments we can refer to:
‘Respectively’ constructions (Bar-Hillel and
Shamir, 1960);
English comparative clauses;
Mohawk noun-stem incorporation;
Morphology of words in Bambara (Culy and Rei-
del, 1987);
Dutch infinitival verb phrases (Bresnan et al.,
1985);
Assertions involving numerical expressions;
English ‘such that’ (Higginbotham, 1987);
English ‘sluicing’ clauses;
Subordinate infinitival clauses in Swiss-German
(Shieber, 1987).
Those structures are examples of the three following non-context-free languages:
1. {xx | x ∈ V∗}, reduplication;
2. {aⁿbⁿcⁿ | n ≥ 1} and {aⁿbⁿcⁿdⁿ | n ≥ 1}, multiple agreements;
3. {aⁿbᵐcⁿdᵐ | n, m ≥ 1}, cross-serial dependencies.
Those constructions have been found in a variety of languages, as follows from the above-mentioned list: Bambara, Mohawk, Walpiri, Dutch, Swiss-German, English and the Romance languages are just some examples. These syntactic structures require more generative capacity than CF grammars provide. Therefore, it is of interest to study grammatical formalisms with more generative power than CF grammars. However, context-sensitive grammars do not seem to be the right solution: they are too powerful, many of their problems are undecidable, etc. Therefore, it is desirable to find intermediate generative devices capable of conjoining the simplicity of context-free grammars with the power of context-sensitive ones.
Within the field of formal languages, this idea has led to the branch of Regulated Rewriting (Dassow and Păun, 1989). Matrix grammars, programmed and controlled grammars, random context grammars, conditional grammars, etc. are examples of devices that use context-free rules while imposing restrictions on the rewriting process, in order to obtain, besides context-free structures, the non-context-free constructions present in natural language. Those devices, however, generally have an excessively large generative power that leads to the generation of structures that are not significant for natural languages. The idea of keeping the generative power under control, while generating both context-free structures and non-context-free constructions, has led to the so-called mildly context-sensitive grammars (Joshi, 1985). Tree adjoining grammars, head grammars, indexed grammars, categorial grammars, simple matrix grammars, etc. are well-known mechanisms that generate mildly context-sensitive languages.
Taking into account the problems that context-free grammars seem to pose when applied to the syntax of natural language, but keeping in mind the difficulty of working with context-sensitive grammars, L systems offer an alternative way to generate the non-context-free structures present in natural languages while using context-free rules, as is shown in the following example. The parallel rewriting mechanism and the use of tables in ET0L systems provide the tools needed to achieve more generative capacity than that of context-free grammars.
In the example of Figure 1, from (Shieber, 1987), the verb hälfe requires a dative object, namely em Hans, while the verb aastriiche requires an accusative object, here es huus. As indicated by the lines, there is a crossed agreement within the sentence between the verbs and the corresponding objects concerning their case. Theoretically, an unbounded number of verbs can occur in a Swiss-German sentence of this kind, and each of these would require its corresponding object with the correct case marking. Thus, one could extend such sentences so that, after a number of occurrences of dative objects and a number of occurrences of accusative objects, the corresponding number of occurrences of the verbs requiring a dative object and the corresponding number of occurrences of the verbs requiring an accusative object follow.
In the following we give an ET0L system G_SG that generates the Swiss-German sentence above. For every table of G_SG we list only the productions that are relevant for our example. Let the axiom of G_SG be

ω = Pronoun A B NP C D Verb

and let the tables be given by:

h₁ = {Pronoun → Pronoun, A → AccObj A, B → B, NP → NP, C → AccVerb C, D → D, Verb → Verb},
h₂ = {Pronoun → Pronoun, B → DatObj B, A → A, NP → NP, D → DatVerb D, C → C, Verb → Verb},
h₃ = {Pronoun → Pronoun, A → AccObj, B → B, NP → NP, C → AccVerb, D → D, Verb → Verb},
h₄ = {Pronoun → Pronoun, B → DatObj, A → A, NP → NP, D → DatVerb, C → C, Verb → Verb},
h₅ = {Pronoun → mer, DatObj → em Hans, AccObj → d'chind, NP → es huus, DatVerb → hälfe, AccVerb → lönd, Verb → aastriiche}.
Using table h₁ or h₂ introduces in every derivation step the auxiliary symbols for an accusative object and its verb, or for a dative object and its verb, respectively. Tables h₃ and h₄ terminate the introduction of auxiliary symbols for accusative and for dative objects, respectively. Table h₅ rewrites the nonterminal symbols into terminal words of Swiss-German.

For the case in which, in a Swiss-German sentence of the above kind, accusative and dative objects occur interleaved, one can construct in a similar way an ET0L system generating the language {ww | w ∈ {a, b}⁺}.
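To make the construction concrete, the following sketch (ours) encodes G_SG in Python. Words are represented as lists of symbols because the nonterminals are multi-character, and since only the relevant productions of each table are listed above, symbols without an explicit rule are assumed to be copied unchanged (identity productions).

```python
# Hypothetical encoding of G_SG (symbol names are ours).
h1 = {"A": ["AccObj", "A"], "C": ["AccVerb", "C"]}    # add an accusative object/verb pair
h2 = {"B": ["DatObj", "B"], "D": ["DatVerb", "D"]}    # add a dative object/verb pair
h3 = {"A": ["AccObj"], "C": ["AccVerb"]}              # stop introducing accusative symbols
h4 = {"B": ["DatObj"], "D": ["DatVerb"]}              # stop introducing dative symbols
h5 = {"Pronoun": ["mer"], "AccObj": ["d'chind"], "DatObj": ["em Hans"],
      "NP": ["es huus"], "AccVerb": ["lönd"], "DatVerb": ["hälfe"],
      "Verb": ["aastriiche"]}                         # lexical spell-out

def step(word, table):
    # rewrite all symbols in parallel; symbols without a rule stay unchanged
    return [out for symbol in word for out in table.get(symbol, [symbol])]

word = ["Pronoun", "A", "B", "NP", "C", "D", "Verb"]  # the axiom
for table in (h3, h4, h5):                            # one accusative and one dative pair
    word = step(word, table)
print(" ".join(word))
# -> mer d'chind em Hans es huus lönd hälfe aastriiche
# Applying h1/h2 before h3/h4 adds further object-verb pairs with crossed agreement.
```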
3.3 Bio-inspiration
Most current natural language approaches show several facts that invite the search for new formalisms able to account for natural languages in a simpler and more natural way. Natural language sentences cannot be comfortably placed in any of the families of the Chomsky hierarchy on which current computational models are based, and the rewriting methods used in a large number of natural language approaches seem, from a cognitive perspective, not very adequate for accounting for the processing of language.
Figure 1: A Swiss-German sentence with crossed (accusative/dative) dependencies: mer d'chind em Hans es huus lönd hälfe aastriiche ('we the-children Hans the-house let help paint', i.e. 'we let the children help Hans paint the house').
These facts lead us to look for a more natural computational system with which to give a formal account of natural languages.
During the 20th century, biology became a pilot science, and many disciplines have formulated their theories using models taken from biology. Computer science has become almost a bio-inspired field thanks to the great development of natural computing and DNA computing. Within linguistics, several attempts have been made to establish structural parallelisms between DNA sequences and verbal language. In general, it can be stated that the processing of natural language can take great advantage of the structural and 'semantic' similarities between those codes. Natural language processing could become another 'bio-inspired' science, with theoretical computer science providing the theoretical tools and formalizations necessary for such an exchange of methodology. In fact, during the last years different bio-inspired methods have been successfully applied to several natural language issues, from syntax to pragmatics. Those methods are taken mainly from computer science and are basically the following: DNA computing, cell computing, membrane computing and networks of evolutionary processors. The main advantage of those bio-models is that they account for natural language in a more natural way than classical models (based on rewriting).
L systems were the first bio-inspired model in the field of formal language theory. Aristid Lindenmayer introduced L systems in 1968 as a theoretical framework for modelling the development of filamentous organisms, which are composed of cells. These cells receive inputs from their neighbours and change their states and produce outputs based on their states and the inputs received. Cell division is modelled by inserting two new cells in the filament in place of one cell. Lindenmayer's theoretical framework thus provides a view of the organism as a whole that models individual acts of division, unequal divisions, the interaction of two or more cells and cell enlargement. The possible combinations of interactions among more than a handful of cells quickly become unmanageable without a mathematical theory and computer support, both of which are provided by the framework of Lindenmayer systems.
The biological inspiration of L systems is an interesting property from a linguistic point of view since, as we have said, in the last decades there seems to be a tendency to use bio-inspired models for the description and processing of natural languages. Linguistics has not been able to solve the problem of generating/understanding natural language, partly because of the failure of the models adopted. Bio-inspired models could be a possible solution, since one of the advantages of this kind of model is that it offers more 'natural' tools than the ones used so far.
4 CONCLUSIONS
Formal language theory was born in the middle of the 20th century as a tool for modeling and investigating the syntax of natural languages. Lindenmayer systems were defined as a bio-inspired model in the area of formal languages. As we have shown in this paper, L systems present important features from a linguistic perspective. However, although L systems have relevant linguistic properties, up to now they have not been applied to the description of natural languages.
Formal models in linguistics demand parallel devices that are able to generate the structures (CF and non-CF) present in natural languages with simple mechanisms that describe and explain those structures in a more natural way than the usual rewriting systems. Taking into account that biology has become a pilot science, with many disciplines formulating their theories using models taken from biology, it seems natural for linguistics to seek the improvement of its models in this field, especially if we take into account the structural parallelism between biological sequences and verbal language. Moreover, since languages, whether natural or artificial, are particular cases of symbol systems, and the manipulation of symbols is the core of formal language theory, it seems adequate to look for bio-inspired models that have been defined in that research area. If we do so, linguistics could become a bio-inspired science, with theoretical computer science providing the theoretical tools and formalizations necessary for such an exchange of methodology. It is clear that interdisciplinarity must be an essential trait of research on language. Linguistics, biology and computer science collaborate through the framework of formal language theory, giving rise to new scientific models that provide new ideas, tools and formalisms which can improve the description, analysis and processing of natural and artificial languages.
Lindenmayer systems were the first bio-inspired model proposed in the field of formal languages, and also the first to replace sequential rewriting with parallel rewriting. Moreover, as we have shown, by using an L system one can easily generate the so-called non-context-free structures of natural language. Therefore, Lindenmayer systems seem to offer a great deal of the properties that appear necessary for approaching linguistic structures. Many parallel bio-inspired methods from the field of theoretical computer science have been successfully applied to several NLP issues, from syntax to pragmatics (DNA computing, membrane computing and networks of evolutionary processors). The main advantage of all those bio-models is that they account for natural language in a more natural way than classical models. L systems are another example of a bio-model, with many of the same properties as the ones that have already been applied to linguistics, so we consider that it could be interesting to apply L systems to the description and processing of natural language in order to see whether this first parallel bio-inspired model may improve current linguistic approaches.
ACKNOWLEDGEMENTS
The research of Leonor Becerra-Bonache was supported by a Marie Curie International Fellowship within the 6th European Community Framework Programme.
REFERENCES
Bar-Hillel, Y. and Shamir, E. (1960). Finite-state languages:
Formal representations and adequacy problems. Bul-
letin of Research Council of Israel, 8F:155–166.
Bel-Enguix, G., Jiménez-López, M., Mercas, R., and Perekrestenko, A. (2009). Networks of evolutionary
processors as natural language parsers. In Proceed-
ings of the First International Conference on Agents
and Artificial Intelligence - ICAART 2009, pages 619–
625. INSTICC Press.
Bresnan, J., Kaplan, R., Peters, S., and Zaenen, A. (1985).
Cross-serial dependencies in Dutch. In Savitch, W.,
Bach, E., Marsh, W., and Safran-Naveh, G., editors,
The Formal Complexity of Natural Language, pages
286–319. Kluwer, Dordrecht.
Castellanos, J., Martín-Vide, C., Mitrana, V., and Sempere, J. (2001). Solving NP-complete problems with networks of evolutionary processors. In Mira, J. and
Prieto, A., editors, IWANN 2001, pages 621–628.
Springer, Berlin.
Csuhaj-Varjú, E., Dassow, J., Kelemen, J., and Păun, G.
(1994). Grammar Systems: A Grammatical Approach
to Distribution and Cooperation. Gordon and Breach,
London.
Culy, C. and Reidel, D. (1987). The complexity of the vo-
cabulary of Bambara. In Savitch, W., Bach, E., Marsh,
W., and Safran-Naveh, G., editors, The Formal Com-
plexity of Natural Language, pages 349–357. Kluwer,
Dordrecht.
Dassow, J. and Păun, G. (1989). Regulated Rewriting in
Formal Language Theory. Akademie-Verlag, Berlin.
Higginbotham, J. (1987). English is not a context-free lan-
guage. In Savitch, W., Bach, E., Marsh, W., and
Safran-Naveh, G., editors, The Formal Complexity
of Natural Language, pages 335–348. Kluwer, Dor-
drecht.
Jackendoff, R. (1997). The Architecture of the Language
Faculty. MIT Press, Cambridge.
Jiménez-López, M. (2006). A grammar systems approach
to natural language grammar. Linguistics and Philos-
ophy, 29:419–454.
Joshi, A. (1985). How much context-sensitivity is required
to provide reasonable structural descriptions: Tree ad-
joining grammars. In Dowty, D., Karttunen, L., and
Zwicky, A., editors, Natural Language Parsing: Psy-
chological, Computational and Theoretical Perspec-
tives, pages 206–250. Cambridge University Press,
New York, NY.
Kari, L., Rozenberg, G., and Salomaa, A. (1997). L sys-
tems. In Rozenberg, G. and Salomaa, A., editors,
Handbook of Formal Languages, volume 1. Springer.
Lindenmayer, A. (1968). Mathematical models for cellular interaction in development, Parts I and II. Journal of Theo-
retical Biology, 18:280–315.
Pollard, C. and Sag, I. (1994). Head-Driven Phrase Struc-
ture Grammar. Chicago University Press.
Pullum, G. and Gazdar, G. (1987). Natural languages and
context-free languages. In Savitch, W., Bach, E.,
Marsh, W., and Safran-Naveh, G., editors, The For-
mal Complexity of Natural Language, pages 138–182.
Kluwer, Dordrecht.
Rozenberg, G. and Salomaa, A. (1980). The Mathematical
Theory of L-Systems. Academic Press, New York.
Sadock, J. (1991). Autolexical Syntax - A Theory of Paral-
lel Grammatical Representations. The University of
Chicago Press.
Shieber, S. (1987). Evidence against the context-freeness
of natural languages. In Savitch, W., Bach, E., Marsh,
W., and Safran-Naveh, G., editors, The Formal Com-
plexity of Natural Language, pages 320–334. Kluwer,
Dordrecht.
Smith, G. (1991). Computers and Human Languages. Ox-
ford University Press, New York.