Our approach is related to both natural language evolution and grammatical infer-
ence. Roughly speaking, we search for a mechanism able to infer the grammar of a
system that is constantly changing. We apply a logically based transformational gram-
mar formalism in which the grammar itself may be affected in a derivation step. We
intend to model the linguistic competences of a silent listener who has the reflective capability to inspect and revise those competences according to current usage. In contrast to some of the models mentioned above, our approach is based neither on artificial intelligence techniques nor on simulation of communicating agents. Instead, we propose a strictly formal approach to the problem of language evolution, showing how a grammar can adapt to new words and new ways of building phrases without any external intervention.
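To convey the flavour of this idea before the formal introduction in section 2, the following is a small sketch in Prolog's definite clause grammar notation; the predicate names and the toy declaration syntax are our own illustration and do not belong to the formalism proper. A lexicon of known nouns is threaded through the derivation as explicit arguments, so that one derivation step may extend the language accepted by later steps.

    :- use_module(library(lists)).      % for member/2

    % The current lexicon is passed along as a pair of arguments; the
    % declaration step returns an extended lexicon used by later steps.

    sentence(Lex0, Lex) -->
        declaration(Lex0, Lex1),        % this step changes the "grammar"
        noun_phrase(Lex1, Lex).

    declaration(Lex, [W|Lex]) -->       % learn a new word W as a noun
        [let, W, be, a, noun].

    noun_phrase(Lex, Lex) -->
        [the, W], { member(W, Lex) }.   % W must be a learned noun

For example, the query phrase(sentence([], L), [let, wug, be, a, noun, the, wug]) succeeds with L = [wug], whereas the phrase "the wug" is rejected when no declaration has introduced wug.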
In order to reach this objective, we apply a formalism called Adaptable Grammars,
based on earlier proposals [13–15]. This grammar formalism was invented in the 1980s, originally for describing phenomena in software systems and programming languages. It has later become known under the name Christiansen grammars in work by [16, 17], and it has been applied to formal linguistics only recently [18]. The authors of [16] applied these grammars to grammatical evolutionary programming, motivating their approach by the observation that with such grammars they can make do with shorter derivations of target programs.
In [18], such grammars are demonstrated to capture standard non-context-free languages used in the literature, which represent the central natural language properties of reduplication, crossed dependencies, and multiple agreement. In the present work, we take this a step further by considering language evolution. A surprisingly simple implementation of Adaptable Grammars in Prolog was shown in [18], which, with a few extensions, has been used for the experiments shown in the present paper.
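As an illustration of the first of these properties, the reduplication language {ww | w ∈ Σ*} has a well-known encoding in plain DCG notation, exploiting Prolog's logical variables; we show it here only to fix intuitions, and it is not necessarily the encoding used in [18].

    % The first half of the string is collected in W; the second half
    % must then repeat W verbatim.

    ww --> seq(W), seq(W).

    seq([])     --> [].
    seq([X|Xs]) --> [X], seq(Xs).

For instance, phrase(ww, [a,b,a,b]) succeeds, whereas phrase(ww, [a,b,b,a]) fails.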
The Adaptable Grammars of the present paper, explained as extensions to Definite
Clause Grammars (DCGs) [19], are inherently related to Abductive Logic Program-
ming (ALP); see, e.g., [21] for an overview. Normally, the process of finding new rules
(logical, grammatical, ...) is associated with induction and, in our context, Inductive
Logic Programming (ILP); see, e.g., [22] for an overview. ILP differs from ALP by considering a larger set of observations and by using powerful machine learning techniques, including generalization steps and statistics, to produce rules that cover as many cases as possible. The work by [12] can be seen as an algorithmic counterpart to our work. They describe an algorithm for the induction of context-free grammars which is incremental in the sense that it takes one sample at a time and presents a well-defined grammar after each step. It is explained as an extension of the classical CYK parsing algorithm, working bottom-up in a breadth-first way. In case parsing gives up, an induction step inspects the store of unreduced items to suggest reductions, which are then collected into new grammar rules. Such methods must obviously be tried out before any practical application of Adaptable Grammars to larger corpora, although, to our knowledge, they have not been tested on natural language corpora.
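To convey the flavour of such incremental induction, the following is a drastically simplified sketch of our own; it is not the algorithm of [12]. Each sample is reduced bottom-up with the current rules, and if the start symbol s is not reached, the residue of unreduced items is turned into one new rule.

    :- use_module(library(lists)).      % for member/2 and append/3

    % reduce(+Rules, +Items0, -Items): one bottom-up step, replacing an
    % occurrence of a right-hand side by its left-hand side.
    reduce(Rules, Items0, Items) :-
        member(rule(Lhs, Rhs), Rules),
        append(Pre, Rest, Items0),
        append(Rhs, Post, Rest),
        append(Pre, [Lhs|Post], Items).

    % reduce_all(+Rules, +Items0, -Items): reduce greedily to a fixpoint.
    reduce_all(Rules, Items0, Items) :-
        (   reduce(Rules, Items0, Items1)
        ->  reduce_all(Rules, Items1, Items)
        ;   Items = Items0
        ).

    % learn_sample(+Rules0, +Sample, -Rules): accept Sample if possible;
    % otherwise induce one new rule covering the unreduced items.
    learn_sample(Rules, Sample, Rules) :-
        reduce_all(Rules, Sample, [s]), !.
    learn_sample(Rules0, Sample, [rule(s, Items) | Rules0]) :-
        reduce_all(Rules0, Sample, Items).

For example, with the single rule rule(s, [np, vp]), the sample [np, vp] is accepted as is, while [np, vp, pp] first reduces to [s, pp] and then gives rise to the new rule rule(s, [s, pp]).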
In section 2, we give an introduction to adaptable grammars and indicate the fundamental principles for how they may be used to describe language evolution. Section 3 introduces additional notation, which is applied in section 4 to demonstrate grammars for evolution in a simplified natural language setting. Finally, section 5 gives some concluding remarks and ideas for future work.