
sidered, it is possible to find all possible following characters. From the complementary set, every invalid command can be generated. Looking at the LL(1) Grammar from Section 5.3, the first line indicates that the first possible character is a "C"; all other characters are invalid. Hence, commands starting with any invalid character have to be recognized as invalid. The next step starts with the valid character "C". Based on that, the next valid characters ("R") are determined. Every other character is invalid, is appended to the "C", and is sent to the LL(1) Parser. For example, this causes the commands "CA", "Ca", "CB", "Cb", ... to be invalid (see the first sketch after this list).
With that, the other components of RUNMDB need only be tested with a significantly smaller set of commands. The M3 Language information contained in the LL(1) Grammar eliminates the need for some semantic checks. For instance, the first four semantic checks from Section 4.4 are already caught by the grammar, since, e.g., only existing <+M3Identifier> can be created and all possible features are already stored in the LL(1) Grammar. By increasing the complexity of the grammar, more semantic properties can be introduced, but checking a command for validity then takes longer; thus, a trade-off must be made between runtime and the number of semantic checks that remain necessary.
• Lexer: Only syntactically correct commands must be considered. The command objects created in the lexer are reassembled into strings, and it is checked whether these strings correspond to the original command (see the second sketch after this list).
• Semantic Checker: For the remaining semantic checks, test cases need to be generated. These can mostly be derived automatically from the M3 Language and from information about list sizes in the implementation, following white-box testing approaches (IEEE, 2021). For example, the eighth semantic check in Section 4.4 examines whether the program operates correctly when list sizes are exceeded. This may occur if the selected Position is outside the defined list size. Therefore, all possible update commands are generated from the grammar and the Position parameter is varied beyond the list limits (see the third sketch after this list). For a Position that is out of bounds, an error message has to be generated.
All further test cases for the semantic checks are generated in a similar way. The test cases collectively result in a manageable, closed set of commands. By basing test cases on a closed set of valid commands or on the meta-modeling language, the testing of semantic checks can be fully traced back to a finite set of test cases.
• CRUD ModelProcessor: The ModelProcessor operations depend on each other over time, resulting in an infinite number of states to be checked. The implementation is based on the CRUD operations, so there is a separate operation for Create, Read, Update, and Delete. Within each CRUD operation, the set to be tested can be reduced to the following scenarios (see the final sketch after this list). Note that these scenarios rely on static lists in their implementation and on the first-in, first-out (FIFO) principle.
– Create: Elements are created and stored in static lists depending on their type and remain at their position until they are deleted.
1. Create an element at the end of a list
2. Create an element between existing elements
3. Try to create an element at the end of a full list
– Read: The values can be stored either in fixed-size lists or in single entries. Read
1. information from an empty model
2. updated information from single values
3. updated information from list values
4. information from full lists
– Update: If values are stored in lists, the posi-
tion can also be specified.
1. Update every feature with a value
2. Update with a value at every possible position
3. Try to add values to full lists
4. Change references
– Delete: All values of the deleted object are re-
set to their default values. Delete elements
1. where no information was updated
2. with updated values and full value lists
3. with updated references
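The following sketches illustrate the four test strategies above. First, a minimal sketch of the complementary-set construction for the LL(1) Parser; the Python table of valid prefixes below is a hypothetical stand-in for the grammar from Section 5.3, and only the prefix structure ("C", then "R") is taken from the text.

```python
import itertools
import string

# Alphabet over which invalid continuations are generated.
ALPHABET = string.ascii_letters + string.digits + " "

# Hypothetical table: maps each valid command prefix to the set of
# characters that may legally follow it according to the grammar.
VALID_NEXT = {
    "":   {"C"},   # a command must start with "C"
    "C":  {"R"},   # after "C", only "R" is valid
    "CR": {"E"},   # truncated continuation of the grammar (placeholder)
}

def invalid_commands():
    """Yield commands that first become invalid right after a valid
    prefix: every character outside the allowed set is appended."""
    for prefix, allowed in VALID_NEXT.items():
        for ch in ALPHABET:
            if ch not in allowed:
                yield prefix + ch   # e.g. "CA", "Ca", "CB", "Cb", ...

# Each generated command must be rejected by the LL(1) Parser.
for cmd in itertools.islice(invalid_commands(), 5):
    print(repr(cmd))
```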
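Second, a sketch of the Lexer round trip, assuming a hypothetical tokenize() function and Token shape; the real lexer's command objects are assumed to carry their source text so they can be joined back into a string.

```python
from dataclasses import dataclass

@dataclass
class Token:            # hypothetical shape of a command object
    kind: str
    text: str

def tokenize(command: str) -> list[Token]:
    """Stand-in lexer: one Token per whitespace-separated lexeme."""
    return [Token("WORD", part) for part in command.split(" ")]

def round_trip_ok(command: str) -> bool:
    """The reassembled string must equal the original command."""
    rebuilt = " ".join(t.text for t in tokenize(command))
    return rebuilt == command

assert round_trip_ok("CREATE Class Position 1")   # hypothetical command
```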
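Third, a sketch of test-case generation for the eighth semantic check: update commands are instantiated with Position values beyond the list bounds, and an error is expected for each. The command syntax, feature names, and list sizes here are hypothetical placeholders.

```python
# Hypothetical mapping from feature name to its defined list size.
FEATURES = {"name": 1, "attributes": 8}

def out_of_bounds_updates():
    """Generate update commands whose Position exceeds the list limits."""
    for feature, size in FEATURES.items():
        for pos in (size, size + 1, -1):          # outside 0..size-1
            yield f"UPDATE {feature} AT {pos}"    # hypothetical syntax

def check(command: str) -> str:
    """Stand-in semantic checker: rejects positions outside the list."""
    pos = int(command.rsplit(" ", 1)[1])
    size = FEATURES[command.split(" ")[1]]
    return "ERROR" if not 0 <= pos < size else "OK"

# Every out-of-bounds Position must produce an error message.
for cmd in out_of_bounds_updates():
    assert check(cmd) == "ERROR"
```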
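Finally, a sketch of the three Create scenarios on a static list. The StaticList class is a simplified stand-in for the ModelProcessor's fixed-size element lists; the list size and element names are hypothetical.

```python
class StaticList:
    """Fixed-size list: elements fill the first free position and
    remain at their position until they are deleted."""
    def __init__(self, size: int):
        self.slots = [None] * size

    def create(self, element) -> bool:
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = element
                return True
        return False               # list is full, creation fails

    def delete(self, index: int):
        self.slots[index] = None   # frees the position

lst = StaticList(size=3)
assert lst.create("A") and lst.create("B") and lst.create("C")  # 1. at end
assert not lst.create("D")       # 3. creating in a full list must fail
lst.delete(1)
assert lst.create("E")           # 2. create between existing elements
assert lst.slots == ["A", "E", "C"]   # elements keep their positions
```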
Any behavior, at any time, can be traced back to one of the scenarios above. This procedure enables the limitless set of combinations to be narrowed down to a finite set of scenarios that can be tested. These explanations have shown that it is feasible to test the implementation for any possible command by utilizing a combination of a Meta-modeling language and an LL(1) Grammar, in order to avoid any unexpected behavior. This provides an important foundation for a potential certification process for RUNMDB. At present, at least the tests for the LL(1) Parser, the Lexer, and the Semantic Checker can be generated automatically from the single source of truth available through the combination of LL(1) Grammar and Meta-modeling