passing) tests within our base languages, we then
enhance the parser and metamodel with the new
language constructs.
The potential Umple syntax might look like the
following:

class Student {
  Integer x;
  // this is the potential invariant syntax
  [x >= 18]
}
Next, we migrate the custom code written in the
behaviour tests into the code generation process to
validate the generated syntax. Following that, we
deploy a new version of Umple itself and update the
original behaviour tests to use the new language
constructs (as opposed to having to write the
behaviour by hand, as was required before the
feature was available). The tests themselves remain
largely unchanged; only the way the tested
behaviour is expressed changes, as sketched below.
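As a minimal sketch, suppose Ruby is the code-generation
target and the behaviour tests are written in RSpec
(the class, constructor, and accessor names below are
illustrative assumptions, not Umple's actual generated
API). The same expectations pass whether the guard on x
was hand-written or generated from the [x >= 18]
invariant:

require_relative 'student'  # generated class (path illustrative)

describe Student do
  it "rejects values that violate the invariant" do
    student = Student.new(20)
    # Assumes the generated setter returns false on a
    # violation; actual generated behaviour may differ.
    expect(student.set_x(17)).to be(false)
    expect(student.get_x).to eq(20)  # value unchanged
  end

  it "accepts values that satisfy the invariant" do
    student = Student.new(20)
    expect(student.set_x(21)).to be(true)
  end
end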
In pure TDD methodology, the process is not just
about testing, but rather about designing the system
in a modular fashion, maintaining low coupling and
well-defined interfaces. The process is also about
capturing the intention of the software (as
automated tests) in a form that can be easily verified
(by re-running the test suite), effectively enhancing
reusability.
For example, the act of manually testing and
modifying (i.e. debugging) an application until it
works benefits only the developer performing the
task. It cannot be replicated easily, as the debugging
steps are not documented and are lost once the
debugging exercise is complete. Conversely, by
capturing the testing process through automation, all
developers benefit: knowledge about the true
behaviour of the system is recorded in tests that can
easily be re-run and re-verified.
In the case of building a new programming
language (or in the case of Umple, extending
existing base languages), we first need to be
concerned with testing the tooling itself. But because
the outputs of such tooling are themselves systems,
those outputs can also be tested (i.e., semantic
testing of systems generated using the new
language).
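The sketch below illustrates this distinction: it
compiles a model and then exercises the generated
system itself, rather than inspecting the generated
text (the compiler invocation, flags, and file names
are illustrative assumptions, not Umple's exact
command-line interface):

describe "semantic test of a generated system" do
  before(:all) do
    # Compile student.ump to Ruby; invocation is illustrative.
    system("java -jar umple.jar -g Ruby student.ump") or raise "Umple compilation failed"
    require_relative 'student'
  end

  it "enforces the [x >= 18] invariant at runtime" do
    student = Student.new(20)
    expect(student.set_x(17)).to be(false)
  end
end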
In addition, because Umple is implemented in
itself, we are able to capture the debugging effort of
new code generation behaviour in automated tests,
and then modify the underlying Umple language to
replicate that behaviour natively, as shown with the
OCL constraint example. In summary, we enhance
the Umple language so that we can refactor Umple
(which is written in Umple) to make use of the
enhanced language elements; ‘eating our own dog
food’, so to speak.
5 CASE STUDIES
To further explore and validate our MLTDD
approach, we applied it to two industrial projects,
Appstats and OSL. These were undertaken in a
software company specializing in the development
of online process solutions.
5.1 Case Study 1: Appstats
Appstats (Forward, 2012) is a small open-source
logging and statistics library that provides a
"counting" framework with features such as built-in
caching, delayed processing, and the creation of
ad-hoc and scheduled reports. The library provides a
simple, yet effective, query DSL. The source code is
about 2 KLOC with 97% coverage from over 650
tests. The basic structure of an appstats query is:
# <action> <timeframe> <host/server> <context
filter> <group filter>
Actions are user-defined and could include things
like # logins, # objects created, or # exceptions. Date
ranges support several formats, such as "between
Mar, 2010 and Mar, 2011", "today", and "last
year|month|week|day". The host/server component
lets the user pull statistics from a particular server,
such as testing, staging, or production. Finally, all
data logged by appstats is tagged with, and is
searchable and groupable by, any number of
contexts (defined by the user, not appstats);
examples include the logged-in user, the address
searched, the number of results found, the duration
of a request, etc.
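Putting these components together, a full query
might look like the following (these composite
examples are our own illustration of the grammar
above; the exact filter and grouping syntax may
differ from Appstats' actual DSL):

# logins last month on production where user=bob group by day
# exceptions between Mar, 2010 and Mar, 2011 on staging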
By instrumenting an application with appstats
logging, you enable very powerful and efficient
queries through a simple DSL. In addition,
developers can write third-party plug-ins that
augment the "logging" statistics with other data
sources, providing a uniform API between the raw
data and any application reporting functionality.
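As a rough sketch, instrumentation might look like
the following (the Appstats::Logger.entry call and
its option names are our assumption of the API's
shape; consult the Appstats documentation for the
actual interface):

require 'appstats'  # assumes the appstats gem is installed

# Hypothetical application method instrumented with
# appstats logging; the logging call is illustrative.
def search_addresses(address, user)
  results = perform_search(address)  # existing application code
  Appstats::Logger.entry('address_search',
                         :user    => user,
                         :address => address,
                         :results => results.size)
  results
end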
The approach used to test Appstats was the same
as that used for Umple. Below, we discuss the
unique characteristics of testing the statistics
collection mechanism of Appstats. Just like any
other type of test, you need to emulate the situation
you are testing, and then verify that the right things
have occurred. For Appstats statistics, there are a
few items that require configuration; otherwise it is
no different from testing other aspects of the system.
The examples below are written in Ruby using
the RSpec testing framework.
1) Reset the logged data and simulate the current time.
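A minimal sketch of this setup, assuming the
Timecop gem to simulate the current time (the reset
call and the entry-count check are illustrative
placeholders; Appstats' actual reset mechanism and
model names may differ):

require 'timecop'

describe "appstats statistics collection" do
  before(:each) do
    # Clear previously logged entries (method name illustrative)
    Appstats::Logger.reset
    # Pin "now" so relative timeframes like "today" are deterministic
    Timecop.freeze(Time.local(2012, 3, 1, 12, 0, 0))
  end

  after(:each) do
    Timecop.return  # restore the real clock
  end

  it "starts with no logged entries" do
    expect(Appstats::Entry.count).to eq(0)  # model name illustrative
  end
end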