homogeneous MC is much simpler than heterogeneous. Any discrepancies between the expected and actual output models mentioned here are not necessarily a reflection of the transformation tools, but of the transformations that were submitted to the transformation tool contest.
4.1.1 EMF Compare
In order to use EMF Compare for MTT, we configure it to ignore unique identifiers. In Figure 1, we show the results of comparing the expected output with the output models produced by two M2M tools, Flock (Rose et al., 2010a) and MOLA (Kalnins et al., 2005), on the left and right of the figure, respectively. The output model from Flock is identical to our expected output; from a testing perspective, this means that EMF Compare is able to show when a test case is executed as expected. For the MOLA transformation, a symmetric list of changes appears under many of the model elements: for every control flow or object flow reference reported as missing relative to its corresponding element, a reference addition of the same type appears. These differences are likely trivial, with both references pointing to the same object. In other words, EMF Compare generates false positives, that is, elements that are identified as different but should not be. In addition, it is very difficult to pinpoint the actual differences.
4.1.2 ECL
While ECL requires more work to define the rules for matching models, it excels at matching corner cases based on domain knowledge and user input (Kolovos, 2009). Thus, if we are able to ascertain the lower-level problems, we can reduce the number of false positives using appropriate ECL rules.
We find the lower-level problem in our example from Figure 1: the elements are not matched because the MOLA transformation sets the name and visibility of these elements to null and public, respectively, rather than leaving them unset, as in our expected output. The issue is not that the MOLA transformation is wrong in doing this; it is that our comparison method should interpret the two as equal. We specify this in ECL, as demonstrated by the rule definitions in Figure 2. The top rule block accounts for the ObjectFlow false positives, while the bottom rule block accounts for the ControlFlow false positives. Implementing these rules removes 34 of the 95 differences listed in our transformation test. Many of the remaining false positives can be rectified with rule definitions analogous to those in Figure 2.
Figure 2: ECL rules to remove false positives.
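Since the rule text itself is not reproduced here, the following is a hypothetical sketch of the kind of ECL rule described above, not the paper's actual Figure 2 rules; the metamodel aliases Expected and Actual and the guard expressions are illustrative assumptions:

```
// Hypothetical sketch (not the actual Figure 2 rules): matches ObjectFlow
// elements whose name and visibility differ only because one model leaves
// them unset while the other uses defaults (null name, public visibility).
rule MatchObjectFlow
    match l : Expected!ObjectFlow
    with  r : Actual!ObjectFlow {

    compare : (l.name = r.name or l.name.isUndefined() or r.name.isUndefined())
          and (l.visibility = r.visibility
               or l.visibility.isUndefined() or r.visibility.isUndefined())
}
```

An analogous rule over ControlFlow would account for the second block of false positives described above.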
4.2 Heterogeneous Comparison
An interesting application of MC in MTT that has yet to be investigated is comparing the metamodels of a heterogeneous transformation and using the result to guide testing input, that is, allowing test generation from metamodel MC. This differs from existing approaches (Sen et al., 2009). We provide a brief illustration using EMF Compare in Figure 3, which shows MC of the different metamodels from the provided case study, with the evolved model on the left. This list of differences might be a good starting place for test-case generation. For example, to test a model transformation with respect to the StateMachine element in isolation, we could write assertions that ensure those 17 or so changes are represented accordingly.
It is clear that EMF Compare is not well suited for heterogeneous comparisons. Its matching algorithm produces the same, relatively unhelpful information in that it fails to match elements near the top of the model hierarchy that, semantically, we know should match. Only straightforward matches are discovered, such as elements with the same or similar names. For example, StateMachine, State, and PseudoState are present in both metamodels, but it is difficult to identify the differences among their children, and other should-be matches at the same level are missed.
Thus, we are left with ECL or, possibly, SmoVer, if it were extended appropriately. We would write rules, like the ones in Figure 2, that match UML 1.4 components to their corresponding UML 2.2 components. We can indicate matching metamodel elements at all levels and will, consequently, be left with more meaningful comparisons. While this is somewhat equivalent to writing the actual transformation itself, it is done from a comparison-oriented, declarative perspective, allowing for an extra level of verification.
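As an illustration of what such cross-version rules might look like, here is a hypothetical ECL sketch. It assumes both metamodels are loaded as Ecore models under the aliases UML14 and UML22; the StateVertex-to-Vertex renaming is our own illustrative example of a UML 1.x/2.x correspondence, not a rule taken from the paper:

```
// Hypothetical sketch: matches metaclasses across the two metamodel
// versions, treating a known renaming (UML 1.4's StateVertex became
// Vertex in UML 2) as a corresponding pair of elements.
rule MatchMetaclass
    match old : UML14!EClass
    with  new : UML22!EClass {

    compare : old.name = new.name
          or (old.name = "StateVertex" and new.name = "Vertex")
}
```

Rules of this form make the intended correspondences explicit, so the comparison report lists genuine metamodel differences rather than missed matches.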
Application of Model Comparison Techniques to Model Transformation Testing