The QVT transformer maps platform-agnostic behaviour from the testing model onto specific elements of the target platform. This second transformation also retrieves test data and context information and generates all possible combinations for testing; each combination results in a single executable test in GP. When the transformation is complete, the resulting tests can be persisted in the database. Finally, once the tests have been executed in GP through its API, the results are integrated back into the source testing model.
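As an illustration of the combination step, the sketch below expands the data values and context configurations attached to a step into one executable test per pairing, which is the effect described above. The Java types and names (ExecutableTest, expand, the sample values) are hypothetical and only stand in for the actual transformation output; they are not part of the implementation described in this paper.

```java
import java.util.ArrayList;
import java.util.List;

public class CombinationGenerator {

    // One executable test for the target platform: a step bound to
    // one data value and one context configuration.
    public record ExecutableTest(String step, String dataValue, String context) {}

    // Cartesian product of data values and contexts; each pair yields one test.
    public static List<ExecutableTest> expand(String step,
                                              List<String> dataValues,
                                              List<String> contexts) {
        List<ExecutableTest> tests = new ArrayList<>();
        for (String value : dataValues) {
            for (String context : contexts) {
                tests.add(new ExecutableTest(step, value, context));
            }
        }
        return tests;
    }

    public static void main(String[] args) {
        // Example: a login step with two data values and two context
        // configurations produces four executable tests.
        List<ExecutableTest> tests = expand("login",
                List.of("validUser", "invalidUser"),
                List.of("desktopBrowser", "mobileBrowser"));
        tests.forEach(System.out::println);
    }
}
```

In the actual architecture this expansion is performed by the transformation rules rather than by hand-written code, but the enumeration of data/context pairs follows the same principle.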
6 CONCLUSIONS
An architecture has been proposed for testing following a model-based approach. The developed modelling language allows the design of test plans that are agnostic of the final target platform. It represents architectural and behavioural aspects using an integrated notation, and it also provides notation for test data and context information. Models designed with this language become a generic repository that can be used in any testing environment. Transformation engines equipped with the appropriate transformation rules are responsible for generating the right test cases for each target platform. The language supports the reuse of concepts, reducing design time. Transformation engines are also aware of these reuse possibilities, since they are written by experts in both the target testing platform and the modelling language. This approach allows the testing phase to start early, as soon as the development process begins with the requirements specification. The test plan can then be enhanced iteratively as more information is obtained from the development process, until it captures all the established needs. The graphical editor and the transformation engines hide the complexity of testing languages and platforms from non-expert testers, making it possible for more people to include testing in their working routine.
In the future, the behavioural options should be enriched, since only sequential behaviour is considered at present. Results from test executions will be retrieved and included in the models stored in the repository. In the longer term, support for dealing with changes in the SUT could also be considered, as well as increasing the number of available transformation engines.
ACKNOWLEDGEMENTS
The work for this paper was partially supported by
funding from ISBAN and PRODUBAN, under the
Center for Open Middleware initiative.