Tests (PUT) (Tillmann and Schulte, 2006) is another
project whose focus is on coverage. It is very similar
to the Agitator tool, since it is also based on
symbolic execution techniques and constraint
solving to achieve high coverage.
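To illustrate the idea, the listing below is a minimal, hypothetical sketch of a parameterized unit test in Java; the abs method and the property being checked are our own illustration, not code from the PUT project. A tool based on symbolic execution and constraint solving, such as the one described by Tillmann and Schulte (2006), would automatically derive concrete arguments that exercise every branch of the method under test.

    public class AbsParameterizedTest {

        // Method under test (hypothetical example).
        static int abs(int x) {
            return x < 0 ? -x : x;
        }

        // Parameterized unit test: it states a property that must hold for
        // every generated input, instead of hard-coding concrete values.
        public static void absIsNonNegative(int x) {
            if (x == Integer.MIN_VALUE) {
                return; // -Integer.MIN_VALUE overflows, so this input is excluded
            }
            int result = abs(x);
            assert result >= 0;
            assert result == x || result == -x;
        }

        // A symbolic execution engine would derive inputs such as 5, -3 and
        // Integer.MIN_VALUE to cover both branches; here we simply invoke the
        // test with a few representative values (run with "java -ea").
        public static void main(String[] args) {
            absIsNonNegative(5);
            absIsNonNegative(-3);
            absIsNonNegative(0);
            System.out.println("All parameterized checks passed.");
        }
    }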
As a general conclusion, we can assert that the
state of the art is very poor in research that tries to
establish a complete automated test environment. In
fact, the very definition of a complete automated test
environment is an open question. A possible reason
for this scenario is the fragmentation of software
testing researchers into several disjoint communities
(Bertolino, 2007), each with its own isolated goals
and directions. Thus, investigations into integration
architectures, which could combine several isolated
automated test practices, may accelerate the
definition of such “utopian” environments.
7 CONCLUSION
This paper has discussed our experience in adapting
and using a test management solution based on the
open source Testlink tool. Our focus was on
extending this tool with capabilities for automation,
intelligent control and the use of statistical metrics.
To that end, we specified a modular test architecture
and performed some experiments using a subset of
this architecture. The main simplifications were: we
did not use the TC generation and mapping modules;
the planning module only manages priority and
temporal constraints; historical statistical metrics are
only used to identify tests with a high probability of
failure; the result module uses Testlink's own
features; and the execution module was not fully
configured, so that several situations are not covered
by the knowledge base. Such situations are mainly
related to failure recovery procedures, and they are
the principal targets for future research.
ACKNOWLEDGEMENTS
This work was supported by the National Institute of
Science and Technology for Software Engineering
(INES – www.ines.org.br), funded by CNPq, grant
573964/2008-4.
REFERENCES
Aljahdali, S., Hussain, S., Hundewale, N., Poyil, A. 2012.
Test Management and Control, Proceedings of the 3rd
IEEE International Conference on Software Engineering
and Service Science, pp. 429-432, doi:
10.1109/ICSESS.2012.6269496.
Bertolino, A. 2007. Software Testing Research:
Achievements, Challenges, Dreams, Future of
Software Engineering, pp. 85-103.
Boshernitsan, M., Doong, R. and Savoia, A. 2006. From
Daikon to Agitator: lessons and challenges in building
a commercial tool for developer testing. In Proc.
ACM/SIGSOFT International Symposium on Software
Testing and Analysis, pp. 169–180.
Chin, L., Worth, D., Greenough, C. 2007. A Survey of
Software Testing Tools for Computational Science,
RAL Technical Reports, RAL-TR-2007-010.
Filho, C., Ramalho, G. 2000. JEOPS - The Java
Embedded Object Production System, Lecture Notes
in Computer Science, Vol. 1952, pp. 53-62, Springer-
Verlag, London, UK.
Frantzen, L., Tretmans, J. and Willemse, T. 2006. A
symbolic framework for model-based testing. In
Lecture Notes in Computer Science (LNCS) 4262, pp.
40–54. Springer-Verlag.
Ghallab, M., Nau, D., Traverso, P. 2004. Automated
Planning: Theory and Practice, Morgan Kaufmann
Publishers.
Lino, N., Siebra, C., Silva, F., Santos, A. 2008. An
Autonomic Computing Architecture for Network Tests
of Mobile Devices, Proceedings of the 7th
International Information and Telecommunication
Technologies Symposium, Foz do Iguaçu, Brazil.
Polo, M., Reales, P., Piattini, M., Ebert, C. 2013. Test
Automation, IEEE Software, 30(1):84-89.
Prasanna, M., Sivanandam, S., Venkatesan, R.,
Sundarrajan, R. 2005. A Survey on Automatic Test
Case Generation, Academic Open Internet Journal, 15.
Saff, D. and Ernst, M. 2004. An experimental evaluation
of continuous testing during development. In Proc.
ACM/SIGSOFT International Symposium on Software
Testing and Analysis, pp. 76-85.
Schreiber, G., Akkermans, H., Anjewierden, A., Hoog, R.,
Shadbolt, N., Velde, W., Wielinga, B. 1999.
Knowledge Engineering and Management: The
CommonKADS Methodology. The MIT Press.
Tate, A. 2003. <I-N-C-A>: An Ontology for Mixed-
Initiative Synthesis Tasks. Proceedings of the IJCAI
Workshop on Mixed-Initiative Intelligent Systems,
Acapulco, Mexico.
Tillmann, N. and Schulte, W. 2006. Unit tests reloaded:
Parameterized unit testing with symbolic execution.
IEEE Software, 23(4):38–47.
Wang, H. 2008. A Review of Six Sigma Approach:
Methodology, Implementation and Future Research,
4th International Conference on Wireless
Communications, Networking and Mobile Computing, pp. 1-4.
Wielinga, B., Schreiber, A. and Breuker, J. 1992. KADS:
a modelling approach to knowledge engineering,
Knowledge Acquisition Journal, 4(1): 5-53.
Yamaura, T. 1998. How to design practical test cases,
IEEE Software, 15(6):30-36.