achieving efficient test automation, both tests had to
be integrated into the new workflow, using the same
test suite for both.
The remainder of this paper is organized in the
following manner. First, we review related work and
present some background material in order to make
this paper self-contained. Then we explain both the
as-is and the proposed test workflows. Based on that,
we elaborate on the tool chain, including two newly
developed tools. Finally, we evaluate our approach
and conclude.
2 RELATED WORK
There is wide agreement that complete automation reduces
testing cost, especially for regression testing.
For example, 64% of the respondents to a literature
review and practitioner survey agreed with this statement
(Rafi et al., 2012). In particular, there
is some empirical evidence from a case study report in
the context of agile development (Collins and de Lu-
cena, 2012). Also within the automotive industry,
test automation is reported to be widely applied, see,
e.g., the results from a questionnaire survey (Altinger
et al., 2014).
Unfortunately, the supporting tools on the market
offer a poor fit for the needs of automated testing:
in the survey by (Rafi et al., 2012), 45% of the
respondents agreed with this statement. Note that this
survey addressed automated testing of software in
general and did not cover testing of embedded software.
There exist several books, such as (Broekman and
Notenboom, 2003) and (Grünfelder, 2013), that
specifically deal with testing of embedded systems.
Owing to the wide variety of embedded systems, these
books focus on test techniques in general: they often
present the challenges of applying these techniques
but cannot offer general solutions, because tools and
test environments usually have to be tailored to the
specific embedded system under test.
Conrad (Conrad, 2009; Conrad, 2012) deals with
verification and validation in the context of IEC
61508 and ISO 26262, respectively. This work proposes
workflows that use the same test cases on both
the simulation model (which is also used to generate
the implementation code) and the object code (possi-
bly running on the real hardware). We build on these
workflows and adapt them to the needs of the given
test environment. In addition, we describe obstacles
when putting this workflow into practice and provide
tool support to resolve them.
In the long run, test cases should be generated
automatically. However, according to (Altinger et al.,
2014), test case generation is not yet widely used in
the automotive industry. Still, (Kamma and Maruthi,
2014) propose a technique that addresses the
requirement of safety standards such as ISO 26262
that unit testing check all functional requirements
and achieve 100% coverage of auto-generated code.
3 BACKGROUND
In order to make this paper self-contained, let us
present some essential background information. First,
we sketch how both code generation and simulation
are possible with a given commercial tool. Then we
explain the overall test environment.
3.1 Code Generation and Simulation
Embedded software development is supported
by the tool ASCET (Advanced Simulation and Control
Engineering Tool) (ETAS, 2016). This tool is
dedicated to model-based development of automotive
software. It can directly generate C code for the em-
bedded platform hardware from a given model. Al-
ternatively, it can simulate the model on the host plat-
form. In fact, ASCET also generates C code for such a
simulation. It is important to note that this C code
differs from the C code for the embedded platform,
e.g., because of stubbing. Figure 1 illustrates
both paths in terms of data flows. It also shows that
inputs in addition to the ASCET model are involved
as explained below.
3.2 Test Environment including the
Embedded Platform Hardware
Figure 2 illustrates the structure of the open-loop test
environment including the given embedded platform
hardware, and the data flows involved. This test en-
vironment includes a PC for control, an ECU as the
embedded platform, and a few other hardware com-
ponents.
TPT (Time Partition Testing) (PikeTec, 2016) is a
tool for testing embedded systems. It supports several
tasks in the course of testing, such as model-based
test-case generation as well as administration, exe-
cution, evaluation and documentation of tests. TPT
controls the tools INCA and LABCAR Software. All
these tools run on the PC for control of the open-loop
test environment.
INCA (Integrated Calibration and Acquisition
System) (ETAS3, 2016) is a software tool for the
ICSOFT 2017 - 12th International Conference on Software Technologies