domly generated test suite and the generated complete test suite. The use of our equivalence classes of the input parameters reduces the number of redundant test cases in the randomly generated test suite by 87.6%.
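To make the redundancy notion concrete, the following sketch shows one way such a reduction can be computed; it is our own illustration, and the parameter names and class boundaries are hypothetical placeholders rather than the actual partitions defined by our approach:

import bisect

# Hypothetical equivalence-class boundaries for two input parameters of a
# test case; the real partitions are those defined by our partitioning approach.
BOUNDARIES = {
    "sub_pixel_x": [0.25, 0.5, 0.75],   # assumed class borders (placeholder)
    "signal_level": [1e3, 1e4, 1e5],    # assumed flux classes (placeholder)
}

def class_vector(test_case):
    # Map a test case to the tuple of equivalence-class indices it covers.
    return tuple(bisect.bisect(BOUNDARIES[name], value)
                 for name, value in sorted(test_case.items()))

def remove_redundant(test_suite):
    # Keep only the first test case for every combination of classes;
    # later test cases hitting the same combination are considered redundant.
    seen, reduced = set(), []
    for tc in test_suite:
        key = class_vector(tc)
        if key not in seen:
            seen.add(key)
            reduced.append(tc)
    return reduced

suite = [{"sub_pixel_x": 0.1, "signal_level": 5e3},
         {"sub_pixel_x": 0.2, "signal_level": 7e3},   # same classes -> redundant
         {"sub_pixel_x": 0.9, "signal_level": 2e5}]
assert len(remove_redundant(suite)) == 2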
During the experiments, we have successively injected 9 errors into the FGS algorithm code to investigate the error detection capability of both test suites.
We have used two different test criteria: First, a test case detects an error if the distance between the calculated centroid position and a given position is larger than a predefined value. Second, a test case detects an error if the distance between the erroneously calculated position and an assumed error-free calculated position exceeds a specified value.
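Both criteria can be written as simple distance-based test oracles. The following sketch is our own illustrative Python; the function names, the Euclidean metric, and the detection-rate helper are assumptions made for presentation and are not taken from the FGS test bench:

import math

def distance(p, q):
    # Euclidean distance between two centroid positions (x, y) in pixels.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def detects_error_abs(calculated, given, tolerance):
    # First criterion: the calculated centroid deviates from a given
    # position by more than a predefined value.
    return distance(calculated, given) > tolerance

def detects_error_rel(erroneous, error_free, tolerance):
    # Second criterion: the erroneously calculated position deviates from an
    # assumed error-free calculated position by more than a specified value.
    return distance(erroneous, error_free) > tolerance

def detection_rate(position_pairs, tolerance):
    # Percentage of test cases that detect an error under the second
    # criterion, given (erroneous, error-free) position pairs per test case.
    detected = sum(detects_error_rel(err, ok, tolerance)
                   for err, ok in position_pairs)
    return 100.0 * detected / len(position_pairs)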
We have observed that different test criteria lead to different test results. For the first test criterion, the complete test suite detects 3 of the injected errors, whereas the randomly generated test suite detects only 1. The error detection capability of the complete test suite is thus about 3 times as high as that of the randomly generated test suite.
However, neither test suite detects all injected errors. In another experiment, we have used the second test criterion. In this case, not all test cases in the test suites detect all errors. For 3 of the injected errors and one unintended error, the percentage of error-detecting test cases in the complete test suite is again about 3 times as high as in the randomly generated test suite. For the remaining 6 injected errors, the percentage of error-detecting test cases is about 99% for both test suites.
The experiments showed that systematic testing using our proposed partitioning approach increases the error detection capability of a given test suite. This makes the partitioning approach efficient and effective. In addition, it facilitates the automated generation, execution, and evaluation of test cases.
So far, we have injected errors into the application code. In space, however, many missions suffer from cosmic radiation that flips bits in binary code or causes hot pixels in input images. In future work, we plan to investigate the efficiency of our approach by injecting errors into the input data or into the binary code of the application (see the sketch below). Finally, we have evaluated our approach with a single application. Later on, we plan to investigate the flexibility of our approach for other applications, for example, blob feature extraction in the robotics domain (Bruce et al., 2000; Merino et al., 2006).
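Returning to the planned fault injection into input data and binary code, a minimal sketch could look as follows; it assumes the input images are available as NumPy arrays and uses an illustrative 32-bit word for the binary case, and none of this code is part of the current test bench:

import numpy as np

rng = np.random.default_rng(seed=42)

def inject_hot_pixel(image, value=None):
    # Saturate one randomly chosen pixel to mimic a radiation-induced hot pixel.
    corrupted = image.copy()
    y = int(rng.integers(image.shape[0]))
    x = int(rng.integers(image.shape[1]))
    corrupted[y, x] = np.iinfo(image.dtype).max if value is None else value
    return corrupted

def flip_random_bit(word, width=32):
    # Flip one randomly chosen bit of an integer word to mimic a single-event upset.
    return word ^ (1 << int(rng.integers(width)))

# Example: a synthetic 32x32 star image with 12-bit dynamic range.
image = rng.integers(0, 4096, size=(32, 32), dtype=np.uint16)
corrupted = inject_hot_pixel(image)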
REFERENCES
Bhat, A. and Quadri, S. (2015). Equivalence class par-
titioning and boundary value analysis-a review. In
Intl. Conf. on Computing for Sustainable Global De-
velopment (INDIACom), pages 1557–1562. IEEE.
Bringmann, E. and Krämer, A. (2006). Systematic testing
of the continuous behavior of automotive systems. In
International Workshop on Software Engineering for
Automotive Systems, pages 13–20. ACM.
Bruce, J., Balch, T., and Veloso, M. (2000). Fast and
inexpensive color image segmentation for interactive
robots. In IEEE/RSJ Intl. Conf. on Intelligent Robots
and Systems, volume 3, pages 2061–2066. IEEE.
DLR (2017). Grünes Licht für europäisches Weltraumteleskop PLATO. http://www.dlr.de/dlr/desktopdefault.aspx/tabid-10081/151 read-22858/#/gallery/27241.
ECSS Executive Secretariat (2008). Space engineering.
SpaceWire – Links, nodes, routers and networks.
ESA (2012). ESA’s ’Cosmic Vision’. http://www.esa.int/
Our Activities/Space Science/ESA s Cosmic Vision.
Grießbach, D. (2018). Fine Guidance System Performance
Report. DLR, Berlin.
Huang, W.-l. and Peleska, J. (2016). Complete model-based
equivalence class testing. Intl. Journal on Software
Tools for Technology Transfer, 18(3):265–283.
Kaner, C. (2004). Teaching domain testing: A status re-
port. In Conference on Software Engineering Educa-
tion and Training, pages 112–117. IEEE.
Marcos-Arenal, P., Zima, W., De Ridder, J., Aerts, C., Huy-
gen, R., Samadi, R., Green, J., Piotto, G., Salmon,
S., Catala, C., et al. (2014). The PLATO Simula-
tor: modelling of high-precision high-cadence space-
based imaging. Astronomy & Astrophysics, 566:A92.
Merino, L., Wiklund, J., Caballero, F., Moe, A., De Dios,
J. R. M., Forssen, P.-E., Nordberg, K., and Ollero, A.
(2006). Vision-based multi-UAV position estimation.
IEEE Robotics & Automation Magazine, 13(3):53–62.
Pender Electronic Design GmbH (2011). GR-XC6S product sheet.
Liggesmeyer, P. (2009). Software-Qualität: Testen, Analysieren und Verifizieren von Software. Spektrum Akademischer Verlag, 2nd edition.
The HDF Group (2018). HDF5. https://portal.hdfgroup.org/display/HDF5/HDF5 (accessed April 05, 2018).
Varshney, S. and Mehrotra, M. (2014). Automated software
test data generation for data flow dependencies using
genetic algorithm. International Journal, 4(2).
Witteck, U. (2018). Automated test generation for satellite on-board image processing. Master's thesis, TU Berlin.