Table 2: Measurement plans used during the suggestion process and the cycles in which they were used. The metrics of each plan are given by the indexes described in Table 1.

Plan   Metrics                                            Cycles
MP1    2, 5, 6, 7, 8                                      1
MP2    4, 5, 6, 12                                        2, 4, 17, 22, 23, 24
MP3    1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15      3, 5, 18
MP4    8, 9, 10, 11                                       6, 30
MP5    7, 8, 9, 10, 11                                    7, 8, 9
MP6    1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 13, 14, 15          10
MP7    1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15  11, 19, 20
MP8    2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15     12, 21
MP9    1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14      13, 14, 15, 16
MP10   3, 4, 5, 6, 8, 9, 10, 11, 12                       25
MP11   1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11                  26, 32
MP12   1, 2, 3, 4, 5, 6, 8, 9, 10, 11                     27
MP13   1, 3, 4, 5, 6, 8, 9, 10, 11, 12                    28
MP14   1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12              29
MP15   1, 2, 3, 4, 5, 6, 7, 8, 9, 10                      31
to have a solid formal basis to improve the software measurement implementation phase, in order to increase the interoperability, scalability, and maintainability of the measurement process.
Furthermore, we designed an automated measurement plan suggestion framework that adds flexibility and expert-independent analysis. The SVM learning technique is combined with the RFE algorithm, which allows the framework to handle a large amount of data.
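As an illustration of this combination, the following minimal sketch shows SVM-based recursive feature elimination using scikit-learn. It is not the paper's implementation: the synthetic data, the choice of a linear kernel, and the number of retained features are assumptions made only for the example.

```python
# Sketch of combining a linear SVM with recursive feature elimination (RFE).
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic stand-in for classified metric vectors (15 metrics, as in Table 2).
X, y = make_classification(n_samples=200, n_features=15,
                           n_informative=5, random_state=0)

# The linear SVM supplies the feature weights that RFE ranks; RFE then
# iteratively drops the lowest-ranked metric until 5 remain.
svm = SVC(kernel="linear")
selector = RFE(estimator=svm, n_features_to_select=5, step=1)
selector.fit(X, y)

# Indexes (0-based) of the metrics retained for the suggested plan.
selected = [i for i, keep in enumerate(selector.support_) if keep]
print(selected)
```

A non-linear kernel would not expose the per-feature weights that RFE needs, which is why feature-selection setups of this kind typically pair RFE with a linear SVM.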
Our methodology has been implemented and successfully evaluated on a real use case, in which the tool managed a large data set of 16 million unclassified vectors.
As future work, we plan to integrate the formal design model into an industrial modeling tool developed by our partner in the European project MEASURE.
Regarding the suggestion tool, we plan to validate our approach by comparing our results with those of the current processes, and to support larger amounts of data by increasing the number of unclassified instances and by using a training file with more samples and larger vectors. To further improve the suggestions, we plan to add the ability to generate novel combined metrics at runtime, based on the analysis. In addition, we expect to improve the analysis visualization and reporting for better readability.
Finally, we intend to define innovative metrics, such as emotional metrics that measure user emotions, for assessing the quality of video games or the quality of user experience in a VoD service, for example; in other words, to measure the usability of an industrial system.
ENASE 2018 - 13th International Conference on Evaluation of Novel Approaches to Software Engineering