5 RELATED WORKS
Standardization institutes have put considerable effort into defining and
formalizing software quality models such as ISO 9126, which qualifies and
quantifies functional and non-functional properties through software metrics
(Carvallo and Franch, 2006). In the same vein, two other standardization
bodies proposed two commonly used norms, ISO/IEC 25010 (Kitchenham, 2010)
and OMG SMM (Bouwers et al., 2013), to guide the specification of
measurement plans. These two standards have been reviewed by the research
and industrial communities, and are adapted, integrated and applied in many
domains.
In the research literature, several works on software metrics selection for
software quality have been proposed (Gao et al., 2011), including recent
techniques based on learning approaches. Most of them are dedicated to
software defect prediction (Shepperd et al., 2014; MacDonald, 2018;
Laradji et al., 2015), metrics selection (Bardsiri and Hashemi, 2017) or
software testing (Kim et al., 2017). However, even though these techniques
have brought considerable progress in improving software quality, they
still have limitations. The measurement plan is still fixed manually by the
project manager or the experts in charge of its definition. Furthermore,
the implementation of the measures depends on the developer, which reduces
the scalability, maintainability and interoperability of the measurement
process.
While a recent study highlights the lack of learning techniques applied to
software measurement analysis (Hentschel et al., 2016), some works in the
literature do use supervised learning algorithms, especially for software
defect prediction (Laradji et al., 2015; Shepperd et al., 2014) or for
prioritizing software metrics (Shin et al., 2011). Indeed, there are many
software metrics, and current measurement processes execute all of them
continuously. The latter work shows that the metrics can be prioritized,
thus reducing the number of metrics to be executed.
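As an illustration of this prioritization idea, the following stdlib-only Python sketch ranks candidate metrics by the absolute Pearson correlation of their historical values with a defect label and keeps only the top-ranked ones. This is a deliberate simplification, not the method of Shin et al. (2011); the function names, the correlation criterion and the example metrics are our own assumptions.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def prioritize(metric_values, labels, top=2):
    """Rank metrics by |correlation| with the defect label, keep the top ones.

    metric_values: dict mapping a metric name to its historical values,
    labels: the defect indicator observed for the same periods.
    """
    ranked = sorted(metric_values,
                    key=lambda m: -abs(pearson(metric_values[m], labels)))
    return ranked[:top]
```

A measurement process could then execute only the metrics returned by `prioritize`, re-ranking periodically as new historical data arrives.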
There are also works that propose to use unsupervised learning techniques
to estimate software quality (Zhong et al., 2004b) as an "expert-based"
approach, or to rely on clustering techniques to analyze software quality
(Zhong et al., 2004a). Other works combine supervised and unsupervised
learning techniques to predict the maintainability of object-oriented
software (Jin and Liu, 2010). However, all of these works focus on the
analysis or prediction of a single software property. The aim of our
approach is to reduce the dependency on experts for evaluating the whole
software engineering process, and to continuously suggest flexible
measurement plans according to the software needs.
6 CONCLUSION & PERSPECTIVES
In this paper, we proposed to improve our previous work by reducing the
expert dependency in the management of the analysis process. To that end,
we use the unsupervised learning algorithm X-MEANS to take the place of the
expert and to automatically generate an analysis model by learning from a
historical database. The objective is to reduce both the management cost
and the time cost.
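To make the idea concrete, the following stdlib-only Python sketch approximates this clustering step: it runs k-means for several cluster counts and keeps the model with the best BIC score, the same criterion X-MEANS uses to accept centroid splits. This is a simplified stand-in, not the actual implementation described above: true X-MEANS splits centroids recursively rather than scanning k, and all names here are our own.

```python
import math
import random

def _dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def _mean(points):
    n = len(points)
    return tuple(sum(xs) / n for xs in zip(*points))

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's k-means over a list of equal-length tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: _dist2(p, centers[j]))].append(p)
        new = [_mean(c) if c else centers[j] for j, c in enumerate(clusters)]
        if new == centers:  # converged
            break
        centers = new
    return centers, clusters

def bic(centers, clusters, dim):
    """BIC under the identical spherical Gaussian model (Pelleg and Moore, 2000)."""
    n = sum(len(c) for c in clusters)
    k = len(centers)
    rss = sum(_dist2(p, centers[j]) for j, c in enumerate(clusters) for p in c)
    var = rss / max(n - k, 1)
    if var <= 0:
        return float("inf")  # degenerate perfect fit
    ll = sum(len(c) * math.log(len(c) / n) for c in clusters if c)
    ll -= n * dim / 2 * math.log(2 * math.pi * var)
    ll -= (n - k) / 2
    params = k * (dim + 1)  # k centers of `dim` coords + mixing weights/variance
    return ll - params / 2 * math.log(n)

def select_k(points, k_max=4):
    """Pick the cluster count maximizing BIC (stand-in for X-MEANS splitting)."""
    dim = len(points[0])
    scored = []
    for k in range(1, k_max + 1):
        centers, clusters = kmeans(points, k)
        scored.append((bic(centers, clusters, dim), k, centers))
    return max(scored)
```

Each row of the historical database (one vector of metric values per measurement period) would play the role of a point, and the resulting clusters form the analysis model without an expert fixing the number of quality classes in advance.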
Implemented and experimentally evaluated, this approach shows that it is
possible to generate a reliable model at a low time cost, and also to
verify the validity of manual models.
The promising results demonstrate the benefit of using learning techniques
in the software measurement area. As perspectives, it would be interesting
to analyze the differences between automated and manual models, and to
further increase independence from the expert by automatically generating
the correlations between clusters and metrics subsets. A statistical method
on the weights of features could also be envisaged in future work.
REFERENCES
Bardsiri, A. K. and Hashemi, S. M. (2017). Machine learn-
ing methods with feature selection approach to esti-
mate software services development effort. Interna-
tional Journal of Services Sciences, 6(1):26–37.
Bouwers, E., van Deursen, A., and Visser, J. (2013). Evalu-
ating usefulness of software metrics: an industrial ex-
perience report. In Notkin, D., Cheng, B. H. C., and
Pohl, K., editors, 35th International Conference on
Software Engineering, ICSE ’13, San Francisco, CA,
USA, May 18-26, 2013, pages 921–930. IEEE Com-
puter Society.
Carvallo, J. P. and Franch, X. (2006). Extending the ISO/IEC
9126-1 quality model with non-technical factors for
COTS components selection. In Proceedings of the 2006
International Workshop on Software Quality, WoSQ
'06, pages 9–14, New York, NY, USA. ACM.
Dahab, S., Porras, J. J. H., and Maag, S. (2018). A novel for-
mal approach to automatically suggest metrics in soft-
ware measurement plans. In Proceedings of the 13th
International Conference on Evaluation of Novel Ap-
proaches to Software Engineering, ENASE 2018, Fun-
chal, Madeira, Portugal, March 23-24, 2018., pages
283–290.
ENASE 2019 - 14th International Conference on Evaluation of Novel Approaches to Software Engineering