7 CONCLUSIONS AND FUTURE WORK
In this paper, we presented CQE, an approach that supports efficient code quality estimation by enhancing the current FB tool. CQE is a three-step approach that makes use of FB, developer surveys, and quality-metric calculation. During the experiment, we used 80 JARs to train our knowledge base after having 8 experienced developers assess the FB reports. The survey template contains 3 more severity categories than FB provides (2 priorities: M, H) in its bug reports. By evaluating 10 testing JARs, we observed that these extra severity categories cause the code quality metric to vary slightly compared to FB. The CQE approach provides an automatic and efficient way to estimate and improve code quality with the help of the error classification statistics it produces. Furthermore, by maintaining a knowledge base built from the initial surveys, subsequent code quality estimation processes can be fully automated. This automation supports project managers in efficient decision making.
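The metric-calculation step can be sketched as a weighted error density over severity-classified FB warnings. The sketch below is illustrative only: the five severity names, their weights, and the per-KLOC normalization are assumptions for the example, not the calibrated CQE values derived from the developer survey.

```java
// Illustrative sketch (not the paper's exact formula): a weighted error
// density computed from severity-classified FB warning counts.
import java.util.LinkedHashMap;
import java.util.Map;

public class WeightedErrorDensity {

    // Assumed severity categories: FB's two priorities (MEDIUM, HIGH) plus
    // three extra survey-derived categories. All weights are hypothetical.
    static final Map<String, Double> WEIGHTS = new LinkedHashMap<>();
    static {
        WEIGHTS.put("TRIVIAL", 1.0);
        WEIGHTS.put("MEDIUM", 2.0);
        WEIGHTS.put("HIGH", 3.0);
        WEIGHTS.put("CRITICAL", 4.0);
        WEIGHTS.put("BLOCKER", 5.0);
    }

    /** Weighted errors per thousand lines of code (KLOC). */
    static double density(Map<String, Integer> counts, double kloc) {
        double weighted = 0.0;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            weighted += WEIGHTS.getOrDefault(e.getKey(), 0.0) * e.getValue();
        }
        return weighted / kloc;
    }

    public static void main(String[] args) {
        // Counts taken from a hypothetical FB report on one JAR of 5 KLOC.
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("MEDIUM", 10);   // 10 * 2.0 = 20
        counts.put("HIGH", 4);      //  4 * 3.0 = 12
        counts.put("CRITICAL", 1);  //  1 * 4.0 =  4
        System.out.println(density(counts, 5.0)); // (20 + 12 + 4) / 5 = 7.2
    }
}
```

Once the survey-derived weights are fixed in the knowledge base, a computation of this shape can be rerun on every new FB report without further developer input, which is what makes the subsequent estimations fully automatic.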
In future work, we will use other quality metrics, such as Weighted Code Errors per Function point (WCEF), to identify quality factors beyond weighted error density. We would also like to focus on severe errors specific to particular application types (such as web-based or database applications), and to integrate our approach into FB to give a clear picture of error severity and of the code quality estimated by the CQE metric.
In summary, our approach is useful for assessing quality at different levels:
• Global Quality: Expert knowledge could be gathered from a wide range of developers. This would create a very large sample, as large variation in opinions would be expected.
• Corporate Quality: Use experts across a company and apply their knowledge to estimate code quality according to corporate norms. This could be useful for assessing in-house teams or the quality of software that comes from outsourced third parties. Third parties could likewise use the tool to assess a client's code quality.
ACKNOWLEDGEMENTS
This work was supported, in part, by Science
Foundation Ireland grant 10/CE/I1855 to Lero -
the Irish Software Engineering Research Centre
(www.lero.ie).
CQE - An Approach to Automatically Estimate the Code Quality using an Objective Metric From an Empirical Study