
6 CONCLUSION
In this paper, we reported an experiment investigating
the influence of review time on ad hoc model inspec-
tions. In the experiment, we analyzed the influence
of review time on effectiveness, confidence, and effi-
ciency. The experiment was conducted with 200 par-
ticipants, who performed a total of 520 ad hoc model
inspections. Most importantly, the analysis of the data
sets showed that review time does not have a signifi-
cant influence on the effectiveness of ad hoc model
inspections. For confidence, we found a small effect;
for efficiency, a large effect.
Analysis of variance (ANOVA) showed that re-
view time leads to significantly different confidence
and efficiency. Post hoc tests showed that a short
review time of up to ten minutes negatively influ-
ences the confidence reviewers have in the decisions
they make. In contrast to assumptions made in related
work, we found that long review times also have
a negative influence: for review times greater than
thirty minutes, confidence and efficiency are signifi-
cantly lower than for moderate review times.
ENASE 2024 - 19th International Conference on Evaluation of Novel Approaches to Software Engineering