@fault is found → (
    ∥terminate, NOP(context, clean), NOP(other, clean)∥ )
@element and context are not faulty → (
    ∥set to 0(this, clean), set to 0(context, clean), NOP(other, faulty)∥
    change φ_r^Horváth )
@element is not but context is faulty → (
    ∥NOP(this, clean), set to 0(context, faulty), set to 0(other, clean)∥
    change φ_r^Horváth )
@don't know → (
    ∥NOP(this, nothing), NOP(context, nothing), NOP(other, nothing)∥
    change φ_r^Horváth )

Example 5.7: Effects of Horváth et al.'s system.
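To make the reading of Example 5.7 concrete, the following sketch (in Python, not the authors' implementation) shows one way such effect rules could be encoded: each feedback type maps to a set of parallel actions on the selected element ("this"), its context, and all other elements, plus a flag that stands in for the "change φ_r" step. All identifiers (Action, EFFECTS, apply_feedback) and the score-based interpretation of "set to 0" are assumptions made for illustration.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Action:
    operation: str   # "terminate", "set to 0", or "NOP"
    target: str      # "this", "context", or "other"
    knowledge: str   # knowledge attached to the action: "clean", "faulty", or "nothing"

# Effect rules transcribed from Example 5.7; the boolean records whether the
# rule also changes the ranking formula ("change phi_r" in the notation).
# Note: "terminate" takes no target in the notation; "this"/"clean" are fillers here.
EFFECTS: Dict[str, Tuple[List[Action], bool]] = {
    "fault is found": (
        [Action("terminate", "this", "clean"),
         Action("NOP", "context", "clean"),
         Action("NOP", "other", "clean")], False),
    "element and context are not faulty": (
        [Action("set to 0", "this", "clean"),
         Action("set to 0", "context", "clean"),
         Action("NOP", "other", "faulty")], True),
    "element is not but context is faulty": (
        [Action("NOP", "this", "clean"),
         Action("set to 0", "context", "faulty"),
         Action("set to 0", "other", "clean")], True),
    "don't know": (
        [Action("NOP", "this", "nothing"),
         Action("NOP", "context", "nothing"),
         Action("NOP", "other", "nothing")], True),
}

def apply_feedback(feedback: str, scores: Dict[str, float],
                   this: str, context: List[str]) -> bool:
    """Apply the parallel actions of one feedback; return True when the
    fault localization session should terminate (fault found)."""
    actions, change_ranking = EFFECTS[feedback]
    for act in actions:
        if act.target == "this":
            elements = [this]
        elif act.target == "context":
            elements = list(context)
        else:  # "other": every element except the selected one and its context
            elements = [e for e in scores if e != this and e not in context]
        if act.operation == "terminate":
            return True
        if act.operation == "set to 0":
            for e in elements:
                scores[e] = 0.0
        # "NOP" leaves the scores unchanged; only the knowledge label is recorded
    if change_ranking:
        pass  # placeholder for the assumed "change phi_r" step (re-ranking)
    return False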
. . .
{ l_1 → clean, l_2 → faulty, . . . , l_42 → nothing, . . . } → (
    ∥R1(l_1, clean), R2(l_2, faulty), . . . , NOP(l_42, nothing)∥ )
. . .
l_i ∈ L_Gong, labels (i.e., the source code lines)

Example 5.8: Effects of Gong et al.'s system.
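Similarly, a minimal sketch of one possible reading of Example 5.8: the system receives a label (clean, faulty, or nothing) for each source code line l_i ∈ L_Gong and reacts line by line with R1, R2, or NOP. The function name react_to_labels and the dictionary encoding are assumptions for illustration; R1 and R2 are kept abstract because Example 5.8 does not expand them here.

def react_to_labels(labels):
    """Map each labelled line l_i in L_Gong to a (reaction, knowledge) pair."""
    reactions = {}
    for line, label in labels.items():
        if label == "clean":
            reactions[line] = ("R1", "clean")
        elif label == "faulty":
            reactions[line] = ("R2", "faulty")
        else:  # "nothing": the user gave no usable feedback for this line
            reactions[line] = ("NOP", "nothing")
    return reactions

# Input mirroring the labelling shown in Example 5.8:
labels = {"l_1": "clean", "l_2": "faulty", "l_42": "nothing"}
print(react_to_labels(labels))
# -> {'l_1': ('R1', 'clean'), 'l_2': ('R2', 'faulty'), 'l_42': ('NOP', 'nothing')}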
6 CONCLUSION AND FUTURE WORK
In this paper, we presented the first revision of a terminology for feedback systems, together with a notation that can describe the constituents of such systems. The capabilities of the terminology were illustrated with two detailed examples. Each of these sample systems incorporates the developers' knowledge into the fault localization process.
The key benefit of our terminology is that it helps to evaluate various feedback systems by aiding their comparison with each other or with their reference implementations. A considerable amount of time could have been saved during the comparison of the two systems if they had used a common terminology, since the effort of connecting the terms of one system with their counterparts in the other could be eliminated.
Our terminology revealed several other properties and connections, which we plan to enumerate and categorize in follow-up research. First, we will collect more already published feedback systems that address SE-related issues. The identified properties of these systems will be used to improve the versatility of the terminology and the expressive power of the notation. User surveys will then validate the enhanced terminology and notation and further improve their usability. Our long-term goal is to establish a true "lingua franca" in which stakeholders can express the user feedback they expect and the reactions they give.
ACKNOWLEDGMENTS
The research was supported by the Ministry of Innovation and Technology NRDI Office within the framework of the Artificial Intelligence National Laboratory Program (RRF-2.3.1-21-2022-00004).
Project no. TKP2021-NVA-09 has been implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021-NVA funding scheme.
REFERENCES
Gong, L., Lo, D., Jiang, L., and Zhang, H. (2012). Interactive fault localization leveraging simple user feedback. In IEEE International Conference on Software Maintenance, ICSM.
Hao, D., Zhang, L., Zhang, L., Sun, J., and Mei, H. (2009). Vida: Visual interactive debugging. In 2009 IEEE 31st International Conference on Software Engineering, pages 583–586.
Hassenzahl, M. and Tractinsky, N. (2006). User experience - a research agenda. Behaviour and Information Technology, 25(2):91–97.
Horváth, F., Beszédes, Á., Vancsics, B., Balogh, G., Vidács, L., and Gyimóthy, T. (2020). Experiments with interactive fault localization using simulated and real users. In Proceedings of the 36th IEEE International Conference on Software Maintenance and Evolution (ICSME'20), pages 290–300.
ISO. ISO/IEC 14977:1996 - Information technology — Syntactic metalanguage — Extended BNF. https://www.iso.org/standard/26153.html. (Accessed on 11/01/2021).
Korel, B. and Laski, J. (1988). STAD - A system for testing and debugging: User perspective. pages 13–20.
Law, E.-C., Roto, V., Hassenzahl, M., Vermeeren, A., and Kort, J. (2009). Understanding, scoping and defining user experience: A survey approach. pages 719–728.
McIntosh, S., Kamei, Y., Adams, B., and Hassan, A. (2016). An empirical study of the impact of modern code review practices on software quality. Empirical Software Engineering, 21(5):2146–2189.
Thongtanunam, P., Tantithamthavorn, C., Kula, R., Yoshida, N., Iida, H., and Matsumoto, K.-I. (2015). Who should review my code? a file location-based