Authors: Collins Mathews ¹; Kenny Ye ¹; Jake Grozdanovski ¹; Marcus Marinelli ¹; Kai Zhong ¹; Hourieh Khalajzadeh ²; Humphrey Obie ² and John Grundy ²
Affiliations: ¹ Faculty of Information Technology, Monash University, Melbourne, Australia; ² HumaniSE Lab, Monash University, Melbourne, Australia
Keyword(s):
Human-centric Issues, App Reviews, Machine Learning, End-user, Human-centred Design.
Abstract:
In modern software development, there is a growing emphasis on creating and designing around the end-user. This has sparked the widespread adoption of human-centred design and agile development. These concepts intersect during the user feedback stage of agile development, where user requirements are re-evaluated and fed into the next iteration of development. An issue arises when the amount of user feedback far exceeds the team's capacity to extract meaningful data. As a result, many critical concerns and issues may fall through the cracks and go unnoticed, or the team must spend a great deal of time analysing the data, time that could be better spent elsewhere. In this paper, a tool is presented that analyses a large number of user reviews from 24 mobile apps. These reviews are used to train a machine learning (ML) model that automatically estimates the probability that a review raises a human-centric issue, automating and streamlining the analysis of user feedback. Evaluation shows an improved ability to find users' human-centric issues.
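The abstract does not specify the model architecture or features the tool uses, so the following is only an illustrative sketch, assuming a standard probabilistic text classifier (TF-IDF features with logistic regression via scikit-learn) and a tiny hypothetical labelled sample. It shows the general idea of mapping review text to a probability that a human-centric issue is present:

```python
# Illustrative sketch only: the paper does not disclose its actual model.
# TF-IDF + logistic regression is one common baseline for classifying
# app-review text and producing a class probability.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data; the real tool trains on reviews from 24 apps.
reviews = [
    "The font is too small for me to read without glasses",
    "Great app, works perfectly",
    "I can't use this app because the colours are hard to tell apart",
    "Crashes on startup sometimes",
]
labels = [1, 0, 1, 0]  # 1 = review raises a human-centric issue

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)

# predict_proba returns [P(not human-centric), P(human-centric)]
prob = model.predict_proba(["Text is unreadable for older users"])[0][1]
print(f"P(human-centric issue) = {prob:.2f}")
```

In practice, such a classifier would be trained on a much larger labelled review corpus, and its per-review probabilities could be used to rank or filter incoming feedback for the development team.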