• In the case study shown in this paper, only a limited number of developers participated in the project. Visualizing data from projects with large teams might result in overly colorful graphs. We plan to consider larger communities and to enable focusing on sub-communities in our visualization.
• The trade-offs between stability and visual clutter should be investigated more formally. To improve the readability of the word clouds, a short period (i.e., two weeks) was selected in the case study of this paper, since the team presented a high level of activity. OvERVIeW should be able to deal with high levels of team activity.
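One conceivable way to keep clouds readable under varying activity levels (a hypothetical heuristic, not part of OvERVIeW) is to shrink the analysis window until the number of commits it contains drops below a clutter threshold:

```python
def choose_window(commit_ages, max_commits, min_days=7, start_days=56):
    """Halve the analysis window until it holds at most `max_commits`
    commits, so highly active teams get shorter, less cluttered clouds.

    commit_ages: list of commit ages in days (0 = today).
    Returns the chosen window length in days.
    """
    window = start_days
    while window > min_days:
        n = sum(1 for age in commit_ages if age < window)
        if n <= max_commits:
            break
        window //= 2
    return window

# A very active fortnight: 40 commits within the last 14 days
# forces the window down to the one-week minimum.
ages = [d % 14 for d in range(40)]
print(choose_window(ages, max_commits=25))  # 7
```

The threshold and the halving schedule are illustrative; a formal study of the stability/clutter trade-off would calibrate them empirically.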
The word cloud is generated at one point in time. This static approach can be extended by including time information. Time information can be represented, for example, by the time elapsed since the last commit, the time during which a developer did not commit at all, or the time during which a piece of code is handled by a single developer. This would require a multivariate faceted approach, in which it is not sufficient to use a simple transfer function to map only intensity information. Combining the resulting tool with a prediction algorithm (Abrahamsson et al., 2011; Fronza et al., 2011a) would make it possible to visualize, e.g., the evolution of effort distribution in the project.
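As a sketch of such a multivariate mapping (function and facet names are hypothetical, not the actual OvERVIeW pipeline), each developer could carry both an intensity facet (commit count) and a time facet (days since last commit), the latter feeding a second visual channel such as color saturation:

```python
from datetime import datetime

def time_facets(commits, now):
    """Derive per-developer facets from (author, timestamp) pairs.

    Returns, for each author, the commit count (intensity) and the days
    elapsed since their last commit, which a multivariate word cloud
    could map to a second visual channel (e.g. saturation).
    """
    last_seen, counts = {}, {}
    for author, ts in commits:
        counts[author] = counts.get(author, 0) + 1
        if author not in last_seen or ts > last_seen[author]:
            last_seen[author] = ts
    return {
        a: {"intensity": counts[a],
            "days_since_last_commit": (now - last_seen[a]).days}
        for a in counts
    }

commits = [
    ("alice", datetime(2016, 1, 4)),
    ("bob",   datetime(2016, 1, 10)),
    ("alice", datetime(2016, 1, 25)),
]
facets = time_facets(commits, now=datetime(2016, 2, 1))
print(facets["alice"])  # {'intensity': 2, 'days_since_last_commit': 7}
```

A transfer function over a single scalar cannot express both facets at once; each facet would need its own mapping to a distinct visual variable.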
Furthermore, interactive techniques for flexible word cloud navigation and manipulation should be considered. For example, the technique of Liu et al. (2014) supports multifaceted viewing of word clouds.
In this work, we applied a qualitative, task-oriented evaluation to understand whether OvERVIeW shows information effectively, and we received positive feedback; still, the evaluation needs to be extended. In particular, we need to assess whether the output of OvERVIeW is understandable and easy to remember. To this end, we plan to perform an experiment in which developers carry out tasks using different visualizations (including OvERVIeW) and provide feedback about OvERVIeW. Finally, we plan to perform case studies using more OSS projects. In this context, it would be useful to collect feedback from users who are experiencing a scenario such as the one described in Section 3.1.
REFERENCES
Abrahamsson, P., Fronza, I., Moser, R., Vlasenko, J., and
Pedrycz, W. (2011). Predicting development effort
from user stories. In Empirical Software Engineering
and Measurement (ESEM), 2011 International Sym-
posium on, pages 400–403.
Bateman, S., Gutwin, C., and Nacenta, M. (2008). Seeing
things in the clouds: the effect of visual features on
tag cloud selections. In Proceedings of the nineteenth
ACM conference on Hypertext and hypermedia, HT
’08, pages 193–202, New York, NY, USA. ACM.
Caudwell, A. H. (2010). Gource: visualizing software
version control history. In Proceedings of the ACM
international conference companion on Object ori-
ented programming systems languages and applica-
tions companion.
Ciani, A., Minelli, R., Mocci, A., and Lanza, M. (2015).
Urbanit: Visualizing repositories everywhere. In
Software Maintenance and Evolution (ICSME), 2015
IEEE International Conference on, pages 324–326.
Crowston, K., Wei, K., Li, Q., Eseryel, U. Y., and Howi-
son, J. (2005). Coordination of free/libre open source
software development. In Proceedings of the International Conference on Information Systems (ICIS 2005), Las Vegas, pages 181–193.
Cubranic, D. and Booth, K. S. (1999). Coordinating open-
source software development. In Proceedings of the
8th Workshop on Enabling Technologies on Infras-
tructure for Collaborative Enterprises, WETICE ’99,
pages 61–68, Washington, DC, USA. IEEE Computer
Society.
Cui, W., Wu, Y., Liu, S., Wei, F., Zhou, M., and Qu, H.
(2010). Context-preserving, dynamic word cloud vi-
sualization. Computer Graphics and Applications,
IEEE, 30(6):42–53.
D’Ambros, M., Lanza, M., and Gall, H. (2005). Fractal
figures: Visualizing development effort for CVS enti-
ties. In Proc. Int’l Workshop on Visualizing Software
for Understanding (Vissoft), pages 46–51. IEEE Com-
puter Society Press.
Delft, F., Delft, M., and van Deursen Delft, A. (2010). Im-
proving the requirements process by visualizing end-
user documents as tag clouds. In Proc. of Flexitools
2010.
di Bella, E., Fronza, I., Phaphoom, N., Sillitti, A.,
Succi, G., and Vlasenko, J. (2012). Pair program-
ming and software defects - a large, industrial case
study. IEEE Transactions on Software Engineering,
99(PrePrints):1.
Diehl, S. (2007). Software Visualization: Visualizing
the Structure, Behaviour, and Evolution of Software.
Springer-Verlag New York, Inc., Secaucus, NJ, USA.
Feinberg, J. (2010). Wordle. In Steele, J. and Iliinsky,
N., editors, Beautiful Visualization: Looking at Data
through the Eyes of Experts, chapter 3. O’Reilly Me-
dia.
Few, S. (2012). Show me the numbers : designing tables
and graphs to enlighten. Analytics Press.
Few, S. (2013). Information Dashboard Design: Displaying
data for at-a-glance monitoring. Analytics Press.
Fowler, M. (2006). Code ownership. Retrieved Feb. 10, 2016, from http://martinfowler.com/bliki/CodeOwnership.html.
Fronza, I. (2013). Opening statement. Cutter IT Journal,
26(1):3–5.
OvERVIeW: Ownership Visualization Word Cloud