knowledge management field), to the collective dimension (a much less debated issue in the field). To this aim, we will stay close to the idea of collective knowledge proposed by Hecker (Hecker, 2012). He uses "shared knowledge" to denote knowledge, not necessarily held at a conscious level, that is closely related to the mesh of common experiences. These are the experiences that people have within a common cultural background (Collins, 2007) and within knowledge-sharing activities, which are not necessarily all formal in nature (as in corporate education and training, staff communication, and so on) but are also embedded in and constituted by social relations (Davenport and Prusak, 1998; Brown and Duguid, 1991). In particular, we will focus on tacit collective knowledge, i.e. practice-related knowledge that a community of practitioners holds and exhibits in order to coordinate, or just mutually align, their activities without centralized decision-making or explicit mutual communication. We will also focus on how to externalize it, not necessarily as a set of formalized "facts", but in terms of community-gluing narratives and discourses that are exchanged and appropriated within that community.
This paper presents the case of conference ranking as the output of an initiative of collective knowledge exploitation.¹
¹ The reader should note that whether the conference ranking itself (which we could extract from the responses gathered during the study) can be considered the externalization of the tacit collective knowledge of scholars (about which conferences are the best ones), or just an explicit element reflecting this knowledge and potentially triggering discussion and reflection within the community itself for its evolution, is a matter of conceptual preference towards this elusive concept, and a matter of concern that is outside the paper's aims and scope.
By practice-related knowledge we mean something different from, and wider than, either procedural knowledge or know-how. It is what a community of practice (broadly meant) knows, more often than not tacitly, about what its members do, that is, how single practices are articulated, even independently of each other, to form the overall practice connoting the community. Here 'tacitly' means that, a priori, no single member can know how her community, as a whole, performs the above-mentioned set of connoting practices, such as performing surgical procedures in a community of surgeons, or writing academic papers in a community of scholars working on the same topics. Externalizing tacit collective knowledge thus involves a twofold transformation: from the tacit to the explicit dimension, and from the collective to the individual dimension. We draw our practical approach from two main user studies, which we undertook in large and distributed communities of expert practitioners: one
study has already been described both in the medical literature (Randelli et al., 2012) and in the knowledge management literature (Cabitza, 2012), whereas the other study is presented here for the first time. In Section 2 we describe the latter in more detail, in terms of its main motivations, the methods we employed to externalize collective knowledge, and the results obtained. The discussion that follows then proposes some general ideas on the externalization of tacit collective knowledge, also as triggers for further discussion and awareness-raising in communities of practice.
2 THE CASE OF THE HCI RESEARCH COMMUNITY
The case at hand regards a user study that we undertook in April 2015. This study was promoted at a joint national meeting of two organizations of computer science and IT scholars and professors in Italy, namely the GII (Group of Italian Professors of Computer Engineering) and the GRIN (Group of Italian Professors of Computer Science), each counting around 800 members. These two groups joined forces to propose to the National Agency for Research Assessment a reference classification, or unified ranking, of international computer science conferences (on the basis of their impact and perceived quality). The goal was to propose that works published in conference proceedings could be considered in the next national research assessment exercise, as the previous one had focused solely on journal publications. The GII-GRIN joint task force thus produced "the GII-GRIN Computer Science and Computer Engineering Conference Rating" (in what follows simply the "GII-GRIN conference rating"): this rating² was produced by implementing an algorithm capable of processing three of the main conference rankings available online³. After a round of iterations, this algorithm indexed 3,210 conferences and successfully ranked 608 of these (19%) by associating each with one of three quality classes⁴. In all those cases (the large majority) where the algorithm could not reach a decision on the basis of the available information, the GII-GRIN conference rating system reports the conference as associated with a provisional
² Available at http://goo.gl/Ciiyb8.
³ Namely, the Computing Research and Education Association of Australasia (CORE); the Microsoft Academic Search Conference Ranking (MAS); and the Brazilian Simple H-Index Estimation (SHINE).
⁴ 1 – excellent conferences, 2 – very good ones, 3 – good quality ones.
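The paper does not detail how the GII-GRIN algorithm actually combines the three source rankings into a single rating. Purely as an illustrative sketch, under our own assumptions (the source-to-class mappings, the H-index thresholds, and the two-out-of-three agreement rule below are hypothetical and not the actual GII-GRIN procedure), a combination that assigns one of the three quality classes of footnote 4 and abstains, i.e. leaves the rating provisional, when the available information is insufficient could look roughly as follows (in Python):

# Purely illustrative sketch: the actual GII-GRIN merging algorithm is not
# described in this paper. The mappings, the H-index cut-offs, and the
# two-out-of-three agreement rule are assumptions made only to exemplify
# the kind of combination at stake.

from typing import Optional

# Assumed mappings of each source ranking onto the three quality classes of
# footnote 4 (1 = excellent, 2 = very good, 3 = good).
CORE_MAP = {"A*": 1, "A": 2, "B": 3}        # CORE letter grades (assumed)
MAS_MAP = {"top": 1, "high": 2, "mid": 3}   # MAS rating buckets (assumed)


def shine_class(h_index: int) -> Optional[int]:
    """Map a SHINE H-index onto a quality class (thresholds are assumptions)."""
    if h_index >= 80:
        return 1
    if h_index >= 40:
        return 2
    if h_index >= 15:
        return 3
    return None


def rate_conference(core: Optional[str], mas: Optional[str],
                    shine_h: Optional[int]) -> Optional[int]:
    """Return a quality class (1, 2 or 3) when at least two sources agree,
    or None to signal an undecided ('provisional') rating."""
    votes = []
    if core in CORE_MAP:
        votes.append(CORE_MAP[core])
    if mas in MAS_MAP:
        votes.append(MAS_MAP[mas])
    if shine_h is not None and shine_class(shine_h) is not None:
        votes.append(shine_class(shine_h))

    if len(votes) < 2:          # not enough information: abstain
        return None
    best = min(votes)           # the highest class among the votes
    if votes.count(best) >= 2:  # require two concordant sources
        return best
    return None                 # sources disagree: leave the rating provisional


print(rate_conference("A*", "top", 95))    # -> 1 (excellent)
print(rate_conference(None, "mid", None))  # -> None (provisional: one source only)

The sketch only conveys the general idea of merging heterogeneous rankings and abstaining when the evidence is insufficient; the actual GII-GRIN rating relies on richer information and several iterations.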