means of questionnaires (Rovai, 2002; Pigliapoco &
Bogliolo, 2007).
Rovai (2002) introduced the so-called Classroom
Community Scale (CCS), a 20-item questionnaire.
The questionnaire takes into account the four
dimensions of PSoC: spirit (friendship, cohesion,
bonding among learners), trust (credibility,
benevolence, confidence among learners), interaction
(honesty in feedback, trust, and safety among
learners), and common expectations (commonality of
goals, that is, learning).
Answers are given on a [0-4] scale corresponding to
“strongly agree, agree, neutral, disagree, and
strongly disagree”. CCS distinguishes
between CCS connectedness (which represents the
feelings of the community of students regarding
their cohesion, spirit, trust, interdependence, and
social presence) and CCS learning (which represents
the feelings of community members regarding the
construction of understanding through discussions
and the sharing of values and beliefs) (Rovai, 2002).
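For illustration, the two CCS subscale scores could be computed as in the following Python sketch; the assignment of items to subscales (and any reverse scoring) is a placeholder assumption here, the actual mapping being defined in Rovai (2002):

```python
# Hypothetical item split: which of the 20 items belong to each
# subscale (and which are reverse-scored) must be taken from
# Rovai (2002); here we simply assume even-indexed items measure
# connectedness and odd-indexed items measure learning.
CONNECTEDNESS_ITEMS = list(range(0, 20, 2))
LEARNING_ITEMS = list(range(1, 20, 2))

def ccs_scores(answers):
    """answers: list of 20 integers in [0, 4]
    (strongly agree .. strongly disagree)."""
    assert len(answers) == 20 and all(0 <= a <= 4 for a in answers)
    connectedness = sum(answers[i] for i in CONNECTEDNESS_ITEMS)
    learning = sum(answers[i] for i in LEARNING_ITEMS)
    return connectedness, learning
```

Each subscale then ranges over [0, 40], the sum of ten items scored in [0, 4].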
Pigliapoco & Bogliolo (2007) elaborated two
alternative indicators, Membership and SCITT
(which stands for the dimensions of Spirit,
Commonality, Interaction, Trust granted, and Trust
received), both expressed in a [0-10] interval. Membership
corresponds to the score of the following direct
question asked to students: “How much do you feel
a member of a community?”. SCITT is an indicator
obtained from five questions asked to investigate the
dimensions of PSoC summarized in its acronym.
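A minimal Python sketch of the SCITT computation; aggregating the five dimension scores by arithmetic mean is an assumption made purely for illustration, not necessarily Pigliapoco & Bogliolo's exact formula:

```python
from statistics import mean

def scitt(spirit, commonality, interaction, trust_granted, trust_received):
    """Each dimension score is assumed to lie in [0, 10]; the
    arithmetic-mean aggregation is an illustrative assumption."""
    scores = [spirit, commonality, interaction, trust_granted, trust_received]
    assert all(0 <= s <= 10 for s in scores)
    return mean(scores)
```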
Recent studies have shown that PSoC felt by
students plays a key role in affecting their
performance (Picciano, 2002), satisfaction (Johnston
et al., 2005; Shea et al., 2002), and persistence (Carr,
2000; Frankola, 2001) in academic degree programs.
2.3 Statistical Analysis
The core of the proposed methodology is based on
the statistical analysis of collected data. To this
purpose, we define a domain as a set of data gathered
from a sample whose members share a
common feature. For instance, a domain can be
represented by the data collected from a group of
students belonging to the same cohort where the
academic year of enrollment is the feature shared by
all the members. Similarly, the distinguishing
feature of the data belonging to the same domain
could be the teaching methodology (e.g., e-learning,
face-to-face learning, blended learning). Notice that
the definition of domain given so far is completely
general, so that it can be tailored to any
parameter of interest.
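The notion of domain can be sketched in Python as a simple partition of records by a shared feature; the records below are hypothetical examples, not data from the study:

```python
from collections import defaultdict

# Hypothetical records: each questionnaire response is tagged with
# the features that can define a domain (cohort, methodology, ...).
records = [
    {"cohort": 2006, "methodology": "e-learning", "score": 7.5},
    {"cohort": 2006, "methodology": "face-to-face", "score": 6.0},
    {"cohort": 2007, "methodology": "e-learning", "score": 8.0},
]

def domains(records, feature):
    """Partition records into domains according to a shared feature."""
    grouped = defaultdict(list)
    for r in records:
        grouped[r[feature]].append(r)
    return dict(grouped)

by_cohort = domains(records, "cohort")
```

The same records can thus be partitioned along any feature of interest, reflecting the generality of the definition.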
For both direct and indirect monitoring, the collected
data can be processed in three different ways:
i) intra-domain analysis, ii) inter-domain analysis,
and iii) cross-processing.
Intra-domain analysis makes it possible to evaluate
the average trend and the variations of a particular
phenomenon within a single domain. For example,
given a set of LCQs filled in by students belonging
to the same cohort, it is possible to evaluate the
average learning trend of the cohort (by plotting the
average learning values over time) and its intra-
cohort variations (by computing standard deviations
within the cohort). In the same way, considering a
single topic of a course as the common feature of a
given domain, the intra-domain analysis can be
carried out to evaluate subject-specific learning (by
averaging the scores of all questions referred to the
given topic) or knowledge retention (by comparing
the scores achieved on the same topic over time).
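A minimal Python sketch of intra-domain analysis, using hypothetical LCQ scores of a single cohort collected at successive points in time:

```python
from statistics import mean, stdev

# Hypothetical LCQ scores for one cohort: one list of student
# scores per administration (i.e., per point in time).
lcq_by_week = {
    1: [6.0, 7.5, 5.5, 8.0],
    2: [6.5, 8.0, 6.0, 8.5],
    3: [7.0, 8.5, 6.5, 9.0],
}

def intra_domain_trend(scores_over_time):
    """Average learning trend and intra-cohort variation
    (standard deviation) per time point."""
    return {t: (mean(s), stdev(s)) for t, s in scores_over_time.items()}
```

Plotting the first component of each pair over time gives the average learning trend; the second component quantifies the intra-cohort variations.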
Inter-domain analysis makes it possible to point out
differences/similarities between two or more
domains. For instance, in the case of two domains
discriminated on the basis of the teaching
methodology, inter-domain analysis highlights the
differences between face-to-face students and
distance-learning students by comparing the average
values computed over the two different domains.
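In its simplest form, inter-domain analysis reduces to comparing averages across domains; the following sketch uses hypothetical scores for two teaching-methodology domains:

```python
from statistics import mean

# Hypothetical average LCQ scores from two domains discriminated
# on the basis of the teaching methodology.
face_to_face = [6.5, 7.0, 5.5, 8.0, 6.0]
distance = [7.5, 6.0, 8.0, 7.0, 6.5]

def inter_domain_gap(domain_a, domain_b):
    """Difference between domain averages; a formal comparison would
    add a significance test (e.g., a t-test) on top of this."""
    return mean(domain_a) - mean(domain_b)
```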
Finally, cross-processing allows us to capture
correlations between two or more phenomena taken
into consideration either in intra- or in inter-domain
analyses. For example, cross-processing can be used
to cross-validate two different assessment systems
(by computing correlations between LCQ results and
exam grades) or to point out the relationship
between different classes or subjects treated during
the course (by computing correlations between
subject-specific learning values).
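Cross-processing can be sketched as the computation of a Pearson correlation coefficient between two phenomena; the per-student LCQ results and exam grades below are hypothetical:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-student data: LCQ result vs. final exam grade.
lcq = [6.0, 7.5, 5.0, 8.5, 7.0]
grades = [24, 27, 22, 30, 26]
r = pearson(lcq, grades)
```

A correlation close to 1 would support the cross-validation of the two assessment systems; values near 0 would indicate that they measure unrelated quantities.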
2.4 Software Requirements
A software platform supporting the implementation
of the assessment methodology described so far
should provide specific features enabling: the
creation of any type of question, the administration
of any type of questionnaire, the execution of all
the statistical analyses outlined in Section 2.3, and
flexible management of access rights and
ownerships.
2.4.1 Questionnaire Creation
The software tool must allow privileged users (i.e.
tutors, instructors, and administrators) to create their
own sets of questions (such as open-text,