the section of the textbook where this question is addressed. For example, Figure 2 shows an example of a discussion related to the topic on page 235 for Section 2.3.7 of a fictional math wall. A learner posted a question marked with a red light and, after a thread of messages, the same learner posted a message thanking for the answer to his/her question, marking it with a green light.
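For illustration only, the following Python sketch shows one possible way to record such a thread; the field and status names (e.g., red_light, green_light) are hypothetical and not prescribed by the wall implementation.

# A minimal sketch of how a question thread on the wall might be recorded.
# Field and status names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class WallPost:
    author: str
    text: str
    status: str = "none"   # "red_light" = open question, "green_light" = answered

@dataclass
class WallThread:
    section: str            # textbook section the thread is attached to, e.g. "2.3.7"
    page: int
    posts: List[WallPost] = field(default_factory=list)

    def is_resolved(self) -> bool:
        # The thread counts as resolved once the asker marks a green light.
        return any(p.status == "green_light" for p in self.posts)

# Example mirroring Figure 2: a question (red light) followed by a thank-you (green light).
thread = WallThread(section="2.3.7", page=235)
thread.posts.append(WallPost("learner_1", "How do I solve this exercise?", "red_light"))
thread.posts.append(WallPost("learner_2", "Try applying the rule from page 235."))
thread.posts.append(WallPost("learner_1", "Thanks, that solved it!", "green_light"))
print(thread.is_resolved())  # True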
This production of questions, answers, and comments directly related to the material can be considered the main component associated with the Behavior level of the KP model. In fact, it represents a change of behavior that is strongly correlated with the material itself. The possibility of tagging other sources (including the teacher) helps to disentangle whether a question is related to the topic or to other factors.
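As a purely illustrative sketch, assuming hypothetical tag names, this attribution could be obtained by counting the tags attached to the questions of a section:

# A minimal sketch of using source tags to disentangle whether questions concern
# the material itself or other factors. Tag names are illustrative assumptions.
from collections import Counter

questions = [
    {"section": "2.3.7", "tags": ["material"]},
    {"section": "2.3.7", "tags": ["teacher"]},
    {"section": "2.3.7", "tags": ["material", "external_source"]},
]

# A prevalence of "material" tags suggests the questions are driven by the topic
# rather than by other factors (e.g., the teacher or an external source).
tag_counts = Counter(tag for q in questions for tag in q["tags"])
print(tag_counts)  # Counter({'material': 2, 'teacher': 1, 'external_source': 1})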
Results: the key performance indicator is an aggregation of the quantitative values calculated at the previous KP levels (see Section 2).
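As an illustrative sketch only, the aggregation could take the form of a weighted average over the level values; the actual aggregation is the one defined in Section 2, and the level names, weights, and values below are assumptions.

# A minimal sketch of aggregating the values produced at the previous KP levels
# into a single Results indicator. Weights and values are illustrative assumptions.
weights = {"reaction": 0.2, "learning": 0.4, "behavior": 0.4}
level_values = {"reaction": 0.7, "learning": 0.8, "behavior": 0.6}  # normalized to [0, 1]

results_kpi = sum(weights[level] * level_values[level] for level in weights)
print(round(results_kpi, 2))  # 0.70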
ROI: in order to gain better insight into the development, all the subjects will be given analytical tools in the form of dashboards providing relevant information and automatic analyses (Buckingham Shum and Ferguson, 2012; Siemens and Baker, 2012). The information provided is not intended as a replacement for the specific capabilities of the teachers, but rather as a further tool for understanding the success or failure of their initiatives for improving the quality of their materials.
4 CONCLUSIONS
The evaluation of learning material is a complex task; the associated difficulties can be grouped into two main categories: (1) the great variety of material (textbooks, slides, syllabi, handouts, etc.), and (2) the fact that the learning process depends on the interaction among teachers, students, and the material itself. This paper has presented a revised definition of the KP model with the intent of assessing the quality of the material within a Social LMS. To this end, we introduced new key performance indicators associated with both subjective (e.g., social data) and objective (e.g., grades) aspects. The newly proposed steps of the KP model include: the difficulties found by the learners in approaching the material, the increase in performance due to the material, how much has been produced about the material, and a final assessment of the results. In the end, the final evaluation (ROI) takes into account whether the improved results due to the material are worth the difficulties in its usage. As an example, we proposed how to instantiate the material evaluation procedure with a modified version of a wall structured on the syllabus of a course. Our wall is a dynamic object in which the topics of a course are graphically represented as a knowledge graph, providing immediate logical connections between the topics.
In future work, we plan to investigate new applications of the wall. Normally, a class journal is kept during the year, recording the daily work of the class (the teaching of the day, whether a test has been performed, etc.). The journal's activities can be structured by the proposed wall. This provides a timing of each unit which can later be used to estimate a realistic workload for the students and compare it with the expected one.
REFERENCES
Buckingham Shum, S. and Ferguson, R. (2012). Social learning analytics. Educational Technology & Society.
Claypool, M., Brown, D., Le, P., and Waseda, M. (2001).
Inferring user interest. IEEE Internet Computing,
5:32–39.
Dominoni, M., Pinardi, S., and Riva, G. (2010). Omega network: An adaptive approach to social learning. In 10th International Conference on Intelligent Systems Design and Applications, ISDA 2010, November 29 - December 1, 2010, Cairo, Egypt, pages 953–958. IEEE.
Guerin, J. T. and Michler, D. (2011). Analysis of undergraduate teaching evaluations in computer science. In Proceedings of the 42nd ACM Technical Symposium on Computer Science Education, SIGCSE '11, pages 679–684, New York, NY, USA. ACM.
Kirkpatrick, D. L. and Kirkpatrick, J. D. (2010). Evaluating Training Programs: The Four Levels. ReadHowYouWant.com; Berrett-Koehler Publishers, Sydney, NSW, Australia; San Francisco, CA.
Larsson, E., Amirijoo, M., Karlsson, D., and Eles, P. I.
(2007). What Impacts Course Evaluation?
Newstrom, J. W. (1995). Evaluating training programs: The four levels, by Donald L. Kirkpatrick (1994). Human Resource Development Quarterly, 6(3):317–320.
Osguthorpe, R. T. and Graham, C. R. (2003). Blended
learning environments: Definitions and directions.
Quarterly Review of Distance Education, 4(3):227–
233.
Phillips, J. J. and Phillips, P. P. (2003). Using action plans to measure ROI. Performance Improvement, 42(1):24–33.
Siemens, G. and Baker, R. S. J. d. (2012). Learning analytics and educational data mining: Towards communication and collaboration. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge, LAK '12, pages 252–254, New York, NY, USA. ACM.
Touré, F., Aïmeur, E., and Dalkir, K. (2014). AM2O - An Efficient Approach for Managing Training in Enterprise. In Proceedings of the International Conference on Knowledge Management and Information Sharing, pages 405–412.