questions (like A.1.4, A.1.8, and B.1.1, see Table 1)
and matters that are left out in the midterm
evaluation (e.g. the teachers' proficiency in English,
B.3.1). In addition, teachers could be encouraged to
use different kinds of consultations with faculty
developers and/or peers to interpret the student
feedback (ratings and comments) and to discuss
relevant measures to take (Penny and Coe, 2004).
The present study considered improvements over
one semester, as measured by end-of-term student
evaluations, rather than long-term improvements,
and it did not include interviews with instructors
and students. These limitations are discussed in
more detail in the introduction of this paper.
6 CONCLUSIONS
An empirical study with both midterm and
end-of-term student evaluations in 35 courses at the
Technical University of Denmark was carried out in
the fall of 2010. In half of the courses the teachers
were given access to the midterm evaluations; in
the other half (the control group) they were not. The
general trend observed was that courses in which
teachers had access to the midterm evaluations
received improved evaluations at end-of-term
compared to midterm, whereas the ratings of the
control group decreased. In particular, the questions
concerning the students feeling that they learned a
lot, general satisfaction with the course, good
continuity of the teaching activities, and the teacher
being good at communicating the subject show
statistically significant differences between the two
groups in the changes of evaluations from midterm
to end-of-semester. The changes are of size 0.1-0.2,
which is relatively large compared to the standard
deviation of the scores with the student effect
removed, approximately 0.7.
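The between-group comparison summarised here can be illustrated by a simple two-sample test on per-course change scores (end-of-term mean minus midterm mean). The following sketch is a minimal illustration in Python with hypothetical numbers and a Welch t-test; it is not the study's exact analysis and does not reproduce the removal of the student effect described above.

    # Illustrative sketch only: hypothetical per-course change scores on a
    # 5-point Likert scale, one value per course in each group.
    import numpy as np
    from scipy import stats

    change_intervention = np.array([0.15, 0.22, 0.05, 0.18, 0.10, 0.25, 0.12])
    change_control = np.array([-0.08, -0.15, 0.02, -0.12, -0.05, -0.20, -0.10])

    # Welch's two-sample t-test on the change scores (unequal variances assumed).
    t_stat, p_value = stats.ttest_ind(change_intervention, change_control,
                                      equal_var=False)

    print(f"Mean change, intervention: {change_intervention.mean():.2f}")
    print(f"Mean change, control:      {change_control.mean():.2f}")
    print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")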
If university leaders are to choose evaluation
strategies at the university or department level, it is
worth considering midterm evaluations as a means
of facilitating improvements of ongoing courses, as
measured by student ratings.
ACKNOWLEDGEMENTS
The authors would like to thank all the teachers and
students who participated in the study, the Dean of
Undergraduate Studies and Student Affairs Martin
Vigild for supporting the project, and LearningLab
DTU for assistance in carrying out the study.
Furthermore, the authors thank five anonymous
reviewers for their valuable comments.
REFERENCES
L. M. Aleamoni, 1999. Student Rating Myths Versus
Research Facts from 1924 to 1998. Journal of
Personnel Evaluation in Education, 13(2): 153–166.
V. N. Andersen, P. Dahler-Larsen and C. S. Pedersen,
2009. Quality assurance and evaluation in Denmark.
Journal of Education Policy, 24(2): 135–147.
L. P. Aultman, 2006. An Unexpected Benefit of Formative
Student Evaluations. College Teaching, 54(3): 251.
J. Biggs and C. Tang, 2007. Teaching for Quality
Learning at University. McGraw-Hill Education, 3rd Ed.
D. E. Clayson, 2009. Student Evaluations of Teaching:
Are They Related to What Students Learn? A Meta-
Analysis and Review of the Literature. Journal of
Marketing Education, 31(1): 16–30.
P. A. Cohen, 1980. Effectiveness of Student-Rating
Feedback for Improving College Instruction: A Meta-
Analysis of Findings. Research in Higher Education,
13(4): 321–341.
P. A. Cohen, 1981. Student rating of instruction and
student achievement. Review of Educational Research,
51(3): 281–309.
A. Cook-Sather, 2009. From traditional accountability to
shared responsibility: the benefits and challenges of
student consultants gathering midcourse feedback in
college classrooms. Assessment & Evaluation in
Higher Education, 34(2): 231–241.
K. Edström, 2008. Doing course evaluation as if learning
matters most. Higher Education Research &
Development, 27(2): 95–106.
R. Fisher and D. Miller, 2008. Responding to student
expectations: a partnership approach to course
evaluation. Assessment & Evaluation in Higher
Education, 33(2): 191–202.
A. G. Greenwald and G. M. Gillmore, 1997. Grading
leniency is a removable contaminant of student
ratings. The American Psychologist, 52(11): 1209–1216.
R. Johnson, J. Freund and I. Miller, 2011. Miller and
Freund’s Probability and Statistics for Engineers.
Pearson Education, 8th Ed.
D. Kember, D. Y. P. Leung and K. P. Kwan, 2002. Does
the use of student feedback questionnaires improve the
overall quality of teaching? Assessment and
Evaluation in Higher Education, 27: 411–425.
C. S. Keutzer, 1993. Midterm evaluation of teaching
provides helpful feedback to instructors. Teaching of
Psychology, 20(4): 238–240.
R. Likert, 1932. A Technique for the Measurement of
Attitudes. Archives of Psychology, 140: 1–55.