Authors: Peter Hubwieser and Andreas Mühling
Affiliation: Technische Universität München, Germany
Keyword(s): Large Scale Studies, Competencies, Item Response Theory, Rasch Model, Computational Thinking.
Related Ontology Subjects/Areas/Topics: Artificial Intelligence; Business Analytics; Clustering and Classification Methods; Data Analytics; Data Engineering; Knowledge Discovery and Information Retrieval; Knowledge-Based Systems; Mining High-Dimensional Data; Structured Data Analysis and Statistical Methods; Symbolic Systems; Visual Data Mining and Data Visualization
Abstract:
In preparation for large-scale surveys on computer science competencies, we are developing suitable competency models and evaluation methodologies, aiming to define competencies as sets of existing questions that test congruent abilities. For this purpose, we have to identify sets of test questions that measure a joint psychometric construct (a competency) according to the responses of the test persons. We have developed a methodology for this goal by applying latent trait analysis to all combinations of questions of a given test. After identifying suitable sets of questions, we test the fit of the one-parameter Rasch model and evaluate the distribution of the person parameters. As a test bed for first feasibility studies, we used the large-scale Bebras Contest held in Germany in 2009. The results show that this methodology works and might one day yield a set of empirically founded competencies in the field of Computational Thinking.
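The abstract does not specify how the Rasch model is estimated; the sketch below is only an illustration of the general idea, assuming numpy and using simple joint maximum likelihood (alternating Newton steps) on simulated binary response data. Practical analyses of this kind typically rely on dedicated IRT software (e.g. the R package eRm) with conditional or marginal estimation; the variable names and the simulated data are invented for the example.

```python
import numpy as np

# Simulate binary responses under a Rasch model:
# P(correct) = logistic(theta_person - b_item).
rng = np.random.default_rng(0)
n_persons, n_items = 200, 6
theta_true = rng.normal(0.0, 1.0, n_persons)      # person abilities
b_true = np.linspace(-1.0, 1.0, n_items)          # item difficulties
prob = 1.0 / (1.0 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = (rng.random((n_persons, n_items)) < prob).astype(int)

# Joint maximum likelihood: alternate Newton updates for person
# parameters (theta) and item difficulties (b).
theta = np.zeros(n_persons)
b = np.zeros(n_items)
for _ in range(100):
    P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    info_p = np.maximum((P * (1.0 - P)).sum(axis=1), 1e-9)
    theta += (X - P).sum(axis=1) / info_p
    # Clip to keep perfect/zero scorers from diverging to +/- infinity.
    theta = np.clip(theta, -5.0, 5.0)

    P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    info_i = np.maximum((P * (1.0 - P)).sum(axis=0), 1e-9)
    b -= (X - P).sum(axis=0) / info_i
    b -= b.mean()  # identification constraint: mean item difficulty = 0

print("estimated item difficulties:", np.round(b, 2))
```

After fitting, the distribution of the `theta` values can be inspected (e.g. as a histogram) to evaluate the person parameters, mirroring the evaluation step described in the abstract.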