
bility. This may yield seemingly good model performance that reflects only the dominant class. In contrast, when a loss-based score is employed for the Genetic Algorithm (GA) variant, the model achieves superior results in terms of minimizing loss, albeit with a slight compromise in accuracy. We believe it is more effective to use loss as the score for selecting the best training samples, as this reduces the overall loss while maintaining an acceptable level of accuracy.
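The contrast between the two scoring choices can be made concrete with a minimal sketch (the function names are hypothetical; the paper does not specify an implementation): on imbalanced data, an accuracy-based fitness can reward a model that predicts only the dominant class, while a log-loss-based fitness penalizes overconfident wrong probabilities.

```python
import numpy as np

def accuracy_score_fitness(model, X, y):
    """Fitness as plain accuracy: can look good on imbalanced data
    even when the model predicts only the dominant class."""
    return float(np.mean(model.predict(X) == y))

def loss_score_fitness(model, X, y, eps=1e-12):
    """Fitness as negative mean log-loss: a candidate exemplar set
    scores higher when the retrained model's loss is lower."""
    proba = np.clip(model.predict_proba(X), eps, 1 - eps)
    log_likelihood = np.log(proba[np.arange(len(y)), y])
    return float(np.mean(log_likelihood))  # closer to 0 is better
```

Higher fitness is better in both cases; a GA variant would simply plug one of these callbacks in when ranking candidate exemplar sets.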
5 CONCLUSION
As data become available over time, traditional offline approaches to training and evaluating analytical models that predict students' performance become obsolete and unsuitable. Nowadays, online incremental learning is increasingly used to update online Machine Learning (ML) models with new data received over time. This work is concerned with memory-based approaches, which use rehearsal techniques to retain a small training exemplar set containing both previous and new data for retraining the online model. A major concern in this regard is how to construct this training exemplar set while new data keep arriving. Typically, samples are selected at random, which can deteriorate the model's performance. In this paper, we proposed a memory-based online incremental learning approach that uses the genetic algorithm heuristic to build the training exemplar set. The approach respects the memory space constraints as well as the balance of class labels when forming the training exemplar.
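As an illustration only, the exemplar-construction step described above can be sketched as a simple genetic algorithm over class-balanced index subsets (all names, operators, and default settings below are assumptions for the sketch, not the paper's implementation; the `fitness` callback stands in for retraining the online model on a candidate exemplar and scoring it by loss or accuracy):

```python
import random
from collections import defaultdict

def build_exemplar_ga(y, memory_size, fitness, pop_size=20,
                      generations=30, mutation_rate=0.1, seed=0):
    """Illustrative GA: evolve class-balanced index subsets of the
    data seen so far, under a fixed memory budget `memory_size`."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, label in enumerate(y):
        by_class[label].append(i)
    classes = sorted(by_class)
    quota = memory_size // len(classes)  # equal per-class budget

    def random_individual():
        # chromosome = one list of sample indices per class label
        return [rng.sample(by_class[c], min(quota, len(by_class[c])))
                for c in classes]

    def crossover(a, b):
        # per-class uniform crossover: resample from the parents' union
        return [rng.sample(list(set(pa) | set(pb)),
                           min(quota, len(set(pa) | set(pb))))
                for pa, pb in zip(a, b)]

    def mutate(ind):
        # occasionally swap one retained sample for a fresh one
        for part, c in zip(ind, classes):
            if rng.random() < mutation_rate:
                cand = rng.choice(by_class[c])
                if cand not in part:
                    part[rng.randrange(len(part))] = cand
        return ind

    def flat(ind):
        return [i for part in ind for i in part]

    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(flat(ind)), reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = [mutate(crossover(rng.choice(survivors),
                                     rng.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return sorted(flat(max(pop, key=lambda ind: fitness(flat(ind)))))
```

By construction, each chromosome holds an equal per-class quota of indices, so every candidate exemplar respects both the memory budget and the class-label balance; the GA then searches among such candidates for the one maximizing the supplied score.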
Indeed, compared to an existing approach that selects training samples at random when building the training exemplar, our GA-based approach improves model accuracy by up to 10%. Furthermore, it shows better stability and less variation in accuracy. As future work, we intend to evaluate the proposed approach with a variety of ML models in addition to random forest, and to assess its effectiveness using other score types, such as the F1-score.
CSEDU 2024 - 16th International Conference on Computer Supported Education