Python IDE for beginners) with additional data-collection plugins (Marvie-Nebut & Peter, 2023). The final objective is to propose indicators for monitoring, guiding, and evaluating learners remotely in the DLE.
6 CONCLUSION
The proposed standard teaching scenario focuses on skills through blended-oriented lessons and a formative digital production to develop computational thinking. The peer-review process reinforces reflective learning. Despite the complexity of unit testing, the approach improves the understanding of algorithms and their design, strengthens debugging skills, and fosters a willingness to validate solutions, helping future engineers gain perspective. According to data collected between 2021 and 2023, the difficulty experienced is strongly influenced by students' previous training path, in line with their age and social intelligence. The cognitive load of beginners can only be mitigated by devoting more time to them during the sessions and by the professional style of the trainer-tutors, a parameter that has not yet been explored. The 3-index set (counter-performance index, score variables of final expressed difficulty, and positive feeling) demonstrates the effect of the learning scheme on learning and postures, and supports learning-profile analysis. However, it is not sufficient to fully analyze the learning processes of computational thinking.
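As a purely illustrative aid, the short Python sketch below shows one possible way of aggregating the three indicators into a per-student profile; the field names, rating scales and the shortfall-based definition of the counter-performance index are assumptions made for illustration only, not the definitions used in this work.

# Illustrative sketch only: assumed names, scales and index definition.
from dataclasses import dataclass

@dataclass
class SessionRecord:
    expected_score: float        # score anticipated from the student's training path (0-20, assumed)
    actual_score: float          # score obtained for the digital production (0-20, assumed)
    expressed_difficulty: float  # final expressed difficulty (1-5 Likert, assumed)
    positive_feeling: float      # positive feeling about the activity (1-5 Likert, assumed)

def learning_profile(records: list[SessionRecord]) -> tuple[float, float, float]:
    """Return (counter-performance index, mean expressed difficulty, mean positive feeling)."""
    n = len(records)
    # Counter-performance taken here as the mean shortfall of actual vs expected score.
    cpi = sum(max(r.expected_score - r.actual_score, 0.0) for r in records) / n
    difficulty = sum(r.expressed_difficulty for r in records) / n
    feeling = sum(r.positive_feeling for r in records) / n
    return cpi, difficulty, feeling

print(learning_profile([SessionRecord(14, 11, 4, 3), SessionRecord(15, 15, 2, 4)]))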
To this end, larger student cohorts are required to overcome the limitations of this work, although the proposed training scenario is stable. The priority is to instrument the Scilab coding environment, then to identify students' coding processes in computational thinking, and finally to determine learning profiles using relevant contextualized indicators.
REFERENCES
Baron, G.-L., Drot-Delange, B., Grandbastien, M., & Tort, F. (2014). Computer Science Education in French Secondary Schools: Historical and Didactical Perspectives. ACM Trans. Comput. Educ., 14(2), 11:1-11:27. https://doi.org/10.1145/2602486
Falchikov, N. (2005). Improving Assessment through
Student Involvement: Practical Solutions for Aiding
Learning in Higher and Further Education.
Garousi, V., Rainer, A., Lauvås, P., & Arcuri, A. (2020). Software-testing education: A systematic literature mapping. Journal of Systems and Software, 165, 110570. https://doi.org/10.1016/j.jss.2020.110570
Gervais, J. (2016). The operational definition of
competency-based education. The Journal of
Competency-Based Education, 1(2), 98-106.
https://doi.org/10.1002/cbe2.1011
Grzega, J. (2005). Learning By Teaching: The Didactic
Model LdL in University Classes.
Hattie, J., & Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), 81-112. https://doi.org/10.3102/003465430298487
Kolb, A. Y., & Kolb, D. A. (2005). Learning Styles and
Learning Spaces: Enhancing Experiential Learning in
Higher Education. Academy of Management Learning
& Education, 4(2), 193-212.
Martraire, C., Thiéfaine, A., Bartaguiz, D., Hiegel, F., & Fakih, H. (2022). Software Craft (in French). Dunod, 289 p. ISBN 978-2-10-082520-2.
Marvie-Nebut, M., & Peter, Y. (2023). Apprentissage de la programmation Python : une première analyse exploratoire de l'usage des tests [Learning Python programming: A first exploratory analysis of the use of tests]. In J. Broisin, et al. (Eds.), Actes de l'atelier Apprendre la Pensée Informatique de la Maternelle à l'Université, APIMU 2023 (pp. 1-8). https://hal.science/hal-04144206
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218. https://doi.org/10.1080/03075070600572090
Nuninger, W. (2017). Integrated Learning Environment for blended oriented course: 3-year feedback on a skill-oriented hybrid strategy. HCI International, 9-14 July. In P. Zaphiris & A. Ioannou (Eds.), LCT (Vol. 10295). Springer, Cham. https://doi.org/10.1007/978-3-319-58509-3_13
Nuninger, W. (2024). Hybrid and Formative Self and Cross Peer Review process to support Computational and Algorithmic Thinking. In Proceedings of CSEDU 2024 (Vol. 2). SCITEPRESS (Science and Technology Publications, Lda). https://doi.org/10.5220/0012712800003693
Raelin, J. A. (2008). Work-based Learning: Bridging Knowledge and Action in the Workplace (New and revised ed.). San Francisco, CA: Jossey-Bass.
Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535-550. https://doi.org/10.1080/02602930903541015
Scatalon, L. P., Carver, J. C., Garcia, R. E., & Barbosa, E. F. (2019). Software Testing in Introductory Programming Courses: A Systematic Mapping Study. Proceedings of the 50th ACM Technical Symposium on Computer Science Education, 421-427. https://doi.org/10.1145/3287324.3287384
Schein, E. H. (2013). Humble Inquiry: The Gentle Art of Asking Instead of Telling. Berrett-Koehler Publishers.
Shute, V. J., Sun, C., & Asbell-Clarke, J. (2017). Demystifying computational thinking. Educational Research Review, 22, 142-158. https://doi.org/10.1016/j.edurev.2017.09.003
Thomas, G., Martin, D., & Pleasants, K. (2011). Using self- and peer-assessment to enhance students' future learning in higher education. Journal of University Teaching & Learning Practice, 8(1), 1-17.
Topping, K. (2009). Peer Assessment. Theory into Practice, 48(1), 20-27. https://doi.org/10.1080/00405840802577569
Vuorikari, R., Kluzer, S., & Punie, Y. (2022). DigComp 2.2: The Digital Competence Framework for Citizens. EUR 31006 EN, Publications Office of the European Union, Luxembourg. https://doi.org/10.2760/490274, JRC128415.