AI Summary of Peer-Reviewed Research

This page presents an AI-generated summary of a published research paper. The original authors did not write or review this article. [See full disclosure ↓]

Publishing process signals: MODERATE — reflects the venue and review process.

Pre-course aptitude tests predicted introductory programming performance

Research area: Computer Science, Computer Science Applications, Student Assessment and Feedback

What the study found: A pre-course aptitude test predicted first-year computer science students' results on their first introductory programming assessment. The random forest regressor was more consistent than the random forest classifier, although its predictions still carried a sizeable margin of error.
Why the authors say this matters: The authors conclude that this approach could help identify students who may need additional support at the outset of a programming course. They also say the work provides a foundation for future support interventions in introductory programming modules.
What the researchers tested: The researchers collected data from 285 first-year computer science undergraduates and used a pre-course aptitude test developed for this study. The test gathered information on students' backgrounds, prior experiences, perceived confidence, and likelihood of holding appropriate mental models for core programming concepts, and the data were used to train regression and classification models.
What worked and what didn't: The random forest classifier performed well during training but less well on the hold-out test set, which the authors describe as moderate overfitting. The random forest regressor performed similarly on training and test data and showed no signs of overfitting.
What to keep in mind: The abstract notes a limited amount of data and class imbalance as likely contributors to the classifier's overfitting. It also states that there was still a sizeable margin of error, so the regressor is described as having potential rather than being definitive.
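The train-versus-test comparison described above can be sketched in scikit-learn. The paper's actual features, labels, and model settings are not given in this summary, so the synthetic data (285 students, six hypothetical aptitude-test features) and default hyperparameters below are stand-ins, not the authors' setup:

```python
# Hedged sketch: synthetic data and default random-forest settings stand in
# for the study's real aptitude-test features and model configuration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(285, 6))  # 285 students, 6 hypothetical test features
y_score = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=285)  # assessment mark
y_class = (y_score > np.median(y_score)).astype(int)  # pass/fail-style label

X_tr, X_te, ys_tr, ys_te, yc_tr, yc_te = train_test_split(
    X, y_score, y_class, test_size=0.3, random_state=0)

reg = RandomForestRegressor(random_state=0).fit(X_tr, ys_tr)
clf = RandomForestClassifier(random_state=0).fit(X_tr, yc_tr)

# A large gap between training and hold-out scores is the overfitting
# signal the authors report for the classifier.
print("regressor  train R^2:", reg.score(X_tr, ys_tr),
      "test R^2:", reg.score(X_te, ys_te))
print("classifier train acc:", clf.score(X_tr, yc_tr),
      "test acc:", clf.score(X_te, yc_te))
```

On real data, a regressor whose training and test scores sit close together (as reported here) is preferred over a classifier whose training accuracy far exceeds its hold-out accuracy.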

Key points

  • A pre-course aptitude test was used to predict first introductory programming assessment results.
  • The study analyzed data from 285 first-year computer science undergraduate students.
  • The random forest classifier performed better in training than on the hold-out test set.
  • The random forest regressor showed more consistent performance between training and testing.
  • The authors say the approach could help identify students who may need additional support early on.

Disclosure

Research title:
Pre-course aptitude tests predicted introductory programming performance
Authors:
Oliver Kerr, Linden J. Ball, Nicky Danino
Institutions:
University of Lancashire, Leeds Trinity University
Publication date:
2026-03-30
OpenAlex record:
View
AI provenance: This post was generated by OpenAI. The original authors did not write or review this post.