Designing Formative Adaptive Assessment for Engineering Education

Image: a student in a computer lab views a technical blueprint or CAD design on a large monitor.
Image Credit: Photo by RUT MIIT on Unsplash (Source · License)

AI Summary of Peer-Reviewed Research

This page presents an AI-generated summary of a published research paper. The original authors did not write or review this article. See full disclosure ↓

International Journal of Engineering Pedagogy (iJEP) · 2026-03-03 · Peer-reviewed
Publication Signals show what we were able to verify about where this research was published. Rating: MODERATE. Core publication signals for this source were verified. Publication Signals reflect the source's verifiable credentials, not the quality of the research.
  • ✔ Peer-reviewed source
  • ✔ No retraction or integrity flags

Key findings from this study

  • The study found that the integrated IRT-CAT and Bayesian diagnostic framework achieved high estimation accuracy (r = 0.912) between estimated and true learner proficiency.
  • The authors report satisfactory reliability for formative assessment across most learner profiles, with precision limitations concentrated at the extremes of the proficiency scale.
  • The researchers demonstrate that structural constraints on assessment performance stem from item bank coverage and curriculum representation rather than from algorithmic limitations.

Overview

This study presents a formative adaptive assessment framework for engineering education that integrates Item Response Theory (IRT)-based computerized adaptive testing (CAT) with Bayesian network diagnostic modelling. The framework addresses a limitation of traditional assessment approaches by emphasizing pedagogical effectiveness alongside measurement efficiency. The system adapts item selection dynamically as proficiency estimates evolve, while simultaneously prioritizing diagnostic information about under-assessed competencies. Assessment design relies on dichotomous multiple-choice items explicitly aligned with engineering learning outcomes, enabling competency-oriented feedback and instructional interpretation within a curriculum-aligned structure.
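
The abstract does not specify which IRT model or item-selection rule the CAT engine uses. The sketch below assumes a common configuration, a two-parameter logistic (2PL) model with maximum-Fisher-information selection, purely for illustration; the item parameters are invented.

```python
# Minimal sketch of maximum-information item selection, assuming a 2PL
# IRT model. The model choice, selection rule, and item parameters are
# illustrative assumptions, not details taken from the paper.
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response: 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Item information under the 2PL model: I(theta) = a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def select_next_item(theta_hat, a, b, administered):
    """Choose the unadministered item with maximum information at theta_hat."""
    info = fisher_information(theta_hat, a, b)
    info[list(administered)] = -np.inf  # never repeat an item
    return int(np.argmax(info))

# Hypothetical item bank: discriminations (a) and difficulties (b).
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, size=100)
b = rng.normal(0.0, 1.0, size=100)

print("next item:", select_next_item(0.3, a, b, administered={5, 17}))
```

Under 2PL, item information peaks where item difficulty matches the learner's current estimate, which is what produces the adaptive behaviour described above.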

Methods and approach

The framework combines an IRT-based CAT engine with probabilistic diagnostic modelling to support adaptive item selection and competency assessment. Items were calibrated on empirical data from 612 university students in computer science, and system performance was evaluated through a simulation-based methodology involving 500 simulated learners. The assessment structure incorporates dichotomous multiple-choice items aligned with engineering learning outcomes, and the adaptive control strategies balance item-selection efficiency against diagnostic capacity, adapting dynamically to learner proficiency while maintaining curriculum alignment throughout the assessment.
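
The abstract does not detail the estimation procedure. As a minimal sketch of one simulated adaptive session of the kind used in the 500-learner evaluation, the code below assumes a grid-based expected a posteriori (EAP) update with a standard-normal prior, a fixed 20-item session, and invented 2PL item parameters.

```python
# One simulated adaptive session: responses are drawn from a 2PL model
# and proficiency is re-estimated after each item via a grid-based EAP
# update. Prior, test length, and item parameters are assumptions.
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, size=100)   # hypothetical discriminations
b = rng.normal(0.0, 1.0, size=100)    # hypothetical difficulties

grid = np.linspace(-4, 4, 161)        # theta quadrature grid
posterior = np.exp(-0.5 * grid**2)    # N(0, 1) prior, normalized below
posterior /= posterior.sum()

true_theta = 0.8
administered, theta_hat = set(), 0.0

for _ in range(20):                   # fixed 20-item session
    # Select the most informative remaining item at the current estimate.
    p_hat = 1 / (1 + np.exp(-a * (theta_hat - b)))
    info = a**2 * p_hat * (1 - p_hat)
    info[list(administered)] = -np.inf
    item = int(np.argmax(info))
    administered.add(item)

    # Simulate a dichotomous response from the true proficiency.
    p_true = 1 / (1 + np.exp(-a[item] * (true_theta - b[item])))
    u = rng.random() < p_true

    # Bayesian update over the grid, then EAP point estimate.
    p_grid = 1 / (1 + np.exp(-a[item] * (grid - b[item])))
    posterior *= p_grid if u else (1 - p_grid)
    posterior /= posterior.sum()
    theta_hat = float(np.sum(grid * posterior))

print(f"true theta = {true_theta:.2f}, EAP estimate = {theta_hat:.2f}")
```

Repeating this loop over many simulated learners with known true proficiencies is what allows estimated and true values to be compared directly, which is how accuracy figures like the one reported below can be obtained.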

Results

The study found that the proposed framework achieved high estimation accuracy, with a correlation coefficient of 0.912 between estimated and true proficiency. Reliability analysis indicated satisfactory performance for formative use across most learner profiles. However, precision degraded at the extremes of the proficiency continuum, and imbalances in item exposure were detected across the item bank. The researchers report that these structural limitations were primarily attributable to item bank coverage and curriculum representation rather than to the adaptive algorithms themselves. These findings suggest that while the IRT-CAT mechanism functions effectively, the practical constraints lie in assessment design rather than in computational or methodological deficiencies.
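
For concreteness, here is a sketch of how the reported metrics could be computed from simulation output: the true/estimated proficiency correlation, conditional error by proficiency band (which would surface degradation at the extremes), and per-item exposure rates. The arrays are random stand-ins, not the study's data; only the metric definitions are the point.

```python
# Evaluation-metric sketch over stand-in simulation output. The
# estimates and exposure counts below are fabricated placeholders;
# the stand-in noise grows at the extremes only to mimic the shape
# of the reported pattern, not its magnitude.
import numpy as np

rng = np.random.default_rng(2)
n_learners, n_items, test_len = 500, 100, 20

true_theta = rng.normal(0.0, 1.0, size=n_learners)
noise_sd = 0.3 + 0.15 * np.abs(true_theta)          # stand-in heteroscedasticity
theta_hat = true_theta + rng.normal(0.0, noise_sd)  # stand-in estimates
exposure = rng.multinomial(test_len, np.ones(n_items) / n_items,
                           size=n_learners)         # stand-in item usage

# Overall estimation accuracy (the study reports r = 0.912).
r = np.corrcoef(true_theta, theta_hat)[0, 1]

# Conditional RMSE by proficiency band shows precision loss at extremes.
bands = {"low (< -1.5)": true_theta < -1.5,
         "middle [-1.5, 1.5]": (true_theta >= -1.5) & (true_theta <= 1.5),
         "high (> 1.5)": true_theta > 1.5}
for name, mask in bands.items():
    rmse = np.sqrt(np.mean((theta_hat[mask] - true_theta[mask]) ** 2))
    print(f"{name}: RMSE = {rmse:.3f} (n = {mask.sum()})")

# Exposure rate per item: share of sessions in which the item appeared.
rates = (exposure > 0).mean(axis=0)
print(f"r = {r:.3f}; exposure rates range {rates.min():.2f}-{rates.max():.2f}")
```

The conditional view matters because an overall r of 0.912 can coexist with materially worse precision in the tails, which is the pattern the authors attribute to item bank coverage.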

Implications

The framework establishes adaptive assessment as a pedagogically grounded tool capable of supporting formative learning contexts beyond traditional measurement-focused applications. The integration of diagnostic modelling with CAT methodology demonstrates how adaptive systems can simultaneously serve learning regulation, instructional decision-making, and quality assurance functions in engineering education. The high estimation accuracy achieved suggests that adaptive assessment can provide reliable competency measurement while maintaining educational utility, though systematic attention to item bank development remains essential for equitable assessment across proficiency levels.

Scope and limitations

This summary is based on the study abstract and available metadata. It does not include a full analysis of the complete paper, supplementary materials, or underlying datasets unless explicitly stated. Findings should be interpreted in the context of the original publication.

Disclosure

  • Research title: Designing Formative Adaptive Assessment for Engineering Education
  • Authors: Mohamed El Msayer, Bouchra Bouihi, Abdelmajid Bousselham, Essaadia Aoula, Adel Deraoui
  • Institutions: Laboratoire de Recherche Scientifique, Université Hassan II Mohammedia, University of Hassan II Casablanca
  • Publication date: 2026-03-03
  • DOI: https://doi.org/10.3991/ijep.v16i1.60479
  • Image credit: Photo by RUT MIIT on Unsplash (Source · License)
  • Disclosure: This post was generated by Claude (Anthropic). The original authors did not write or review this post.
