Overview
This article examines how well AI-generated feedback technologies align with the cognitive and motivational demands of text revision. The study addresses a documented gap: learners frequently fail to engage in revision despite receiving AI-generated feedback. The authors propose that this failure stems from insufficient consideration of revision as a multi-stage process with distinct requirements, framing the problem not as a matter of feedback quality but as a systemic mismatch between feedback characteristics and learners' revision needs. Process-oriented writing research supplies the framework for the evaluation.
Methods and approach
The research employs a theoretically grounded analytical framework derived from process-oriented writing research, in which text revision is conceptualized as a sequence of interrelated cognitive, motivational, and strategic sub-processes. Two AI-based feedback tools, Khan Academy Writing Coach and FelloFish, are analyzed systematically to assess how far their feedback mechanisms align with the identified requirements of revision. The approach emphasizes structural examination of how each tool's feedback functions map onto learners' revision demands.
Key findings
The analysis identifies three central tensions between AI feedback design and the requirements of the revision process. First, the immediacy of AI feedback conflicts with learners' need for critical distance from their own texts during revision. Second, outsourcing core revision processes to AI systems risks diminishing learner agency and intrinsic motivation, potentially undermining the development of independent revision capacities. Third, the analyzed tools embed revision feedback insufficiently within meaningful writing tasks, isolating it from authentic communicative contexts and purposes.
Implications
The findings suggest that AI-based feedback tools require explicit design alignment with revision processes if they are to support learner engagement effectively. Without such alignment, these technologies may obstruct rather than facilitate revision despite their capacity for personalization and immediacy. For instructional practice, this points to the need for deliberate pedagogical mediation when integrating AI feedback into writing curricula, ensuring that feedback mechanisms serve rather than displace learner agency and authentic writing purposes. For tool design, the implications include reconsidering feedback timing, preserving space for learners' cognitive and emotional processing, and situating feedback within sustained communicative contexts. Future empirical research should examine how explicit process alignment affects learners' revision behavior and the long-term development of revision skills.
Disclosure
- Research title: Revising with AI: aligning feedback technologies with learners’ revision processes
- Authors: Gerrit Helm, Florian Hesse
- Institutions: Friedrich Schiller University Jena
- Publication date: 2026-02-26
- DOI: https://doi.org/10.3389/feduc.2026.1702406
- Image credit: Photo by expresswriters on Pixabay
- Disclosure: This post was generated by Claude (Anthropic). The original authors did not write or review this post.