This paper addresses a long-standing gap in how work about future possibilities is judged. It introduces a practical assessment framework built from decades of peer-review experience across books, dissertations, articles, and applied projects. The framework groups concrete criteria into five dimensions: methodological transparency, epistemological coherence, plausibility and evidence, ethical positioning, and impact orientation. It also offers checklists and decision trees tailored to different kinds of outputs and pays attention to challenges faced by new evaluators in interdisciplinary settings.
What the study examined
The paper looks at how outputs that explore possible futures are evaluated, noting that this area is less developed than peer-review traditions in many established fields. It gathers lessons from decades of reviewing across a wide range of publication types — from monographs and doctoral theses to journal articles and applied project reports. The aim is to create a shared approach to judging quality without losing the creative and pluralistic spirit that characterizes the field.
Key findings
The author presents a practical, transferable framework organized around five core dimensions:
- Methodological transparency — clarity about how insights were produced and why particular methods were chosen.
- Epistemological coherence — how well the work’s assumptions about knowledge and evidence hold together.
- Plausibility and evidence — whether claims are grounded in a defensible combination of reasoning and supporting materials.
- Ethical positioning — attention to the values, responsibilities, and power implications embedded in the work.
- Impact orientation — consideration of the intended effects and relevance of the output.
To help reviewers apply these dimensions, the paper supplies detailed checklists and decision trees that reflect the specific challenges of different output types, including non-traditional formats such as scenario reports, briefs, and experiential artifacts. Special attention goes to new evaluators, who often work across disciplinary boundaries and must navigate power asymmetries in review processes.
Why it matters
By offering concrete criteria and practical tools, the framework aims to raise the quality threshold for evaluating work about the future while preserving its imaginative and diverse approaches. Establishing a shared language and benchmarks can help reviewers, editors, funders, and practitioners make more consistent judgments about rigor and relevance. The emphasis on ethical positioning and the needs of new evaluators underscores an interest in fairness and inclusivity within review processes.
The paper positions this framework as a resource that can be transferred across publication types, supporting clearer, more defensible assessments without constraining creative practice.
Disclosure
- Research title: Advancing Peer Review in Futures Studies: A Practical Framework for Assessing Foresight Outputs
- Author: Alireza Hejazi
- Journal / venue: Zenodo (CERN European Organization for Nuclear Research) (2026-02-23)
- DOI: 10.5281/zenodo.18092941
- Image credit: Photo by Christina Morillo on Pexels
- Disclosure: This post was generated by Artificial Intelligence. The original authors did not write or review this post.