AI Summary of Peer-Reviewed Research
This page presents an AI-generated summary of a published research paper. The original authors did not write or review this article. See full disclosure ↓
Publication Signals show what we were able to verify about where this research was published. Rating: MODERATE. Core publication signals for this source were verified. Publication Signals reflect the source's verifiable credentials, not the quality of the research.
- ✔ Peer-reviewed source
- ✔ Published in indexed journal
- ✔ No retraction or integrity flags
Key findings from this study
- The study establishes that domain and modelling correctness of semantic resources require human participation and resist full automation.
- The authors report that 144 papers published across 15 years reveal patterns in evaluation methodologies but lack coherent theoretical grounding.
- The review identifies that recurring best practices exist for incorporating human evaluators into semantic resource quality assurance despite limited systematic documentation.
Overview
Semantic resources including ontologies and knowledge graphs support intelligent applications across diverse domains. Quality assurance constitutes an integral component of their operational lifecycle. Certain quality dimensions—particularly domain correctness and modelling correctness—resist automated evaluation and necessitate human participation. Human-centric evaluation of semantic resources (HESR) remains essential yet theoretically and empirically underdeveloped as a research area. This study develops a theoretical framework for characterizing HESR and synthesizes evidence from 144 papers published over 15 years.
Methods and approach
The authors conducted a systematic mapping study spanning 15 years of published research. The investigation integrated existing literature into a unified theoretical framework defining and characterizing HESR. Data collection and analysis scripts supporting the framework remain openly available. The mapping study examined concrete HESR approaches, identified emerging trends, and established best practices within the field.
Results
The framework establishes HESR as a distinct evaluation domain requiring theoretical specification and empirical grounding. Analysis of the 144 papers revealed patterns in evaluation methodologies, resource types assessed, and assessment mechanisms employed. The study identifies recurring practices in how human evaluators participate in quality verification processes. Trends emerged regarding which semantic resource characteristics receive human evaluation attention and which evaluation approaches predominate.
Implications
The theoretical framework provides researchers and practitioners with structured foundations for designing and conducting HESR activities. Establishing common terminology and characteristics enables systematic comparison across HESR implementations and facilitates knowledge accumulation in this area. The framework clarifies which resource quality dimensions remain dependent on human judgment and cannot be addressed through automated mechanisms.
The identified trends and best practices offer practical guidance for organizations developing semantic resources. Understanding current approaches helps practitioners select appropriate evaluation strategies aligned with their resource characteristics and application contexts. The openly available data and analysis scripts support reproducibility and enable extension of the mapping study findings.
The systematic characterization of HESR creates a foundation for future research addressing gaps in theoretical understanding and methodological innovation. The study demonstrates that human-centric evaluation represents a substantive subdomain within semantic resource quality assurance requiring dedicated investigation. These insights position HESR as an area meriting continued scholarly attention and methodological development.
Scope and limitations
This summary is based on the study abstract and available metadata. It does not include a full analysis of the complete paper, supplementary materials, or underlying datasets unless explicitly stated. Findings should be interpreted in the context of the original publication.
Disclosure
- Research title: Human-centric Evaluation of Semantic Resources: A Systematic Mapping Study
- Authors: Marta Sabou (ORCID: 0000-0001-9301-8418), Stefani Tsaneva, Miriam Fernández, María Poveda, Mari Carmen Suárez-Figueroa
- Institutions: The Open University, Universidad Politécnica de Madrid, Vienna University of Economics and Business
- Publication date: 2026-03-05
- DOI: https://doi.org/10.1145/3800939
- OpenAlex record: View
- Image credit: Photo by ThisisEngineering on Unsplash
- Disclosure: This post was generated by Claude (Anthropic). The original authors did not write or review this post.


