Self-Supervised Heterogeneous Graph Neural Network with Multi-scale Meta-Path Contrastive Learning

Image Credit: Photo by ThisIsEngineering on Pexels (Source · License)

AI Summary of Peer-Reviewed Research

This page presents an AI-generated summary of a published research paper. The original authors did not write or review this article. See full disclosure ↓

International Journal of Computational Intelligence Systems · 2026-02-25 · Peer-reviewed · View original paper ↗
Publication Signals: STRONG. Publication Signals show what we were able to verify about where this research was published. We verified multiple publication signals for this source, including independently confirmed credentials. Publication Signals reflect the source's verifiable credentials, not the quality of the research.
  • ✔ Peer-reviewed source
  • ✔ Published in indexed journal
  • ✔ No retraction or integrity flags

Overview

Heterogeneous graph neural networks (HGNNs) face a significant challenge in simultaneously capturing local neighborhood structure and global relational patterns. This work introduces HMMC, a self-supervised framework that leverages multi-scale meta-path embeddings and contrastive learning to improve heterogeneous graph representation learning. The framework addresses a fundamental limitation of existing methods, where meta-path selection trades off representation completeness against noise introduction.

Methods and approach

HMMC integrates two primary technical components. The multi-scale meta-path embedding mechanism operates concurrently across different meta-path lengths to capture structural information at varying granularities, mitigating issues where excessively short paths provide insufficient context and excessively long paths introduce noise. The framework employs a cross-view self-supervised contrastive learning approach that optimizes representations across multiple perspectives of the heterogeneous graph structure. A novel star-shaped contrastive loss function is introduced to address the degradation caused by noisy negative samples in conventional contrastive learning paradigms. This loss exploits center-neighborhood structural dependencies as supervisory signals, maintaining representation consistency for positive pairs while preventing over-smoothing effects that arise from embedding convergence among similar nodes.
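
The paper's exact architecture is not reproduced here, but the multi-scale idea is straightforward to illustrate. Below is a minimal PyTorch sketch, not the authors' implementation: it assumes meta-path-specific adjacency matrices (one per path length) are precomputed, and the class name MultiScaleMetaPathEncoder, the attention-based fusion, and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleMetaPathEncoder(nn.Module):
    """Hypothetical sketch: encode nodes under meta-paths of several
    lengths (scales), then fuse the per-scale embeddings with attention."""

    def __init__(self, in_dim, hid_dim, num_scales):
        super().__init__()
        # One projection per meta-path scale (short -> long paths).
        self.scale_encoders = nn.ModuleList(
            [nn.Linear(in_dim, hid_dim) for _ in range(num_scales)]
        )
        # Semantic attention over scales (HAN-style fusion).
        self.attn = nn.Sequential(
            nn.Linear(hid_dim, hid_dim), nn.Tanh(),
            nn.Linear(hid_dim, 1, bias=False)
        )

    def forward(self, x, adjs):
        # x: (N, in_dim) node features; adjs: list of (N, N) normalized
        # meta-path adjacency matrices, one per scale.
        per_scale = []
        for enc, adj in zip(self.scale_encoders, adjs):
            h = F.elu(adj @ enc(x))          # propagate along this meta-path
            per_scale.append(h)
        h = torch.stack(per_scale, dim=1)    # (N, S, hid_dim)
        # Attention weights decide how much each scale contributes.
        w = torch.softmax(self.attn(h).mean(dim=0), dim=0)   # (S, 1)
        return (h * w.unsqueeze(0)).sum(dim=1)               # (N, hid_dim)
```

Attention-based fusion over scales is one plausible reading of "operates concurrently across different meta-path lengths": it lets the model down-weight scales whose longer paths contribute mostly noise while still drawing context from them.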

Key Findings

Experimental validation across multiple public heterogeneous graph datasets demonstrates quantitative improvements over established baseline methods, with performance gains ranging from 0.5% to 4.1% across different benchmarks. The results indicate enhanced representation power, robustness, and generalization capability relative to state-of-the-art approaches. The multi-scale meta-path mechanism successfully resolves the inherent tension between local structural granularity and global semantic consistency in heterogeneous graph representation learning.

Implications

The proposed methodology advances heterogeneous graph representation learning by providing a unified framework that simultaneously addresses multiple technical challenges in the domain. The star-shaped contrastive loss function presents a generalizable approach to mitigating negative sample noise in self-supervised settings beyond heterogeneous graphs. The demonstrated performance improvements suggest the framework has substantial applicability to downstream tasks requiring robust heterogeneous graph representations.
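
As a rough intuition for how such a loss could transfer beyond heterogeneous graphs: the sketch below treats each node as the center of a star whose spokes are its structural neighbors, and scores those neighbors as additional positives in an InfoNCE-style cross-view objective. This is a hedged reconstruction, not the paper's formula; star_contrastive_loss, neighbor_mask, and tau are all assumed names and parameters.

```python
import torch
import torch.nn.functional as F

def star_contrastive_loss(z1, z2, neighbor_mask, tau=0.5):
    """Hypothetical star-shaped contrastive objective: each center node
    (in one view) is pulled toward its structural neighborhood (the
    star's spokes, in the other view) and pushed from non-neighbors.

    z1, z2:        (N, d) embeddings of the same nodes under two views.
    neighbor_mask: (N, N) bool, True where j is a structural neighbor
                   of i; the diagonal should be True so every node is
                   its own positive anchor.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    sim = torch.exp(z1 @ z2.t() / tau)       # (N, N) cross-view similarity
    pos = (sim * neighbor_mask).sum(dim=1)   # center-neighborhood positives
    loss = -torch.log(pos / sim.sum(dim=1))  # InfoNCE-style ratio
    return loss.mean()
```

Using center-neighborhood structure to define positives, rather than relying only on augmented copies of the same node, is what would reduce the influence of noisy negatives: structurally similar nodes are no longer forced apart.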

Disclosure

  • Research title: Self-Supervised Heterogeneous Graph Neural Network with Multi-scale Meta-Path Contrastive Learning
  • Authors: Yufei Wu, Xiumei Wen, Fanxing Meng, Yingxue Mu
  • Publication date: 2026-02-25
  • DOI: https://doi.org/10.1007/s44196-025-01116-8
  • OpenAlex record: View
  • Image credit: Photo by ThisIsEngineering on Pexels (Source · License)
  • Disclosure: This post was generated by Claude (Anthropic). The original authors did not write or review this post.
