AI Summary of Peer-Reviewed Research
This page presents an AI-generated summary of a published research paper. The original authors did not write or review this article. See the full disclosure below.
Publication Signals show what we were able to verify about where this research was published. Signal strength: STRONG. We verified multiple publication signals for this source, including independently confirmed credentials. Publication Signals reflect the source's verifiable credentials, not the quality of the research.
- ✔ Peer-reviewed source
- ✔ Published in indexed journal
- ✔ No retraction or integrity flags
Key findings from this study
- The study found that LPS-GNN processes graphs with 100 billion edges in 10 hours on a single GPU while improving prediction accuracy in user acquisition scenarios.
- The authors report that the proposed LPMetis graph partitioning algorithm outperforms existing state-of-the-art methods on multiple evaluation metrics.
- The researchers demonstrate that the framework achieves 8.24% to 13.89% performance improvements over state-of-the-art models when deployed on real-world datasets.
Overview
LPS-GNN is a scalable graph neural network framework designed to perform representation learning on graphs containing up to 100 billion edges using a single GPU. The framework addresses computational bottlenecks inherent in iterative message-passing techniques and neighbor explosion issues in large-scale graphs.
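The neighbor-explosion bottleneck mentioned above can be made concrete with a back-of-the-envelope calculation. The numbers below are illustrative, not taken from the paper: in full-graph message passing, an L-layer GNN's receptive field grows roughly as the average degree raised to the power L.

```python
def receptive_field(avg_degree, num_layers):
    """Rough upper bound on the number of multi-hop neighbors an
    L-layer message-passing GNN touches per seed node."""
    return avg_degree ** num_layers

# With an average degree of 20, a 3-layer GNN can touch up to
# 20**3 = 8,000 neighbors per node; at billions of nodes, the
# aggregate working set quickly exceeds any single GPU's memory.
bound = receptive_field(20, 3)
```

This exponential growth is why subgraph-based approaches such as LPS-GNN bound the working set by partitioning the graph before training.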
Methods and approach
The authors examine existing graph partitioning methods and propose LPMetis, a graph partitioning algorithm optimized for large-scale processing. The framework also incorporates a subgraph augmentation strategy to enhance predictive performance. LPS-GNN is compatible with various GNN algorithms and has been deployed on the Tencent platform for evaluation.
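The summary does not detail LPMetis itself, but the partition-then-train pattern it describes can be sketched with a toy partitioner. The range-based `partition_edges` below is a hypothetical stand-in for illustration only, not the authors' algorithm:

```python
from collections import defaultdict

def partition_edges(edges, num_nodes, num_parts):
    """Toy range-based partitioner: node u is assigned to part
    u * num_parts // num_nodes. Edges whose endpoints share a part
    form that part's subgraph; cross-part edges are counted as the
    edge cut, which a real partitioner (e.g. METIS-family methods)
    tries to minimize."""
    part = lambda u: u * num_parts // num_nodes
    subgraphs = defaultdict(list)
    cut = 0
    for u, v in edges:
        if part(u) == part(v):
            subgraphs[part(u)].append((u, v))
        else:
            cut += 1
    return dict(subgraphs), cut

# A 6-node ring plus one chord, split into 2 parts
# (nodes 0-2 -> part 0, nodes 3-5 -> part 1).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (2, 4)]
subgraphs, cut = partition_edges(edges, num_nodes=6, num_parts=2)
```

Each resulting subgraph is small enough to fit on a single GPU and can be fed to an arbitrary GNN, which is consistent with the framework's stated compatibility with various GNN algorithms; the quality of the partition (a small edge cut) is what dedicated algorithms like LPMetis are designed to optimize.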
Results
LPS-GNN processed graphs with 100 billion edges in 10 hours on a single GPU, achieving a 13.8% performance improvement in user acquisition scenarios. Testing on public and real-world datasets showed performance lifts of 8.24% to 13.89% over state-of-the-art models in online applications. LPMetis outperformed current partitioning approaches across multiple evaluation metrics.
Implications
The framework's ability to handle 100 billion-edge graphs on commodity hardware reduces infrastructure requirements for industrial-scale graph mining. The design's compatibility with multiple GNN algorithms enables broader adoption across diverse graph mining tasks beyond the tested user acquisition domain. Successful production deployment on the Tencent platform validates the framework's practical applicability in large-scale commercial systems.
Scope and limitations
This summary is based on the study abstract and available metadata. It does not include a full analysis of the complete paper, supplementary materials, or underlying datasets unless explicitly stated. Findings should be interpreted in the context of the original publication.
Disclosure
- Research title: LPS-GNN: Deploying Graph Neural Networks on Graphs with 100-Billion Edges
- Authors: Xu Cheng, Liang Yao, Feng He, Yukuo Cen, Yufei He, Chenhui Zhang, Wenzheng Feng, Hongyun Cai, Jie Tang
- Institutions: National University of Singapore, Sun Yat-sen University, Tencent (China), Tsinghua University, Zhipu AI (China)
- Publication date: 2026-03-31
- DOI: https://doi.org/10.1145/3801100
- OpenAlex record: available
- Image credit: Photo by Google DeepMind on Pexels
- Disclosure: This post was generated by Claude (Anthropic). The original authors did not write or review this post.
Get the weekly research newsletter
Stay current with peer-reviewed research without reading academic papers — one filtered digest, every Friday.


