This repository provides our PVLDB 2024 tutorial paper, the accompanying slides, and a curated list of important works on the related topics:
- NeutronOrch: Rethinking sample-based GNN training under CPU-GPU heterogeneous environments. arXiv preprint arXiv:2311.13225 (2023).
- Company-as-tribe: Company financial risk assessment on tribe-style graph with hierarchical graph neural networks. In KDD. 2712–2720. 2022.
- DGCL: An efficient communication library for distributed GNN training. In EuroSys. 130–144. 2021.
- DSP: Efficient GNN training with multiple GPUs. In PPoPP. 392–404. 2023.
- Efficient scaling of dynamic graph neural networks. In SC. 1–15. 2021.
- EXGC: Bridging efficiency and explainability in graph condensation. arXiv preprint arXiv:2402.05962 (2024).
- Optimizing DNN computation graph using graph substitutions. VLDB 13, 12 (2020), 2734–2746.
- STile: Searching hybrid sparse formats for sparse deep learning operators automatically. Proc. ACM Manag. Data 2, 1 (2024), 1–26.
- P3: Distributed deep graph learning at scale. In OSDI. 551–568. 2021.
- Graph neural networks for recommender system. In WSDM. 1623–1625. 2022.
- ETC: Efficient training of temporal graph neural networks over large-scale dynamic graphs. VLDB 17, 5 (2024), 1060–1072.
- SIMPLE: Efficient temporal graph neural network training at scale with dynamic data placement. Proc. ACM Manag. Data 2, 3 (2024), 1–25.
- Traversing large graphs on GPUs with unified memory. VLDB 13, 7 (2020), 1119–1133.
- Open graph benchmark: Datasets for machine learning on graphs. NeurIPS 33 (2020), 22118–22133.
- Opinion leaders for information diffusion using graph neural network in online social networks. TWEB 17, 2 (2023), 1–37.
- Accelerating graph sampling for graph machine learning using GPUs. In EuroSys. 311–326. 2021.
- Improving the accuracy, scalability, and performance of graph neural networks with Roc. MLSys 2 (2020), 187–198.
- Redundancy-free computation for graph neural networks. In KDD. 997–1005. 2020.
- Pre-training on large-scale heterogeneous graph. In KDD. 756–766. 2021.
- Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016).
- Cache-based GNN system for dynamic graphs. In CIKM. 937–946. 2021.
- Orca: Scalable temporal graph neural network training with theoretical guarantees. Proc. ACM Manag. Data 1, 1 (2023), 52:1–52:27.
- Zebra: When temporal graph neural networks meet temporal personalized PageRank. VLDB 16, 6 (2023), 1332–1345.
- CC-GNN: A community and contraction-based graph neural network. In ICDM. IEEE, 231–240. 2022.
- DAHA: Accelerating GNN training with data and hardware aware execution planning. VLDB 17, 6 (2024), 1364–1376.
- PaGraph: Scaling GNN training on large graphs via computation-aware caching. In SoCC. 401–415. 2020.
- SANCUS: Staleness-aware communication-avoiding full-graph decentralized training in large-scale graph neural networks. VLDB 15, 9 (2022), 1937–1950.
- NeutronStar: Distributed GNN training with hybrid dependency management. In SIGMOD. 1301–1315. 2022.
- GNNAdvisor: An adaptive and efficient runtime system for GNN acceleration on GPUs. In OSDI. 515–531. 2021.
- Kernel ridge regression-based graph dataset distillation. In KDD. 2850–2861. 2023.
- Large-scale graph neural networks: The past and new frontiers. In KDD. 5835–5836. 2023.
- GNNLab: A factored system for sample-based GNN training over GPUs. In EuroSys. 417–434. 2022.
- Feature-oriented sampling for fast and scalable GNN training. In ICDM. IEEE, 723–732. 2022.
- DUCATI: A dual-cache training system for graph neural networks on giant graphs with the GPU. Proc. ACM Manag. Data 1, 2 (2023), 166:1–166:24.
- NSCaching: Simple and efficient negative sampling for knowledge graph embedding. In ICDE. IEEE, 614–625. 2019.
- Learning on large-scale text-attributed graphs via variational inference. arXiv preprint arXiv:2210.14709 (2022).
- Structure-free graph condensation: From large-scale graphs to condensed graph-free data. NeurIPS 36 (2024).
If you find this repository useful for your work, please consider citing it as follows:
@article{vldb24shen,
  title   = {Efficient Training of Graph Neural Networks on Large Graphs},
  author  = {Shen, Yanyan and Chen, Lei and Fang, Jingzhi and Zhang, Xin and Gao, Shihong and Yin, Hongbo},
  journal = {Proc. {VLDB} Endow.},
  volume  = {17},
  number  = {12},
  year    = {2024},
  doi     = {10.14778/3685800.3685844},
}