This repository is the implementation of the WWW 2025 paper: Rethinking and Accelerating Graph Condensation: A Training-Free Approach with Class Partition.
CGC replaces the class-to-class matching paradigm of existing graph condensation methods with a novel class-to-node matching paradigm, thereby achieving an exceedingly efficient condensation process with improved accuracy.
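As a rough illustration of the class-to-node idea, below is a minimal, hedged sketch (our own simplification, not the repository's actual code): the node features of each class are partitioned into sub-clusters, and each cluster aggregate directly becomes one condensed node, without any GNN training.

```python
# Hedged sketch of training-free class-to-node matching via class partition.
# All names (propagated_feats, labels, nodes_per_class) are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def condense_by_class_partition(propagated_feats, labels, nodes_per_class):
    """Partition every class into sub-clusters and use each cluster centroid
    as one condensed node feature; labels follow the originating class."""
    cond_feats, cond_labels = [], []
    for c in np.unique(labels):
        feats_c = propagated_feats[labels == c]            # nodes of class c
        k = min(nodes_per_class, len(feats_c))             # sub-clusters for this class
        km = KMeans(n_clusters=k, n_init=10).fit(feats_c)
        cond_feats.append(km.cluster_centers_)             # one condensed node per cluster
        cond_labels.extend([c] * k)
    return np.vstack(cond_feats), np.array(cond_labels)

# Example with random data: 1000 nodes, 32-dim features, 4 classes, 10 condensed nodes per class.
X = np.random.randn(1000, 32).astype(np.float32)
y = np.random.randint(0, 4, size=1000)
X_cond, y_cond = condense_by_class_partition(X, y, nodes_per_class=10)
print(X_cond.shape, y_cond.shape)  # (40, 32) (40,)
```

In practice, such clustering-style partitioning can be accelerated with libraries like FAISS, which is acknowledged at the end of this README.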
For more work on graph condensation, please refer to our TKDE'25 survey paper 🔥Graph Condensation: A Survey and the paper list Graph Condensation Papers.
All experiments are implemented in Python 3.9 with PyTorch 1.12.1.
To install requirements:
pip install -r requirements.txt
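After installation, a quick sanity check of the pinned versions can look like this (an optional, illustrative snippet, not part of the repository):

```python
# Optional environment check; version pins follow the README above.
import sys
import torch

print("Python:", sys.version.split()[0])          # expected 3.9.x
print("PyTorch:", torch.__version__)              # expected 1.12.1
print("CUDA available:", torch.cuda.is_available())
```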
- Cora and Citeseer will be downloaded automatically from PyG (see the loading sketch after this list).
- Ogbn-products will be downloaded automatically from OGB.
- For Ogbn-arxiv, Flickr, and Reddit, we use the datasets provided by GraphSAINT. They are available via the Google Drive link (alternatively, the BaiduYun link, code: f1ao). Download the data, rename the folder to `./dataset/`, and place it at the root directory. Note that the links are provided by the GraphSAINT team.
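For the automatically downloaded datasets, loading roughly follows the standard PyG and OGB interfaces (a hedged sketch; the repository's own data-loading code may organize things differently):

```python
# Hedged sketch using the standard PyG / OGB loaders with ./dataset/ as root;
# the repository's own loading code may arrange directories differently.
from torch_geometric.datasets import Planetoid
from ogb.nodeproppred import PygNodePropPredDataset

cora = Planetoid(root="./dataset/", name="Cora")[0]          # downloads on first use
citeseer = Planetoid(root="./dataset/", name="CiteSeer")[0]
products = PygNodePropPredDataset(name="ogbn-products", root="./dataset/")[0]
print(cora.num_nodes, citeseer.num_nodes, products.num_nodes)
```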
To condense the graph using CGC and train GCN models:
$ python main.py --gpu 0 --dataset reddit --ratio 0.001 --generate_adj 1
For the more efficient graphless variant CGC-X:
$ python main.py --gpu 0 --dataset reddit --ratio 0.001 --generate_adj 0
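The `--generate_adj` flag appears to control whether a condensed adjacency is produced: `1` yields a condensed graph (features, adjacency, labels) for training GCNs, while `0` yields the graphless CGC-X output. The toy sketch below uses dummy tensors and a simplified model to illustrate how the two outputs could be consumed downstream; it is not the repository's training code.

```python
# Toy sketch of consuming a condensed graph downstream; dummy tensors stand in
# for the condensed features X', adjacency A', and labels Y'. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

n, d, c = 150, 64, 7                      # condensed nodes, feature dim, classes (illustrative)
feat = torch.randn(n, d)                  # stands in for condensed features X'
label = torch.randint(0, c, (n,))         # stands in for condensed labels Y'
adj = torch.eye(n)                        # stands in for a normalized condensed adjacency A'

class TinyGCN(nn.Module):
    """Two-layer GCN-style model; without an adjacency it degrades to an MLP."""
    def __init__(self, d_in, d_hid, d_out):
        super().__init__()
        self.lin1 = nn.Linear(d_in, d_hid)
        self.lin2 = nn.Linear(d_hid, d_out)

    def forward(self, x, a=None):
        h = self.lin1(x)
        if a is not None:                 # --generate_adj 1: propagate over A'
            h = a @ h
        h = self.lin2(F.relu(h))
        if a is not None:
            h = a @ h
        return h

model = TinyGCN(d, 128, c)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(50):
    opt.zero_grad()
    out = model(feat, adj)                # use model(feat) for the graphless CGC-X setting
    F.cross_entropy(out, label).backward()
    opt.step()
```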
All scripts for different condensation ratios are provided in `run.sh`.
Results will be saved in `./results/`.
We express our gratitude to the teams behind GCond and FAISS for their remarkable work.
@inproceedings{gao2025rethinking,
title={Rethinking and Accelerating Graph Condensation: A Training-Free Approach with Class Partition},
author={Gao, Xinyi and Ye, Guanhua and Chen, Tong and Zhang, Wentao and Yu, Junliang and Yin, Hongzhi},
booktitle={Proceedings of the ACM on Web Conference 2025},
year={2025}
}