Commit be07ae1: update
gzn00417 committed Aug 27, 2024 (1 parent: 04fe286)
Showing 4 changed files with 39 additions and 3 deletions.
Binary file added: HiFGL.png
README.md (37 additions, 1 deletion)
# HiFGL: A Hierarchical Framework for Cross-silo Cross-device Federated Graph Learning

This is the official repository of the KDD'2024 Research Track Paper ["HiFGL: A Hierarchical Framework for Cross-silo Cross-device Federated Graph Learning"](https://dl.acm.org/doi/10.1145/3637528.3671660) ([Preprint](https://arxiv.org/abs/2406.10616)).

### Introduction

Federated Graph Learning (FGL) has emerged as a promising way to learn high-quality representations from distributed graph data while preserving privacy. Although considerable effort has been devoted to FGL under either the cross-device or the cross-silo paradigm, how to effectively capture graph knowledge in the more complicated cross-silo cross-device environment remains under-explored. The task is challenging because of the inherent hierarchy and heterogeneity of decentralized clients, the diversified privacy constraints across clients, and the requirement of cross-client graph integrity. To this end, we propose a Hierarchical Federated Graph Learning (HiFGL) framework for cross-silo cross-device FGL. Specifically, we devise a unified hierarchical architecture that safeguards federated GNN training on heterogeneous clients while ensuring graph integrity. Moreover, we propose a Secret Message Passing (SecMP) scheme that simultaneously shields subgraph-level and node-level sensitive information from unauthorized access. Theoretical analysis proves that HiFGL achieves multi-level privacy preservation with complexity guarantees. Extensive experiments on real-world datasets validate the superiority of the proposed framework over several baselines. Furthermore, HiFGL's versatile design allows it to be applied in purely cross-silo or cross-device settings, further broadening its utility in real-world FGL applications.

<p align="center">
<img src="./HiFGL.png" alt="Hierarchical Federated Graph Learning" width="100%">
</p>
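The abstract describes a hierarchy of heterogeneous clients: a server coordinating silo-level clients (each guarding a subgraph) that in turn contain device-level clients (each guarding node-level data). A rough conceptual sketch of that structure, purely for illustration (the class and field names here are assumptions, not the repository's actual API):

```python
from dataclasses import dataclass, field

# Conceptual sketch of the cross-silo cross-device hierarchy described
# in the paper; names are illustrative, not HiFGL's real classes.
@dataclass
class DeviceClient:
    node_id: int  # a device-level client guards node-level data

@dataclass
class SiloClient:
    # a silo-level client groups the devices holding its subgraph
    devices: list = field(default_factory=list)

@dataclass
class Server:
    # the server coordinates federated training across silos
    silos: list = field(default_factory=list)

server = Server(silos=[
    SiloClient(devices=[DeviceClient(0), DeviceClient(1)]),
    SiloClient(devices=[DeviceClient(2)]),
])
print(len(server.silos), sum(len(s.devices) for s in server.silos))  # -> 2 3
```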

### Requirements

- Python 3.8
- PyTorch 1.10
- PyTorch Lightning 1.5.3
- TorchMetrics 0.6

### Run

An example run that sets the dataset to `cora` and the number of epochs to `64`:

```shell
python ./code/main.py -D cora -E 64
```

### Citation

```
@article{guo2024hifgl,
title={HiFGL: A Hierarchical Framework for Cross-silo Cross-device Federated Graph Learning},
author={Guo, Zhuoning and Yao, Duanyi and Yang, Qiang and Liu, Hao},
journal={arXiv preprint arXiv:2406.10616},
year={2024}
}
```
code/args.py (1 addition, 1 deletion)

```diff
@@ -12,7 +12,7 @@ def parse_global_args():
     parser.add_argument('--root', type=str, default='./')
     parser.add_argument('--data_folder', type=str, default='data')
     parser.add_argument('--args_folder', type=str, default='args')
-    parser.add_argument('-D', '--dataset_name', type=str)
+    parser.add_argument('-D', '--dataset_name', type=str, default='cora')
     parser.add_argument('-E', '--epochs', type=int, default=1024, help='epoch size')
     parser.add_argument('--train_batch_size', type=int, default=1, help='train batch size.')
     parser.add_argument('--val_batch_size', type=int, default=1, help='validation batch size.')
```
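The effect of this change is that `-D` is now optional and falls back to `cora`. A minimal sketch of the relevant argparse behavior (only the two flags shown in the diff; the alternate dataset name is an arbitrary example):

```python
import argparse

# Reproduce the two flags touched by the commit: '-D' now has a
# default, so omitting it selects the 'cora' dataset.
parser = argparse.ArgumentParser()
parser.add_argument('-D', '--dataset_name', type=str, default='cora')
parser.add_argument('-E', '--epochs', type=int, default=1024, help='epoch size')

args = parser.parse_args([])          # no flags given: defaults apply
print(args.dataset_name, args.epochs)  # -> cora 1024

override = parser.parse_args(['-D', 'citeseer', '-E', '64'])
print(override.dataset_name, override.epochs)  # -> citeseer 64
```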
code/train.py (1 addition, 1 deletion)

```diff
@@ -102,7 +102,7 @@ def test_step(self, *args, **kwargs):
         self.validation_step(*args, **kwargs)

     def _get_clients(self):
-        return {i: torch.load(f"../data/{self.hparams.dataset_name}/{i}_clients.pt") for i in ([-1] + list(range(1, self.hparams.num_clients + 1)))}
+        return {i: torch.load(f"./data/{self.hparams.dataset_name}/{i}_clients.pt") for i in ([-1] + list(range(1, self.hparams.num_clients + 1)))}

     def _count_classes(self, clients: dict):
         classes = set()
```
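The fix makes `_get_clients` resolve client files relative to the working directory (`./data/...`) instead of the parent directory (`../data/...`). A sketch of the same path-construction pattern without the `torch.load` call (the key `-1` plus keys `1..num_clients` mirrors the dictionary comprehension above; the dataset name and client count are illustrative):

```python
# Mirror the key/path scheme of _get_clients: key -1 plus one key per
# client in 1..num_clients, each mapped to a file under ./data/.
def client_file_paths(dataset_name: str, num_clients: int) -> dict:
    return {
        i: f"./data/{dataset_name}/{i}_clients.pt"
        for i in [-1] + list(range(1, num_clients + 1))
    }

paths = client_file_paths("cora", 3)
print(sorted(paths))      # -> [-1, 1, 2, 3]
print(paths[-1])          # -> ./data/cora/-1_clients.pt
```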
