
HGNN Shield: Defending Hypergraph Neural Networks Against High-Order Structure Attack

Paper License: Apache 2.0

Official implementation of "HGNN Shield: Defending Hypergraph Neural Networks Against High-Order Structure Attack", published in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).

Authors: Yifan Feng, Yifan Zhang, Shaoyi Du, Shihui Ying, Jun-Hai Yong, Yue Gao


Figure 1: The architecture of the HGNN Shield framework.


📖 Introduction

HGNN Shield is a robust framework designed to defend Hypergraph Neural Networks (HGNNs) against sophisticated high-order structure attacks. Unlike traditional graph-based defenses, HGNN Shield focuses on the unique properties of hyperedges to identify and purify malicious structural perturbations.

Core Features:

  • Structural Purification: Automatically identifies and prunes abnormal connections in hyperedges using feature-consistency analysis (a simplified sketch follows this list).
  • Adaptive Re-linking: Recovers potential valid relationships by establishing new connections for pruned nodes based on local similarity.
  • Versatile Backbone Support: Compatible with various GNN and HGNN architectures.
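
For intuition, here is a minimal, self-contained sketch of the purify-then-relink idea under simplifying assumptions. The function name purify_hyperedges, the cosine-similarity criterion, and the way the two thresholds are applied are illustrative only and are not taken from the released code.

import torch
import torch.nn.functional as F

def purify_hyperedges(X, hyperedges, prune_thr=0.05, relink_thr=0.015):
    """Illustrative sketch of feature-consistency purification and re-linking.

    X: (num_nodes, dim) node feature matrix.
    hyperedges: list of lists of vertex indices.
    The cosine-similarity criterion and the thresholds are assumptions for
    illustration, not the paper's exact rule.
    """
    purified, pruned = [], []
    for e in hyperedges:
        idx = torch.tensor(e)
        centroid = X[idx].mean(dim=0, keepdim=True)   # hyperedge feature centroid
        sim = F.cosine_similarity(X[idx], centroid)   # per-node consistency score
        keep = sim >= prune_thr
        purified.append(idx[keep].tolist())           # structural purification
        pruned.extend(idx[~keep].tolist())

    kept = [e for e in purified if e]
    if not kept or not pruned:
        return kept

    # Adaptive re-linking: attach each pruned node to the most similar
    # purified hyperedge, if the similarity exceeds the linking threshold.
    centroids = torch.stack([X[torch.tensor(e)].mean(dim=0) for e in kept])
    for v in pruned:
        sim = F.cosine_similarity(X[v].unsqueeze(0), centroids)
        best = int(sim.argmax())
        if sim[best] >= relink_thr:
            kept[best].append(v)
    return kept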

🛠️ Environment Setup

1. Requirements

The code relies on PyTorch, DHG, DeepRobust, Hydra (hydra-core), and OmegaConf, all of which are installed by the command below.

2. Installation

pip install torch dhg deeprobust hydra-core omegaconf

📊 Datasets

1. Download Pre-attacked Data

We provide the pre-processed attacked data on Google Drive.

2. Physical Layout

Place the data files under the data/ root directory as follows:

data/
└── coauthorship_cora/
    └── [mode]/
        └── [attack_rate]/
            └── data.pt
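
A minimal loading sketch, assuming each attacked hypergraph is stored as a single torch-serialized object at the path above; replace the [mode] and [attack_rate] placeholders with the actual values, and note that the contents of data.pt are an assumption here, not documented behavior:

import torch

# Hypothetical example path; substitute the actual mode and attack rate.
path = "data/coauthorship_cora/[mode]/[attack_rate]/data.pt"
data = torch.load(path)   # torch-serialized attacked hypergraph
print(type(data))         # inspect what the file actually contains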

3. Generate Custom Attack

Configure the attack section in config/config.yaml and run:

python attack.py
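
Since the project's configuration is managed with Hydra (see the Usage section below), the attack settings can presumably also be overridden on the command line instead of editing the file, for example (assuming attack.rate is the key listed under Configuration Options):

python attack.py attack.rate=0.2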

🚀 Usage

All hyperparameters and configurations are managed via Hydra in config/config.yaml.

1. Training & Evaluation

To train HGNN Shield on the default dataset:

python train.py

2. Configuration Options

Key parameters in config/config.yaml (a short example of reading them follows this list):

  • model.threshold: Pruning threshold for structural purification (Default: 0.05).
  • model.threshold2: Linking threshold for node recovery (Default: 0.015).
  • attack.rate: Perturbation intensity (e.g., 0.1, 0.2).
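
These keys can also be inspected programmatically with OmegaConf, which is installed above; a minimal sketch, assuming config/config.yaml exposes the keys exactly as listed:

from omegaconf import OmegaConf

cfg = OmegaConf.load("config/config.yaml")          # parse the Hydra config file
print(cfg.model.threshold, cfg.model.threshold2)    # pruning / linking thresholds
print(cfg.attack.rate)                              # perturbation intensity
# Individual values can also be overridden per run on the Hydra command line,
# e.g. python train.py model.threshold=0.1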

3. Ablation Studies

Run ablation tasks (e.g., sensitivity analysis of thresholds):

python ablation.py
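
Assuming ablation.py is also a Hydra application, a threshold sensitivity sweep could presumably be launched with Hydra's multirun mode; the specific values below are illustrative:

python ablation.py -m model.threshold=0.01,0.05,0.1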

📈 Experimental Results

HGNN Shield demonstrates superior robustness across various benchmarks and attack scenarios.



Figure 2: Performance under non-targeted and targeted attacks.


📝 Citation

If you find this repository or our research helpful, please consider citing our TPAMI paper:

@ARTICLE{feng2024hgnnshield,
  author={Feng, Yifan and Zhang, Yifan and Du, Shaoyi and Ying, Shihui and Yong, Jun-Hai and Gao, Yue},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={HGNN Shield: Defending Hypergraph Neural Networks Against High-Order Structure Attack},
  year={2026},
  volume={},
  number={01},
  ISSN={1939-3539},
  pages={1-17},
}

📬 Contact

For any questions, please contact yifanfeng@tsinghua.edu.cn.
