PyTorch implementation of our EMNLP 2020 paper: Learning Collaborative Agents with Rule Guidance for Knowledge Graph Reasoning.
We propose a neural-symbolic method for knowledge graph reasoning that leverages symbolic rules.
Walk-based models have shown their advantages in knowledge graph (KG) reasoning by achieving decent performance while providing interpretable decisions. However, the sparse reward signals offered by the KG during traversal are often insufficient to guide a sophisticated walk-based reinforcement learning (RL) model. An alternative approach is to use traditional symbolic methods (e.g., rule induction), which achieve good performance but can be hard to generalize due to the limitation of symbolic representation. In this paper, we propose RuleGuider, which leverages high-quality rules generated by symbolic-based methods to provide reward supervision for walk-based agents. Experiments on benchmark datasets show that RuleGuider improves the performance of walk-based models without losing interpretability.
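At a high level, the mined rules act as a dense auxiliary reward for the RL agents. Below is a minimal Python sketch of that idea; the function and variable names are hypothetical illustrations, not the repository's actual code:

```python
# A minimal sketch of rule-guided reward shaping. All names and the exact
# blending formula are illustrative, not the actual RuleGuider implementation.

def shaped_reward(path_relations, reached_target, mined_rules, rule_ratio=0.5):
    """Blend the sparse hit reward with a dense rule-based reward.

    path_relations: tuple of relations traversed by the relation agent
    reached_target: whether the walk ended at the gold answer entity
    mined_rules:    dict mapping relation paths to confidence scores
                    produced by a symbolic rule miner
    rule_ratio:     weight of the rule reward (cf. --rule_ratio below)
    """
    hit_reward = 1.0 if reached_target else 0.0
    # Reward the agent for following a high-confidence mined rule even
    # when the final entity is wrong -- this densifies the RL signal.
    rule_reward = mined_rules.get(path_relations, 0.0)
    return rule_ratio * rule_reward + (1.0 - rule_ratio) * hit_reward


# Example: the agent followed a known two-hop rule but missed the target.
rules = {("born_in", "located_in"): 0.9}
print(shaped_reward(("born_in", "located_in"), False, rules, rule_ratio=0.3))  # 0.27
```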
If you find the repository or RuleGuider helpful, please cite the following paper:
```
@inproceedings{lei2020ruleguider,
  title={Learning Collaborative Agents with Rule Guidance for Knowledge Graph Reasoning},
  author={Lei, Deren and Jiang, Gangrong and Gu, Xiaotao and Sun, Kexuan and Mao, Yuning and Ren, Xiang},
  booktitle={EMNLP},
  year={2020}
}
```
Install PyTorch (>= 1.4.0) following the instructions on the PyTorch website. Our code is written in Python 3.
Run the following commands to install the required packages:
```
pip3 install -r requirements.txt
```
Unpack the data files:
```
unzip data.zip
```
It will generate three dataset folders in the `./data` directory. In our experiments, the datasets used are `fb15k-237`, `wn18rr`, and `nell-995`.
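For reference, the resulting layout should look roughly like this (a sketch; the files inside each folder vary by dataset):

```
data/
├── fb15k-237/
├── wn18rr/
└── nell-995/
```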
- Train embedding-based models:
  ```
  ./experiment-emb.sh configs/<dataset>-<model>.sh --train <gpu-ID>
  ```
- Pretrain the relation agent using top rules:
  ```
  ./experiment-pretrain.sh configs/<dataset>-rs.sh --train <gpu-ID> <rule-path> --model point.rs.<embedding-model>
  ```
- Jointly train the relation agent and entity agent with reward shaping (a full example invocation follows this list):
  ```
  ./experiment-rs.sh configs/<dataset>-rs.sh --train <gpu-ID> <rule-path> --model point.rs.<embedding-model> --checkpoint_path <pretrain-checkpoint-path>
  ```
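For example, the full pipeline on `fb15k-237` with `conve` embeddings might look as follows (assuming the `configs/<dataset>-<model>.sh` naming shown above; GPU ID 0 is illustrative, and `<rule-path>` / `<pretrain-checkpoint-path>` stand for your own mined-rule file and checkpoint):

```
# 1. Train the ConvE embedding model
./experiment-emb.sh configs/fb15k-237-conve.sh --train 0

# 2. Pretrain the relation agent with the mined rules
./experiment-pretrain.sh configs/fb15k-237-rs.sh --train 0 <rule-path> --model point.rs.conve

# 3. Jointly train both agents with reward shaping
./experiment-rs.sh configs/fb15k-237-rs.sh --train 0 <rule-path> --model point.rs.conve --checkpoint_path <pretrain-checkpoint-path>
```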
Note:
- You can choose embedding models among `conve`, `complex`, and `distmult`.
- You have to pre-train the embedding-based models before pretraining the relation agent or jointly training the two agents.
- You can skip pretraining the relation agent.
- Make sure you set the file path pointers to the pre-trained embedding-based models correctly (see the example configuration file).
- Use `--board <board-path>` to log the training details, `--model <model-path>` to assign the directory in which checkpoints are saved, and `--checkpoint_path <checkpoint-path>` to load checkpoints.
- In joint training, you can use `--rule_ratio <ratio>` to specify the ratio between the rule reward and the hit reward, as shown in the example below.
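For instance, a joint-training run that weights the rule reward at 0.3 against the hit reward might look like this (the dataset, embedding model, ratio value, and flag placement are illustrative):

```
./experiment-rs.sh configs/fb15k-237-rs.sh --train 0 <rule-path> --model point.rs.conve --checkpoint_path <pretrain-checkpoint-path> --rule_ratio 0.3
```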
- Evaluate embedding-based models:
  ```
  ./experiment-emb.sh configs/<dataset>-<model>.sh --inference <gpu-ID>
  ```
- Evaluate the pretraining of the relation agent:
  ```
  ./experiment-pretrain.sh configs/<dataset>-rs.sh --inference <gpu-ID> <rule-path> --model point.rs.<embedding-model> --checkpoint_path <pretrain-checkpoint-path>
  ```
- Evaluate the final result:
  ```
  ./experiment-rs.sh configs/<dataset>-rs.sh --inference <gpu-ID> <rule-path> --model point.rs.<embedding-model> --checkpoint_path <checkpoint-path>
  ```