Two Amazon datasets (Amazon_Beauty, Amazon_Cellphones) are included in the "data/" directory; their train/test split is consistent with [1]. All four datasets used in this paper can be downloaded here.
- Python >= 3.6
- PyTorch = 1.0
We've included all the required files for the Cellphones dataset, so you can evaluate performance and experiment with the recommendation parameters right away:
- Copy the contents of this folder (https://drive.google.com/open?id=1ZbR7nWmpmDjMYmv56ChxonB5q3gKEsAI) to the root directory of your project.
- Run the multi-agent evaluation:
python test_multi_agent.py --dataset cell --run_path False --run_eval True --re_rank True
- Pre-process the data:
python preprocess.py --dataset <dataset_name>
"<dataset_name>" should be one of "cd", "beauty", "cloth", "cell" (refer to utils.py).
- Train knowledge graph embeddings (TransE in this case):
python train_transe_model.py --dataset <dataset_name>
- Train User RL agent:
python train_agent.py --dataset <dataset_name>
- Evaluate the User agent:
python test_agent.py --dataset <dataset_name> --run_path True --run_eval True
If "run_path" is True, the program will generate paths for recommendation according to the trained policy. Note that for subsequent runs you should set this to 'False'
If "run_eval" is True, the program will evaluate the recommendation performance based on the resulting paths.
- Train Product RL agent:
python train_product_agent.py --dataset <dataset_name>
- Evaluate the Product agent:
python test_product_agent.py --dataset <dataset_name> --run_path True --run_eval True
- Evaluate the final multi-agent setup:
python test_multi_agent.py --dataset <dataset_name> --run_path False --run_eval True --re_rank True
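Putting the steps together, a full run on the Cellphones dataset looks like the following (this simply chains the commands above in order; substitute a different dataset name as needed):

# Pre-process the raw data
python preprocess.py --dataset cell
# Train TransE knowledge graph embeddings
python train_transe_model.py --dataset cell
# Train and evaluate the User agent
python train_agent.py --dataset cell
python test_agent.py --dataset cell --run_path True --run_eval True
# Train and evaluate the Product agent
python train_product_agent.py --dataset cell
python test_product_agent.py --dataset cell --run_path True --run_eval True
# Evaluate the final multi-agent setup with re-ranking
python test_multi_agent.py --dataset cell --run_path False --run_eval True --re_rank True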
[1] Yongfeng Zhang, Qingyao Ai, Xu Chen, W. Bruce Croft. "Joint Representation Learning for Top-N Recommendation with Heterogeneous Information Sources." In Proceedings of CIKM. 2017.
[2] Yikun Xian, Zuohui Fu, S. Muthukrishnan, Gerard de Melo, Yongfeng Zhang. "Reinforcement Knowledge Graph Reasoning for Explainable Recommendation." In Proceedings of SIGIR. 2019.