This is the official implementation of the paper Knowledge Bridging for Empathetic Dialogue Generation (AAAI 2022).
- Check the packages needed or simply run the command: `pip install -r requirements.txt`
- Download GloVe vectors (`glove.6B.300d.txt`) from here and put them into `/data/`.
- To obtain the other data sources regarding ConceptNet and the NRC_VAD lexicon, please visit Google Drive and place the processed dataset `kemp_dataset_preproc.json` into `/data/`.
- For reproducibility purposes, we place the model checkpoints on Google Drive. You can download a checkpoint and move it under `/result/[MODELNAME]/result/`, e.g., `/result/KEMP/result/KEMP_best.tar` (a loading sketch follows this list).
- To skip training, please check the folder `/result/[MODELNAME]/predicition/`.
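As referenced above, here is a minimal sketch of inspecting a downloaded checkpoint. It assumes the `.tar` file is a standard PyTorch archive saved with `torch.save`; the actual contents of `KEMP_best.tar` are not documented here, so the printed summary makes no assumption about its structure.

```python
import torch

# Inspect a downloaded checkpoint. We assume it is a standard PyTorch archive
# created with torch.save; the exact contents of KEMP_best.tar may differ.
ckpt = torch.load("result/KEMP/result/KEMP_best.tar", map_location="cpu")

if isinstance(ckpt, dict):
    # Typical checkpoints are dictionaries (e.g. model weights, optimizer state).
    print("Top-level keys:", list(ckpt.keys()))
else:
    print("Loaded object of type:", type(ckpt))
```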
The dataset (EmpatheticDialogues) is preprocessed and stored under `/data/` in pickle format:

```
python preprocess.py
```

You can skip the data processing and directly use the processed file `kemp_dataset_preproc.json`.
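If you want to inspect the processed file before training, a minimal sketch is below. Since the README describes the file as pickle-serialized despite the `.json` extension, it is loaded with `pickle`; the summary printed makes no assumption about its exact structure.

```python
import pickle

# Load the preprocessed EmpatheticDialogues data. The README states the file is
# stored in pickle format, so we deserialize it with pickle rather than json.
with open("data/kemp_dataset_preproc.json", "rb") as f:
    data = pickle.load(f)

# Print a quick summary of whatever object we got back.
if isinstance(data, dict):
    print("Keys:", list(data.keys()))
elif isinstance(data, (list, tuple)):
    print("Number of entries:", len(data))
else:
    print("Loaded object of type:", type(data))
```

With the preprocessed data and GloVe embeddings in place, train the full KEMP model: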
```
python main.py \
--cuda \
--label_smoothing \
--noam \
--emb_dim 300 \
--hidden_dim 300 \
--hop 1 \
--heads 2 \
--pretrain_emb \
--model KEMP \
--device_id 0 \
--concept_num 1 \
--total_concept_num 10 \
--attn_loss \
--pointer_gen \
--save_path result/KEMP/ \
--emb_file data/glove.6B.300d.txt
```
This ablation (`wo_ECE`) does not consider the emotional context graph of the Emotional Context Encoder (ECE). In ECE, we enrich the dialogue history with external knowledge to form an emotional context graph. The emotional signals of the context are then distilled from the embeddings and emotion intensity values in that graph.
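For intuition, here is a purely illustrative sketch of that enrichment step. The toy ConceptNet neighbours, NRC_VAD intensities, and the function name are assumptions for illustration, not the repository's actual data structures or API (the real graph is built from the preprocessed data).

```python
from collections import defaultdict

# Purely illustrative: toy stand-ins for ConceptNet neighbours and NRC_VAD
# emotion intensities; not the repository's actual resources.
CONCEPTNET_NEIGHBOURS = {
    "alone": ["lonely", "isolated"],
    "lost": ["confused", "missing"],
}
NRC_VAD_INTENSITY = {"lonely": 0.85, "isolated": 0.78, "confused": 0.60, "missing": 0.55}

def build_emotional_context_graph(dialogue_tokens, top_k=1):
    """Attach the top-k emotion-intense concepts to each dialogue token."""
    graph = defaultdict(list)  # token -> [(concept, emotion intensity), ...]
    for token in dialogue_tokens:
        candidates = CONCEPTNET_NEIGHBOURS.get(token, [])
        # Rank candidate concepts by their NRC_VAD-based emotion intensity.
        ranked = sorted(candidates, key=lambda c: NRC_VAD_INTENSITY.get(c, 0.0), reverse=True)
        for concept in ranked[:top_k]:
            graph[token].append((concept, NRC_VAD_INTENSITY.get(concept, 0.0)))
    return graph

print(dict(build_emotional_context_graph(["i", "feel", "alone", "and", "lost"])))
```

The following command trains the ablation without this component: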
```
python main.py \
--cuda \
--label_smoothing \
--noam \
--emb_dim 300 \
--hidden_dim 300 \
--hop 1 \
--heads 2 \
--pretrain_emb \
--model wo_ECE \
--device_id 0 \
--concept_num 1 \
--total_concept_num 10 \
--pointer_gen \
--save_path result/wo_ECE/ \
--emb_file data/glove.6B.300d.txt
```
This ablation (`wo_EDD`) does not consider the emotional dependency strategies of the Emotion-Dependency Decoder (EDD). In EDD, given the emotional signal and the emotional context graph, we incorporate an emotional cross-attention mechanism to selectively learn the emotional dependencies.
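A minimal sketch of one way such an emotional cross-attention step can look is given below. The use of `nn.MultiheadAttention`, the tensor shapes, and the intensity weighting are illustrative assumptions, not the repository's exact modules; only the embedding size (300) and head count (2) mirror the training flags.

```python
import torch
import torch.nn as nn

# Illustrative shapes: batch of 2, 5 decoding steps, 8 graph nodes, 300-dim embeddings.
batch, tgt_len, graph_len, dim = 2, 5, 8, 300

decoder_states = torch.randn(batch, tgt_len, dim)     # decoder hidden states (queries)
graph_node_emb = torch.randn(batch, graph_len, dim)   # emotional context graph nodes (keys)
emotion_intensity = torch.rand(batch, graph_len)      # NRC_VAD-style intensity per node

# Bias the values by emotion intensity so emotional nodes contribute more.
weighted_values = graph_node_emb * emotion_intensity.unsqueeze(-1)

cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=2, batch_first=True)
attended, attn_weights = cross_attn(decoder_states, graph_node_emb, weighted_values)

print(attended.shape)      # (batch, tgt_len, dim)
print(attn_weights.shape)  # (batch, tgt_len, graph_len)
```

The following command trains the ablation without this mechanism: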
```
python main.py \
--cuda \
--label_smoothing \
--noam \
--emb_dim 300 \
--hidden_dim 300 \
--hop 1 \
--heads 2 \
--pretrain_emb \
--model wo_EDD \
--device_id 0 \
--concept_num 1 \
--total_concept_num 10 \
--pointer_gen \
--save_path result/wo_EDD/ \
--emb_file data/glove.6B.300d.txt
```
To run testing, add `--test` to the commands above.
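For example, to test the trained KEMP model, reuse the KEMP training flags and append `--test`:

```
python main.py \
--cuda \
--label_smoothing \
--noam \
--emb_dim 300 \
--hidden_dim 300 \
--hop 1 \
--heads 2 \
--pretrain_emb \
--model KEMP \
--device_id 0 \
--concept_num 1 \
--total_concept_num 10 \
--attn_loss \
--pointer_gen \
--test \
--save_path result/KEMP/ \
--emb_file data/glove.6B.300d.txt
```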
You can directly run the `/result/cal_metrics.py` script to evaluate the model predictions.
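For reference, below is a minimal sketch of one metric commonly reported for this task, Distinct-n (the ratio of unique n-grams to all generated n-grams). This is an illustration only and not necessarily the exact set of metrics computed by `cal_metrics.py`; the toy responses stand in for the model's generated replies.

```python
from collections import Counter

def distinct_n(responses, n):
    """Distinct-n: ratio of unique n-grams to all n-grams across responses."""
    ngrams = Counter()
    total = 0
    for line in responses:
        tokens = line.strip().split()
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
            total += 1
    return len(ngrams) / total if total else 0.0

# Toy responses standing in for the model's generated replies.
responses = ["i am so sorry to hear that", "that sounds really hard , i am sorry"]
print("Distinct-1:", round(distinct_n(responses, 1), 3))
print("Distinct-2:", round(distinct_n(responses, 2), 3))
```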
If you find our work useful, please cite our paper as follows:
```
@inproceedings{li-etal-2022-kemp,
  title={Knowledge Bridging for Empathetic Dialogue Generation},
  author={Qintong Li and Piji Li and Zhaochun Ren and Pengjie Ren and Zhumin Chen},
  booktitle={AAAI},
  year={2022}
}
```