This repository contains the code for the paper "Asking Effective and Diverse Questions: A Machine Reading Comprehension based Framework for Joint Entity-Relation Extraction", accepted at IJCAI 2020.
If you find this repo helpful, please cite the following:
```
@inproceedings{zhao-etal-2020-asking,
    title = "Asking Effective and Diverse Questions: A Machine Reading Comprehension based Framework for Joint Entity-Relation Extraction",
    author = "Zhao, Tianyang and
              Yan, Zhao and
              Cao, Yunbo and
              Li, Zhoujun",
    booktitle = "Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence",
    month = jan,
    year = "2021",
    address = "Kyoto, Japan",
    publisher = "International Joint Conferences on Artificial Intelligence",
    url = "https://www.ijcai.org/Proceedings/2020/0546.pdf",
    pages = "3948--3954"
}
```
In this paper, we improve the existing MRC-based entity-relation extraction model through diverse question answering. First, a diverse question answering mechanism is introduced to detect entity spans, and two answer selection strategies are designed to integrate the different answers. Then, we propose to predict a subset of potential relations and filter out irrelevant ones, so that questions are generated effectively. Finally, entity and relation extraction are integrated in an end-to-end manner and optimized through joint learning.
For example, when extracting a person entity, we can construct diverse questions as follows:
- Who is mentioned in the context?
- Find people mentioned in the context.
- Which words are person entities?
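The diverse-question idea above can be sketched as follows. This is a minimal illustration, not the repository's actual API: the template table, function names, and the simple majority-vote answer selection are hypothetical stand-ins for the two answer selection strategies described in the paper.

```python
from collections import Counter

# Hypothetical question templates per entity type, mirroring the
# person-entity example above.
QUESTION_TEMPLATES = {
    "PER": [
        "Who is mentioned in the context?",
        "Find people mentioned in the context.",
        "Which words are person entities?",
    ],
}

def generate_questions(entity_type):
    """Return the diverse question set for one entity type."""
    return QUESTION_TEMPLATES[entity_type]

def vote_select(answer_sets):
    """Keep candidate spans predicted by a majority of the questions.

    Each element of answer_sets is the list of (start, end) spans that
    one question produced for the same passage.
    """
    counts = Counter(span for answers in answer_sets for span in set(answers))
    threshold = len(answer_sets) / 2
    return sorted(span for span, c in counts.items() if c > threshold)

# Three questions each return candidate spans; (0, 1) is predicted by
# all three and (5, 6) by two of three, so both survive the vote.
answers = [[(0, 1), (5, 6)], [(0, 1)], [(0, 1), (5, 6)]]
print(vote_select(answers))  # [(0, 1), (5, 6)]
```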
We evaluate the proposed method on two widely used datasets for entity-relation extraction: ACE05 and CoNLL04. Micro precision, recall, and F1-score are used as evaluation metrics.
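For clarity, a minimal sketch of micro-averaged precision/recall/F1 (the function name and exact-match scoring are illustrative assumptions, not the repository's evaluation code): counts are pooled over all sentences before computing the metrics, and a prediction counts only if it exactly matches a gold item.

```python
def micro_prf(gold_sets, pred_sets):
    """Micro-averaged precision, recall, and F1 over a dataset.

    gold_sets / pred_sets: per-sentence collections of gold and
    predicted items (e.g. entity spans or relation triples).
    """
    tp = fp = fn = 0
    for gold, pred in zip(gold_sets, pred_sets):
        gold, pred = set(gold), set(pred)
        tp += len(gold & pred)   # exact matches
        fp += len(pred - gold)   # spurious predictions
        fn += len(gold - pred)   # missed gold items
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# One sentence: one correct item, one spurious, one missed.
print(micro_prf([{"a", "b"}], [{"a", "c"}]))  # (0.5, 0.5, 0.5)
```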
Results on ACE 2005:

| Models | Entity P | Entity R | Entity F | Relation P | Relation R | Relation F |
| --- | --- | --- | --- | --- | --- | --- |
| Sun et al. (2018) | 83.9 | 83.2 | 83.6 | 64.9 | 55.1 | 59.6 |
| Li et al. (2019) | 84.7 | 84.9 | 84.8 | 64.8 | 56.2 | **60.2** |
| MRC4ERE++ | 85.9 | 85.2 | 85.5 | 62.0 | 62.2 | 62.1 (+1.9) |
Results on CoNLL 2004:

| Models | Entity P | Entity R | Entity F | Relation P | Relation R | Relation F |
| --- | --- | --- | --- | --- | --- | --- |
| Zhang et al. (2017) | – | – | 85.6 | – | – | 67.8 |
| Li et al. (2019) | 89.0 | 86.6 | 87.8 | 69.2 | 68.2 | **68.9** |
| MRC4ERE++ | 89.3 | 88.5 | 88.9 | 72.2 | 71.5 | 71.9 (+3.0) |
- Package dependencies:
  - python >= 3.6
  - PyTorch == 1.1.0
  - pytorch-pretrained-bert == 0.6.1
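The pinned dependencies above can be installed with pip, for example (assuming a Python >= 3.6 environment is already set up):

```shell
# Install the pinned versions listed above.
pip install torch==1.1.0 pytorch-pretrained-bert==0.6.1
```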
- Download the BERT-Base, Uncased English pretrained model and unzip it.
As an example, the following command trains the proposed method on CoNLL04.