
This repository contains the source code for my KIICE paper.

In this paper, we present a Transformer-based fact-checking model that improves computational efficiency. Locality-Sensitive Hashing (LSH) is employed to compute attention values efficiently, which reduces computation time. With LSH, the model groups semantically similar words and computes attention values within each group. The proposed model achieves 75% accuracy, with F1 micro and F1 macro scores of 42.9% and 75%, respectively.
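
For illustration, the sketch below shows the idea of LSH-bucketed attention in PyTorch, using random-hyperplane hashing. The function names and the bucketing scheme are illustrative assumptions, not the implementation in this repository:

```python
import torch
import torch.nn.functional as F

def lsh_bucket(x, n_hashes=4, seed=0):
    """Assign each token vector to a bucket via random hyperplanes (illustrative)."""
    torch.manual_seed(seed)
    planes = torch.randn(x.size(-1), n_hashes)   # (dim, n_hashes) random hyperplanes
    bits = (x @ planes) > 0                      # sign bit per hyperplane
    powers = 2 ** torch.arange(n_hashes)
    return (bits.long() * powers).sum(-1)        # integer bucket id per token

def bucketed_attention(q, k, v, buckets):
    """Compute softmax attention only among tokens that share a bucket."""
    out = torch.zeros_like(v)
    for b in buckets.unique():
        idx = (buckets == b).nonzero(as_tuple=True)[0]
        scores = q[idx] @ k[idx].T / q.size(-1) ** 0.5
        out[idx] = F.softmax(scores, dim=-1) @ v[idx]
    return out

x = torch.randn(16, 32)                          # toy input: 16 tokens, 32-dim embeddings
y = bucketed_attention(x, x, x, lsh_bucket(x))
print(y.shape)                                   # torch.Size([16, 32])
```

Because attention is computed only within each bucket, the cost depends on bucket sizes rather than on the full sequence length.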

As a result, the paper won the best paper award at the 2021 KIICE Spring Conference.

Usage

Please note that our code was written for Windows.

First, install the required libraries with this command:

pip install -r requirements.txt

To run our code, use this command:

./src/execute.bat

In addition, we used the PHEME dataset to validate our proposed model, and we preprocessed that dataset.

So if you want to reproduce our pipeline from scratch, execute this script first:

./src/dataset.bat
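
As a rough illustration, preprocessing PHEME typically means flattening its folder layout into a labelled table. The sketch below assumes the standard layout `<event>/{rumours,non-rumours}/<thread>/source-tweets/<id>.json`; the `pheme_dir` path and output file are placeholders, not paths used by this repository:

```python
import json
from pathlib import Path
import pandas as pd

pheme_dir = Path("data/pheme")  # placeholder: location of the raw PHEME dump

rows = []
for event in pheme_dir.iterdir():
    if not event.is_dir():
        continue
    for label in ("rumours", "non-rumours"):
        # each thread folder keeps its source tweet as a single JSON file
        for tweet_file in (event / label).glob("*/source-tweets/*.json"):
            with open(tweet_file, encoding="utf-8") as f:
                tweet = json.load(f)
            rows.append({"text": tweet["text"], "label": int(label == "rumours")})

pd.DataFrame(rows).to_csv("pheme_flat.csv", index=False)  # placeholder output path
```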

Tech Stack

  • Data: pandas, NumPy, scikit-learn
  • AI: Transformers, PyTorch

License

GNU General Public License © Hee Seung Yun