This repository contains the official implementation of the paper "Knowledge Editing in Language Models via Adapted Direct Preference Optimization", published in Findings of EMNLP 2024. The paper proposes a novel approach to knowledge editing in pre-trained language models, leveraging an adapted form of direct preference optimization to modify model knowledge in a controlled manner.
Paper: Knowledge Editing in Language Models via Adapted Direct Preference Optimization (arXiv)
Make sure you have the following dependencies installed:
- Python >= 3.9
- PyTorch 2.0.1
- CUDA
- Additional dependencies specified in requirements.txt

The code was tested on an NVIDIA A100 80GB GPU.
- Clone this repository.
- Install the dependencies:

pip install -r requirements.txt
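A quick sanity check of the environment (a minimal sketch based only on the dependencies listed above; the reported GPU name will vary by machine):

```python
# Verify the core dependencies: Python >= 3.9, PyTorch 2.0.1, and a working CUDA setup.
import sys
import torch

assert sys.version_info >= (3, 9), "Python >= 3.9 is required"
print("PyTorch version:", torch.__version__)        # expected: 2.0.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))    # code was tested on an NVIDIA A100 80GB
```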
Example command (assuming you are running from /path/to/EasyEdit/examples, so that the relative hparams and data paths resolve):

python run_zsre_llama2.py --editing_method=DPO --hparams_dir=../hparams/DPO/llama-7b.yaml --data_dir=../../data
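EasyEdit-based code can also be driven programmatically. The sketch below shows the typical BaseEditor workflow; the DPOHyperParams class name and the example edit triple are assumptions for illustration, not verified names from this repository:

```python
# Minimal sketch of a programmatic edit, modeled on EasyEdit's BaseEditor workflow.
# NOTE: DPOHyperParams is an assumed class name; check this repo's easyeditor exports.
from easyeditor import BaseEditor, DPOHyperParams

hparams = DPOHyperParams.from_hparams('./hparams/DPO/llama-7b.yaml')
editor = BaseEditor.from_hparams(hparams)

# Apply a single knowledge edit: rewrite the model's answer from the old truth to a new target.
metrics, edited_model, _ = editor.edit(
    prompts=['Who is the architect of the Sagrada Familia?'],
    ground_truth=['Antoni Gaudi'],
    target_new=['Frank Gehry'],
    subject=['Sagrada Familia'],
)
print(metrics)  # per-edit evaluation metrics (e.g., rewrite accuracy, locality)
```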
The evaluation step loads the results JSON file produced by training and accumulates the metrics across edits.
Example evaluation command:
python EasyEdit/accum_results.py
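For illustration, accumulating a results file typically looks like the sketch below; the file name and the metric keys ('post', 'rewrite_acc') are assumptions about the output schema, not the exact format produced by accum_results.py:

```python
# Hypothetical sketch of accumulating per-edit metrics from a results JSON.
# NOTE: 'results.json', the record layout, and the 'post'/'rewrite_acc' keys are
# assumptions for illustration; see accum_results.py for the actual logic.
import json

with open('results.json') as f:   # assumed output file name
    results = json.load(f)        # assumed: a list of per-edit metric records

def mean(values):
    return sum(values) / len(values) if values else float('nan')

# Average an assumed post-edit rewrite accuracy across all edits.
rewrite_accs = [r['post']['rewrite_acc'] for r in results if 'post' in r]
print(f"Mean rewrite accuracy over {len(rewrite_accs)} edits: {mean(rewrite_accs):.4f}")
```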
- EasyEdit: This code is heavily inspired by the EasyEdit repository by the ZJUNLP team. We thank them for their excellent work and contribution to knowledge editing research.
This repository is licensed under the MIT License. See the LICENSE file for more information.