This repo contains the code for 'Differentially Private Fine-tuning of Language Models', published as a conference paper at ICLR 2022. Please find the instructions in the subfolders.
Please feel free to raise any issues. Author contact information can be found here; feel free to drop us an email if you have any questions (yuda3@mail2.sysu.edu.cn for questions about language understanding). If you find this code useful, please cite our paper:
@inproceedings{yu2022differentially,
  title={Differentially private fine-tuning of language models},
  author={Yu, Da and Naik, Saurabh and Backurs, Arturs and Gopi, Sivakanth and Inan, Huseyin A and Kamath, Gautam and Kulkarni, Janardhan and Lee, Yin Tat and Manoel, Andre and Wutschitz, Lukas and others},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2022}
}