StyleBART

Source code for Rethinking Style Transformer with Energy-based Interpretation: Adversarial Unsupervised Style Transfer using a Pretrained Model, accepted at EMNLP 2022

Environment Setting

Using conda / pip

  • Check docker/requirements.txt to install dependencies

Using docker

  • Check Dockerfile and docker-compose.yml to set up the environment.
  • Prefix each python command with the following instead of CUDA_VISIBLE_DEVICES=<device_id>:
# Non-docker version
# CUDA_VISIBLE_DEVICES=<device_id> python -m ...
CUDA_VISIBLE_DEVICES=0 python -m style_bart.train data=gyafc_fr

# Docker version
# USER_ID=$(id -u) GROUP_ID=$(id -g) docker compose -f docker/docker-compose.yml run -e NVIDIA_VISIBLE_DEVICES=<device_id> app python -m ...
USER_ID=$(id -u) GROUP_ID=$(id -g) docker compose -f docker/docker-compose.yml run -e NVIDIA_VISIBLE_DEVICES=0 app python -m style_bart.train data=gyafc_fr

Folder description

  • .venv: Python environment. This folder will be generated automatically.
  • config: configs for experiments
  • content: folder for experiment outputs. This folder will be generated automatically.
    • content/pretrain: folder for pretraining
    • content/main: folder for main training
    • content/eval: folder for evaluation
  • data: folder for train/dev/test data
  • docker: docker configs
  • evaluate: evaluation source code
  • style_bart: StyleBART source code

Preprocessing

# python -m style_bart.data.preprocess [--dataset_name]
python -m style_bart.data.preprocess --gyafc --yelp --amazon

Evaluation

Please check evaluate/README.md. This step is also required before running the training code below.

Pretraining

Classifier

# CUDA_VISIBLE_DEVICES=<device_id> python -m style_bart.pretrain.classifier data=<dataset_name> [args]
CUDA_VISIBLE_DEVICES=0 python -m style_bart.pretrain.classifier data=gyafc_fr

Depending on the dataset (especially for Amazon), classifier pretraining may not converge. In this case, a larger batch size helps convergence.

CUDA_VISIBLE_DEVICES=0 python -m style_bart.pretrain.classifier data=amazon train.batch_size=512 # train.accumulation=2

Autoencoder

# CUDA_VISIBLE_DEVICES=<device_id> python -m style_bart.pretrain.autoencoder data=<dataset_name> [args]
CUDA_VISIBLE_DEVICES=0 python -m style_bart.pretrain.autoencoder data=gyafc_fr

Language models

# CUDA_VISIBLE_DEVICES=<device_id> python -m style_bart.pretrain.lm data=<dataset_name> label=<style> [args]
CUDA_VISIBLE_DEVICES=0 python -m style_bart.pretrain.lm data=gyafc_fr label=0

Language models should be trained for both labels 0 and 1.
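Since a style-specific language model is needed for each label, the two runs can be scripted in one loop. The sketch below is a dry run (it only echoes the commands instead of launching them) and assumes the same gyafc_fr setup as the example above:

```shell
# Dry-run sketch: print the LM pretraining command for each style label.
# Remove the leading "echo" to actually launch the two runs sequentially.
for label in 0 1; do
  echo "CUDA_VISIBLE_DEVICES=0 python -m style_bart.pretrain.lm data=gyafc_fr label=$label"
done
```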

StyleBART Training

# CUDA_VISIBLE_DEVICES=<device_id> python -m style_bart.train data=<dataset_name> [args]
CUDA_VISIBLE_DEVICES=0 python -m style_bart.train data=gyafc_fr # train.accumulation=2

StyleBART Inference

Downloading the trained model for each corpus

You can download the trained StyleBART weights from http://gofile.me/6XWMw/L53iBR52U

Transferring a prompt or an entire corpus

# CUDA_VISIBLE_DEVICES=<device_id> python -m style_bart.transfer -m <model_path> -l <target_style_label> <prompt>
CUDA_VISIBLE_DEVICES=0 python -m style_bart.transfer -m content/main/gyafc_fr/dump -l 0 "He loves you, too, girl...Time will tell."

Alternatively, redirect an entire corpus to standard input:

CUDA_VISIBLE_DEVICES=0 python -m style_bart.transfer -m content/main/gyafc_fr/dump -l 0 < data/preprocessed/gyafc_fr/sentences.test.1.txt

If you are using Docker, you need to add the -T option (which disables pseudo-TTY allocation) to redirect the corpus file.

docker compose -f docker/docker-compose.yml run -e NVIDIA_VISIBLE_DEVICES=0 -T app python -m style_bart.transfer -m content/main/gyafc_fr/dump -l 1 < data/preprocessed/gyafc_fr/sentences.test.0.txt > output.txt
