Training Neural Samplers with Reverse Diffusive KL Divergence (DiKL, AISTATS 2025)

Paper

Official PyTorch implementation of the paper Training Neural Samplers with Reverse Diffusive KL Divergence, published at AISTATS 2025.

In this paper, we introduce the Diffusive KL Divergence (DiKL), a reverse-KL-based divergence that promotes mode-covering behavior, in contrast to the standard reverse KL divergence, which is mode-seeking.
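
Schematically, and only as a sketch in our own notation rather than the paper's exact formulation, DiKL applies the reverse KL to Gaussian-diffused versions of the sampler distribution q and the target p, aggregated over noise levels:

\[
\mathcal{D}_{\mathrm{DiKL}}(q \,\|\, p) \;=\; \sum_{t=1}^{T} w_t \, D_{\mathrm{KL}}\bigl(q_t \,\|\, p_t\bigr),
\qquad
q_t(x_t) = \int q(x)\,\mathcal{N}\bigl(x_t;\ \alpha_t x,\ \sigma_t^2 I\bigr)\,\mathrm{d}x,
\]

where p_t is obtained from p in the same way and the w_t ≥ 0 are noise-level weights. Comparing the diffused distributions, rather than q and p directly, is what encourages the mode-covering behavior described above.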

Reproducing Results for DiKL and Baselines

Samples and Model Checkpoints for DiKL

Samples and model checkpoints generated by our proposed method, DiKL, on the DW-4 and LJ-13 targets can be found in ./DiKL/results/.

Samples for all baselines and DiKL on GMM-40, MW-32, DW-4, and LJ-13 can be found in this shared Google Drive folder.

Environment Setup

  • python 3.11
  • torch 2.4.0
  • numpy 1.23.0
  • scipy 1.12.0
  • tqdm
  • matplotlib
  • bgflow (see instructions below)
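
If you prefer pip, the pinned dependencies above can be installed with something like the following (pins copied from the list; relax them if your platform has no matching wheels):

pip install torch==2.4.0 numpy==1.23.0 scipy==1.12.0 tqdm matplotlib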

To install bgflow:

  1. Clone the bgflow repository
  2. Navigate to the cloned repository folder
  3. Run the installation script:

python setup.py install
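
Putting these steps together, a typical installation might look like the following (the clone URL assumes the upstream noegroup/bgflow repository; substitute whichever copy you use):

git clone https://github.com/noegroup/bgflow.git
cd bgflow
python setup.py install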

Training Neural Sampler with DiKL

To train the neural sampler with DiKL, navigate to the DiKL folder and run

python train.py --target mog/dw/lj/mw --device cuda
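
For example, to train a sampler for the dw target on a GPU:

python train.py --target dw --device cuda

Replace dw with mog, lj, or mw to train on the other targets.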

Evaluation

To evaluate the neural sampler, navigate to the DiKL folder and run

python eval.py --target mog/dw/lj/mw --device cuda --model_path your-dikl-model-path --sample_path your-dikl-sample-path --save_dir path-to-save-results --baseline_sample_dir baseline-samples-folder

You only need to specify either --model_path or --sample_path.
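
For example, to evaluate a trained dw model from a saved checkpoint (all paths below are placeholders; point them at your own checkpoint, output, and baseline-sample locations):

python eval.py --target dw --device cuda --model_path path/to/dikl-dw-checkpoint.pt --save_dir eval_results --baseline_sample_dir path/to/baseline-samples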

Citation

If you find our paper, code, and/or data useful for your research, please cite our paper:

@article{he2024training,
  title={Training Neural Samplers with Reverse Diffusive KL Divergence},
  author={He, Jiajun and Chen, Wenlin and Zhang, Mingtian and Barber, David and Hern{\'a}ndez-Lobato, Jos{\'e} Miguel},
  journal={arXiv preprint arXiv:2410.12456},
  year={2024}
}
