
LaWa: Using Latent Space for In-Generation Image Watermarking

This repo contains the implementation of the paper 'LaWa: Using Latent Space for In-Generation Image Watermarking', published at ECCV 2024.

Link to arXiv paper: https://arxiv.org/abs/2408.05868

Link to Huawei's AI Gallery Notebook: https://developer.huaweicloud.com/develop/aigallery/notebook/detail?id=03ccae2a-4fa8-4739-a75b-659a3abcc690


Install Required Packages

We have tested our code with python 3.8.17, pytorch 2.0.1, torchvision 0.15.2, and cuda 11.3. You can reproduce the environment with conda by running:

!conda env create -f environment.yml
!conda activate LaWa

Inference

Run the following script to download our pretrained modified decoder as well as the original decoder. These weights correspond to the KL-f8 autoencoder model and 48-bit watermarks.

!bash download.sh

Model weights will be saved to weights/LaWa/last.ckpt and weights/first_stage_models/first_stage_KL-f8.ckpt.

Furthermore, download the weights of the Stable Diffusion v1.4 model from here and save them to weights/stable-diffusion-v1/model.ckpt.
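Before running inference, it can help to confirm that all three checkpoints landed in the expected locations. A small sketch (paths taken from the steps above):

```python
from pathlib import Path

def missing_checkpoints(paths):
    """Return the subset of checkpoint paths that do not exist on disk."""
    return [p for p in paths if not Path(p).exists()]

required = [
    "weights/LaWa/last.ckpt",
    "weights/first_stage_models/first_stage_KL-f8.ckpt",
    "weights/stable-diffusion-v1/model.ckpt",
]
# Prints an empty list when everything is in place.
print("missing:", missing_checkpoints(required))
```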

To generate watermarked images using Stable Diffusion and LaWa, run:

!python inference_AIGC.py --config configs/SD14_LaWa_inference.yaml --prompt "A white plate of food on a dining table" --message_len 48 --message '110111001110110001000000011101000110011100110101' --outdir results/SD14_LaWa/txt2img-samples
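The --message flag expects a binary string whose length matches --message_len. If you want a fresh random payload instead of the fixed example above, a minimal sketch:

```python
import random

message_len = 48
# Generate a random 48-bit payload as a '0'/'1' string for the --message flag.
message = "".join(random.choice("01") for _ in range(message_len))
print(message)
```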

This will save the generated original and watermarked images, as well as the difference image, in results/SD14_LaWa/txt2img-samples. In addition, results/SD14_LaWa/test_results_quality.csv and results/SD14_LaWa/test_results_attacks.csv are generated, summarizing the visual quality of the watermarked image and its robustness to attacks, respectively.
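The CSVs report standard quality and robustness metrics. As a reference for interpreting them, here are minimal sketches of two common ones, PSNR for visual quality and bit accuracy for decoding robustness (illustrative, not the repo's exact implementation):

```python
import math

def psnr(mse, max_val=255.0):
    """Peak signal-to-noise ratio (dB) from mean squared error."""
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

def bit_accuracy(embedded, decoded):
    """Fraction of matching bits between two equal-length bit strings."""
    return sum(a == b for a, b in zip(embedded, decoded)) / len(embedded)

print(bit_accuracy("1101", "1001"))  # 3 of 4 bits match -> 0.75
```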

Train your own model

Data Preparation

Download the MIRFLICKR dataset from the official website. data/train_100k.csv contains the list of images we have used for training. In the config file configs/SD14_LaWa.yaml, adjust the path to the images folder of the dataset under data_dir for the train and validation datasets.
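After pointing data_dir at your images folder, a quick sanity check that every image listed in data/train_100k.csv is actually present can save a failed training run. A hypothetical helper (the CSV's exact column layout is an assumption; adjust the parsing to match the file):

```python
from pathlib import Path

def missing_images(listed_names, data_dir):
    """Return filenames from the training list that are absent from data_dir."""
    root = Path(data_dir)
    return [name for name in listed_names if not (root / name).exists()]

# Example usage (paths are illustrative):
# import csv
# names = [row[0] for row in csv.reader(open("data/train_100k.csv"))]
# print(missing_images(names, "/path/to/mirflickr/images"))
```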

Train

You can train your modified decoder using:

!python train.py --message_len 48 --config configs/SD14_LaWa.yaml --batch_size 8 --max_epochs 40 --learning_rate 0.00006

A batch size of 8 fits on a 32 GB GPU.

📚 Citation

If you find LaWa useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry:

@misc{rezaei2024lawausinglatentspace,
      title={LaWa: Using Latent Space for In-Generation Image Watermarking}, 
      author={Ahmad Rezaei and Mohammad Akbari and Saeed Ranjbar Alvar and Arezou Fatemi and Yong Zhang},
      year={2024},
      eprint={2408.05868},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.05868}, 
}
