From 925ccec0f537ccef4dd97113ab007b1283f7144d Mon Sep 17 00:00:00 2001
From: Luozhou Wang <37919763+wileewang@users.noreply.github.com>
Date: Wed, 3 Apr 2024 21:33:10 +0800
Subject: [PATCH] Update README.md

---
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 47fae7d..082a1b1 100644
--- a/README.md
+++ b/README.md
@@ -58,7 +58,8 @@ To start training, first download the [ZeroScope](https://huggingface.co/cerspen
 ```bash
 python train.py --config ./configs/train_config.yaml
 ```
-We provide a sample config file in [config.py](./configs/config.yaml). Note for various motion types and editing requirements, selecting the appropriate loss function significantly impacts the outcome. In scenarios where only the camera motion from the source video is desired, without the need to retain information about the objects in the source, it is advisable to employ [DebiasedHybridLoss](./loss/debiased_hybrid_loss.py). Similarly, when editing objects that undergo significant deformation, [DebiasedTemporalLoss](./loss/debiased_temporal_loss.py) is recommended. For straightforward cross-categorical editing, as described in [DMT]('https://diffusion-motion-transfer.github.io/'), utilizing [BaseLoss](./loss/base_loss.py) function suffices.
+We provide a sample config file in [config.yaml](./configs/config.yaml).
+Note that for various motion types and editing requirements, the choice of loss function significantly impacts the outcome. When only the camera motion of the source video is desired, without retaining information about the objects in the source, use [DebiasedHybridLoss](./loss/debiased_hybrid_loss.py). When editing objects that undergo significant deformation, [DebiasedTemporalLoss](./loss/debiased_temporal_loss.py) is recommended. For straightforward cross-category editing, as described in [DMT](https://diffusion-motion-transfer.github.io/), the [BaseLoss](./loss/base_loss.py) function suffices.
 
 ## Inference
 After cloning the repository, you can easily load motion embeddings for video generation as follows: