cannot reproduce comparable results in the paper #5
I think it's due to the hyperparameter settings. The paper says "With a frozen denoising module, we then train the leapfrog initializer for 200 epochs with an initial learning rate of 10^-4, decaying by 0.9 every 32 epochs", but the default led_augment.yml does not match this.
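For what it's worth, the schedule quoted above maps onto a standard step decay. A minimal PyTorch sketch, using a placeholder module in place of the actual leapfrog initializer (none of these names come from the repo):

```python
import torch

# Hypothetical stand-in for the leapfrog initializer; the real module lives in the LED repo.
initializer = torch.nn.Linear(2, 2)

# Paper setting: initial lr 1e-4, decayed by 0.9 every 32 epochs, 200 epochs total.
optimizer = torch.optim.Adam(initializer.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=32, gamma=0.9)

for epoch in range(200):
    # ... per-batch forward/backward/optimizer.step() with the denoising module frozen ...
    optimizer.step()   # placeholder for the real per-batch updates
    scheduler.step()   # decay the learning rate once per epoch
```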
Hello, did you use the pre-trained diffusion model provided by the author for the first stage, or did you train stage one yourself according to the settings in the paper?
Hi,
I double-checked my training configuration and found that my learning rate is set to 1e-3 (the same as the default code setting). So perhaps you could try lr=1e-3, decaying by 0.9 every 32 epochs? The hyperparameter settings in this repo are quite confusing; some parts are consistent with the paper and some are not...
Best,
…On Wed, 22 Nov 2023 at 19:08, Frank Star wrote:
Set them based on the paper and I've got: [image]
<https://user-images.githubusercontent.com/58146879/278015992-f6145560-0ebe-441c-8bd2-846dff8b996d.png>
Hi, I used the provided pre-trained model for the first stage.
Hello, I used the same hyperparameter settings as in the paper (200 epochs with an initial learning rate of 1e-4, decaying by 0.9 every 32 epochs) on an RTX 4090 server, but got: [image]
<https://user-images.githubusercontent.com/72490620/284882130-3917c2a5-48a6-475b-ab3e-04160f66aaa1.png>
This result is much lower than the results in the paper. There is a slight improvement after increasing the epochs to 400, but it is still not as good as the results in the paper: [image]
<https://user-images.githubusercontent.com/72490620/284882634-6a129d5e-c8de-412c-a4f8-e57f49fa9fb5.png>
Do you have any clue about this? What do you think is the reason for the inability to reproduce?
I think 0.83/1.69 was the only result I managed to reproduce.
Now I am able to reproduce their stage-one and LED stage-two results. The answer from @woyoudian2gou helped me a lot, but I would say it requires a non-trivial amount of engineering work to tune this well.
Could you share some insight with us? It would be helpful.
Yes, and the whole implementation is difficult to explain; I think the original
@woyoudian2gou Hi, I have implemented the hyperparameter settings you mentioned, but I still can't get a reasonable result. Could you share your config.yml with us? Thank you very much.
@kkk00714 Thank you for your prompt reply. I would also like to know the hyperparameters for stage-two training; could you share those? I would appreciate it.
The hyperparameters of stage 2 are the same as the author's original implementation (batch_size = 10, lr = 1e-4, ...).
@kkk00714
Your loss is normal, because the author multiplies loss_dist by a coefficient to give it a greater weight than loss_uncertainty. You can check this in the paper or the code.
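For illustration, the weighting described above amounts to something like the sketch below; the coefficient value and the loss tensors are placeholders, not values taken from the paper or repo:

```python
import torch

# Placeholder loss terms; in the repo these come from the distance and uncertainty losses.
loss_dist = torch.tensor(1.0, requires_grad=True)
loss_uncertainty = torch.tensor(0.5, requires_grad=True)

dist_weight = 50.0  # hypothetical coefficient; check the paper/code for the actual value
total_loss = dist_weight * loss_dist + loss_uncertainty
total_loss.backward()
```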
Thank you for your answer. This is the first time I have encountered such a loss-weighting scheme.
Hi, I ran the code on a single NVIDIA GeForce RTX 3090 GPU with the config file given in the paper. My reproduced result is significantly different from the results provided in the README.md file. Can you guide me through this and specify what the issue could be? Can you also provide more information on how to train a model that matches the performance of the pre-trained model you provide in /checkpoints? Any help will be appreciated.