`Aseq_DIP/Aseq_DIP_Sparse_view_CT.py`, line 114 in `f2e5074`:

```python
ref = Variable(torch.rand((1,1,width,length)).cuda(), requires_grad=True)
```
`Aseq_DIP/Aseq_DIP_Sparse_view_CT.py`, line 108 in `f2e5074`:

```python
optimizer = optim.Adam(net.parameters(), lr = learning_rate)
```
`Aseq_DIP/Aseq_DIP_Sparse_view_CT.py`, line 118 in `f2e5074`:

```python
optimizer2 = optim.Adam([ref], lr = 1e-1)
```
`Aseq_DIP/Aseq_DIP_Sparse_view_CT.py`, line 157 in `f2e5074`:

```python
optimizer.step()
```
Thank you for sharing this impressive work! I truly appreciate the effort that went into it. However, I have a question about a detail in the code. As shown in the snippets above, $\lambda$ (the `ref` tensor) is initialized as a trainable random variable and assigned to `optimizer2`. That said, I could not find a corresponding `optimizer2.step()` call later in the training loop, nor did I see the current network output being fed back as the input for the next iteration. Looking forward to your reply. Thank you!
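For concreteness, here is a minimal sketch of the alternating update I was expecting, assuming a sequential DIP-style scheme. The names `net`, `ref`, `optimizer`, `optimizer2`, `width`, `length`, and `learning_rate` mirror the linked snippets; the dummy network and `data_fidelity` are hypothetical placeholders, not the repository's actual code:

```python
import torch
import torch.nn.functional as F
import torch.optim as optim

device = "cuda" if torch.cuda.is_available() else "cpu"
width, length = 256, 256   # image size, assumed for this sketch
learning_rate = 1e-4       # placeholder; the repo defines its own value

# Stand-in for the real reconstruction network in the repo.
net = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1).to(device)

# Trainable random tensor, mirroring line 114 (Variable is deprecated,
# so requires_grad is set directly on the tensor instead).
ref = torch.rand((1, 1, width, length), device=device, requires_grad=True)

optimizer = optim.Adam(net.parameters(), lr=learning_rate)   # line 108
optimizer2 = optim.Adam([ref], lr=1e-1)                      # line 118

def data_fidelity(img):
    # Hypothetical placeholder: the real code would project `img` to a
    # sinogram and compare it against the measured sparse-view data.
    return img.pow(2).mean()

net_input = ref.detach().clone()  # initial network input

for it in range(100):
    optimizer.zero_grad()
    optimizer2.zero_grad()

    out = net(net_input)
    # Couple the output to `ref` so that `ref` receives a gradient.
    loss = data_fidelity(out) + F.mse_loss(out, ref)
    loss.backward()

    optimizer.step()   # line 157: updates the network weights
    optimizer2.step()  # the update for `ref` I could not locate in the repo

    # Sequential feedback: the current output becomes the next input.
    net_input = out.detach()
```

If the intended behavior is different (for example, if `ref` is only used for initialization), a brief pointer to the relevant lines would clear this up.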