Hi everyone,
I developed a crystallographic refinement package using PyTorch, TorchRef. Currently it just does the classical thing, restraint-based refinement; in the long term, however, the most interesting development would be to use torchsim to simulate the Boltzmann ensemble for a given structure, calculate an aggregated structure factor from the ensemble, and propagate gradients back to the original structure.
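For concreteness, here is a toy sketch of the gradient flow I have in mind. Everything in it is made up for illustration (unit scattering factors, Gaussian-perturbed copies standing in for simulation snapshots, a single reflection), not TorchRef's or torchsim's actual API:

```python
import torch

def structure_factor(pos, q):
    # F(q) = sum_j exp(i q . r_j); unit scattering factors for brevity
    phase = pos @ q                       # (N,) dot products q . r_j
    return torch.exp(1j * phase).sum()

# Base coordinates we want gradients for
base = torch.randn(20, 3, requires_grad=True)
q = torch.tensor([1.0, 0.0, 0.0])        # one reciprocal-lattice vector

# Perturbed copies stand in for ensemble snapshots from a simulation
ensemble = [base + 0.05 * torch.randn(20, 3) for _ in range(8)]

# Aggregate the structure factor over the ensemble, then backpropagate
F_avg = torch.stack([structure_factor(p, q) for p in ensemble]).mean()
loss = F_avg.abs() ** 2                  # e.g. compare |F|^2 to observed intensity
loss.backward()                          # gradients land on `base`
```

The point is just that PyTorch's complex autograd carries gradients from |F|² of the ensemble average back to the base coordinates; in the real setup the ensemble would come from the simulation instead of random perturbations.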
However, for this to be viable, I need to checkpoint the simulation steps, as otherwise the autograd graph will blow up in memory. Checkpointing currently doesn't work out of the box, because the simulation state is handled in a way that prevents it, so I implemented (well, helped Claude Code implement) a hacky checkpointed version:
checkpointed_optimizer.py
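To illustrate what I mean by checkpointing each step, here is a stripped-down sketch using `torch.utils.checkpoint` with a made-up harmonic `md_step` (not torchsim's actual step function); only the step inputs are stored, and activations are recomputed during backward:

```python
import torch
from torch.utils.checkpoint import checkpoint

def md_step(pos, vel, dt=1e-3):
    # Toy velocity-Verlet-ish update under a harmonic potential E = sum(pos^2);
    # the force is analytic here so the step stays cheaply differentiable
    forces = -2.0 * pos
    vel = vel + dt * forces
    pos = pos + dt * vel
    return pos, vel

pos = torch.randn(10, 3, requires_grad=True)
vel = torch.zeros(10, 3)
p, v = pos, vel
for _ in range(100):
    # use_reentrant=False is the recommended modern variant; intermediate
    # activations are dropped and recomputed in backward, so graph memory
    # stays O(1) in the number of steps instead of O(steps)
    p, v = checkpoint(md_step, p, v, use_reentrant=False)

loss = (p ** 2).sum()
loss.backward()                          # gradients reach the starting coordinates
```

With a real simulator the catch is exactly what I hit: the step function must be a pure function of the tensors passed in, or the recomputation diverges from the original forward pass.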
I validated gradients between the checkpointed and normal runs: after 10 steps they still correlate well, but then they start to diverge. I assume some RNG state isn't being cached correctly somewhere.
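My current suspicion about the divergence: `torch.utils.checkpoint`'s `preserve_rng_state=True` only restores the global (and per-device) torch RNG state during recomputation, so noise drawn from a separate `torch.Generator`, e.g. inside a Langevin thermostat, would not be reproduced. A toy sketch of stashing and restoring such a generator's state manually (the `noisy_step` function is made up):

```python
import torch

gen = torch.Generator().manual_seed(0)

def noisy_step(x):
    # Langevin-style noise from a module-level generator: checkpoint's
    # preserve_rng_state does NOT cover this generator, only the global RNG
    return x + 0.1 * torch.randn(x.shape, generator=gen)

x = torch.zeros(4)
state = gen.get_state()          # stash the generator state before the step
y1 = noisy_step(x)

gen.set_state(state)             # restore it before "recomputing" the step
y2 = noisy_step(x)
print(torch.equal(y1, y2))       # True: recomputation reproduces the noise
```

If torchsim draws thermostat noise like this, the stash/restore would have to happen inside the checkpointed function's forward and recompute paths.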
The implementation does seem to work, but my question is: are there any glaring problems with it?
Also, this must be a solved problem somewhere. Could you point me in the direction of how to do it properly?
Best regards,
Peter