🐛 Bug
I'm writing a C++ program that loads a TorchScript module from disk and tries to train it. Unlike `torch::nn::Module`, `torch::jit::script::Module` appears to have no `zero_grad()` method, which makes it impossible to run a normal optimization loop against the loaded module.
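For context, the only workaround I can see is to pull the parameters out of the script module and hand them to a regular C++ optimizer, which does expose `zero_grad()`. A rough sketch of what I mean (assuming `parameters()` on the loaded module yields `at::Tensor` on this LibTorch build, and using `torch::optim::SGD` plus a placeholder `model.pt` path purely for illustration):

```cpp
#include <torch/script.h>
#include <torch/torch.h>

#include <vector>

int main() {
  // Load the TorchScript module from disk ("model.pt" is a placeholder path).
  torch::jit::script::Module module = torch::jit::load("model.pt");

  // Collect the script module's parameters into a plain vector of tensors.
  // Assumes parameters() yields at::Tensor on this LibTorch version.
  std::vector<at::Tensor> params;
  for (const auto& p : module.parameters()) {
    params.push_back(p);
  }

  // A regular C++ optimizer does provide zero_grad(), so gradients can be
  // cleared through it instead of through the script module itself.
  torch::optim::SGD optimizer(params, torch::optim::SGDOptions(/*lr=*/0.01));

  // One training step with a dummy input and loss.
  optimizer.zero_grad();
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::randn({1, 3, 224, 224}));
  at::Tensor loss = module.forward(inputs).toTensor().sum();
  loss.backward();
  optimizer.step();

  return 0;
}
```

That covers stepping an optimizer, but it still seems like `torch::jit::script::Module` itself should offer `zero_grad()` (or a documented way to reset parameter gradients) the way `nn::Module` does.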
Environment
- PyTorch Version (e.g., 1.0): Nightly build downloaded Sept. 29, 2019
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): downloaded from https://download.pytorch.org/libtorch/nightly/cu101/libtorch-cxx11-abi-shared-with-deps-latest.zip
- Build command you used (if compiling from source): n/a
- Python version: n/a
- CUDA/cuDNN version: n/a
- GPU models and configuration: n/a
- Any other relevant information:
cc @suo