I changed my PyTorch version to Nightly as suggested in your project's description. My PyTorch version is now 2.0.1+cu117, but I still run into the same problem when I run run_dlrm_fae.sh.
Could you please tell me the exact PyTorch version you used? Thank you.
Here are the errors:
Traceback (most recent call last):
  File "dlrm_fae.py", line 1448, in <module>
    E.backward()
  File "/home/wzs/anaconda3/envs/FAE/lib/python3.8/site-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "/home/wzs/anaconda3/envs/FAE/lib/python3.8/site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/home/wzs/anaconda3/envs/FAE/lib/python3.8/site-packages/torch/autograd/function.py", line 274, in apply
    return user_fn(self, *args)
  File "/home/wzs/anaconda3/envs/FAE/lib/python3.8/site-packages/torch/nn/parallel/_functions.py", line 34, in backward
    return (None,) + ReduceAddCoalesced.apply(ctx.input_device, ctx.num_inputs, *grad_outputs)
  File "/home/wzs/anaconda3/envs/FAE/lib/python3.8/site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
NotImplementedError: Could not run 'aten::view' with arguments from the 'SparseCUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::view' is only available for these backends: [CPU, CUDA, Meta, QuantizedCPU, QuantizedCUDA, MkldnnCPU, NestedTensorCPU, NestedTensorCUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named
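For context, here is a minimal sketch of what I believe triggers this error; it is my own reconstruction, not the actual FAE code. My guess is that an nn.Embedding created with sparse=True and wrapped in nn.DataParallel produces SparseCUDA gradients, and the gradient reduction in ReduceAddCoalesced then tries to flatten them with aten::view, which is not implemented for that backend. The embedding sizes, batch shape, and device_ids below are placeholders (the sketch assumes at least two CUDA devices):

```python
# Hypothetical minimal reproduction (my guess at the failure mode, not the FAE code itself).
import torch
import torch.nn as nn

# sparse=True makes the embedding produce sparse (SparseCUDA) gradients.
emb = nn.Embedding(1000, 16, sparse=True).cuda()

# DataParallel reduces replica gradients via ReduceAddCoalesced in its backward pass.
model = nn.DataParallel(emb, device_ids=[0, 1])  # assumes >= 2 GPUs

idx = torch.randint(0, 1000, (8, 4), device="cuda")
loss = model(idx).sum()
loss.backward()  # gradient reduction appears to call aten::view on the sparse gradient here
```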