Bug Description
After calling
auto trt_mod = torch_tensorrt::torchscript::compile(module, compile_settings);
the process gets stuck in what appears to be an infinite loop. I can also observe that the GPU load drops back to 0% after about one second. According to #1409, this issue should already have been fixed.
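For reference, a minimal sketch of how the compile call is reached (the model path, input shape, and precision are placeholders, not taken from my actual code; the real call site is in ModelLoader::optimizeWithTensorRT):

```cpp
#include <torch/script.h>
#include "torch_tensorrt/torch_tensorrt.h"

int main() {
  // Load a scripted model and move it to the GPU (path is a placeholder).
  auto module = torch::jit::load("model.ts");
  module.to(torch::kCUDA);
  module.eval();

  // Fixed input shape; the real model uses a different shape.
  auto input = torch_tensorrt::Input(std::vector<int64_t>{1, 3, 224, 224});
  torch_tensorrt::torchscript::CompileSpec compile_settings({input});
  compile_settings.enabled_precisions.insert(torch::kFloat);

  // This call never returns; the backtrace below shows it spinning
  // inside torch::jit::EliminateExceptions during graph lowering.
  auto trt_mod = torch_tensorrt::torchscript::compile(module, compile_settings);
  return 0;
}
```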
Error message
1 __memmove_avx_unaligned 0x7fff79289cc1
2 std::vector<torch::jit::Use>::_M_erase(__gnu_cxx::__normal_iterator<torch::jit::Use *, std::vector<torch::jit::Use>>) 0x7fffab48412f
3 torch::jit::Value::replaceFirstUseWith(torch::jit::Value *) 0x7fffab46ff5d
4 torch::jit::Value::replaceAllUsesWith(torch::jit::Value *) 0x7fffab46ffcb
5 torch::jit::EliminateExceptions(torch::jit::Block *) 0x7fffab63c3c9
6 torch::jit::EliminateExceptions(std::shared_ptr<torch::jit::Graph>&) 0x7fffab63c999
7 torch_tensorrt::core::lowering::LowerGraph(std::shared_ptr<torch::jit::Graph>&, std::vector<c10::IValue>&, torch_tensorrt::core::lowering::LowerInfo) 0x7fffd7426b0d
8 torch_tensorrt::core::lowering::Lower(torch::jit::Module const&, std::string, torch_tensorrt::core::lowering::LowerInfo const&) 0x7fffd742a181
9 torch_tensorrt::core::CompileGraph(torch::jit::Module const&, torch_tensorrt::core::CompileSpec) 0x7fffd732b5a8
10 torch_tensorrt::torchscript::compile(torch::jit::Module const&, torch_tensorrt::torchscript::CompileSpec) 0x7fffd7313a04
11 ModelLoader::optimizeWithTensorRT modelloader.cpp 266 0x5ad43c
12 InferenceDisplay::<lambda()>::<lambda()>::operator() inferencedisplay.cpp 1330 0x58c996
13 std::_Function_handler<void(), InferenceDisplay::InferenceDisplay(QWidget *, DataController&)::<lambda()>::<lambda()>>::_M_invoke(const std::_Any_data &) std_function.h 316 0x58c996
14 std::function<void ()>::operator()() const std_function.h 706 0x5cbcca
15 errorwrapper::loading(std::function<void ()>) errorwrapper.cpp 11 0x5cbcca
16 InferenceDisplay::<lambda()>::operator() inferencedisplay.cpp 1333 0x58e127
17 QtPrivate::FunctorCall<QtPrivate::IndexesList<>, QtPrivate::List<>, void, InferenceDisplay::InferenceDisplay(QWidget *, DataController&)::<lambda()>>::call qobjectdefs_impl.h 146 0x58e127
18 QtPrivate::Functor<InferenceDisplay::InferenceDisplay(QWidget *, DataController&)::<lambda()>, 0>::call<QtPrivate::List<>, void> qobjectdefs_impl.h 256 0x58e127
19 QtPrivate::QFunctorSlotObject<InferenceDisplay::InferenceDisplay(QWidget *, DataController&)::<lambda()>, 0, QtPrivate::List<>, void>::impl(int, QtPrivate::QSlotObjectBase *, QObject *, void * *, bool *) qobjectdefs_impl.h 439 0x58e127
20 QMetaObject::activate(QObject *, int, int, void * *) 0x7fff7a163f8f
...
Expected behavior
Successful Torch-TensorRT optimization of the TorchScript model.
Environment
- Torch-TensorRT Version: v1.3.0
- PyTorch Version: 1.13.0 (libtorch 1.13.0+cu117)
- OS: Linux
- CUDA Version: 11.7
- cuDNN Version: 8.5.0.96
- TensorRT Version: 8.5.2.2