Please provide the following complete information to quickly locate the problem:
Problem description: The executable was built on an Ubuntu 16.04 machine with a GPU, where it runs correctly in both CPU and GPU modes. When the same program is moved to another machine without a GPU, it crashes even in CPU mode, reporting that no GPU can be found: phi::backends::gpu::GetGPUDeviceCount() is still executed and throws. Earlier versions did not seem to have this problem. See the complete error message for details.
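For reference, this is roughly how the predictor is configured (a minimal sketch using the Paddle Inference C++ API; the model file paths are placeholders, not the actual ones). CPU execution is requested explicitly via DisableGpu(), yet the crash below still occurs:

```cpp
#include "paddle_inference_api.h"  // header name from the Paddle Inference C++ package

int main() {
  paddle_infer::Config config;
  // Hypothetical model paths for illustration only.
  config.SetModel("model/__model__", "model/__params__");
  config.DisableGpu();     // request CPU-only execution
  config.EnableMKLDNN();   // matches "MKLDNN is enabled" in the log

  // On the machine without a GPU, this call aborts: per the traceback,
  // phi::backends::gpu::GetGPUDeviceCount() still runs during predictor
  // creation even though the GPU was disabled above.
  auto predictor = paddle_infer::CreatePredictor(config);
  return 0;
}
```

This is a configuration fragment that requires the Paddle Inference SDK to build; it is shown only to clarify which API path triggers the error.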
WARNING: Logging before InitGoogleLogging() is written to STDERR
W0404 13:53:59.584630 4046 init.cc:179] Compiled with WITH_GPU, but no GPU found in runtime.
I0404 13:53:59.617218 4046 analysis_predictor.cc:964] MKLDNN is enabled
--- Running analysis [ir_graph_build_pass]
terminate called after throwing an instance of 'phi::enforce::EnforceNotMet'
what():
C++ Traceback (most recent call last):
0 paddle_infer::CreatePredictor(paddle::AnalysisConfig const&)
1 paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
2 std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
3 paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
4 paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
5 paddle::AnalysisPredictor::OptimizeInferenceProgram()
6 paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
7 paddle::inference::analysis::IrGraphBuildPass::RunImpl(paddle::inference::analysis::Argument*)
8 paddle::inference::analysis::IrGraphBuildPass::LoadModel(std::string const&, std::string const&, paddle::framework::Scope*, phi::Place const&, bool)
9 paddle::inference::Load(paddle::framework::Executor*, paddle::framework::Scope*, std::string const&, std::string const&)
10 paddle::inference::LoadPersistables(paddle::framework::Executor*, paddle::framework::Scope*, paddle::framework::ProgramDesc const&, std::string const&, std::string const&, bool)
11 paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector<std::string, std::allocator<std::string > > const&, bool, bool)
12 paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool)
13 paddle::framework::Executor::RunPartialPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, long, long, bool, bool, bool)
14 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, phi::Place const&)
15 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, phi::Place const&) const
16 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, phi::Place const&, paddle::framework::RuntimeContext*) const
17 paddle::platform::DeviceContextPool::Get(phi::Place const&)
18 std::__future_base::_Deferred_state<std::thread::_Invoker<std::tuple<paddle::platform::EmplaceDeviceContext<paddle::platform::MKLDNNDeviceContext>(std::map<phi::Place, std::shared_future<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > >, std::less<phi::Place>, std::allocator<std::pair<phi::Place const, std::shared_future<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > > > > >, phi::Place)::{lambda()#1}> >, std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > >::_M_complete_async()
19 std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > >, std::__future_base::_Result_base::_Deleter>, std::thread::_Invoker<std::tuple<paddle::platform::EmplaceDeviceContext<paddle::platform::MKLDNNDeviceContext>(std::map<phi::Place, std::shared_future<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > >, std::less<phi::Place>, std::allocator<std::pair<phi::Place const, std::shared_future<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > > > > >, phi::Place)::{lambda()#1}> >, std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > > >::_M_invoke(std::_Any_data const&)
20 paddle::platform::EmplaceDeviceContext<paddle::platform::MKLDNNDeviceContext>(std::map<phi::Place, std::shared_future<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > >, std::less<phi::Place>, std::allocator<std::pair<phi::Place const, std::shared_future<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > > > > >, phi::Place)::{lambda()#1}::operator()() const
21 paddle::memory::allocation::AllocatorFacade::Instance()
22 paddle::memory::allocation::AllocatorFacade::AllocatorFacade()
23 paddle::memory::allocation::AllocatorFacadePrivate::AllocatorFacadePrivate(bool)
24 phi::backends::gpu::GetGPUDeviceCount()
25 phi::enforce::EnforceNotMet::EnforceNotMet(phi::ErrorSummary const&, char const*, int)
26 phi::enforce::GetCurrentTraceBackString[abi:cxx11]
Error Message Summary:
ExternalError: CUDA error(999), unknown error.
[Hint: Please search for the error code(999) on website (https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html#group__CUDART__TYPES_1g3f51e3575c2178246db0a94a430e0038) to get Nvidia's official solution and advice about CUDA Error.] (at /paddle/paddle/phi/backends/gpu/cuda/cuda_info.cc:66)
[operator < load_combine > error]
Aborted (core dumped)
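The failure mechanism visible in the traceback can be sketched with a self-contained mock (all names here are hypothetical stand-ins, not Paddle code): a lazily constructed singleton, analogous to AllocatorFacade, probes the GPU backend in its constructor because the binary was compiled WITH_GPU, so even a pure-CPU code path triggers the CUDA query and fails on a machine without a driver.

```cpp
#include <stdexcept>

namespace mock {

// Stand-in for phi::backends::gpu::GetGPUDeviceCount(): on a machine with
// no usable CUDA driver the runtime reports an error (e.g. 999), which
// Paddle turns into an exception instead of returning a count of zero.
int GetGPUDeviceCount() {
  throw std::runtime_error("CUDA error(999), unknown error.");
}

// Stand-in for AllocatorFacade: the constructor probes every backend the
// binary was compiled with, regardless of which place the caller requested.
struct AllocatorFacade {
  AllocatorFacade() { gpu_count = GetGPUDeviceCount(); }
  static AllocatorFacade& Instance() {
    static AllocatorFacade facade;  // constructed lazily, on first use
    return facade;
  }
  int gpu_count = 0;
};

}  // namespace mock

// Even an operator running on a CPU place touches the singleton, so the
// GPU probe runs; here the exception is caught, whereas in the real stack
// it escapes through the deferred future and terminates the process.
bool RunCpuOperator() {
  try {
    (void)mock::AllocatorFacade::Instance();
    return true;
  } catch (const std::exception&) {
    return false;
  }
}
```

This illustrates why DisableGpu() does not help: the probe happens during singleton construction, before any placement decision is consulted.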