
PaddleOCR detection/recognition inference service cannot run in CPU mode on a machine without a GPU #9637

Closed
dtiny opened this issue Apr 4, 2023 · 4 comments
dtiny commented Apr 4, 2023

Please provide the following information to quickly locate the problem

Problem description: The executable was compiled on Ubuntu 16.04 on a machine with a GPU, where it runs correctly in both CPU and GPU modes. When the same program is moved to a machine without a GPU, it fails even in CPU mode, reporting that no GPU can be found: phi::backends::gpu::GetGPUDeviceCount() is still executed and throws. Earlier versions did not seem to have this problem. See "Complete Error Message" below for details.

  • System Environment: Ubuntu 16.04
  • Version: Paddle: / PaddleOCR: / Related components: inference library (version 2.4.2), GPU | yes | MKL | 5.4 | CUDA 11.2 / cuDNN 8.2 / TensorRT 8.0 | paddle_inference.tgz
  • Command Code:
  • Language: C++
  • Complete Error Message:
    WARNING: Logging before InitGoogleLogging() is written to STDERR
    W0404 13:53:59.584630 4046 init.cc:179] Compiled with WITH_GPU, but no GPU found in runtime.
    I0404 13:53:59.617218 4046 analysis_predictor.cc:964] MKLDNN is enabled
    --- Running analysis [ir_graph_build_pass]
    terminate called after throwing an instance of 'phi::enforce::EnforceNotMet'
    what():

C++ Traceback (most recent call last):

0 paddle_infer::CreatePredictor(paddle::AnalysisConfig const&)
1 paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
2 std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
3 paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
4 paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
5 paddle::AnalysisPredictor::OptimizeInferenceProgram()
6 paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
7 paddle::inference::analysis::IrGraphBuildPass::RunImpl(paddle::inference::analysis::Argument*)
8 paddle::inference::analysis::IrGraphBuildPass::LoadModel(std::string const&, std::string const&, paddle::framework::Scope*, phi::Place const&, bool)
9 paddle::inference::Load(paddle::framework::Executor*, paddle::framework::Scope*, std::string const&, std::string const&)
10 paddle::inference::LoadPersistables(paddle::framework::Executor*, paddle::framework::Scope*, paddle::framework::ProgramDesc const&, std::string const&, std::string const&, bool)
11 paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector<std::string, std::allocator<std::string > > const&, bool, bool)
12 paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool)
13 paddle::framework::Executor::RunPartialPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, long, long, bool, bool, bool)
14 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, phi::Place const&)
15 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, phi::Place const&) const
16 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, phi::Place const&, paddle::framework::RuntimeContext*) const
17 paddle::platform::DeviceContextPool::Get(phi::Place const&)
18 std::__future_base::_Deferred_state<std::thread::_Invoker<std::tuple<paddle::platform::EmplaceDeviceContext<paddle::platform::MKLDNNDeviceContext>(std::map<phi::Place, std::shared_future<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > >, std::less<phi::Place>, std::allocator<std::pair<phi::Place const, std::shared_future<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > > > > >, phi::Place)::{lambda()#1}> >, std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > >::_M_complete_async()
19 std::_Function_handler<std::unique_ptr<std::__future_base::_Result_base, std::__future_base::_Result_base::_Deleter> (), std::__future_base::_Task_setter<std::unique_ptr<std::__future_base::_Result<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > >, std::__future_base::_Result_base::_Deleter>, std::thread::_Invoker<std::tuple<paddle::platform::EmplaceDeviceContext<paddle::platform::MKLDNNDeviceContext>(std::map<phi::Place, std::shared_future<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > >, std::less<phi::Place>, std::allocator<std::pair<phi::Place const, std::shared_future<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > > > > >, phi::Place)::{lambda()#1}> >, std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > > >::_M_invoke(std::_Any_data const&)
20 paddle::platform::EmplaceDeviceContext<paddle::platform::MKLDNNDeviceContext>(std::map<phi::Place, std::shared_future<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > >, std::less<phi::Place>, std::allocator<std::pair<phi::Place const, std::shared_future<std::unique_ptr<phi::DeviceContext, std::default_delete<phi::DeviceContext> > > > > >, phi::Place)::{lambda()#1}::operator()() const
21 paddle::memory::allocation::AllocatorFacade::Instance()
22 paddle::memory::allocation::AllocatorFacade::AllocatorFacade()
23 paddle::memory::allocation::AllocatorFacadePrivate::AllocatorFacadePrivate(bool)
24 phi::backends::gpu::GetGPUDeviceCount()
25 phi::enforce::EnforceNotMet::EnforceNotMet(phi::ErrorSummary const&, char const*, int)
26 phi::enforce::GetCurrentTraceBackString[abi:cxx11]()


Error Message Summary:

ExternalError: CUDA error(999), unknown error.
[Hint: Please search for the error code(999) on website (https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html#group__CUDART__TYPES_1g3f51e3575c2178246db0a94a430e0038) to get Nvidia's official solution and advice about CUDA Error.] (at /paddle/paddle/phi/backends/gpu/cuda/cuda_info.cc:66)
[operator < load_combine > error]
Aborted (core dumped)

andyjiang1116 (Collaborator) commented:

You can install the corresponding CPU version of Paddle.
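For context, the intended way to run a GPU build in CPU mode is to disable GPU in the predictor config. A minimal sketch of the Paddle Inference 2.x C++ API (model paths are placeholders); note that in the reporter's setup even this path crashes, because the GPU-built library still probes phi::backends::gpu::GetGPUDeviceCount() while initializing its device contexts:

```cpp
#include "paddle_inference_api.h"  // ships in paddle_inference.tgz

int main() {
  paddle_infer::Config config;
  // Placeholder model paths for illustration.
  config.SetModel("inference.pdmodel", "inference.pdiparams");
  config.DisableGpu();       // request CPUPlace; CUDA should not be touched
  config.EnableMKLDNN();     // optional: oneDNN acceleration on CPU
  config.SetCpuMathLibraryNumThreads(4);
  auto predictor = paddle_infer::CreatePredictor(config);
  return predictor != nullptr ? 0 : 1;
}
```

With a CPU-only paddle_inference package, the same code links and runs without any CUDA runtime on the machine.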

dtiny (Author) commented Apr 4, 2023

This looks like a bug in the Paddle 2.4 inference library: in CPU mode it still calls GPU-related functions, which is unreasonable.
If I switch to the 2.3 GPU library and use CPU mode, there is no such problem. I hope versions after 2.4 can fix this.

andyjiang1116 (Collaborator) commented:

Because you compiled the GPU version on a machine with a GPU, there may be problems on a CPU-only machine. We recommend installing the CPU version of Paddle on CPU-only machines.
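For deployments that use the Python wheels rather than the C++ library, the CPU build is a separate package, so switching means replacing the GPU wheel (2.4.2 shown to match the reporter's inference-library version):

```shell
# Remove the GPU build and install the CPU-only wheel instead.
pip uninstall -y paddlepaddle-gpu
pip install paddlepaddle==2.4.2
```

For the C++ path the analogue is downloading the CPU variant of paddle_inference.tgz and relinking against it.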

dtiny closed this as completed Apr 4, 2023
UnstoppableCurry commented:

You need to change the CMakeLists: a shared library built so that it automatically looks for the GPU dependencies will certainly fail on a GPU-less machine. Either that, or your build parameters are set incorrectly.
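As a sketch of what that build-time switch looks like: the option names below follow PaddleOCR's deploy/cpp_infer demo build script, and the paths are placeholders, so adjust both to your own project if it differs:

```shell
# Rebuild the C++ inference demo against a CPU-only paddle_inference package.
# PADDLE_LIB and OPENCV_DIR are placeholder paths.
mkdir -p build && cd build
cmake .. \
    -DPADDLE_LIB=/path/to/paddle_inference_cpu \
    -DOPENCV_DIR=/path/to/opencv \
    -DWITH_MKL=ON \
    -DWITH_GPU=OFF \
    -DWITH_TENSORRT=OFF
make -j"$(nproc)"
```

With WITH_GPU=OFF the resulting binary has no CUDA link dependency, so it cannot hit the GetGPUDeviceCount() probe at startup.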
