faq.en
- Open QQ → click group chat search → search for group number 637093648 → enter the answer to the join question: conv conv conv conv conv → join the group chat → get ready for the Turing test (a joke)
- Open QQ → search for the Pocky group: 677104663 (many experts there) and answer the join question
- nihui: 水竹院落
-
git clone --recursive https://github.com/Tencent/ncnn/
or
download ncnn-xxxxx-full-source.zip
-
The submodules were not downloaded! Please update submodules with "git submodule update --init" and try again
As above, download the full source code, or follow the prompt and run: git submodule update --init
-
sudo apt-get install libprotobuf-dev protobuf-compiler
-
Could not find a package configuration file provided by "OpenCV" with any of the following names: OpenCVConfig.cmake opencv-config.cmake
sudo apt-get install libopencv-dev
or compile and install OpenCV yourself, then set(OpenCV_DIR <the directory containing OpenCVConfig.cmake>)
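A minimal sketch of the second option; the install path and target name below are placeholders for your own project:

```cmake
# in your project's CMakeLists.txt
set(OpenCV_DIR "/path/to/opencv/install/lib/cmake/opencv4")  # directory containing OpenCVConfig.cmake
find_package(OpenCV REQUIRED)
target_link_libraries(my_app ${OpenCV_LIBS})
```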
-
Could not find a package configuration file provided by "ncnn" with any of the following names: ncnnConfig.cmake ncnn-config.cmake
set(ncnn_DIR <the directory containing ncnnConfig.cmake>)
-
-
CMake version >= 3.10 is required; older versions do not provide FindVulkan.cmake
-
android-api >= 24
-
On macOS, you have to run the install script first
-
-
- See https://www.vulkan.org/tools#download-these-essential-development-tools
- However, a frequent problem is that the project needs the glslang library bundled in ncnn, not the one from the official Vulkan SDK
-
undefined reference to __kmpc_for_static_init_4 __kmpc_for_static_fini __kmpc_fork_call ...
Link OpenMP.
undefined reference to vkEnumerateInstanceExtensionProperties vkGetInstanceProcAddr vkQueueSubmit ...
Link vulkan-1.lib.
undefined reference to glslang::InitializeProcess() glslang::TShader::TShader(EShLanguage) ...
Link glslang.lib OGLCompiler.lib SPIRV.lib OSDependent.lib.
undefined reference to AAssetManager_fromJava AAssetManager_open AAsset_seek ...
Add the android library via find_library and target_link_libraries (see the CMake sketch after this list).
find_package(ncnn)
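A minimal CMake sketch for the Android asset manager case, assuming a JNI target named my_jni (hypothetical name):

```cmake
# resolve AAssetManager_* symbols from the NDK's libandroid
find_library(android-lib android)

# pull in the imported ncnn target from ncnnConfig.cmake
find_package(ncnn REQUIRED)

target_link_libraries(my_jni ${android-lib} ncnn)
```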
-
OpenCV RTTI issue → use opencv-mobile instead
-
upgrade compiler / libgcc_s libgcc
-
upgrade gcc
-
See https://github.com/Tencent/ncnn/wiki/build-for-android.zh and see "How to trim a smaller ncnn"
-
Run ncnnoptimize first, before adding the custom layer, to avoid ncnnoptimize failing to handle the saved custom layer.
-
The conflict arises because the libraries used in the project are built with different RTTI/exception settings, so decide whether you need them on or off for your situation. They are ON in ncnn by default; add the following two parameters when recompiling ncnn (see the example after this list).
- Enable: -DNCNN_DISABLE_RTTI=OFF -DNCNN_DISABLE_EXCEPTION=OFF
- Disable: -DNCNN_DISABLE_RTTI=ON -DNCNN_DISABLE_EXCEPTION=ON
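For example, to rebuild ncnn with RTTI and exceptions enabled (a sketch; adjust generator/toolchain options for your platform):

```sh
cd ncnn
mkdir -p build && cd build
cmake -DNCNN_DISABLE_RTTI=OFF -DNCNN_DISABLE_EXCEPTION=OFF ..
make -j$(nproc)
make install
```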
-
Possible fixes:
- Try upgrading the NDK version used by Android Studio.
- Compile ncnn and make install. On Linux/Windows, set/export ncnn_DIR so that it points to the directory containing ncnnConfig.cmake under the install directory.
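For example, on Linux (the install path is a placeholder; ncnnConfig.cmake lives under lib/cmake/ncnn in the install directory):

```sh
export ncnn_DIR=/path/to/ncnn/build/install/lib/cmake/ncnn
cmake ..    # configure your own project; find_package(ncnn) can now locate ncnnConfig.cmake
```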
-
./caffe2ncnn caffe.prototxt caffe.caffemodel ncnn.param ncnn.bin
-
./mxnet2ncnn mxnet-symbol.json mxnet.params ncnn.param ncnn.bin
-
Run the model through onnx-simplifier to simplify it and resolve the shape ops
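For example, with the onnx-simplifier package installed (file names are placeholders):

```sh
pip install onnxsim
python -m onnxsim model.onnx model-sim.onnx
```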
-
Input 0=w 1=h 2=c
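For example, a param line declaring a 224x224 3-channel input (layer and blob names are placeholders):

```
Input            data             0 1 data 0=224 1=224 2=3
```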
-
ncnnoptimize model.param model.bin yolov5s-opt.param yolov5s-opt.bin 65536
-
Interp Reshape
-
use ncnn2mem
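ncnn2mem converts the text param into a binary param plus two headers (output file names are up to you):

```sh
./ncnn2mem model.param model.bin model.id.h model.mem.h
```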
-
Yes, for all platforms
-
Ref:
Referring to the author's article at https://zhuanlan.zhihu.com/p/128974102: step 3 removes the post-processing before exporting the ONNX model; when testing inside your project, removing the post-processing can simply mean dropping those subsequent steps from the result handling.
Method 1:
ONNX_ATEN_FALLBACK. For a fully custom op, first change it to something that can be exported (e.g. concat, slice), convert to ncnn, and then edit the param file.
Method 2:
You can try PNNX; see the following article for a general introduction:
-
Please upgrade your GPU driver if you meet this crash or error. Driver download pages: Intel, AMD, Nvidia (https://www.nvidia.com/Download/index.aspx)
-
python setup.py develop
-
The path should be relative to the working directory.
"File not found or not readable": make sure that XYZ.param / XYZ.bin is accessible.
-
Mind the difference between layer names and blob names: ex.input() / ex.extract() take blob names.
With param.bin, use the enum values from the generated xxx.id.h header.
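A sketch of the two cases; the alexnet_param_id namespace and BLOB_* constants below are examples of what ncnn2mem typically generates in xxx.id.h, so check your own generated header:

```cpp
// plain .param: refer to blobs by name
ex.input("data", in);
ex.extract("prob", out);

// .param.bin: refer to blobs by the enums from the generated xxx.id.h
ex.input(alexnet_param_id::BLOB_data, in);
ex.extract(alexnet_param_id::BLOB_prob, out);
```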
-
The model may have problems.
Your model file may be in the old format converted by an old caffe2ncnn tool.
Check out the latest ncnn code, build it and regenerate the param and model binary files; that should work.
Make sure that your param file starts with the magic number 7767517.
You may find more info on use-ncnn-with-alexnet.
When adding a Softmax layer by hand, you need to add 1=1.
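For example, a hand-written Softmax param line might look like this (layer and blob names are placeholders; 1=1 is the flag mentioned above):

```
Softmax          prob             1 1 fc8 prob 1=1
```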
-
Set net.opt.use_vulkan_compute = true before load_param / load_model;
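A minimal sketch (model file names are placeholders):

```cpp
ncnn::Net net;
net.opt.use_vulkan_compute = true;  // must be set before load_param / load_model
net.load_param("model.param");
net.load_model("model.bin");
```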
-
Call ex.input() and ex.extract() multiple times, like the following:
ex.input("data1", in_1);
ex.input("data2", in_2);
ex.extract("output1", out_1);
ex.extract("output2", out_2);
-
No
-
cmake -DNCNN_BENCHMARK=ON ..
-
Use ncnn::Mat::from_pixels() / to_pixels()
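A minimal sketch, assuming bgr points to a packed BGR pixel buffer of size w x h (e.g. cv::Mat::data from OpenCV):

```cpp
// packed BGR pixels -> ncnn::Mat, converting to RGB on the way in
ncnn::Mat in = ncnn::Mat::from_pixels(bgr, ncnn::Mat::PIXEL_BGR2RGB, w, h);

// 3-channel ncnn::Mat -> packed pixel buffer, converting back to BGR
std::vector<unsigned char> pixels(w * h * 3);
in.to_pixels(pixels.data(), ncnn::Mat::PIXEL_RGB2BGR);
```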
-
First of all, you need to manage the memory you allocate yourself; ncnn::Mat will not automatically free the float data you pass to it.
std::vector<float> testData(60, 1.0f); // use std::vector<float> to manage allocation and release yourself
ncnn::Mat in1 = ncnn::Mat(60, (void*)testData.data()).reshape(4, 5, 3); // just pass the float data pointer as void* and specify the dimensions (reshape is recommended to handle the channel gap)
float* a = new float[60]; // memory you new yourself must be released later
ncnn::Mat in2 = ncnn::Mat(60, (void*)a).reshape(4, 5, 3).clone(); // same usage as above; clone() transfers data ownership to the Mat