From fc17522e3cf5a3dcc0faaca7c970b93d8ddff74a Mon Sep 17 00:00:00 2001
From: mafiosnik <108760201+mafiosnik777@users.noreply.github.com>
Date: Mon, 28 Nov 2022 12:16:20 +0100
Subject: [PATCH] Update FAQ

---
 README.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/README.md b/README.md
index 0062919..42b2dc1 100644
--- a/README.md
+++ b/README.md
@@ -109,6 +109,10 @@ I can't include most models in enhancr because of Licensing. Custom models can b
 
 Either lower concurrent streams in settings or set up tiling in combination with a smaller engine resolution to decrease Video Memory usage.
 
+> I get "Python exception: operator (): no valid optimization profile found"
+
+This means your input shapes do not fit the engine's optimization profile. Make sure custom shapes are enabled in settings if your input exceeds 1080p, or that your tiling settings are correct if tiling is enabled.
+
 # Inferences
 
 [TensorRT](https://developer.nvidia.com/tensorrt) is a highly optimized AI inference runtime for NVIDIA GPUs. It uses benchmarking to find the optimal kernel to use for your specific GPU, and there is an extra step to build an engine on the machine you are going to run the AI on. However, the resulting performance is also typically _much much_ better than any PyTorch or NCNN implementation.
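
For context on the error this patch documents: a TensorRT engine is built with one or more optimization profiles, each of which bounds the shapes an input tensor may take; an input whose shape falls outside every profile's bounds cannot be inferred, which is what the "no valid optimization profile found" exception reports. The sketch below mirrors that check in plain Python — it is a hypothetical illustration, not the TensorRT API, and the profile values are made-up examples.

```python
# Hypothetical sketch of the check behind "no valid optimization profile found".
# Each profile holds (min_shape, opt_shape, max_shape) per input; an input shape
# must lie within [min, max] in every dimension to be usable.

def profile_fits(shape, min_shape, max_shape):
    """Return True if every dimension of `shape` lies within [min, max]."""
    return all(lo <= d <= hi for d, lo, hi in zip(shape, min_shape, max_shape))

def find_profile(shape, profiles):
    """Return the index of the first profile covering `shape`, or raise."""
    for i, (min_shape, opt_shape, max_shape) in enumerate(profiles):
        if profile_fits(shape, min_shape, max_shape):
            return i
    raise RuntimeError("no valid optimization profile found")

# Example: an engine whose only profile tops out at 1080p NCHW input,
# as in the FAQ entry (values are illustrative).
profiles = [((1, 3, 8, 8), (1, 3, 1080, 1920), (1, 3, 1080, 1920))]

print(find_profile((1, 3, 720, 1280), profiles))  # 720p fits the profile
try:
    find_profile((1, 3, 2160, 3840), profiles)    # 4K exceeds max -> error
except RuntimeError as e:
    print(e)
```

This is why the FAQ entry suggests enabling custom shapes for inputs above 1080p (so the engine is built with a profile that covers them) or fixing tiling settings (so the tiles fed to the engine stay within the profile).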