This thread collects common issues and solutions that developers encounter when using our examples.
Before you go through this, make sure your camera is running the latest firmware; using old firmware is a common cause of issues.
GRPC UNAVAILABLE
Issue
When executing the example, I get the error:
<_InactiveRpcError of RPC that terminated with:
object-detector-python_1 | status = StatusCode.UNAVAILABLE
object-detector-python_1 | details = "failed to connect to all addresses"
object-detector-python_1 | debug_error_string = "{"created":"@1659425957.027178280","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3217,"referenced_errors":[{"created":"@1659425957.027173640","description":"failed to connect to all addresses","file":"src/core/lib/transport/error_utils.cc","file_line":165,"grpc_status":14}]}"
Explanation
The StatusCode.UNAVAILABLE indicates that your application can't connect to the inference server (acap-runtime) that performs the inference. That usually happens for two reasons:
The inference server failed to load your model.
The inference server is not responding, because it is busy loading your model.
Suggestions
Leave it running for up to 2-3 minutes. The first time a model is loaded, it is converted internally, which can take a while (we usually see about 1 minute to load SSD MobileNet v2 on a Q1656). After that, the model is converted and cached, and the rest of the execution should be smoother.
If that is not the case, run `docker -H tcp://$AXIS_TARGET_IP:$DOCKER_PORT ps`. This command lists the containers running on your camera and helps you verify whether acap-runtime is still running.
If the acap-runtime container is missing, check the log of `docker-compose up` to see whether your inference server crashed; it usually prints something. If you can't find any log, SSH into your camera and run `journalctl -u larod` to get more information about the crash.
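The "leave it running for 2-3 minutes" advice above can be sketched as a small polling helper. This is a minimal illustration in plain Python, not part of the examples: `probe` stands in for whatever readiness check fits your setup (for instance, a gRPC channel-ready check against acap-runtime).

```python
import time

def wait_for_inference_server(probe, timeout_s=180, interval_s=5):
    """Poll `probe` (any callable returning True once the server answers)
    until it succeeds or `timeout_s` elapses. UNAVAILABLE errors right
    after startup are often transient, because the first model load
    includes a slow conversion step."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval_s)
    return False
```

With the real application, you would pass a probe that attempts a connection to the inference server; only once this returns False after the full timeout is it worth suspecting a crash.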
Can't replace default models
Issue
When I replace the models in `Dockerfile.model`, I still see the default models being loaded, and my application can't find my model.
Explanation
When you terminate an execution, make sure to add the `--volumes` flag to your `docker-compose down` command; otherwise the old volume remains on your camera and is not overwritten when you update the model.
Suggestions
Follow the instructions in the example, and run `docker compose down` with the `--volumes` flag.
If that doesn't work, try cleaning all the volumes on the camera with `docker -H tcp://$AXIS_TARGET_IP:$DOCKER_PORT volume prune -f`.
exec format error - during execution
Issue
When I run my example, I get this error:
object-detector-python-object-detector-python-1 | standard_init_linux.go:219: exec user process caused: exec format error
Explanation
You probably chose the wrong architecture when building the application.
Suggestions
Verify whether your camera is armv7hf or aarch64, and build your application with the matching ARCH flag.
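As a quick sanity check, a machine string (for example the output of `uname -m` in a shell on the camera) can be mapped to the ARCH flag the examples expect. This is a hypothetical helper, and the `armv7l` → `armv7hf` mapping is an assumption about how 32-bit ARM cameras report themselves:

```python
import platform

# Assumed mapping from kernel machine strings to the ARCH build flag.
ARCH_BY_MACHINE = {
    "armv7l": "armv7hf",   # 32-bit ARM cameras (assumption)
    "aarch64": "aarch64",  # 64-bit ARM cameras
}

def arch_flag(machine=None):
    """Return the ARCH value for a machine string, or None when the
    machine is not a supported camera architecture."""
    machine = machine or platform.machine()
    return ARCH_BY_MACHINE.get(machine)
```

Anything that maps to None (such as `x86_64` on your workstation) means you are not looking at the camera's architecture at all.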
exec format error - during build
Issue
When I build my example, I get this error:
---> Running in 27e79ec59704
standard_init_linux.go:228: exec user process caused: exec format error
The command '/bin/sh -c pip install RUN pip install Flask' returned a non-zero code: 1
Explanation
Docker can't run instructions built for another architecture; you probably didn't set up QEMU properly.
Suggestions
Follow the instructions in the example on how to install QEMU. You have probably missed running this line: `docker run -it --rm --privileged multiarch/qemu-user-static --credential yes --persistent yes`
No space left on device
Issue
I get the error "no space left on device" when trying to install an example.
Explanation
Storage on the camera is very limited; most cameras can't hold more than one lightweight Docker image.
Suggestions
Make sure to install an SD card in the camera, and specify in your Docker ACAP settings that you want to use it. Look here for more details.
Error: Get "https://registry-1.docker.io/v2/"
Issue
When trying the docker-compose up command, you get an error similar to: ERROR: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on [::1]:53: read udp [::1]:40249->[::1]:53: read: connection refused
Explanation
This type of error occurs when your camera can't connect to Docker Hub for some reason, and it therefore fails to pull a Docker image needed to run your application (typically acap-runtime).
Suggestions
You can pull the image on your computer and load it into the device like you do with the other containers. Something like: `docker save axisecp/acap-runtime:<version>-<arch>-containerized | docker --tlsverify -H tcp://$AXIS_TARGET_IP:$DOCKER_PORT load`
You can find the expected version and arch in the error message itself, or by looking at the configuration file that you are using.
Error: Could not load model: Model contains too many graph partitions (X > 16) and Y of the graph partitions can't be run on the device.
Issue
When trying to load a model, you get an error saying something similar to inference-server_1 | ERROR in Inference: Failed to load model yolov8m_int8.tflite (Could not load model: Model contains too many graph partitions (137 > 16) and 68 of the graph partitions can't be run on the device. Consider redesigning the model to better utilize the device. (Is it fully integer quantized? Is it using non-supported operations?))
Explanation
Your model contains too many layers that can't be executed on the accelerator. Because the data would have to bounce between the DLPU and the CPU too many times, loading is refused, as that would be slower than simply running the model directly on the CPU.
As the error says, this can be caused by the model using too many operations that the DLPU does not support, or by the model not being fully integer quantized.
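One of the two causes above, missing integer quantization, can be screened for offline. The sketch below is a hypothetical helper, not part of the examples: `tensor_dtypes` stands in for the dtype names you would collect from your conversion tooling (for instance, from a TFLite interpreter's tensor details), and the check simply flags any floating-point tensor, since float tensors force graph partitions back onto the CPU.

```python
# Dtypes considered safe for a fully integer-quantized model (assumption:
# this set matches what the DLPU toolchain accepts).
INTEGER_DTYPES = {"int8", "uint8", "int16", "int32", "int64", "bool"}

def looks_fully_quantized(tensor_dtypes):
    """True if no tensor in the model uses a floating-point dtype."""
    return all(d in INTEGER_DTYPES for d in tensor_dtypes)
```

If this returns False for your converted model, revisit the quantization step before investigating unsupported operations.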
Error: Failed to initialize vdo stream
Issue
When trying your application, you get an error similar to:
Traceback (most recent call last):
File "detector.py", line 21, in <module>
print("Image dtype:", image.dtype)
AttributeError: 'NoneType' object has no attribute 'dtype'
Explanation
You are using a version of AXIS OS that doesn't match the Computer Vision SDK; typically, you are running AXIS OS 10.x with the latest SDK.
Suggestion
Make sure to use the SDK that is compatible with the AXIS OS version you are using.
You can find the compatibility table here and the examples release history here; the examples releases are aligned with the SDK releases.
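The AttributeError in the traceback above appears only indirectly: the stream fails to initialize, the capture call returns None, and the crash surfaces later at the first attribute access. A defensive check, sketched here in plain Python with a hypothetical `require_frame` helper, turns that into an immediate, readable error:

```python
def require_frame(frame):
    """Fail fast when the video stream produced no frame. `frame` is
    whatever your capture call returned (None on stream failure)."""
    if frame is None:
        raise RuntimeError(
            "Got no frame from the video stream; check that your "
            "AXIS OS version matches the Computer Vision SDK."
        )
    return frame
```

Calling this right after capture replaces the confusing `'NoneType' object has no attribute 'dtype'` with a message that points at the likely cause.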
This discussion was converted from issue #109 on October 24, 2022 06:37.