
Can't fix @vgem error #5400

Open

Description

@paksruan13

Running Docker Desktop on Windows 11. I'm running into:

```
localai-1  | WARNING: error parsing the pci address "vgem"
localai-1  | 12:55AM DBG GPU count: 1
localai-1  | 12:55AM DBG GPU: card #0 @vgem
localai-1  | 12:55AM DBG [startup] resolved local model: local-ai
localai-1  | 12:55AM ERR LoadBackendConfigsFromPath cannot read config file error="readBackendConfigFromFile cannot unmarshal config file \"/build/models/4d54d763bf21e69b32836071d1918d84.yaml\": yaml: control characters are not allowed" File Name=4d54d763bf21e69b32836071d1918d84.yaml
localai-1  | 12:55AM INF Preloading models from /build/models
localai-1  | 12:55AM DBG Extracting backend assets files to /tmp/localai/backend_data
localai-1  | 12:55AM DBG processing api keys runtime update
localai-1  | 12:55AM DBG processing external_backends.json
localai-1  | 12:55AM DBG external backends loaded from external_backends.json
localai-1  | 12:55AM INF core/startup process completed!
localai-1  | 12:55AM DBG No configuration file found at /tmp/localai/upload/uploadedFiles.json
localai-1  | 12:55AM DBG No configuration file found at /tmp/localai/config/assistants.json
localai-1  | 12:55AM DBG No configuration file found at /tmp/localai/config/assistantsFile.json
localai-1  | 12:55AM INF LocalAI API is listening! Please connect to the endpoint for API documentation. endpoint=http://0.0.0.0:8080
```

My initial intent was to use my GPU to run my models, since it would be much faster. But after installing the CUDA toolkit, the NVIDIA Container Toolkit, etc., I am still getting the @vgem error. I'm quite new to this, but shouldn't "DBG GPU: card #0 @vgem" be showing my actual GPU instead?

I have also tried modifying the yaml and env files, with no luck.
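For reference, this is roughly the shape of GPU passthrough config I understand docker-compose needs for NVIDIA GPUs; the service name, image tag, and volume paths below are placeholders, not my exact file:

```yaml
# Hypothetical compose sketch -- service name, image tag, and paths are assumptions.
services:
  localai:
    image: localai/localai:latest-gpu-nvidia-cuda-12
    ports:
      - "8080:8080"
    volumes:
      - ./models:/build/models        # model configs live under /build/models in the container
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all              # expose all NVIDIA GPUs to the container
              capabilities: [gpu]
```

Even with a section like this in place, the log above still reports the virtual `vgem` device rather than my card.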
