Closed
Description
I tried to deploy on my server with 2×3090 GPUs and CUDA 11.7. It deploys normally with the command: "docker run --gpus all --rm -v $(pwd)/workspace:/workspace -it openmmlab/lmdeploy:latest
python3 -m lmdeploy.turbomind.chat internlm /workspace"
However, it cannot be deployed with the command "bash workspace/service_docker_up.sh" because of a segmentation fault.

Also, when I try "python3 lmdeploy.app {server_ip_address}:33337 internlm" to start a client, it reports that torch has no cuda module. This is because lmdeploy/lmdeploy/torch is added to sys.path, so importing torch resolves to that directory instead of the installed PyTorch, and it has no cuda module. I fixed this by adding sys.path.remove("lmdeploy/lmdeploy/torch").
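The workaround above boils down to removing a sys.path entry that shadows the real PyTorch install. A minimal sketch of that idea, with a hypothetical helper name and a standalone list used in place of the live sys.path:

```python
import sys


def remove_shadowing_path(shadow_path, paths=None):
    """Remove a path entry that shadows an installed package.

    Hypothetical helper illustrating the workaround from this issue:
    if a source directory like "lmdeploy/lmdeploy/torch" sits on
    sys.path, `import torch` resolves to it rather than the real
    PyTorch in site-packages, and attributes such as torch.cuda are
    missing. Dropping the entry lets the import fall through to the
    installed package.
    """
    paths = sys.path if paths is None else paths
    while shadow_path in paths:  # remove every occurrence, not just the first
        paths.remove(shadow_path)
    return paths


# Demonstrate on a standalone list instead of mutating the live sys.path:
demo = ["lmdeploy/lmdeploy/torch", "/usr/lib/python3/site-packages"]
remove_shadowing_path("lmdeploy/lmdeploy/torch", demo)
# demo now contains only the site-packages entry
```

Note that any such removal must happen before the first `import torch`; once a shadowed module is imported, it stays cached in sys.modules.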