
p2p inferencing not working #4214

Closed
@mintyleaf

Description


LocalAI version:
7adbc16

Environment, CPU architecture, OS, and Version:
A vast.ai NVIDIA GPU instance.

Describe the bug
[screenshot: telegram-cloud-photo-size-2-5327815199232747918-y]

To Reproduce
Just try to start a p2p worker node in a working p2p environment.

Additional context
Using the latest master Docker image with my p2p fixes, everything works in federated mode: two nodes connect to a third head-server CPU node (also using my own DHT for faster discovery). However, that only works for federated mode; the worker is stuck in the loop here:

go func() {
    for {
        log.Info().Msgf("Starting llama-cpp-rpc-server on '%s:%d'", address, port)
        grpcProcess := assets.ResolvePath(
            r.BackendAssetsPath,
            "util",
I'll try to investigate this further, but maybe you can figure out what is happening faster.
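
For what it's worth, the surrounding code appears to be an endless restart loop, so if the spawned rpc-server binary is missing or exits immediately, the same "Starting llama-cpp-rpc-server" line is printed over and over. Below is a minimal, self-contained sketch of that pattern (standard library only; the address, port, paths, and flags are made-up placeholders, not the actual LocalAI code):

package main

import (
    "fmt"
    "log"
    "os/exec"
    "path/filepath"
    "time"
)

func main() {
    // Placeholder values; in LocalAI these come from the worker configuration.
    address := "127.0.0.1"
    port := 50052
    backendAssetsPath := "/tmp/localai/backend_data" // hypothetical assets dir

    go func() {
        for {
            log.Printf("Starting llama-cpp-rpc-server on '%s:%d'", address, port)

            // Resolve the rpc-server binary inside the backend assets directory.
            bin := filepath.Join(backendAssetsPath, "util", "llama-cpp-rpc-server")

            // Flags are illustrative only.
            cmd := exec.Command(bin, "--host", address, "--port", fmt.Sprintf("%d", port))
            if err := cmd.Run(); err != nil {
                // A missing binary or an immediate crash lands here and the
                // loop retries, producing the repeated log lines.
                log.Printf("llama-cpp-rpc-server exited: %v", err)
            }
            time.Sleep(time.Second)
        }
    }()

    select {} // keep the demo process alive
}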
