Issues: meta-llama/llama-stack
#434: Improve errors in client when there are server errors [good first issue] (opened Nov 13, 2024 by raghotham)
#391: Support guided decoding with vllm and remote::vllm [good first issue] (opened Nov 7, 2024 by ashwinb)
#389: test_chat_completion_with_tool_calling_streaming fails to parse llama-3.1-8b output (opened Nov 7, 2024 by henrytwo)
#385: Could not find conda environment: llamastack-local when running llama stack run ./run.yaml (opened Nov 6, 2024 by wukaixingxp)
#367: Ollama 4.0 vision and llama-stack token: Invalid token for decoding (opened Nov 4, 2024 by JoseGuilherme1904)
#347: Model ids that contain a colon throw an error when trying to install on Windows (opened Oct 30, 2024 by Sandstedt)
#345: ValueError: Llama3.1-8B-Instruct not registered. Make sure there is an Inference provider serving this model. [question] (opened Oct 29, 2024 by ducktapeonmydesk)
#342: Issue saving and querying PDF to vector store (meta-reference) (opened Oct 29, 2024 by jeffxtang)