Issues: meta-llama/llama-stack

Issues list

Add test::mock providers [good first issue]
#436 opened Nov 13, 2024 by ashwinb
Improve errors in client when there are server errors [good first issue]
#434 opened Nov 13, 2024 by raghotham (2 tasks)
Error in meta-reference-gpu docker
#418 opened Nov 11, 2024 by subramen (2 tasks)
Disable code_interpreter in tool-calling agent
#407 opened Nov 8, 2024 by subramen (1 of 2 tasks)
Model names mismatch with remote::vllm
#405 opened Nov 8, 2024 by stevegrubb (1 of 2 tasks)
Create vLLM distribution
#382 opened Nov 6, 2024 by yanxi0830
Usage of remote:vllm
#372 opened Nov 5, 2024 by TurboMa
llama-stack-client: command not found
#361 opened Nov 3, 2024 by alexhegit (2 tasks)
Run ollama gpu distribution failed
#350 opened Oct 31, 2024 by alexhegit (1 of 2 tasks)
vLLM can't find model from llama download
#344 opened Oct 29, 2024 by stevegrubb (1 of 2 tasks)