Insights: meta-llama/llama-stack
Overview
39 Pull requests merged by 15 people
- Convert SamplingParams.strategy to a union (#767, merged Jan 15, 2025)
- Fix issue when generating distros (#755, merged Jan 15, 2025)
- Free up memory after post training finishes (#770, merged Jan 15, 2025)
- Fix fireworks run-with-safety template (#766, merged Jan 14, 2025)
- Fix broken tests in test_registry (#707, merged Jan 14, 2025)
- added support of PYPI_VERSION in stack build (#762, merged Jan 14, 2025)
- add braintrust to experimental-post-training template (#763, merged Jan 14, 2025)
- [post training] define llama stack post training dataset format (#717, merged Jan 14, 2025)
- Fix telemetry to work on reinstantiating new lib cli (#761, merged Jan 14, 2025)
- [bugfix] fix streaming GeneratorExit exception with LlamaStackAsLibraryClient (#760, merged Jan 14, 2025)
- Switch to use importlib instead of deprecated pkg_resources (#678, merged Jan 14, 2025)
- Add init files to post training folders (#711, merged Jan 14, 2025)
- Update Cerebras docs to include header (#704, merged Jan 14, 2025)
- Fix incorrect Python binary path for UBI9 image (#757, merged Jan 14, 2025)
- Rename ipython to tool (#756, merged Jan 14, 2025)
- [#432] Add Groq Provider - tool calls (#630, merged Jan 14, 2025)
- [CI/CD] more robust re-try for downloading testpypi package (#749, merged Jan 14, 2025)
- Consolidating Safety tests from various places under client-sdk (#699, merged Jan 14, 2025)
- Consolidating Inference tests under client-sdk tests (#751, merged Jan 14, 2025)
- [Fireworks] Update model name for Fireworks (#753, merged Jan 13, 2025)
- Add provider data passing for library client (#750, merged Jan 13, 2025)
- update notebook to use new tool defs (#745, merged Jan 13, 2025)
- Support building UBI9 base container image (#676, merged Jan 13, 2025)
- Improve model download doc (#748, merged Jan 13, 2025)
- Replaced zrangebylex method in the range method (#521, merged Jan 12, 2025)
- [CICD] github workflow to push nightly package to testpypi (#734, merged Jan 11, 2025)
- Add inline vLLM inference provider to regression tests and fix regressions (#662, merged Jan 11, 2025)
- rename LLAMASTACK_PORT to LLAMA_STACK_PORT for consistency with other env vars (#744, merged Jan 10, 2025)
- remove conflicting default for tool prompt format in chat completion (#742, merged Jan 10, 2025)
- Expose LLAMASTACK_PORT in cli.stack.run (#722, merged Jan 10, 2025)
- Consolidating Memory tests under client-sdk (#703, merged Jan 10, 2025)
- Fixed typo in default VLLM_URL in remote-vllm.md (#723, merged Jan 10, 2025)
- Add persistence for localfs datasets (#557, merged Jan 10, 2025)
- Check version incompatibility (#738, merged Jan 9, 2025)
- Add X-LlamaStack-Client-Version, rename ProviderData -> Provider-Data (#735, merged Jan 9, 2025)
- Add automatic PyPI release GitHub workflow (#618, merged Jan 9, 2025)
- agents to use tools api (#673, merged Jan 9, 2025)
- add --version to llama stack CLI & /version endpoint (#732, merged Jan 9, 2025)
- fix links for distro (#733, merged Jan 8, 2025)
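PR #767 converts `SamplingParams.strategy` to a union type, so each sampling strategy carries only the parameters that apply to it instead of one flat object with overlapping optional fields. A minimal sketch of that idea using stdlib dataclasses; the class names, field names, and defaults here approximate llama-stack's API but are illustrative, not its exact definitions:

```python
from dataclasses import dataclass, field
from typing import Union

@dataclass
class GreedySamplingStrategy:
    # No tunable parameters; the tag alone identifies the variant.
    type: str = "greedy"

@dataclass
class TopPSamplingStrategy:
    type: str = "top_p"
    temperature: float = 1.0
    top_p: float = 0.95

# The strategy is a tagged union: exactly one variant, discriminated by `type`,
# rather than loose temperature/top_p fields that may or may not be meaningful.
SamplingStrategy = Union[GreedySamplingStrategy, TopPSamplingStrategy]

@dataclass
class SamplingParams:
    strategy: SamplingStrategy = field(default_factory=GreedySamplingStrategy)
    max_tokens: int = 0

params = SamplingParams(strategy=TopPSamplingStrategy(temperature=0.7))
print(params.strategy.type)  # top_p
```

A consumer can then dispatch on `params.strategy.type` (or `isinstance`) and rely on the variant's fields actually being relevant.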
5 Pull requests opened by 4 people
- [Test automation] Fix pytest collection errors and schedule daily CI/CD workflow run (#737, opened Jan 9, 2025)
- [Test automation] generate custom test report (#739, opened Jan 9, 2025)
- [Test automation] create a CI config for running required tests (#743, opened Jan 10, 2025)
- [CICD] Github workflow for publishing Docker images (#764, opened Jan 14, 2025)
- More idiomatic REST API (#765, opened Jan 14, 2025)
26 Issues closed by 4 people
- Fireworks: maximum context length: 32k - for Llama 405b (should be: 128k context length) (#697, closed Jan 13, 2025)
- Add Remote Inference Adapter for Groq (#432, closed Jan 13, 2025)
- Enable distributuion/ollama for rocm (#363, closed Jan 13, 2025)
- Redis persistence store not working (#520, closed Jan 13, 2025)
- curl -X GET http://xxxx5000/alpha/models/list return empty list (#625, closed Jan 13, 2025)
- Tests are failing with "cannot import name 'RawContent'" error (#647, closed Jan 13, 2025)
- Unable to download model from Meta (#746, closed Jan 13, 2025)
- Persistence for localfs dataset provider (#539, closed Jan 10, 2025)
- Ollama 4.0 vision and llama-stack token Invalid token for decoding (#367, closed Jan 10, 2025)
- Usage of remote:vllm (#372, closed Jan 10, 2025)
- documentation or cookbook for meta-reference api usage (#394, closed Jan 10, 2025)
- llama-stack api for QLORA finetuning (atleast meta-reference) (#395, closed Jan 10, 2025)
- Disable code_interpreter in tool-calling agent (#407, closed Jan 10, 2025)
- Error in meta-reference-gpu docker (#418, closed Jan 10, 2025)
- ollama distro: can't find shield (#502, closed Jan 10, 2025)
- Add an integration test for redis KVStore and ensure it has functional parity with other KVStores (#506, closed Jan 10, 2025)
- llama3.2 gmailtoolkit problem (#530, closed Jan 10, 2025)
- Incorporate Feast into Llama Stack (#561, closed Jan 10, 2025)
- Feature Request: Add Tags to Models for Categorization and Filtering (#542, closed Jan 10, 2025)
- LlamaStackDirectClient with ollama failed to run (#562, closed Jan 10, 2025)
- getting_started.ipynb link on llama.com is a 404 (#633, closed Jan 10, 2025)
- JSON structured outputs for Ollama (#679, closed Jan 9, 2025)
- Feature Request: Multi-Organization and Project Management (#687, closed Jan 9, 2025)
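Closed issue #679 asked about JSON structured outputs with Ollama. Ollama's generate API supports this via a `format` field on the request body, which constrains the model to emit syntactically valid JSON. A sketch of such a request body; the model name and prompt are placeholders, the URL in the comment is Ollama's default local endpoint, and no request is actually sent here:

```python
import json

# Body for POST http://localhost:11434/api/generate (default Ollama endpoint;
# model and prompt are placeholders).
payload = {
    "model": "llama3.2",
    "prompt": "List three primary colors as a JSON array.",
    "format": "json",   # constrain the response to valid JSON
    "stream": False,    # return one complete response instead of chunks
}
body = json.dumps(payload)
print(body)
```

With `"format": "json"` set, the `response` field of Ollama's reply can be parsed directly with `json.loads` instead of being scraped out of free-form text.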
4 Issues opened by 4 people
- Github workflow for releasing PyPi packages (#769, opened Jan 15, 2025)
- Free up GPU memory after unregistering model in meta reference inference (#768, opened Jan 15, 2025)
- Switch from pip to uv (#747, opened Jan 13, 2025)
33 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- [test automation] support run tests on config file (#730, commented on Jan 15, 2025 • 13 new comments)
- resolved termios error during execution on a windows environment (#727, commented on Jan 12, 2025 • 1 new comment)
- add nvidia distribution (#565, commented on Jan 14, 2025 • 1 new comment)
- Support multimodal embedding generation (#616, commented on Jan 8, 2025 • 0 new comments)
- Make sure image decode -> data URL works for large images also (#416, commented on Jan 13, 2025 • 0 new comments)
- Create vLLM distribution (#382, commented on Jan 13, 2025 • 0 new comments)
- Portkey AI inference Provier support (#671, commented on Jan 13, 2025 • 0 new comments)
- Consolidate tests/client-sdk and providers/tests (#652, commented on Jan 13, 2025 • 0 new comments)
- Better error message for fireworks invalid response_format (#650, commented on Jan 13, 2025 • 0 new comments)
- Explanation of Sampling Param semantics in documentation (#602, commented on Jan 13, 2025 • 0 new comments)
- [RFC] Support multi modal retrieval on top of llama stack, inference provider side (#667, commented on Jan 13, 2025 • 0 new comments)
- Why can't I use my llama stack? Can anyone help me? (#373, commented on Jan 14, 2025 • 0 new comments)
- Adds SambaNova Cloud Inference Adapter (#515, commented on Jan 14, 2025 • 0 new comments)
- Sambanova inferene (#555, commented on Jan 14, 2025 • 0 new comments)
- feat: enable xpu support for meta-reference stack (#558, commented on Jan 14, 2025 • 0 new comments)
- Agent response format (#660, commented on Jan 14, 2025 • 0 new comments)
- adding encoding specifier to open().read() call in order to prevent t… (#725, commented on Jan 14, 2025 • 0 new comments)
- Permission denied error on running 'llama-stack-client providers list' (#691, commented on Jan 9, 2025 • 0 new comments)
- Executing llama on Windows causes a ModuleNotFoundError for termios (#726, commented on Jan 9, 2025 • 0 new comments)
- Executing setup.py build on Windows 11 throws a UnicodeDecodeError. (#724, commented on Jan 9, 2025 • 0 new comments)
- `VLLM + Llama-Stack` fails when using local images in base64 format with Vision Llama (#571, commented on Jan 10, 2025 • 0 new comments)
- Path misinterpretation in llama model command: C: misread as C- (#513, commented on Jan 10, 2025 • 0 new comments)
- httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol. (#466, commented on Jan 10, 2025 • 0 new comments)
- Add test::mock providers (#436, commented on Jan 10, 2025 • 0 new comments)
- Improve errors in client when there are server errors (#434, commented on Jan 10, 2025 • 0 new comments)
- Upload arm64 docker images where relevant (#406, commented on Jan 10, 2025 • 0 new comments)
- 500 when uploading large documents to memory bank (#582, commented on Jan 10, 2025 • 0 new comments)
- vllm does not work with image URLs (#643, commented on Jan 10, 2025 • 0 new comments)
- What is the correct value for "INFERENCE_MODEL" environment variable for "llama3:70b-instruct model" ? (#721, commented on Jan 11, 2025 • 0 new comments)
- Feature Request: Option to Delete a Memory (#682, commented on Jan 13, 2025 • 0 new comments)
- Higher test coverage for tests/client-sdk (#651, commented on Jan 13, 2025 • 0 new comments)
- Agent's tests are failing with ScopeMismatch (#648, commented on Jan 13, 2025 • 0 new comments)
- Context retrieval only works for first user message (#444, commented on Jan 13, 2025 • 0 new comments)