LCORE-580: Updated documentation that was outdated #492

tisnik merged 1 commit into lightspeed-core:main from
Conversation
Walkthrough

Updated llama-stack version references to 0.2.18 across documentation and example project metadata (README.md, docs/*.md, and examples/pyproject.llamastack.toml). No code, exported symbols, or runtime logic were modified.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
README.md (1)
110-112: Fix user-facing typos and minor grammar. These are visible in docs; please correct for clarity.
```diff
-1. check Llama stack settings in [run.yaml](run.yaml), make sure we can access the provider and the model, the server shoud listen to port 8321.
+1. check Llama stack settings in [run.yaml](run.yaml), make sure we can access the provider and the model; the server should listen on port 8321.

- The "provider_type" is used in the llama stack configuration file when refering to the provider.
+ The "provider_type" is used in the llama stack configuration file when referring to the provider.

-The Llama Stack can be run as a standalone server and accessed via its the REST
+The Llama Stack can be run as a standalone server and accessed via its REST

- This step is not neccessary.
+ This step is not necessary.

-Development images are build from main branch every time a new pull request is merged.
+Development images are built from the main branch every time a new pull request is merged.

-For macosx users:
+For macOS users:

- + trl==0.20.0
+ + trl==0.20.0
```

Also applies to: 129-131, 164-169, 433-433, 590-595, 664-666, 141-141
docs/deployment_guide.md (2)
143-147: Tidy up broken/misleading commands and typos. These will trip users following the guide.
```diff
-1. Copy the project file named `pyproject.llamastack.toml` into the new directory, renaming it to `pyproject.toml':
+1. Copy the project file named `pyproject.llamastack.toml` into the new directory, renaming it to `pyproject.toml`:

-cp examples/lightspeed-stack-lls-external.yaml lightspeed-stack.yaml`
+cp examples/lightspeed-stack-lls-external.yaml lightspeed-stack.yaml

-Llama Stack can be used as a library that is already part of OLS image. It means that no other processed needs to be started,
+Llama Stack can be used as a library that is already part of the OLS image. It means that no other processes need to be started,

-Development images are build from main branch every time a new pull request is merged.
+Development images are built from the main branch every time a new pull request is merged.
```

Also applies to: 403-404, 1124-1126, 599-606
1-1: Update all version pins to 0.2.18

- examples/pyproject.llamastack.toml:7 – bump `llama-stack==0.2.17` → `0.2.18`
- docs/getting_started.md:27 – bump `llama-stack==0.2.17` → `0.2.18`
- Verify whether the `"version": "0.2.0"` entries in docs/deployment_guide.md:1326 and openapi.json:16 should remain as the API schema version or be updated to `0.2.18`.
🧹 Nitpick comments (3)
docs/deployment_guide.md (3)
679-697: Dependency pin updated to llama-stack==0.2.18 — OK. Consider centralizing this version (e.g., a single source of truth via docs variables) to avoid future drift across snippets.
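One way to keep such a single source of truth honest is a small consistency check that flags any snippet whose pin disagrees with the expected version. This is a hedged sketch, not part of the repo; the function name and file-type filter are assumptions:

```python
import re
from pathlib import Path

# Matches exact pins such as "llama-stack==0.2.18" anywhere in a file.
PIN = re.compile(r"llama-stack==(\d+\.\d+\.\d+)")

def stale_pins(root: Path, expected: str) -> list[tuple[Path, str]]:
    """Return (file, pinned_version) pairs whose pin differs from `expected`."""
    stale = []
    for path in sorted(root.rglob("*")):
        # Only scan the doc/config formats that carry these snippets.
        if path.suffix not in {".md", ".toml"}:
            continue
        for version in PIN.findall(path.read_text(encoding="utf-8")):
            if version != expected:
                stale.append((path, version))
    return stale
```

Run in CI (or a pre-commit hook), a non-empty result would fail the build and catch drift before it lands in the docs.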
1268-1269: Container run command — env var expansion portability.

`${OPENAI_API_KEY}` relies on the caller’s shell. Consider adding a brief note “ensure OPENAI_API_KEY is exported in your shell” above the command for clarity.
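The pitfall being flagged can be shown in a few lines of shell — if the variable is not exported, the `-e OPENAI_API_KEY=${OPENAI_API_KEY}` argument silently expands to an empty value (the key below is a placeholder, not a real credential):

```shell
# Without export, the expansion the container command sees is empty:
unset OPENAI_API_KEY
echo "-e OPENAI_API_KEY=${OPENAI_API_KEY}"
# prints: -e OPENAI_API_KEY=

# Exporting first makes the value visible to the expansion:
export OPENAI_API_KEY="sk-placeholder"   # placeholder value
echo "-e OPENAI_API_KEY=${OPENAI_API_KEY}"
# prints: -e OPENAI_API_KEY=sk-placeholder
```

A one-line "export this variable first" note above the `run` command in the guide would prevent users from starting the container without credentials.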
1440-1442: Minor wording nit. “endlines” → “newlines”.

```diff
-> As is shown on the previous example, the output might contain endlines, Markdown marks etc.
+> As shown in the previous example, the output might contain newlines, Markdown markup, etc.
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- README.md (1 hunks)
- docs/deployment_guide.md (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-pr
- GitHub Check: e2e_tests
🔇 Additional comments (2)
docs/deployment_guide.md (2)
393-395: Version JSON updated to 0.2.18 — OK.
1074-1076: Second version JSON updated to 0.2.18 — OK.
```diff
 authors = []
 dependencies = [
-    "llama-stack==0.2.14",
+    "llama-stack==0.2.18",
```
💡 Verification agent
🧩 Analysis chain
Version bump to llama-stack 0.2.18 looks correct.
Matches the PR goal and is consistent with other updated snippets.
Run to ensure no stale references remain:
🏁 Script executed:

```bash
#!/bin/bash
# Find any leftover 0.2.14 references or mismatched llama-stack pins
rg -nP '\b0\.2\.14\b|llama-stack==[0-9]+\.[0-9]+\.[0-9]+' -g '!**/dist/**' -g '!**/.venv/**' -S

# Optional: list all version JSON examples to confirm they read 0.2.18
rg -nP '"version"\s*:\s*"0\.2\.\d+"' docs README.md -S
```

Length of output: 629
Fix stale llama-stack version references
- examples/pyproject.llamastack.toml: update `llama-stack==0.2.17` (line 7) to `0.2.18`.
- docs/getting_started.md: update `llama-stack==0.2.17` (line 27) to `0.2.18`.
🤖 Prompt for AI Agents
In README.md around line 247, update the stale llama-stack reference by ensuring
consistency across the repo: change any occurrences of "llama-stack==0.2.17" to
"llama-stack==0.2.18" — specifically edit examples/pyproject.llamastack.toml
(line 7) and docs/getting_started.md (line 27) to replace 0.2.17 with 0.2.18 so
all files match the README.
Actionable comments posted: 1
🧹 Nitpick comments (1)
docs/getting_started.md (1)
26-34: Consider reducing future doc churn for this pin. If compatible, either: (a) use a lower bound (>=0.2.18) in docs, or (b) centralize the version string (e.g., a single docs include or Makefile var) to avoid drift across files.
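Option (a) would look roughly like the following in the example project file. This is a sketch with an assumed PEP 621 layout and a placeholder project name, not the actual file contents:

```toml
[project]
name = "example-llamastack"   # placeholder name
version = "0.1.0"
dependencies = [
    # Lower bound instead of an exact pin: the snippet stays valid across
    # patch releases, at the cost of less reproducible installs.
    "llama-stack>=0.2.18",
]
```

The trade-off is worth stating in the docs: exact pins reproduce a known-good environment, while a lower bound avoids touching every snippet on each patch release.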
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (4)
- README.md (1 hunks)
- docs/deployment_guide.md (3 hunks)
- docs/getting_started.md (1 hunks)
- examples/pyproject.llamastack.toml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- examples/pyproject.llamastack.toml
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/deployment_guide.md
- README.md
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-pr
- GitHub Check: e2e_tests
```diff
 uv add \
-    "llama-stack==0.2.17" \
+    "llama-stack==0.2.18" \
```
💡 Verification agent
🧩 Analysis chain
Version bump to llama-stack==0.2.18: LGTM; verify repo-wide consistency.
Looks good. Please also scan for any lingering 0.2.14–0.2.17 references in docs/examples and confirm 0.2.18 is on PyPI.
Run:
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
echo "Current PyPI version:"
curl -s https://pypi.org/pypi/llama-stack/json | jq -r '.info.version'
echo
echo "Stale version references (expect none):"
rg -nP --glob '!**/dist/**' --glob '!**/build/**' 'llama-stack==0\.2\.(1[0-7]|0?\d)' || true
echo
echo "All pinned occurrences (should mostly be 0.2.18):"
rg -n 'llama-stack=='
```

Length of output: 1003
Bump llama-stack dependency to 0.2.20 across docs and examples.
PyPI’s latest llama-stack release is 0.2.20—update every llama-stack==0.2.18 occurrence (README.md:247, pyproject.toml:31, examples/pyproject.llamastack.toml:7, docs/getting_started.md:27, docs/deployment_guide.md:679) to ==0.2.20 and verify no pins to 0.2.14–0.2.17 remain.
🤖 Prompt for AI Agents
In docs/getting_started.md around line 27, the llama-stack dependency is pinned
to "llama-stack==0.2.18"; update this occurrence to "llama-stack==0.2.20" and
save the file. After making this change, search the repo for any remaining pins
to 0.2.14–0.2.18 and replace them with 0.2.20 (specifically check README.md line
~247, pyproject.toml line ~31, examples/pyproject.llamastack.toml line ~7, and
docs/deployment_guide.md line ~679) to ensure all references are consistent.
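A repo-wide bump like the one requested above can be applied mechanically with a grep/sed pass. This is a sketch, assuming GNU sed and that all pins live in Markdown and TOML files; always review the resulting diff before committing:

```shell
# Replace any exact pin in the 0.2.14–0.2.18 range with 0.2.20
# in tracked Markdown and TOML files under the current directory.
grep -rl --include='*.md' --include='*.toml' 'llama-stack==0\.2\.1[4-8]' . \
  | xargs -r sed -i 's/llama-stack==0\.2\.1[4-8]/llama-stack==0.2.20/g'
```

Pairing this with a follow-up `rg -n 'llama-stack=='` (as in the verification scripts above) confirms that no stale pin survived the rewrite.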
Description
LCORE-580: Updated documentation that was outdated
Type of change
Related Tickets & Documents
Summary by CodeRabbit