Merged
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "0.5.0-alpha.1"
".": "0.5.0-alpha.2"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 108
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-faa8aea30f68f4757456ffabbaa687cace33f1dc3b3eba9cb074ca4500a6fa43.yml
openapi_spec_hash: 8cea736f660e8842c3a2580469d331aa
config_hash: aa28e451064c13a38ddc44df99ebf52a
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-958e990011d6b4c27513743a151ec4c80c3103650a80027380d15f1d6b108e32.yml
openapi_spec_hash: 5b49d825dbc2a26726ca752914a65114
config_hash: 19b84a0a93d566334ae134dafc71991f
8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,13 @@
# Changelog

## 0.5.0-alpha.2 (2026-02-05)

Full Changelog: [v0.5.0-alpha.1...v0.5.0-alpha.2](https://github.com/llamastack/llama-stack-client-python/compare/v0.5.0-alpha.1...v0.5.0-alpha.2)

### Features

* Adds support for the `safety_identifier` parameter ([f20696b](https://github.com/llamastack/llama-stack-client-python/commit/f20696b6c1855c40e191980812ba3fd70b1f3577))
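For illustration, a request body carrying the new parameter might look like the sketch below. Only the `safety_identifier` field itself is confirmed by this changelog entry; the endpoint, model id, and surrounding fields are assumptions.

```python
import json

# Hypothetical chat-completions body; everything except `safety_identifier`
# is illustrative and not taken from this release.
payload = {
    "model": "llama-example",  # hypothetical model id
    "messages": [{"role": "user", "content": "Hello"}],
    "safety_identifier": "end-user-1234",  # stable, opaque end-user id
}
body = json.dumps(payload).encode()  # bytes ready for an HTTP POST
```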

## 0.5.0-alpha.1 (2026-02-04)

Full Changelog: [v0.4.0-alpha.15...v0.5.0-alpha.1](https://github.com/llamastack/llama-stack-client-python/compare/v0.4.0-alpha.15...v0.5.0-alpha.1)
36 changes: 18 additions & 18 deletions api.md
@@ -103,10 +103,10 @@ Methods:

- <code title="post /v1/prompts">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">create</a>(\*\*<a href="src/llama_stack_client/types/prompt_create_params.py">params</a>) -> <a href="./src/llama_stack_client/types/prompt.py">Prompt</a></code>
- <code title="get /v1/prompts/{prompt_id}">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">retrieve</a>(prompt_id, \*\*<a href="src/llama_stack_client/types/prompt_retrieve_params.py">params</a>) -> <a href="./src/llama_stack_client/types/prompt.py">Prompt</a></code>
- <code title="post /v1/prompts/{prompt_id}">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">update</a>(prompt_id, \*\*<a href="src/llama_stack_client/types/prompt_update_params.py">params</a>) -> <a href="./src/llama_stack_client/types/prompt.py">Prompt</a></code>
- <code title="put /v1/prompts/{prompt_id}">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">update</a>(prompt_id, \*\*<a href="src/llama_stack_client/types/prompt_update_params.py">params</a>) -> <a href="./src/llama_stack_client/types/prompt.py">Prompt</a></code>
- <code title="get /v1/prompts">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">list</a>() -> <a href="./src/llama_stack_client/types/prompt_list_response.py">PromptListResponse</a></code>
- <code title="delete /v1/prompts/{prompt_id}">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">delete</a>(prompt_id) -> None</code>
- <code title="post /v1/prompts/{prompt_id}/set-default-version">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">set_default_version</a>(prompt_id, \*\*<a href="src/llama_stack_client/types/prompt_set_default_version_params.py">params</a>) -> <a href="./src/llama_stack_client/types/prompt.py">Prompt</a></code>
- <code title="put /v1/prompts/{prompt_id}/set-default-version">client.prompts.<a href="./src/llama_stack_client/resources/prompts/prompts.py">set_default_version</a>(prompt_id, \*\*<a href="src/llama_stack_client/types/prompt_set_default_version_params.py">params</a>) -> <a href="./src/llama_stack_client/types/prompt.py">Prompt</a></code>
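The diff above changes `update` and `set_default_version` from POST to PUT. A minimal stdlib sketch of the new verb (the base URL and body field are assumptions; the route comes from api.md):

```python
import urllib.request

base = "http://localhost:8321"  # assumed local server address
req = urllib.request.Request(
    f"{base}/v1/prompts/my-prompt/set-default-version",
    data=b'{"version": 2}',  # illustrative body; field name assumed
    headers={"Content-Type": "application/json"},
    method="PUT",  # previously POST
)
# req is only constructed here; sending it requires a running server.
print(req.get_method())  # PUT
```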

## Versions

@@ -442,18 +442,6 @@ Methods:

# Alpha

## Inference

Types:

```python
from llama_stack_client.types.alpha import InferenceRerankResponse
```

Methods:

- <code title="post /v1alpha/inference/rerank">client.alpha.inference.<a href="./src/llama_stack_client/resources/alpha/inference.py">rerank</a>(\*\*<a href="src/llama_stack_client/types/alpha/inference_rerank_params.py">params</a>) -> <a href="./src/llama_stack_client/types/alpha/inference_rerank_response.py">InferenceRerankResponse</a></code>

## PostTraining

Types:
@@ -486,9 +474,9 @@ from llama_stack_client.types.alpha.post_training import (
Methods:

- <code title="get /v1alpha/post-training/jobs">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">list</a>() -> <a href="./src/llama_stack_client/types/alpha/post_training/job_list_response.py">JobListResponse</a></code>
- <code title="get /v1alpha/post-training/job/artifacts">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">artifacts</a>(\*\*<a href="src/llama_stack_client/types/alpha/post_training/job_artifacts_params.py">params</a>) -> <a href="./src/llama_stack_client/types/alpha/post_training/job_artifacts_response.py">JobArtifactsResponse</a></code>
- <code title="post /v1alpha/post-training/job/cancel">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">cancel</a>(\*\*<a href="src/llama_stack_client/types/alpha/post_training/job_cancel_params.py">params</a>) -> None</code>
- <code title="get /v1alpha/post-training/job/status">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">status</a>(\*\*<a href="src/llama_stack_client/types/alpha/post_training/job_status_params.py">params</a>) -> <a href="./src/llama_stack_client/types/alpha/post_training/job_status_response.py">JobStatusResponse</a></code>
- <code title="get /v1alpha/post-training/job/artifacts">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">artifacts</a>() -> <a href="./src/llama_stack_client/types/alpha/post_training/job_artifacts_response.py">JobArtifactsResponse</a></code>
- <code title="post /v1alpha/post-training/job/cancel">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">cancel</a>() -> None</code>
- <code title="get /v1alpha/post-training/job/status">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">status</a>() -> <a href="./src/llama_stack_client/types/alpha/post_training/job_status_response.py">JobStatusResponse</a></code>
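Per the updated signatures above, `artifacts`, `cancel`, and `status` no longer take request parameters. A stdlib sketch of the bare status request (base URL is an assumed local address):

```python
import urllib.request

base = "http://localhost:8321"  # assumed local server address
# No query parameters, matching the updated no-argument signature.
req = urllib.request.Request(
    f"{base}/v1alpha/post-training/job/status",
    method="GET",
)
# Not sent here; dispatching it requires a running Llama Stack server.
```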

## Benchmarks

@@ -538,6 +526,18 @@ Methods:
- <code title="get /v1alpha/admin/inspect/routes">client.alpha.admin.<a href="./src/llama_stack_client/resources/alpha/admin.py">list_routes</a>(\*\*<a href="src/llama_stack_client/types/alpha/admin_list_routes_params.py">params</a>) -> <a href="./src/llama_stack_client/types/route_list_response.py">RouteListResponse</a></code>
- <code title="get /v1alpha/admin/version">client.alpha.admin.<a href="./src/llama_stack_client/resources/alpha/admin.py">version</a>() -> <a href="./src/llama_stack_client/types/shared/version_info.py">VersionInfo</a></code>

## Inference

Types:

```python
from llama_stack_client.types.alpha import InferenceRerankResponse
```

Methods:

- <code title="post /v1alpha/inference/rerank">client.alpha.inference.<a href="./src/llama_stack_client/resources/alpha/inference.py">rerank</a>(\*\*<a href="src/llama_stack_client/types/alpha/inference_rerank_params.py">params</a>) -> <a href="./src/llama_stack_client/types/alpha/inference_rerank_response.py">InferenceRerankResponse</a></code>
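The route is confirmed by the method entry above; the body field names below are assumptions about a typical rerank request, not taken from the SDK types:

```python
import json
import urllib.request

base = "http://localhost:8321"  # assumed local server address
body = json.dumps({
    "model": "example-reranker",  # hypothetical reranker id
    "query": "capital of France",
    "items": ["Paris is the capital.", "Berlin is the capital."],
}).encode()
req = urllib.request.Request(
    f"{base}/v1alpha/inference/rerank",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Constructed only; sending it requires a running server.
```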

# Beta

## Datasets
@@ -558,7 +558,7 @@ Methods:

- <code title="get /v1beta/datasets/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">retrieve</a>(dataset_id) -> <a href="./src/llama_stack_client/types/beta/dataset_retrieve_response.py">DatasetRetrieveResponse</a></code>
- <code title="get /v1beta/datasets">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">list</a>() -> <a href="./src/llama_stack_client/types/beta/dataset_list_response.py">DatasetListResponse</a></code>
- <code title="post /v1beta/datasetio/append-rows/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">appendrows</a>(dataset_id, \*\*<a href="src/llama_stack_client/types/beta/dataset_appendrows_params.py">params</a>) -> None</code>
- <code title="post /v1beta/datasetio/append-rows/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">appendrows</a>(path_dataset_id, \*\*<a href="src/llama_stack_client/types/beta/dataset_appendrows_params.py">params</a>) -> None</code>
- <code title="get /v1beta/datasetio/iterrows/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">iterrows</a>(dataset_id, \*\*<a href="src/llama_stack_client/types/beta/dataset_iterrows_params.py">params</a>) -> <a href="./src/llama_stack_client/types/beta/dataset_iterrows_response.py">DatasetIterrowsResponse</a></code>
- <code title="post /v1beta/datasets">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">register</a>(\*\*<a href="src/llama_stack_client/types/beta/dataset_register_params.py">params</a>) -> <a href="./src/llama_stack_client/types/beta/dataset_register_response.py">DatasetRegisterResponse</a></code>
- <code title="delete /v1beta/datasets/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">unregister</a>(dataset_id) -> None</code>
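As the `appendrows` entry above shows, the dataset id travels in the URL path (now surfaced as `path_dataset_id`). A stdlib sketch; the `rows` field name and row shape are assumptions:

```python
import json
import urllib.request

base = "http://localhost:8321"  # assumed local server address
dataset_id = "my-dataset"  # hypothetical dataset id
req = urllib.request.Request(
    f"{base}/v1beta/datasetio/append-rows/{dataset_id}",  # id in the path
    data=json.dumps({"rows": [{"input": "q", "output": "a"}]}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Constructed only; sending it requires a running server.
```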
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "llama_stack_client"
version = "0.5.0-alpha.1"
version = "0.5.0-alpha.2"
description = "The official Python library for the llama-stack-client API"
dynamic = ["readme"]
license = "MIT"
2 changes: 1 addition & 1 deletion src/llama_stack_client/_version.py
@@ -7,4 +7,4 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

__title__ = "llama_stack_client"
__version__ = "0.5.0-alpha.1" # x-release-please-version
__version__ = "0.5.0-alpha.2" # x-release-please-version
12 changes: 6 additions & 6 deletions src/llama_stack_client/resources/alpha/__init__.py
@@ -56,12 +56,6 @@
)

__all__ = [
"InferenceResource",
"AsyncInferenceResource",
"InferenceResourceWithRawResponse",
"AsyncInferenceResourceWithRawResponse",
"InferenceResourceWithStreamingResponse",
"AsyncInferenceResourceWithStreamingResponse",
"PostTrainingResource",
"AsyncPostTrainingResource",
"PostTrainingResourceWithRawResponse",
@@ -86,6 +80,12 @@
"AsyncAdminResourceWithRawResponse",
"AdminResourceWithStreamingResponse",
"AsyncAdminResourceWithStreamingResponse",
"InferenceResource",
"AsyncInferenceResource",
"InferenceResourceWithRawResponse",
"AsyncInferenceResourceWithRawResponse",
"InferenceResourceWithStreamingResponse",
"AsyncInferenceResourceWithStreamingResponse",
"AlphaResource",
"AsyncAlphaResource",
"AlphaResourceWithRawResponse",
48 changes: 24 additions & 24 deletions src/llama_stack_client/resources/alpha/alpha.py
@@ -55,10 +55,6 @@


class AlphaResource(SyncAPIResource):
@cached_property
def inference(self) -> InferenceResource:
return InferenceResource(self._client)

@cached_property
def post_training(self) -> PostTrainingResource:
return PostTrainingResource(self._client)
@@ -75,6 +71,10 @@ def eval(self) -> EvalResource:
def admin(self) -> AdminResource:
return AdminResource(self._client)

@cached_property
def inference(self) -> InferenceResource:
return InferenceResource(self._client)

@cached_property
def with_raw_response(self) -> AlphaResourceWithRawResponse:
"""
@@ -96,10 +96,6 @@ def with_streaming_response(self) -> AlphaResourceWithStreamingResponse:


class AsyncAlphaResource(AsyncAPIResource):
@cached_property
def inference(self) -> AsyncInferenceResource:
return AsyncInferenceResource(self._client)

@cached_property
def post_training(self) -> AsyncPostTrainingResource:
return AsyncPostTrainingResource(self._client)
@@ -116,6 +112,10 @@ def eval(self) -> AsyncEvalResource:
def admin(self) -> AsyncAdminResource:
return AsyncAdminResource(self._client)

@cached_property
def inference(self) -> AsyncInferenceResource:
return AsyncInferenceResource(self._client)

@cached_property
def with_raw_response(self) -> AsyncAlphaResourceWithRawResponse:
"""
@@ -140,10 +140,6 @@ class AlphaResourceWithRawResponse:
def __init__(self, alpha: AlphaResource) -> None:
self._alpha = alpha

@cached_property
def inference(self) -> InferenceResourceWithRawResponse:
return InferenceResourceWithRawResponse(self._alpha.inference)

@cached_property
def post_training(self) -> PostTrainingResourceWithRawResponse:
return PostTrainingResourceWithRawResponse(self._alpha.post_training)
@@ -160,15 +156,15 @@ def eval(self) -> EvalResourceWithRawResponse:
def admin(self) -> AdminResourceWithRawResponse:
return AdminResourceWithRawResponse(self._alpha.admin)

@cached_property
def inference(self) -> InferenceResourceWithRawResponse:
return InferenceResourceWithRawResponse(self._alpha.inference)


class AsyncAlphaResourceWithRawResponse:
def __init__(self, alpha: AsyncAlphaResource) -> None:
self._alpha = alpha

@cached_property
def inference(self) -> AsyncInferenceResourceWithRawResponse:
return AsyncInferenceResourceWithRawResponse(self._alpha.inference)

@cached_property
def post_training(self) -> AsyncPostTrainingResourceWithRawResponse:
return AsyncPostTrainingResourceWithRawResponse(self._alpha.post_training)
@@ -185,15 +181,15 @@ def eval(self) -> AsyncEvalResourceWithRawResponse:
def admin(self) -> AsyncAdminResourceWithRawResponse:
return AsyncAdminResourceWithRawResponse(self._alpha.admin)

@cached_property
def inference(self) -> AsyncInferenceResourceWithRawResponse:
return AsyncInferenceResourceWithRawResponse(self._alpha.inference)


class AlphaResourceWithStreamingResponse:
def __init__(self, alpha: AlphaResource) -> None:
self._alpha = alpha

@cached_property
def inference(self) -> InferenceResourceWithStreamingResponse:
return InferenceResourceWithStreamingResponse(self._alpha.inference)

@cached_property
def post_training(self) -> PostTrainingResourceWithStreamingResponse:
return PostTrainingResourceWithStreamingResponse(self._alpha.post_training)
@@ -210,15 +206,15 @@ def eval(self) -> EvalResourceWithStreamingResponse:
def admin(self) -> AdminResourceWithStreamingResponse:
return AdminResourceWithStreamingResponse(self._alpha.admin)

@cached_property
def inference(self) -> InferenceResourceWithStreamingResponse:
return InferenceResourceWithStreamingResponse(self._alpha.inference)


class AsyncAlphaResourceWithStreamingResponse:
def __init__(self, alpha: AsyncAlphaResource) -> None:
self._alpha = alpha

@cached_property
def inference(self) -> AsyncInferenceResourceWithStreamingResponse:
return AsyncInferenceResourceWithStreamingResponse(self._alpha.inference)

@cached_property
def post_training(self) -> AsyncPostTrainingResourceWithStreamingResponse:
return AsyncPostTrainingResourceWithStreamingResponse(self._alpha.post_training)
@@ -234,3 +230,7 @@ def eval(self) -> AsyncEvalResourceWithStreamingResponse:
@cached_property
def admin(self) -> AsyncAdminResourceWithStreamingResponse:
return AsyncAdminResourceWithStreamingResponse(self._alpha.admin)

@cached_property
def inference(self) -> AsyncInferenceResourceWithStreamingResponse:
return AsyncInferenceResourceWithStreamingResponse(self._alpha.inference)
48 changes: 40 additions & 8 deletions src/llama_stack_client/resources/alpha/eval/eval.py
@@ -86,7 +86,13 @@ def evaluate_rows(
Evaluate a list of rows on a benchmark.

Args:
benchmark_config: A benchmark configuration for evaluation.
benchmark_id: The ID of the benchmark

benchmark_config: The configuration for the benchmark

input_rows: The rows to evaluate

scoring_functions: The scoring functions to use for the evaluation

extra_headers: Send extra headers

@@ -132,7 +138,13 @@ def evaluate_rows_alpha(
Evaluate a list of rows on a benchmark.

Args:
benchmark_config: A benchmark configuration for evaluation.
benchmark_id: The ID of the benchmark

benchmark_config: The configuration for the benchmark

input_rows: The rows to evaluate

scoring_functions: The scoring functions to use for the evaluation

extra_headers: Send extra headers

@@ -176,7 +188,9 @@ def run_eval(
Run an evaluation on a benchmark.

Args:
benchmark_config: A benchmark configuration for evaluation.
benchmark_id: The ID of the benchmark

benchmark_config: The configuration for the benchmark

extra_headers: Send extra headers

@@ -213,7 +227,9 @@ def run_eval_alpha(
Run an evaluation on a benchmark.

Args:
benchmark_config: A benchmark configuration for evaluation.
benchmark_id: The ID of the benchmark

benchmark_config: The configuration for the benchmark

extra_headers: Send extra headers

@@ -279,7 +295,13 @@ async def evaluate_rows(
Evaluate a list of rows on a benchmark.

Args:
benchmark_config: A benchmark configuration for evaluation.
benchmark_id: The ID of the benchmark

benchmark_config: The configuration for the benchmark

input_rows: The rows to evaluate

scoring_functions: The scoring functions to use for the evaluation

extra_headers: Send extra headers

@@ -325,7 +347,13 @@ async def evaluate_rows_alpha(
Evaluate a list of rows on a benchmark.

Args:
benchmark_config: A benchmark configuration for evaluation.
benchmark_id: The ID of the benchmark

benchmark_config: The configuration for the benchmark

input_rows: The rows to evaluate

scoring_functions: The scoring functions to use for the evaluation

extra_headers: Send extra headers

@@ -369,7 +397,9 @@ async def run_eval(
Run an evaluation on a benchmark.

Args:
benchmark_config: A benchmark configuration for evaluation.
benchmark_id: The ID of the benchmark

benchmark_config: The configuration for the benchmark

extra_headers: Send extra headers

@@ -408,7 +438,9 @@ async def run_eval_alpha(
Run an evaluation on a benchmark.

Args:
benchmark_config: A benchmark configuration for evaluation.
benchmark_id: The ID of the benchmark

benchmark_config: The configuration for the benchmark

extra_headers: Send extra headers

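The expanded docstrings in this file name four arguments for row evaluation: `benchmark_id`, `benchmark_config`, `input_rows`, and `scoring_functions`. A sketch of a call body assembled from those names; the concrete shape of `benchmark_config`, the ids, and the scoring-function name are all assumptions:

```python
# Hypothetical evaluate_rows arguments; only the four key names come from
# the docstrings above.
params = {
    "benchmark_id": "example-benchmark",        # hypothetical benchmark id
    "benchmark_config": {"type": "benchmark"},  # assumed minimal config
    "input_rows": [{"question": "2+2?", "answer": "4"}],
    "scoring_functions": ["basic::equality"],   # hypothetical scoring fn
}
# With the SDK one would pass these as keyword arguments (untested sketch):
# client.alpha.eval.evaluate_rows(**params)
```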