Fix - expose options and quality_preset props #86


Merged: 1 commit, Jun 3, 2025
7 changes: 6 additions & 1 deletion CHANGELOG.md
@@ -7,6 +7,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

## [1.0.18] 2025-06-03

- Expose `options` and `quality_preset` properties for `Validator.validate()`

## [1.0.17] 2025-06-03

- Refactor `validate()` to use `/validate` endpoint from Codex backend and leverage this default logic
@@ -86,7 +90,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

- Initial release of the `cleanlab-codex` client library.

[Unreleased]: https://github.com/cleanlab/cleanlab-codex/compare/v1.0.17...HEAD
[Unreleased]: https://github.com/cleanlab/cleanlab-codex/compare/v1.0.18...HEAD
[1.0.18]: https://github.com/cleanlab/cleanlab-codex/compare/v1.0.17...v1.0.18
[1.0.17]: https://github.com/cleanlab/cleanlab-codex/compare/v1.0.16...v1.0.17
[1.0.16]: https://github.com/cleanlab/cleanlab-codex/compare/v1.0.15...v1.0.16
[1.0.15]: https://github.com/cleanlab/cleanlab-codex/compare/v1.0.14...v1.0.15
2 changes: 1 addition & 1 deletion src/cleanlab_codex/__about__.py
@@ -1,2 +1,2 @@
# SPDX-License-Identifier: MIT
__version__ = "1.0.17"
__version__ = "1.0.18"
12 changes: 11 additions & 1 deletion src/cleanlab_codex/validator.py
@@ -4,13 +4,17 @@

from __future__ import annotations

from typing import Any, Callable, Optional
from typing import TYPE_CHECKING as _TYPE_CHECKING
from typing import Any, Callable, Literal, Optional

from cleanlab_tlm import TrustworthyRAG

from cleanlab_codex.internal.validator import validate_thresholds
from cleanlab_codex.project import Project

if _TYPE_CHECKING:
from codex.types.project_validate_params import Options as ProjectValidateOptions


class Validator:
def __init__(
@@ -54,6 +58,8 @@ def validate(
prompt: Optional[str] = None,
form_prompt: Optional[Callable[[str, str], str]] = None,
metadata: Optional[dict[str, Any]] = None,
options: Optional[ProjectValidateOptions] = None,
quality_preset: Literal["best", "high", "medium", "low", "base"] = "medium",
) -> dict[str, Any]:
"""Evaluate whether the AI-generated response is bad, and if so, request an alternate expert answer.
If no expert answer is available, this query is still logged for SMEs to answer.
@@ -65,6 +71,8 @@
prompt (str, optional): Optional prompt representing the actual inputs (combining query, context, and system instructions into one string) to the LLM that generated the response.
form_prompt (Callable[[str, str], str], optional): Optional function to format the prompt based on query and context. Cannot be provided together with prompt; provide one or the other. This function should take query and context as parameters and return a formatted prompt string. If not provided, a default prompt formatter will be used. To include a system prompt or any other special instructions for your LLM, incorporate them directly in your custom form_prompt() function definition.
metadata (dict, optional): Additional custom metadata to associate with the query logged in the Codex Project.
options (ProjectValidateOptions, optional): Typed dict of advanced configuration options for the Trustworthy Language Model.
quality_preset (Literal["best", "high", "medium", "low", "base"], optional): The quality preset to use for the TLM or Trustworthy RAG API.

Returns:
dict[str, Any]: A dictionary containing:
@@ -91,6 +99,8 @@
response=response,
custom_eval_thresholds=self._custom_eval_thresholds,
custom_metadata=metadata,
options=options,
quality_preset=quality_preset,
)

formatted_eval_scores = {
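The diff above forwards the new `options` and `quality_preset` keyword arguments from `Validator.validate()` straight through to the project's `/validate` call. A minimal sketch of that pass-through, using a stand-in project object (the runtime guard and the `FakeProject` class are assumptions for illustration, not part of this PR):

```python
from typing import Any, Literal, Optional, get_args

# Mirrors the Literal annotation added in the diff.
QualityPreset = Literal["best", "high", "medium", "low", "base"]


class FakeProject:
    """Stand-in for cleanlab_codex.Project, used only for illustration."""

    def validate(self, **kwargs: Any) -> dict[str, Any]:
        # Echo the forwarded keyword arguments so the pass-through is visible.
        return dict(kwargs)


def validate(
    project: FakeProject,
    query: str,
    context: str,
    response: str,
    options: Optional[dict[str, Any]] = None,
    quality_preset: QualityPreset = "medium",
) -> dict[str, Any]:
    # The Literal annotation is enforced only by static type checkers;
    # this runtime check is an extra assumption, not taken from the PR.
    if quality_preset not in get_args(QualityPreset):
        raise ValueError(f"invalid quality_preset: {quality_preset!r}")
    # New arguments are forwarded unchanged, as in the real change.
    return project.validate(
        query=query,
        context=context,
        response=response,
        options=options,
        quality_preset=quality_preset,
    )


result = validate(
    FakeProject(),
    query="What is the return policy?",
    context="Returns are accepted within 30 days.",
    response="You can return items within 30 days.",
    options={"model": "gpt-4o"},  # keys depend on ProjectValidateOptions
    quality_preset="high",
)
print(result["quality_preset"])  # high
```

In the real library, `options` is the `ProjectValidateOptions` typed dict imported under `TYPE_CHECKING` in the diff; the keys shown here are placeholders.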