Add getter functions for TLM defaults #59
Merged

Changes from all commits (17 commits), all authored by AshishSardana:
- `4e24abc` Init
- `65278e4` Fix lint
- `e1b7162` Add tests for default context limit
- `ee94b0e` Bump version
- `53382fb` Update CHANGELOG
- `cc91018` Linting
- `8a528bd` Add test for `get_default_quality_preset` and update word_to_token
- `d456a60` Update existing tests to use WORD_THAT_EQUALS_ONE_TOKEN instead of "a"
- `c132348` Add test for `get_default_max_tokens()`, introduce constant `_DEFAULT…
- `0b9c607` Add type checks
- `5d14592` Fix type check error
- `8521516` Assert return type of single prompt to TLM
- `9045cf5` Use "hello " instead of "no " as some tests depend on characters base…
- `71c802e` Revert to assertion string used throughout `test_validation`
- `43f6628` Catch error in fetching tokenizer of newer GPT (4.1-mini, etc.) models
- `23605da` Specify error to catch
- `3608f9c` We need 4 character = 1 token to satisfy both (character counter, act…
Files changed:

Version module (package version bump):

```diff
@@ -1,2 +1,2 @@
 # SPDX-License-Identifier: MIT
-__version__ = "1.1.1"
+__version__ = "1.1.2"
```
New file `cleanlab_tlm/utils/config.py` (46 lines added; path taken from the imports in the test module below):

```python
from cleanlab_tlm.internal.constants import (
    _DEFAULT_TLM_MAX_TOKENS,
    _DEFAULT_TLM_QUALITY_PRESET,
    _TLM_DEFAULT_CONTEXT_LIMIT,
    _TLM_DEFAULT_MODEL,
)


def get_default_model() -> str:
    """
    Get the default model name for TLM.

    Returns:
        str: The default model name for TLM.
    """
    return _TLM_DEFAULT_MODEL


def get_default_quality_preset() -> str:
    """
    Get the default quality preset for TLM.

    Returns:
        str: The default quality preset for TLM.
    """
    return _DEFAULT_TLM_QUALITY_PRESET


def get_default_context_limit() -> int:
    """
    Get the default context limit for TLM.

    Returns:
        int: The default context limit for TLM.
    """
    return _TLM_DEFAULT_CONTEXT_LIMIT


def get_default_max_tokens() -> int:
    """
    Get the default maximum output tokens allowed.

    Returns:
        int: The default maximum output tokens.
    """
    return _DEFAULT_TLM_MAX_TOKENS
```
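For context, a quick usage sketch of the new getters. They are pure reads of internal constants, so no API key or `TLM` instance is needed; the printed values depend on the installed library version:

```python
from cleanlab_tlm.utils.config import (
    get_default_context_limit,
    get_default_max_tokens,
    get_default_model,
    get_default_quality_preset,
)

# Each getter exposes one internal default as part of the public API.
print(get_default_model())           # default model used by TLM()
print(get_default_quality_preset())  # default preset, overridable via TLM(quality_preset=...)
print(get_default_context_limit())   # prompt-size limit that tlm.prompt() enforces
print(get_default_max_tokens())      # upper bound on tokens in a returned response
```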
New test module (58 lines added):

```python
import pytest
import tiktoken

from cleanlab_tlm.errors import TlmBadRequestError
from cleanlab_tlm.tlm import TLM
from cleanlab_tlm.utils.config import (
    get_default_context_limit,
    get_default_max_tokens,
    get_default_model,
    get_default_quality_preset,
)
from tests.constants import WORD_THAT_EQUALS_ONE_TOKEN

tlm_with_default_setting = TLM()


def test_get_default_model(tlm: TLM) -> None:
    assert tlm.get_model_name() == get_default_model()


def test_get_default_quality_preset(tlm: TLM) -> None:
    assert get_default_quality_preset() == tlm._quality_preset


def test_prompt_too_long_exception_single_prompt(tlm: TLM) -> None:
    """Tests that bad request error is raised when prompt is too long when calling tlm.prompt with a single prompt."""
    with pytest.raises(TlmBadRequestError) as exc_info:
        tlm.prompt(WORD_THAT_EQUALS_ONE_TOKEN * (get_default_context_limit() + 1))

    assert exc_info.value.message.startswith("Prompt length exceeds")
    assert exc_info.value.retryable is False


def test_prompt_within_context_limit_returns_response(tlm: TLM) -> None:
    """Tests that no error is raised when prompt length is within limit."""
    response = tlm.prompt(WORD_THAT_EQUALS_ONE_TOKEN * (get_default_context_limit() - 1000))

    assert isinstance(response, dict)
    assert "response" in response
    assert isinstance(response["response"], str)


def test_response_within_max_tokens() -> None:
    """Tests that response is within max tokens limit."""
    tlm_base = TLM(quality_preset="base")
    prompt = "write a 100 page book about computer science. make sure it is extremely long and comprehensive."

    result = tlm_base.prompt(prompt)
    assert isinstance(result, dict)
    response = result["response"]
    assert isinstance(response, str)

    try:
        enc = tiktoken.encoding_for_model(get_default_model())
    except KeyError:
        enc = tiktoken.encoding_for_model("gpt-4o")
    tokens_in_response = len(enc.encode(response))
    assert tokens_in_response <= get_default_max_tokens()
```
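A note on the last few commits: `tiktoken.encoding_for_model()` raises `KeyError` for model names missing from its mapping, which is why commits `43f6628` and `23605da` wrap the lookup in a `try`/`except KeyError` and fall back to the `gpt-4o` encoding; an approximate encoding is fine here because the test only needs a rough upper bound on the token count. Likewise, commits `9045cf5` and `3608f9c` tune `WORD_THAT_EQUALS_ONE_TOKEN`: per the commit messages, the constant should behave as roughly 4 characters = 1 token so that both the character-based counter and the actual tokenizer agree. If the tokenizer fallback is ever needed outside this test, it could be factored into a small helper; a sketch, with the hypothetical name `resolve_encoding`:

```python
import tiktoken


def resolve_encoding(model_name: str) -> tiktoken.Encoding:
    """Return a tiktoken encoding, falling back when the model is too new.

    tiktoken.encoding_for_model() raises KeyError for model names it does
    not recognize yet (e.g. newly released GPT models such as gpt-4.1-mini).
    """
    try:
        return tiktoken.encoding_for_model(model_name)
    except KeyError:
        # Fallback mirrors the test above: the gpt-4o encoding is close
        # enough for rough token counts.
        return tiktoken.encoding_for_model("gpt-4o")
```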