refactor: refactor prompt builder #30
Conversation
Trivy scanning results.
Code Coverage Summary
Diff against main
Results for commit: 4a9cdd1
Minimum allowed coverage is
♻️ This comment has been updated with latest results
```diff
@@ -13,19 +13,19 @@ class PromptBuilder:
     def __init__(self, model_name: Optional[str] = None) -> None:
         """
         Args:
-            model_name: Name of the model to load a tokenizer for.
-                Tokenizer is used to append special tokens to the prompt. If empty, no tokens will be added.
+            model_name: name of the tokenizer model to use. If provided, the tokenizer will convert the prompt to the
```
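For context, a minimal sketch of the pre-refactor behavior described in the removed docstring (a tokenizer used to append special tokens to the prompt). The `build` method, its signature, and the use of `transformers.AutoTokenizer` are assumptions for illustration, not the project's actual code:

```python
from typing import Optional


class PromptBuilder:
    def __init__(self, model_name: Optional[str] = None) -> None:
        # If no model name is given, prompts pass through unmodified.
        self._tokenizer = None
        if model_name:
            # Assumption: tokenizers are loaded via Hugging Face transformers.
            from transformers import AutoTokenizer
            self._tokenizer = AutoTokenizer.from_pretrained(model_name)

    def build(self, prompt: str) -> str:
        # Hypothetical method: round-trip through the tokenizer so it inserts
        # its special tokens (e.g. BOS/EOS) around the prompt.
        if self._tokenizer is None:
            return prompt
        ids = self._tokenizer.encode(prompt, add_special_tokens=True)
        return self._tokenizer.decode(ids)
```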
nit (non-blocking): maybe the name hg_model_name would make it more explicit that this is only for Hugging Face models (and that one shouldn't put the names of other models here)? Or maybe we should even have a separate HuggingFacePromptBuilder (probably inheriting from PromptBuilder) so the general PromptBuilder doesn't contain HG-specific stuff?
I like the idea of a separate class. We don't use Hugging Face stuff yet, but we could keep it for future integrations.
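For illustration, a rough sketch of the suggested split. Everything beyond the class names `PromptBuilder` and `HuggingFacePromptBuilder` (the `build` signature, the chat-template call) is an assumption, not the project's actual API:

```python
class PromptBuilder:
    """Model-agnostic base: no Hugging Face dependency."""

    def build(self, system_prompt: str, user_prompt: str) -> str:
        # Hypothetical default: plain concatenation.
        return f"{system_prompt}\n{user_prompt}"


class HuggingFacePromptBuilder(PromptBuilder):
    """Applies a Hugging Face chat template, as suggested in the review."""

    def __init__(self, hg_model_name: str) -> None:
        # Assumption: the tokenizer comes from transformers.AutoTokenizer.
        from transformers import AutoTokenizer
        self._tokenizer = AutoTokenizer.from_pretrained(hg_model_name)

    def build(self, system_prompt: str, user_prompt: str) -> str:
        messages = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ]
        return self._tokenizer.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )
```

A split like this would let LLMClient accept any PromptBuilder while keeping the Hugging Face-specific loading isolated in the subclass.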
This PR removes the hard-coded PromptBuilder in LLMClient and fixes issues with the prompt builder for new LLM model integrations.