Commit f782f4f (parent: 1011fb8)

fix: address the comments

File tree: 1 file changed (+1, -3 lines)

bigframes/ml/llm.py — 1 addition, 3 deletions

```diff
--- a/bigframes/ml/llm.py
+++ b/bigframes/ml/llm.py
@@ -26,7 +26,6 @@
 
 _REMOTE_TEXT_GENERATOR_MODEL_CODE = "CLOUD_AI_LARGE_LANGUAGE_MODEL_V1"
 _REMOTE_TEXT_GENERATOR_32K_MODEL_CODE = "text-bison-32k"
-_REMOTE_TEXT_GENERATOR_32K_MODEL_CODE = "text-bison-32k"
 _TEXT_GENERATE_RESULT_COLUMN = "ml_generate_text_llm_result"
 
 _REMOTE_EMBEDDING_GENERATOR_MODEL_CODE = "CLOUD_AI_TEXT_EMBEDDING_MODEL_V1"
@@ -52,7 +51,7 @@ class PaLM2TextGenerator(base.Predictor):
 
     def __init__(
         self,
-        model_name: Literal["text-bison", "text-bison-32k"] = "text-bison-32k",
+        model_name: Literal["text-bison", "text-bison-32k"] = "text-bison",
         session: Optional[bigframes.Session] = None,
         connection_name: Optional[str] = None,
     ):
@@ -131,7 +130,6 @@ def predict(
         top_k (int, default 40):
             Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens
             in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature).
-            in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature).
             For each token selection step, the top K tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP with the final token selected using temperature sampling.
             Specify a lower value for less random responses and a higher value for more random responses.
             Default 40. Possible values [1, 40].
```
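The behavioral change in this commit is that `PaLM2TextGenerator()` now defaults to the base `text-bison` model rather than the 32k-context variant. A minimal sketch of the signature after the change (a hypothetical stand-in, not the real bigframes class, which also wires up a BigQuery session and remote-model connection):

```python
from typing import Literal, Optional


# Hypothetical stand-in mirroring only the __init__ signature from the diff;
# the real bigframes.ml.llm.PaLM2TextGenerator also creates a BQML remote model.
class PaLM2TextGenerator:
    def __init__(
        self,
        model_name: Literal["text-bison", "text-bison-32k"] = "text-bison",
        session: Optional[object] = None,
        connection_name: Optional[str] = None,
    ):
        self.model_name = model_name
        self.session = session
        self.connection_name = connection_name


# Callers relying on the old default now get "text-bison";
# the 32k-context model must be requested explicitly.
default_model = PaLM2TextGenerator()
model_32k = PaLM2TextGenerator(model_name="text-bison-32k")
print(default_model.model_name)  # text-bison
```

Defaulting to the smaller model is the conservative choice: users who need the larger context window opt in by name instead of paying for it implicitly.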

0 commit comments

Comments
 (0)
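The `top_k` docstring in the last hunk describes standard top-k filtering followed by temperature sampling. A rough pure-Python illustration of that selection step (an assumption-level sketch, not bigframes or Vertex AI code):

```python
import math
import random


def top_k_sample(logits, k, temperature=1.0, rng=random):
    """Pick a token index: keep the k highest-logit tokens, then sample
    from a temperature-scaled softmax over the survivors."""
    # Indices of the k most probable tokens (top-k filtering).
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Temperature-scaled softmax over the surviving tokens.
    scaled = [logits[i] / temperature for i in top]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(top, weights=probs, k=1)[0]


# k=1 is greedy decoding: the single most probable token always wins.
print(top_k_sample([0.1, 2.0, 0.5], k=1))  # 1
```

With `k=3` and a nonzero temperature, any of the three most probable tokens can be selected, which matches the docstring's description of how higher `top_k` produces more random responses.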