Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature).

For each token selection step, the top-k tokens with the highest probabilities are sampled. Then tokens are further filtered based on topP, with the final token selected using temperature sampling.

Specify a lower value for less random responses and a higher value for more random responses.

Default 40. Possible values [1, 40].
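To make the pipeline concrete, here is a minimal sketch of top-k filtering, then top-p filtering, then temperature sampling, in Python with NumPy. The function name, default values, and exact filtering details are illustrative assumptions, not the service's implementation.

```python
import numpy as np

def sample_next_token(logits, top_k=40, top_p=0.95, temperature=0.8):
    """Illustrative sketch: top-k filter, then top-p filter, then temperature sampling."""
    # Convert logits to probabilities (numerically stable softmax).
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Step 1 (top-k): keep the top_k most probable tokens.
    top_ids = np.argsort(probs)[::-1][:top_k]

    # Step 2 (top-p): of those, keep the smallest prefix whose
    # renormalized cumulative probability reaches top_p.
    cumulative = np.cumsum(probs[top_ids] / probs[top_ids].sum())
    top_ids = top_ids[: np.searchsorted(cumulative, top_p) + 1]

    # Step 3 (temperature): sample among the survivors using
    # temperature-scaled logits.
    scaled = np.exp((logits[top_ids] - logits[top_ids].max()) / temperature)
    scaled /= scaled.sum()
    return int(np.random.choice(top_ids, p=scaled))

# With top_k=1 this reduces to greedy decoding: the single most
# probable token is always chosen.
logits = np.array([2.0, 1.0, 0.5, 0.1])
assert sample_next_token(logits, top_k=1) == 0
```

Note that because probability ordering is monotone in the logits, applying temperature before or after the top-k filter yields the same candidate set; temperature only changes how sharply the final draw concentrates on the most probable survivors.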
@@ -183,6 +199,10 @@ class PaLM2TextEmbeddingGenerator(base.Predictor):
"""PaLM2 text embedding generator LLM model.

Args:
    model_name (str, default "textembedding-gecko"):
        The model for text embedding. "textembedding-gecko" returns model
        embeddings for text inputs. "textembedding-gecko-multilingual"
        returns model embeddings for text inputs and supports over 100
        languages. Defaults to "textembedding-gecko".
    session (bigframes.Session or None):
        BQ session to create the model. If None, use the global default session.
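As a usage illustration of these arguments, here is a hedged sketch. The `predict` call and the `content` input column follow the general pattern of bigframes.ml.llm predictors and are assumptions here, not taken from this diff.

```python
import bigframes.pandas as bpd
from bigframes.ml.llm import PaLM2TextEmbeddingGenerator

# session=None (the default) falls back to the global default session.
model = PaLM2TextEmbeddingGenerator(
    model_name="textembedding-gecko-multilingual",
)

# Hypothetical input frame; a "content" column is assumed.
df = bpd.DataFrame({"content": ["hello world", "bonjour le monde"]})
embeddings = model.predict(df)
```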