Embedding model and Engine?? #62
There is no dedicated embedding model as such. For each input sentence you have to tokenize using the tokenizer provided by Mistral and then pass those tokens to the model. Check out the snippet below, taken from the Mistral example:

```python
with torch.no_grad():
    # concatenate sentence embeddings
    X = np.concatenate([x[None] for x in featurized_x], axis=0)  # (n_points, model_dim)
```
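As a runnable illustration of the concatenation step above, here is a minimal sketch. The dummy vectors are stand-ins of my own invention; in the real workflow each one would come from running Mistral's tokenizer and model on a sentence:

```python
import numpy as np

# Dummy stand-ins for per-sentence feature vectors; in practice each one
# would be produced by tokenizing a sentence with Mistral's tokenizer and
# pooling the model's hidden states under torch.no_grad().
model_dim = 8
featurized_x = [np.random.rand(model_dim) for _ in range(3)]

# x[None] adds a leading axis so each vector becomes (1, model_dim);
# concatenating along axis 0 stacks them into (n_points, model_dim).
X = np.concatenate([x[None] for x in featurized_x], axis=0)
print(X.shape)  # (3, 8)
```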
Is there a working example that could help me understand the code better? I am getting some lines back from Mistral as a prompt, and I want to embed them.
Check the tutorial example provided by Mistral in the repository folder. The code I gave earlier comes from there.
Thanks, but I didn't find anything useful. I was just experimenting with prompts and then passing the embeddings on to another function.
Hi, have you solved the problem?
Hey, I tried, but I did not get a good enough response from the model.
Hey guys,
I am migrating from GPT to Mistral, and the problem I am facing is that I have not yet been able to find an embedding model and engine for Mistral.
I am using the service from DeepInfra.
Here's the code snippet I wrote for GPT:
All I want to know is: which embedding model and engine should be used?
Thank you 🙂
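Since Mistral's release does not ship a dedicated embedding endpoint like OpenAI's, one common workaround is to mean-pool the model's last hidden states over non-padding tokens. The sketch below uses numpy dummy arrays in place of real tokenizer/model outputs, and the variable names are illustrative assumptions, not Mistral API identifiers:

```python
import numpy as np

# Dummy arrays standing in for model outputs: in a real transformers
# pipeline, last_hidden_state would come from the model called with
# output_hidden_states=True, and attention_mask from the tokenizer.
batch, seq_len, hidden = 2, 5, 8
last_hidden_state = np.random.rand(batch, seq_len, hidden)
attention_mask = np.array([[1, 1, 1, 0, 0],
                           [1, 1, 1, 1, 1]])

# Mean-pool hidden states over real (non-padding) positions only.
mask = attention_mask[..., None].astype(float)     # (batch, seq_len, 1)
summed = (last_hidden_state * mask).sum(axis=1)    # (batch, hidden)
counts = np.clip(mask.sum(axis=1), 1, None)        # (batch, 1)
sentence_embeddings = summed / counts              # (batch, hidden)
print(sentence_embeddings.shape)  # (2, 8)
```

Mean pooling is only one choice; taking the hidden state of the final token is another common option for decoder-only models.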