In the langkit documentation I found this paragraph:
The similarity is done by calculating the cosine similarity between the prompt's embedding representation and the examples' embedding representation. Langkit currently uses sentence-transformers' all-MiniLM-L6-v2 model to calculate the embeddings. The target prompt is embedded at runtime, while the examples are pre-embedded and stored in a vector store using the FAISS library.
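To make the quoted mechanism concrete, here is a minimal sketch of the cosine-similarity step with toy numpy vectors. This is an illustration of the computation the docs describe, not langkit's actual code; in practice the vectors would come from something like `SentenceTransformer("all-MiniLM-L6-v2").encode(...)`.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for the prompt embedding (computed at runtime) and one
# pre-embedded example retrieved from the FAISS vector store.
prompt_vec = np.array([1.0, 0.0, 1.0])
example_vec = np.array([1.0, 0.0, 0.0])

score = cosine_similarity(prompt_vec, example_vec)
print(score)
```

Since sentence-transformers loads models by their Hugging Face id, swapping the model should mostly be a matter of passing a different model name wherever `all-MiniLM-L6-v2` is configured, as long as the replacement is sentence-transformers compatible.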
So how can I replace the all-MiniLM-L6-v2 model with a different custom Hugging Face model?
How do I integrate that via Python code?