
Commit 80f0ac6

olgamurraft authored and eyurtsev committed
ollama[patch]: Update API Reference for ollama embeddings (langchain-ai#25315)
Update API reference for OllamaEmbeddings Issue: langchain-ai#24856
1 parent 9836480 commit 80f0ac6

File tree

1 file changed (+100, -5 lines)

libs/partners/ollama/langchain_ollama/embeddings.py

Lines changed: 100 additions & 5 deletions
@@ -9,16 +9,111 @@
 
 
 class OllamaEmbeddings(BaseModel, Embeddings):
-    """OllamaEmbeddings embedding model.
+    """Ollama embedding model integration.
 
-    Example:
+    Set up a local Ollama instance:
+        Install the Ollama package and set up a local Ollama instance
+        using the instructions here: https://github.com/ollama/ollama .
+
+        You will need to choose a model to serve.
+
+        You can view a list of available models via the model library (https://ollama.com/library).
+
+        To fetch a model from the Ollama model library use ``ollama pull <name-of-model>``.
+
+        For example, to pull the llama3 model:
+
+        .. code-block:: bash
+
+            ollama pull llama3
+
+        This will download the default tagged version of the model.
+        Typically, the default points to the latest, smallest-sized parameter model.
+
+        * On Mac, the models will be downloaded to ~/.ollama/models
+        * On Linux (or WSL), the models will be stored at /usr/share/ollama/.ollama/models
+
+        You can specify the exact version of the model of interest
+        as such ``ollama pull vicuna:13b-v1.5-16k-q4_0``.
+
+        To view pulled models:
+
+        .. code-block:: bash
+
+            ollama list
+
+        To start serving:
+
+        .. code-block:: bash
+
+            ollama serve
+
+        View the Ollama documentation for more commands.
+
+        .. code-block:: bash
+
+            ollama help
+
+    Install the langchain-ollama integration package:
+        .. code-block:: bash
+
+            pip install -U langchain_ollama
+
+    Key init args — completion params:
+        model: str
+            Name of Ollama model to use.
+        base_url: Optional[str]
+            Base url the model is hosted under.
+
+    See full list of supported init args and their descriptions in the params section.
+
+    Instantiate:
         .. code-block:: python
 
             from langchain_ollama import OllamaEmbeddings
 
-            embedder = OllamaEmbeddings(model="llama3")
-            embedder.embed_query("what is the place that jonathan worked at?")
-    """
+            embed = OllamaEmbeddings(
+                model="llama3"
+            )
+
+    Embed single text:
+        .. code-block:: python
+
+            input_text = "The meaning of life is 42"
+            vector = embed.embed_query(input_text)
+            print(vector[:3])
+
+        .. code-block:: python
+
+            [-0.024603435769677162, -0.007543657906353474, 0.0039630369283258915]
+
+    Embed multiple texts:
+        .. code-block:: python
+
+            input_texts = ["Document 1...", "Document 2..."]
+            vectors = embed.embed_documents(input_texts)
+            print(len(vectors))
+            # The first 3 coordinates for the first vector
+            print(vectors[0][:3])
+
+        .. code-block:: python
+
+            2
+            [-0.024603435769677162, -0.007543657906353474, 0.0039630369283258915]
+
+    Async:
+        .. code-block:: python
+
+            vector = await embed.aembed_query(input_text)
+            print(vector[:3])
+
+            # multiple:
+            # await embed.aembed_documents(input_texts)
+
+        .. code-block:: python
+
+            [-0.009100092574954033, 0.005071679595857859, -0.0029193938244134188]
+    """  # noqa: E501
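Since `embed_query` and `embed_documents` return plain Python lists of floats, downstream similarity math needs nothing beyond the standard library. Below is a minimal sketch of ranking documents against a query by cosine similarity; the vectors are made-up stand-ins for real Ollama output, and `cosine_similarity` is a local helper, not part of langchain_ollama:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Made-up vectors standing in for embed.embed_query / embed.embed_documents output
query_vec = [0.1, 0.2, 0.3]
doc_vecs = [[0.1, 0.2, 0.31], [-0.3, 0.1, -0.2]]

# Score every document against the query and pick the best match
scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
best = max(range(len(doc_vecs)), key=lambda i: scores[i])
print(best)  # index of the most similar document
```

In practice the same helper works on the full-length vectors returned by the model; only the dimensionality changes.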
 
     model: str
     """Model name to use."""
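The docstring's `Async:` section awaits one query at a time; independent queries can also be issued concurrently with `asyncio.gather`. A sketch under stated assumptions: `FakeEmbedder` is a hypothetical stub with the same `aembed_query` signature, used here only so the example runs without an Ollama server:

```python
import asyncio


class FakeEmbedder:
    """Hypothetical stand-in for OllamaEmbeddings; no server required."""

    async def aembed_query(self, text: str) -> list[float]:
        await asyncio.sleep(0)  # simulate the async round trip to the server
        return [float(len(text)), 0.0, 0.0]  # toy 3-dimensional "embedding"


async def embed_all(embed: FakeEmbedder, texts: list[str]) -> list[list[float]]:
    # Issue all queries concurrently instead of awaiting them one by one
    return await asyncio.gather(*(embed.aembed_query(t) for t in texts))


input_texts = ["Document 1...", "Document 2..."]
vectors = asyncio.run(embed_all(FakeEmbedder(), input_texts))
print(len(vectors))  # 2
```

Swapping `FakeEmbedder()` for a real `OllamaEmbeddings(model=...)` instance keeps the same structure, since both expose an awaitable `aembed_query`.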
