
Commit 91897a6

smartalecH (Alec Hammond) and Alec Hammond authored

Add OllamaEmbeddings to python LangChain example (ollama#994)

* Add OllamaEmbeddings to python LangChain example
* typo

Co-authored-by: Alec Hammond <alechammond@fb.com>

1 parent 96122b7 · commit 91897a6

File tree: 1 file changed, +4 -3 lines


docs/tutorials/langchainpy.md

````diff
@@ -42,12 +42,13 @@ text_splitter=RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
 all_splits = text_splitter.split_documents(data)
 ```

-It's split up, but we have to find the relevant splits and then submit those to the model. We can do this by creating embeddings and storing them in a vector database. For now, we don't have embeddings built in to Ollama, though we will be adding that soon, so for now, we can use the GPT4All library for that. We will use ChromaDB in this example for a vector database. `pip install GPT4All chromadb`
+It's split up, but we have to find the relevant splits and then submit those to the model. We can do this by creating embeddings and storing them in a vector database. We can use Ollama directly to instantiate an embedding model. We will use ChromaDB in this example for a vector database. `pip install GPT4All chromadb`

 ```python
-from langchain.embeddings import GPT4AllEmbeddings
+from langchain.embeddings import OllamaEmbeddings
 from langchain.vectorstores import Chroma
-vectorstore = Chroma.from_documents(documents=all_splits, embedding=GPT4AllEmbeddings())
+oembed = OllamaEmbeddings(base_url="http://localhost:11434", model="llama2")
+vectorstore = Chroma.from_documents(documents=all_splits, embedding=oembed)
 ```

 Now let's ask a question from the document. **Who was Neleus, and who is in his family?** Neleus is a character in the Odyssey, and the answer can be found in our text.
````
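The vector-store step being patched here works by embedding each split and ranking splits by similarity to the query's embedding. As a minimal, dependency-free sketch of that retrieval idea (a hypothetical `toy_embed` keyword-count function stands in for OllamaEmbeddings, and an in-memory list stands in for Chroma; this is not the real API):

```python
import math

# Hypothetical stand-in for an embedding model: maps text to a small vector
# by counting a few keywords. A real setup would call OllamaEmbeddings instead.
def toy_embed(text):
    words = [w.strip(".,?!") for w in text.lower().split()]
    vocab = ["neleus", "family", "odyssey", "ship"]
    return [words.count(w) for w in vocab]

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 if either is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "Vector store": embed each split once, keep (text, vector) pairs.
splits = [
    "Neleus and his family appear in the Odyssey.",
    "The ship sailed at dawn.",
]
store = [(s, toy_embed(s)) for s in splits]

def retrieve(query, k=1):
    # Embed the query, then return the k most similar splits.
    qv = toy_embed(query)
    ranked = sorted(store, key=lambda sv: cosine(qv, sv[1]), reverse=True)
    return [s for s, _ in ranked[:k]]

print(retrieve("Who was Neleus, and who is in his family?"))
```

In the tutorial, `Chroma.from_documents` plays the role of `store` plus `retrieve`, with the embedding model swapped from GPT4All to Ollama by this commit.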
