
docs[patch],langchain[patch]: Clean up legacy retrieval QA chain code in docs, fix bad type #4384

Merged · 1 commit · Feb 13, 2024
@@ -35,4 +35,4 @@ npm install @langchain/openai

<CodeBlock language="typescript">{Example}</CodeBlock>

- In this example, the `SearchApiLoader` is used to load web search results, which are then stored in memory using `MemoryVectorStore`. The `RetrievalQAChain` is then used to retrieve the most relevant documents from the memory and answer the question based on these documents. This demonstrates how the `SearchApiLoader` can streamline the process of loading and processing web search results.
+ In this example, the `SearchApiLoader` is used to load web search results, which are then stored in memory using `MemoryVectorStore`. A retrieval chain is then used to retrieve the most relevant documents from the memory and answer the question based on these documents. This demonstrates how the `SearchApiLoader` can streamline the process of loading and processing web search results.
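For readers following along, the replacement pattern these docs now point to looks roughly like the sketch below — a minimal, self-contained version of the updated example later in this PR, with the API key and prompt as illustrative placeholders:

```typescript
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { SearchApiLoader } from "langchain/document_loaders/web/searchapi";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

// Load web search results and index them in an in-memory vector store.
const loader = new SearchApiLoader({
  q: "What is LangChain?",
  apiKey: "YOUR_SEARCHAPI_KEY", // placeholder
  engine: "google",
});
const docs = await loader.load();
const vectorStore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// The combine-documents chain stuffs retrieved docs into {context};
// createRetrievalChain wires it to the retriever.
const combineDocsChain = await createStuffDocumentsChain({
  llm: new ChatOpenAI(),
  prompt: ChatPromptTemplate.fromMessages([
    ["system", "Answer the user's questions based on the below context:\n\n{context}"],
    ["human", "{input}"],
  ]),
});
const chain = await createRetrievalChain({
  retriever: vectorStore.asRetriever(),
  combineDocsChain,
});

const res = await chain.invoke({ input: "What is LangChain?" });
console.log(res.answer);
```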
@@ -33,4 +33,4 @@ npm install @langchain/openai

<CodeBlock language="typescript">{Example}</CodeBlock>

- In this example, the `SerpAPILoader` is used to load web search results, which are then stored in memory using `MemoryVectorStore`. The `RetrievalQAChain` is then used to retrieve the most relevant documents from the memory and answer the question based on these documents. This demonstrates how the `SerpAPILoader` can streamline the process of loading and processing web search results.
+ In this example, the `SerpAPILoader` is used to load web search results, which are then stored in memory using `MemoryVectorStore`. A retrieval chain is then used to retrieve the most relevant documents from the memory and answer the question based on these documents. This demonstrates how the `SerpAPILoader` can streamline the process of loading and processing web search results.
@@ -1,6 +1,6 @@
# Chaindesk Retriever

- This example shows how to use the Chaindesk Retriever in a `RetrievalQAChain` to retrieve documents from a Chaindesk.ai datastore.
+ This example shows how to use the Chaindesk Retriever in a retrieval chain to retrieve documents from a Chaindesk.ai datastore.
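Wired into the new-style chain, the retriever usage looks roughly like this minimal sketch — the `ChaindeskRetriever` import path and its `datastoreId`/`apiKey` options are assumptions based on the current community integration, and the prompt and model are illustrative placeholders:

```typescript
// Sketch only: the ChaindeskRetriever import path and options are assumed.
import { ChaindeskRetriever } from "@langchain/community/retrievers/chaindesk";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

// Any retriever can replace a vector-store retriever in createRetrievalChain;
// here the documents come from a hosted Chaindesk.ai datastore.
const retriever = new ChaindeskRetriever({
  datastoreId: "YOUR_DATASTORE_ID", // placeholder
  apiKey: "YOUR_CHAINDESK_API_KEY", // placeholder
});

const combineDocsChain = await createStuffDocumentsChain({
  llm: new ChatOpenAI({ temperature: 0 }),
  prompt: ChatPromptTemplate.fromMessages([
    ["system", "Answer the user's questions based on the below context:\n\n{context}"],
    ["human", "{input}"],
  ]),
});

const chain = await createRetrievalChain({ retriever, combineDocsChain });
const res = await chain.invoke({ input: "What is in my datastore?" });
console.log(res.answer);
```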

## Usage

@@ -4,7 +4,7 @@ hide_table_of_contents: true

# Metal Retriever

- This example shows how to use the Metal Retriever in a `RetrievalQAChain` to retrieve documents from a Metal index.
+ This example shows how to use the Metal Retriever in a retrieval chain to retrieve documents from a Metal index.

## Setup

22 changes: 0 additions & 22 deletions docs/core_docs/docs/integrations/retrievers/remote-retriever.mdx

This file was deleted.

@@ -4,7 +4,7 @@ hide_table_of_contents: true

# Zep Retriever

- This example shows how to use the Zep Retriever in a `RetrievalQAChain` to retrieve documents from Zep memory store.
+ This example shows how to use the Zep Retriever in a retrieval chain to retrieve documents from Zep memory store.

## Setup

@@ -93,7 +93,7 @@ Let's walk through what's happening here.

2. Though we can query the vector store directly, we convert the vector store into a retriever to return retrieved documents in the right format for the question answering chain.

- 3. We initialize a `RetrievalQAChain` with the `.fromLLM` method, which we'll call later in step 4.
+ 3. We initialize a retrieval chain, which we'll call later in step 4.

4. We ask questions!
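Expressed with the updated API, steps 2–4 look roughly like the following sketch (the model and in-memory vector store here stand in for whatever step 1 built):

```typescript
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { createRetrievalChain } from "langchain/chains/retrieval";

// Stand-ins for step 1 (load and embed your documents).
const model = new ChatOpenAI({ temperature: 0 });
const vectorStore = await MemoryVectorStore.fromTexts(
  ["LangChain is a framework for developing applications powered by language models."],
  [{ source: "docs" }],
  new OpenAIEmbeddings()
);

// Step 2: convert the vector store into a retriever.
const retriever = vectorStore.asRetriever();

// Step 3: initialize the retrieval chain.
const combineDocsChain = await createStuffDocumentsChain({
  llm: model,
  prompt: ChatPromptTemplate.fromMessages([
    ["system", "Answer the user's questions based on the below context:\n\n{context}"],
    ["human", "{input}"],
  ]),
});
const chain = await createRetrievalChain({ retriever, combineDocsChain });

// Step 4: ask questions!
const res = await chain.invoke({ input: "What is LangChain?" });
console.log(res.answer);
```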

@@ -22,7 +22,6 @@ npm install @langchain/openai

```typescript
import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
- import { RetrievalQAChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import * as fs from "fs";
37 changes: 0 additions & 37 deletions examples/src/chains/chat_vector_db_chroma.ts

This file was deleted.

33 changes: 0 additions & 33 deletions examples/src/chains/retrieval_qa_with_remote.ts

This file was deleted.

33 changes: 25 additions & 8 deletions examples/src/document_loaders/apify_dataset_existing.ts
@@ -1,8 +1,10 @@
import { ApifyDatasetLoader } from "langchain/document_loaders/web/apify_dataset";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
- import { OpenAIEmbeddings, OpenAI } from "@langchain/openai";
- import { RetrievalQAChain } from "langchain/chains";
+ import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
+ import { ChatPromptTemplate } from "@langchain/core/prompts";
+ import { createRetrievalChain } from "langchain/chains/retrieval";
+ import { createStuffDocumentsChain } from "langchain/chains/combine_documents";

/*
* datasetMappingFunction is a function that maps your Apify dataset format to LangChain documents.
@@ -27,17 +29,32 @@ const docs = await loader.load();

const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

- const model = new OpenAI({
+ const model = new ChatOpenAI({
  temperature: 0,
});

- const chain = RetrievalQAChain.fromLLM(model, vectorStore.asRetriever(), {
-   returnSourceDocuments: true,
+ const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
+   [
+     "system",
+     "Answer the user's questions based on the below context:\n\n{context}",
+   ],
+   ["human", "{input}"],
+ ]);

+ const combineDocsChain = await createStuffDocumentsChain({
+   llm: model,
+   prompt: questionAnsweringPrompt,
});
- const res = await chain.call({ query: "What is LangChain?" });

- console.log(res.text);
- console.log(res.sourceDocuments.map((d: Document) => d.metadata.source));
+ const chain = await createRetrievalChain({
+   retriever: vectorStore.asRetriever(),
+   combineDocsChain,
+ });

+ const res = await chain.invoke({ input: "What is LangChain?" });

+ console.log(res.answer);
+ console.log(res.context.map((doc) => doc.metadata.source));

/*
LangChain is a framework for developing applications powered by language models.
33 changes: 25 additions & 8 deletions examples/src/document_loaders/apify_dataset_new.ts
@@ -1,8 +1,10 @@
import { ApifyDatasetLoader } from "langchain/document_loaders/web/apify_dataset";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
- import { OpenAIEmbeddings, OpenAI } from "@langchain/openai";
- import { RetrievalQAChain } from "langchain/chains";
+ import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { Document } from "@langchain/core/documents";
+ import { ChatPromptTemplate } from "@langchain/core/prompts";
+ import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
+ import { createRetrievalChain } from "langchain/chains/retrieval";

/*
* datasetMappingFunction is a function that maps your Apify dataset format to LangChain documents.
@@ -33,17 +35,32 @@ const docs = await loader.load();

const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

- const model = new OpenAI({
+ const model = new ChatOpenAI({
  temperature: 0,
});

- const chain = RetrievalQAChain.fromLLM(model, vectorStore.asRetriever(), {
-   returnSourceDocuments: true,
+ const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
+   [
+     "system",
+     "Answer the user's questions based on the below context:\n\n{context}",
+   ],
+   ["human", "{input}"],
+ ]);

+ const combineDocsChain = await createStuffDocumentsChain({
+   llm: model,
+   prompt: questionAnsweringPrompt,
});
- const res = await chain.call({ query: "What is LangChain?" });

- console.log(res.text);
- console.log(res.sourceDocuments.map((d: Document) => d.metadata.source));
+ const chain = await createRetrievalChain({
+   retriever: vectorStore.asRetriever(),
+   combineDocsChain,
+ });

+ const res = await chain.invoke({ input: "What is LangChain?" });

+ console.log(res.answer);
+ console.log(res.context.map((doc) => doc.metadata.source));

/*
LangChain is a framework for developing applications powered by language models.
40 changes: 31 additions & 9 deletions examples/src/document_loaders/searchapi.ts
@@ -1,17 +1,21 @@
- import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
- import { RetrievalQAChain } from "langchain/chains";
+ import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { TokenTextSplitter } from "langchain/text_splitter";
import { SearchApiLoader } from "langchain/document_loaders/web/searchapi";
+ import { ChatPromptTemplate } from "@langchain/core/prompts";
+ import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
+ import { createRetrievalChain } from "langchain/chains/retrieval";

// Initialize the necessary components
- const llm = new OpenAI();
+ const llm = new ChatOpenAI({
+   modelName: "gpt-3.5-turbo-1106",
+ });
const embeddings = new OpenAIEmbeddings();
const apiKey = "Your SearchApi API key";

// Define your question and query
const question = "Your question here";
- const query = "Your question here";
+ const query = "Your query here";

// Use SearchApiLoader to load web search results
const loader = new SearchApiLoader({ q: query, apiKey, engine: "google" });
@@ -21,17 +25,35 @@ const textSplitter = new TokenTextSplitter({
  chunkSize: 800,
  chunkOverlap: 100,
});

const splitDocs = await textSplitter.splitDocuments(docs);

// Use MemoryVectorStore to store the loaded documents in memory
const vectorStore = await MemoryVectorStore.fromDocuments(
  splitDocs,
  embeddings
);
- // Use RetrievalQAChain to retrieve documents and answer the question
- const chain = RetrievalQAChain.fromLLM(llm, vectorStore.asRetriever(), {
-   verbose: true,
+ const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
+   [
+     "system",
+     "Answer the user's questions based on the below context:\n\n{context}",
+   ],
+   ["human", "{input}"],
+ ]);

+ const combineDocsChain = await createStuffDocumentsChain({
+   llm,
+   prompt: questionAnsweringPrompt,
});

+ const chain = await createRetrievalChain({
+   retriever: vectorStore.asRetriever(),
+   combineDocsChain,
+ });

+ const res = await chain.invoke({
+   input: question,
+ });
- const answer = await chain.call({ query: question });

- console.log(answer.text);
+ console.log(res.answer);
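The call-site change above is the main migration hazard in this file: the legacy chain took `{ query }` and returned `text` (plus `sourceDocuments` when requested), while the new chain takes `{ input }` and returns `answer` and `context`. As a rough field mapping:

```typescript
// Legacy RetrievalQAChain:
//   const answer = await chain.call({ query: question });
//   answer.text            -> the generated answer
//   answer.sourceDocuments -> only when returnSourceDocuments: true

// createRetrievalChain:
//   const res = await chain.invoke({ input: question });
//   res.answer  -> the generated answer
//   res.context -> the retrieved documents, always included
//   res.input   -> the original input, echoed back
```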
34 changes: 27 additions & 7 deletions examples/src/document_loaders/serpapi.ts
@@ -1,10 +1,12 @@
- import { OpenAI, OpenAIEmbeddings } from "@langchain/openai";
- import { RetrievalQAChain } from "langchain/chains";
+ import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { SerpAPILoader } from "langchain/document_loaders/web/serpapi";
+ import { ChatPromptTemplate } from "@langchain/core/prompts";
+ import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
+ import { createRetrievalChain } from "langchain/chains/retrieval";

// Initialize the necessary components
- const llm = new OpenAI();
+ const llm = new ChatOpenAI();
const embeddings = new OpenAIEmbeddings();
const apiKey = "Your SerpAPI API key";

@@ -19,8 +21,26 @@ const docs = await loader.load();
// Use MemoryVectorStore to store the loaded documents in memory
const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);

- // Use RetrievalQAChain to retrieve documents and answer the question
- const chain = RetrievalQAChain.fromLLM(llm, vectorStore.asRetriever());
- const answer = await chain.call({ query: question });
+ const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
+   [
+     "system",
+     "Answer the user's questions based on the below context:\n\n{context}",
+   ],
+   ["human", "{input}"],
+ ]);

- console.log(answer.text);
+ const combineDocsChain = await createStuffDocumentsChain({
+   llm,
+   prompt: questionAnsweringPrompt,
+ });

+ const chain = await createRetrievalChain({
+   retriever: vectorStore.asRetriever(),
+   combineDocsChain,
+ });

+ const res = await chain.invoke({
+   input: question,
+ });

+ console.log(res.answer);
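One more note on the `createStuffDocumentsChain` half of the pattern: the chain it returns formats whatever documents it receives into the prompt's `{context}` variable, so it can also be invoked directly when documents are already in hand. A minimal sketch, with the sample document as a placeholder:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { Document } from "@langchain/core/documents";

const combineDocsChain = await createStuffDocumentsChain({
  llm: new ChatOpenAI(),
  prompt: ChatPromptTemplate.fromMessages([
    ["system", "Answer the user's questions based on the below context:\n\n{context}"],
    ["human", "{input}"],
  ]),
});

// No retriever involved: pass documents straight in and they are "stuffed" into {context}.
const answer = await combineDocsChain.invoke({
  input: "What is LangChain?",
  context: [
    new Document({
      pageContent:
        "LangChain is a framework for developing applications powered by language models.",
    }),
  ],
});

console.log(answer); // the answer (a string with the default output parser)
```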