diff --git a/docs/core_docs/docs/get_started/quickstart.mdx b/docs/core_docs/docs/get_started/quickstart.mdx
index f79ba4c96230..6e344617a5b4 100644
--- a/docs/core_docs/docs/get_started/quickstart.mdx
+++ b/docs/core_docs/docs/get_started/quickstart.mdx
@@ -45,7 +45,7 @@ In this quickstart, we will walk through a few different ways of doing that:

 - We will start with a simple LLM chain, which just relies on information in the prompt template to respond.
 - Next, we will build a retrieval chain, which fetches data from a separate database and passes that into the prompt template.
-- We will then add in chat history, to create a conversation retrieval chain. This allows you interact in a chat manner with this LLM, so it remembers previous questions.
+- We will then add in chat history, to create a conversational retrieval chain. This allows you interact in a chat manner with this LLM, so it remembers previous questions.
 - Finally, we will build an agent - which utilizes and LLM to determine whether or not it needs to fetch data to answer questions.

 We will cover these at a high level, but keep in mind there is a lot more to each piece! We will link to more in-depth docs as appropriate.
@@ -398,7 +398,7 @@ This answer should be much more accurate!

 We've now successfully set up a basic retrieval chain. We only touched on the basics of retrieval - for a deeper dive into everything mentioned here, see [this section of documentation](/docs/modules/data_connection).

-## Conversation Retrieval Chain
+## Conversational Retrieval Chain

 The chain we've created so far can only answer single questions. One of the main types of LLM applications that people are building are chat bots. So how do we turn this chain into one that can answer follow up questions?
