Description
Is this a new feature, an improvement, or a change to existing functionality?
New Feature
How would you describe the priority of this feature request?
High
Please provide a clear description of the problem this feature solves
As part of the Sherlock work, we need an example showing how to use Morpheus to execute multiple LLM queries that utilize RAG inside of a pipeline.
Describe your ideal solution
Purpose
The purpose of this example is to illustrate how a user could build a pipeline which will integrate an LLM service into a Morpheus pipeline. This example builds on the previous example, #1305, by adding the ability to augment LLM queries with context information from a knowledge base. Appending this context helps improve the responses from the LLM by providing additional contextual and factual background information which the LLM can pull from for its response.
In order for this pipeline to function correctly, a Vector Database must already have been populated with information that can be retrieved. An example of populating a database is illustrated in #1298. This example assumes that pipeline has already been run to completion.
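As a pre-flight check, something like the following could verify that the collection exists before the retrieval pipeline starts. This is a minimal sketch: the host, port, and collection name are placeholders, not values defined by this issue.

```python
from pymilvus import connections, utility

# Hypothetical connection details; match them to the Milvus instance used in #1298.
connections.connect(host="localhost", port="19530")

# "RSS" is a placeholder collection name; use whatever the upload pipeline created.
if not utility.has_collection("RSS"):
    raise RuntimeError("VDB is empty - run the upload pipeline from #1298 first.")
```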
Scenario
This example will show two different implementations of a RAG pipeline, but the pipeline and components could be used in many scenarios with different requirements. At a high level, the following illustrates the different customization points for this pipeline and the specific choices made for this example:
- LLM Service
  - This pipeline could support any type of LLM service which is compatible with our `LLMService` interface. Such services include OpenAI, NeMo, or even running locally using `llama-cpp-python`.
  - For this example, we will focus on using OpenAI as the LLM service. This service was chosen to differ from the previous example, which uses NeMo. Additionally, the models in OpenAI have been trained to provide responses when given the additional context information that RAG provides. (A sketch of the completion call follows this list.)
- Embedding model
  - This pipeline can support any type of embedding model that can convert text into a vector of floats.
  - For this example, we will use `all-MiniLM-L6-v2` since it is small and is the default model specified in [FEA]: Create Sherlock example for VDB Upload #1298. (See the embedding sketch after this list.)
- Vector database
  - Any vector database can be used to store the resulting embeddings and corresponding metadata.
  - It would be trivial to update the example to use Chroma or FAISS if needed.
  - For this example, we will be using Milvus since it is the VDB chosen for [FEA]: Create Sherlock example for VDB Upload #1298.
- Prompt generation type
  - There are many different ways to build up the final prompt which gets sent to the model. Any convention can be used; even custom prompt templates are supported.
  - For this example, we will be using a custom prompt template similar to the "stuff" retrievers in LangChain. Using a simple custom prompt keeps the implementation easy to understand. (A sketch of the template follows this list.)
- Downstream tasks
  - After the LLM has been run, the output of the model could be used in any number of tasks such as training a model, running analysis, or even simulating an attack.
  - For this example, we will not have any downstream tasks, keeping the implementation simple and the focus on the `LLMEngine`.
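To make the LLM service choice concrete, here is a hedged sketch of the completion call the example would ultimately issue. It uses the `openai` client directly rather than the `LLMService` wrapper; the model name and message are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder model and prompt; the real example builds the prompt from RAG context.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is Morpheus?"}],
)
print(response.choices[0].message.content)
```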
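The embedding step can be previewed outside the pipeline with `sentence-transformers`; in the actual example this runs inside a Morpheus stage. A minimal sketch:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Each input string becomes a 384-dimensional float vector used for the VDB lookup.
embeddings = model.encode(["What is Morpheus?"])
print(embeddings.shape)  # (1, 384)
```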
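And a minimal sketch of the "stuff"-style prompt template; the template text and helper name are illustrative, not part of this issue:

```python
PROMPT_TEMPLATE = """Answer the question using only the context below.

Context:
{context}

Question: {question}
Answer:"""

def build_prompt(question: str, contexts: list[str]) -> str:
    # "Stuff" all retrieved chunks into a single context block ahead of the question.
    return PROMPT_TEMPLATE.format(context="\n\n".join(contexts), question=question)
```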
Implementation
This example will add a new version of the pipeline using a click command (a sketch of a possible CLI shape follows).
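A possible shape for the CLI, assuming the option names shown here (they are illustrative, not the final interface):

```python
import click

@click.group()
def cli():
    """Entry point for the RAG example pipelines."""

@cli.command()
@click.option("--model_name", default="gpt-3.5-turbo", help="OpenAI model used for completions.")
@click.option("--vdb_resource_name", default="RSS", help="Milvus collection queried for context.")
def persistent(model_name: str, vdb_resource_name: str):
    """Run the persistent upload + retrieval pipeline."""
    click.echo(f"Running persistent pipeline with {model_name} against {vdb_resource_name}")
    # build_and_run_pipeline(model_name, vdb_resource_name)  # hypothetical helper

if __name__ == "__main__":
    cli()
```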
Persistent Morpheus pipeline
The persistent Morpheus pipeline is functionally similar to the standalone pipeline; however, it uses multiple sources and multiple sinks to perform both the upload and retrieval portions in the same pipeline. The benefit of this pipeline over the standalone pipeline is that no VDB upload process needs to be run beforehand. Everything runs in a single pipeline.
The implementation for this pipeline is illustrated by the following diagram:
The major differences between the diagram and the example pipeline are:
- The sources for both the upload and retrieval portions are `KafkaSourceStage`s, which makes it easy for the user to control when messages are processed by the example pipeline.
  - To enable this, there are two topics on the Kafka cluster: `upload` and `retrieve_input`. Pushing messages to one or the other will have a different effect on the final message, but both will perform the same tasks until the final part of the pipeline. (A producer example follows this list.)
- There is a `SplitStage` added after the embedding portion of the pipeline which determines which sink to send each message to.
  - The `SplitStage` determines where to send each message by looking at the task attached to each `ControlMessage`. (A simplified routing sketch follows this list.)
- The final sink for the `retrieval` task sends messages to another Kafka topic, `retrieve_output`.
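For reference, pushing test payloads onto the two input topics could look like the following, using `kafka-python`. The topic names come from this issue; the message schema is an assumption:

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092",
                         value_serializer=lambda d: json.dumps(d).encode("utf-8"))

# Messages on "upload" flow through embedding and into the VDB sink.
producer.send("upload", {"content": "Morpheus is a GPU-accelerated cybersecurity framework."})

# Messages on "retrieve_input" flow through embedding, retrieval, and the LLM;
# their responses land on "retrieve_output".
producer.send("retrieve_input", {"query": "What is Morpheus?"})
producer.flush()
```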
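The routing decision inside the `SplitStage` reduces to inspecting the task attached to each message. A simplified stand-in follows: a plain dataclass replaces Morpheus's `ControlMessage`, and the task name `llm_engine` is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class FakeControlMessage:
    """Stand-in for a Morpheus ControlMessage, which carries named tasks."""
    tasks: dict = field(default_factory=dict)

    def has_task(self, task_type: str) -> bool:
        return task_type in self.tasks

def route(message: FakeControlMessage) -> str:
    # Retrieval messages carry an LLM task; everything else is an upload.
    return "retrieval_sink" if message.has_task("llm_engine") else "vdb_sink"

assert route(FakeControlMessage(tasks={"llm_engine": {}})) == "retrieval_sink"
assert route(FakeControlMessage()) == "vdb_sink"
```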
Completion Criteria
The following items need to be satisfied to consider this issue complete:
- Should run without error using all default arguments
- Correctly upload the upload task payloads to the VDB
- Correctly run the retrieval task payloads through the specified OpenAI model
- Provide information about the success or failure of the pipeline, including the number of queries run, throughput, and total runtime (a reporting sketch follows this list)
- Reuse the same pipeline for both uploading and retrieval
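The success/failure report could be as simple as the following sketch; `run_pipeline` is a hypothetical stand-in for running the example, not part of this issue:

```python
import time

def run_pipeline() -> int:
    """Hypothetical stand-in for running the example; returns queries processed."""
    time.sleep(0.1)
    return 10

start = time.time()
queries_run = run_pipeline()
runtime = time.time() - start

print(f"Queries run:   {queries_run}")
print(f"Total runtime: {runtime:.2f} s")
print(f"Throughput:    {queries_run / runtime:.2f} queries/s")
```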