Chatbots with Good Memory: Cerebras & LangChain

This tutorial outlines the setup, code structure, and how conversational memory is managed using LangChain for a chatbot implemented with the Cerebras API and Streamlit. Try testing out this chatbot's memory and switching between models to observe token metrics.

*Screenshot: the finished product*

Step 1: Set up your API Key

  1. Obtain Your API Key: Log in to your Cerebras account, navigate to the “API Keys” section, and generate a new API key.

  2. Set the API Key in the Sidebar: Paste your Cerebras API key into the text box in the app's sidebar.

Step 2: Install dependencies

Let's make sure we have all of the requirements for this project installed!

```shell
pip install -r requirements.txt
```
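As a rough sketch, the requirements file for a project like this typically lists packages along these lines (the exact package names and versions below are assumptions, not the repo's actual file):

```
streamlit
langchain
langchain-cerebras
```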

Step 3: Start Chatting with Memory 🧠

Press **Run**, then execute `streamlit run main.py` in the Shell to interact with the UI.

Code Structure

Streamlit Application

Streamlit provides an easy way to build our chatbot in Python, with a simple model dropdown and a chat input box.

```python
# Initialize history and chatbot memory
if 'history' not in st.session_state:
    st.session_state.history = []

if 'memory' not in st.session_state:
    st.session_state.memory = ConversationBufferWindowMemory(k=conversational_memory_length, memory_key="chat_history", return_messages=True)

if "selected_model" not in st.session_state:
    st.session_state.selected_model = None
```

Streamlit stores our bot's chat history, memory, and selected model in its session state, which persists across script reruns.
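The `if key not in st.session_state` guard matters because Streamlit re-runs the entire script on every user interaction; only values stored in `st.session_state` survive. A minimal stand-in (using a plain dict to emulate `st.session_state`, not Streamlit itself) shows why the guard prevents state from being wiped on each rerun:

```python
# Emulate st.session_state with a dict that outlives each "rerun".
session_state = {}

def rerun_script(user_message):
    # Without this guard, history would be reset to [] on every rerun.
    if "history" not in session_state:
        session_state["history"] = []
    session_state["history"].append(user_message)

# Each user interaction triggers a fresh top-to-bottom run of the script.
rerun_script("hello")
rerun_script("how are you?")
print(session_state["history"])  # both messages survive across reruns
```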

Initialization of Memory

```python
if 'memory' not in st.session_state:
    st.session_state.memory = ConversationBufferWindowMemory(
        k=conversational_memory_length,
        memory_key="chat_history",
        return_messages=True
    )
```

ConversationBufferWindowMemory from LangChain manages the conversational memory. It retains only a fixed window of the most recent messages (k = conversational_memory_length), allowing the chatbot to maintain context throughout the conversation without the prompt growing unboundedly.
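The windowing behavior can be illustrated without LangChain: a `collections.deque` with `maxlen` evicts the oldest entries in the same spirit (a sketch of the concept, not LangChain's actual implementation):

```python
from collections import deque

# Keep only the 3 most recent turns, analogous to ConversationBufferWindowMemory(k=3).
k = 3
window = deque(maxlen=k)

for turn in ["hi", "what's Cerebras?", "tell me more", "and pricing?"]:
    window.append(turn)

# The oldest turn ("hi") has been evicted; only the last k remain.
print(list(window))
```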

Memory Handling in Conversation Chain

```python
prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessage(
            content=system_prompt
        ),  # This is the persistent system prompt that is always included at the start of the chat.

        MessagesPlaceholder(
            variable_name="chat_history"
        ),  # This placeholder will be replaced by the actual chat history during the conversation. It helps in maintaining context.

        HumanMessagePromptTemplate.from_template(
            "{human_input}"
        ),  # This template is where the user's current input will be injected into the prompt.
    ]
)
```

The provided code snippet creates a ChatPromptTemplate using LangChain, which structures prompts for a chatbot. This setup ensures that each prompt sent to the language model includes both the fixed system instructions ("You are a friendly chatbot") and the updated chat history, allowing the chatbot to generate contextually relevant responses based on the entire conversation.
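Conceptually, the template expands into a flat list of messages each time it is formatted: system prompt first, then the windowed history, then the new user turn. A plain-Python sketch of that expansion (illustrative only, not LangChain's internals):

```python
system_prompt = "You are a friendly chatbot"

def build_messages(chat_history, human_input):
    # System prompt first, then prior turns, then the current user input.
    return (
        [("system", system_prompt)]
        + chat_history
        + [("human", human_input)]
    )

history = [("human", "hi"), ("ai", "Hello! How can I help?")]
messages = build_messages(history, "What models do you support?")
```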

```python
conversation = LLMChain(
    llm=cerebras_llm,  # Custom LLM instance
    prompt=prompt,  # Constructed prompt template
    verbose=False,
    memory=st.session_state.memory  # Conversation memory instance
)
```

The LLMChain object is configured with the ConversationBufferWindowMemory instance. This setup allows the chatbot to use the stored conversation history for generating contextually relevant responses.
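How the chain, memory, and LLM fit together can be sketched with a stub model: each call reads the current memory window, queries the LLM, then writes the new exchange back so the next call sees it as context (a simplified stand-in for LLMChain, not its real code):

```python
def fake_llm(messages):
    # Stub standing in for the Cerebras model: echoes the last user message.
    return f"You said: {messages[-1][1]}"

class SimpleChain:
    def __init__(self, llm, memory):
        self.llm = llm
        self.memory = memory  # list of (role, text) tuples

    def predict(self, human_input):
        # Build the prompt from stored context plus the new input.
        messages = list(self.memory) + [("human", human_input)]
        response = self.llm(messages)
        # Persist the exchange so the next call has it as context.
        self.memory.append(("human", human_input))
        self.memory.append(("ai", response))
        return response

chain = SimpleChain(fake_llm, memory=[])
chain.predict("hello")
chain.predict("again")
```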

```python
# Initialize the Cerebras LLM object
cerebras_llm = ChatCerebras(api_key=api_key, model=st.session_state.selected_model)
```

cerebras_llm is our ChatCerebras LLM instance, initialized as shown above with the API key and the model selected in the sidebar.

```python
# The chatbot's answer is generated by sending the full prompt to the Cerebras API.
response = conversation.predict(human_input=user_input)
```

With the conversation chain in place, we call conversation.predict to generate the next response, drawing on all of the previous context held in memory.

Updating and Displaying History

```python
st.session_state.history.append(f"User: {user_input}")
st.session_state.history.append(f"Chatbot: {response}")

if st.session_state.history:
    st.write("### Conversation History:")
    for message in st.session_state.history:
        st.write(message)
```

The conversation history is updated with each user input and chatbot response. This history is then displayed in the Streamlit application, providing users with a record of the conversation.