LangChain LLM

Jael Gu edited this page Jul 13, 2023 · 1 revision

The chatbot uses the ChatLLM module to generate the final answer, which is returned to the user. To work with the LangChain agent, this module must be a LangChain BaseChatModel.

By default, it uses ChatOpenAI from LangChain, which calls OpenAI's chat service. Refer to LangChain Models for more LLM options.

Out of the box, it calls the OpenAI chat service with the GPT-3.5 model and a temperature of 0. To modify OpenAI parameters, refer to Configuration.

Usage Example

from langchain.schema import HumanMessage

from llm import ChatLLM

llm = ChatLLM(temperature=0.0)
messages = [HumanMessage(content='This is a test user message.')]
resp = llm(messages)

Customization

A ChatLLM should inherit from LangChain's BaseChatModel. To customize the module, define your own _generate and _agenerate methods.

from typing import List, Optional

from langchain.callbacks.manager import AsyncCallbackManagerForLLMRun, CallbackManagerForLLMRun
from langchain.chat_models.base import BaseChatModel
from langchain.schema import BaseMessage, ChatResult


class ChatLLM(BaseChatModel):
    def _generate(self,
                  messages: List[BaseMessage],
                  stop: Optional[List[str]] = None,
                  run_manager: Optional[CallbackManagerForLLMRun] = None
                  ) -> ChatResult:
        # Your method here
        pass

    async def _agenerate(self,
                         messages: List[BaseMessage],
                         stop: Optional[List[str]] = None,
                         run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
                         ) -> ChatResult:
        # Your method here
        pass