LangChain LLM
The chatbot uses the ChatLLM module to generate the final answer, which is returned to the user. To work with the LangChain agent, this module must be a LangChain BaseChatModel.

By default, it uses ChatOpenAI from LangChain, which calls the OpenAI chat service with the GPT-3.5 model and a temperature of 0. Refer to LangChain Models for more LLM options. To modify OpenAI parameters, refer to Configuration.
```python
from langchain.schema import HumanMessage

from llm import ChatLLM

llm = ChatLLM(temperature=0.0)
messages = [HumanMessage(content='This is a test user message.')]
resp = llm(messages)
```
A ChatLLM should inherit from LangChain's BaseChatModel. To customize the module, define your own _generate and _agenerate methods:
```python
from typing import List, Optional

from langchain.callbacks.manager import AsyncCallbackManagerForLLMRun, CallbackManagerForLLMRun
from langchain.chat_models.base import BaseChatModel
from langchain.schema import BaseMessage, ChatResult


class ChatLLM(BaseChatModel):
    @property
    def _llm_type(self) -> str:
        # Identifier for this model type, required by BaseChatModel
        return 'custom_chat_llm'

    def _generate(self,
                  messages: List[BaseMessage],
                  stop: Optional[List[str]] = None,
                  run_manager: Optional[CallbackManagerForLLMRun] = None,
                  ) -> ChatResult:
        # Your synchronous generation method here
        pass

    async def _agenerate(self,
                         messages: List[BaseMessage],
                         stop: Optional[List[str]] = None,
                         run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
                         ) -> ChatResult:
        # Your asynchronous generation method here
        pass
```
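As a minimal illustration of the _generate pattern, the sketch below uses simple stand-in dataclasses instead of the real LangChain types (so it runs without dependencies); the names EchoChatLLM and the stand-in classes are hypothetical, and a real implementation would import BaseMessage, ChatGeneration, and ChatResult from langchain.schema and call an actual LLM instead of echoing:

```python
from dataclasses import dataclass
from typing import List, Optional

# Stand-ins for the LangChain message/result types (assumption: real code
# would import these from langchain.schema).
@dataclass
class BaseMessage:
    content: str

@dataclass
class AIMessage(BaseMessage):
    pass

@dataclass
class ChatGeneration:
    message: BaseMessage

@dataclass
class ChatResult:
    generations: List[ChatGeneration]


class EchoChatLLM:
    """Toy chat model: _generate wraps a reply message in a ChatResult."""

    def _generate(self,
                  messages: List[BaseMessage],
                  stop: Optional[List[str]] = None,
                  run_manager=None,
                  ) -> ChatResult:
        # A real model would call an LLM here; this one echoes the last message.
        reply = AIMessage(content=f"Echo: {messages[-1].content}")
        return ChatResult(generations=[ChatGeneration(message=reply)])


llm = EchoChatLLM()
result = llm._generate([BaseMessage(content='This is a test user message.')])
print(result.generations[0].message.content)  # -> Echo: This is a test user message.
```

The key point is the return shape: _generate must package each candidate reply as a ChatGeneration and return them inside a ChatResult, which is what the LangChain agent consumes downstream.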
Akcio is a proprietary project owned and developed by Zilliz. It is published under the Server Side Public License (SSPL) v1.
© Copyright 2023, Zilliz Inc.