docs: change SystemChatMessage and HumanChatMessage to SystemMessage, HumanMessage #2

Open · wants to merge 6 commits into base: main
3 changes: 3 additions & 0 deletions .gitignore
@@ -20,3 +20,6 @@ yarn-debug.log*
yarn-error.log*

.vercel

# IDE
.idea
6 changes: 3 additions & 3 deletions docs/ecosystem/helicone.md
@@ -25,7 +25,7 @@ const model = new OpenAI(

);

const res = await model.call("What is a helicone?");
const res = await model.invoke("What is a helicone?");

```

@@ -59,7 +59,7 @@ const model = new OpenAI(

);

const res = await model.call("What is a helicone?");
const res = await model.invoke("What is a helicone?");

```

@@ -95,7 +95,7 @@ const model = new OpenAI(

);

const res = await model.call("What is a helicone?");
const res = await model.invoke("What is a helicone?");

```

36 changes: 18 additions & 18 deletions docs/getting-started/guide-chat.mdx
@@ -26,7 +26,7 @@ import Example from "!!raw-loader!@examples/models/chat/chat_streaming_stdout.ts
```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";

import { HumanChatMessage, SystemChatMessage } from "langchain/schema";
import { HumanMessage, SystemMessage } from "langchain/schema";



@@ -41,13 +41,13 @@ const chat = new ChatOpenAI({ temperature: 0 });

### Chat models: messages as input, messages as output

You can get chat completions by passing one or more messages to the chat model. The response will also be a message. The message types currently supported in LangChain are `AIChatMessage`, `HumanChatMessage`, `SystemChatMessage`, and the generic `ChatMessage` -- ChatMessage takes an arbitrary role parameter, which we won't use here. Most of the time, you'll just be dealing with `HumanChatMessage`, `AIChatMessage`, and `SystemChatMessage`.
You can get chat completions by passing one or more messages to the chat model. The response will also be a message. The message types currently supported in LangChain are `AIMessage`, `HumanMessage`, `SystemMessage`, and the generic `ChatMessage` -- ChatMessage takes an arbitrary role parameter, which we won't use here. Most of the time, you'll just be dealing with `HumanMessage`, `AIMessage`, and `SystemMessage`.


```typescript
const response = await chat.call([
const response = await chat.invoke([

new HumanChatMessage(
new HumanMessage(

"Translate this sentence from English to French. I love programming."

@@ -63,7 +63,7 @@ console.log(response);


```
AIChatMessage { text: "J'aime programmer." }
AIMessage { text: "J'aime programmer." }

```

@@ -75,15 +75,15 @@ OpenAI's chat models (currently including `gpt-3.5-turbo` and `gpt-4`, as well as Azure O

> **ⓘ** Note that if you are using Azure OpenAI, make sure to change the deployment name to the deployment for the model you have chosen.

```typescript
const responseB = await chat.call([
const responseB = await chat.invoke([

new SystemChatMessage(
new SystemMessage(

"You are a helpful assistant that translates English to French."

),

new HumanChatMessage("Translate: I love programming."),
new HumanMessage("Translate: I love programming."),

]);

@@ -95,7 +95,7 @@ console.log(responseB);


```
AIChatMessage { text: "J'aime programmer." }
AIMessage { text: "J'aime programmer." }

```

@@ -109,13 +109,13 @@ const responseC = await chat.generate([

[

new SystemChatMessage(
new SystemMessage(

"You are a helpful assistant that translates English to French."

),

new HumanChatMessage(
new HumanMessage(

"Translate this sentence from English to French. I love programming."

@@ -125,13 +125,13 @@ const responseC = await chat.generate([

[

new SystemChatMessage(
new SystemMessage(

"You are a helpful assistant that translates English to French."

),

new HumanChatMessage(
new HumanMessage(

"Translate this sentence from English to French. I love artificial intelligence."

@@ -159,7 +159,7 @@ console.log(responseC);

text: "J'aime programmer.",

message: AIChatMessage { text: "J'aime programmer." },
message: AIMessage { text: "J'aime programmer." },

}

@@ -171,7 +171,7 @@ console.log(responseC);

text: "J'aime l'intelligence artificielle.",

message: AIChatMessage { text: "J'aime l'intelligence artificielle." }
message: AIMessage { text: "J'aime l'intelligence artificielle." }

}

@@ -257,7 +257,7 @@ console.log(responseA);

text: "J'aime programmer.",

message: AIChatMessage { text: "J'aime programmer." }
message: AIMessage { text: "J'aime programmer." }

}

@@ -293,7 +293,7 @@ const chain = new LLMChain({


```typescript
const responseB = await chain.call({
const responseB = await chain.invoke({

input_language: "English",

@@ -509,7 +509,7 @@ const chain = new ConversationChain({

The chain will internally accumulate the messages sent to the model and the ones received as output. It will then inject those messages into the prompt on the next call, so you can call the chain several times and it remembers the previous messages.
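The accumulation behaviour described above can be sketched with a stand-in memory and a fake model response. The class and function names below are illustrative only, not the real langchain `ConversationChain` or `BufferMemory`:

```typescript
// A minimal sketch of conversational memory: each call renders the prior
// turns into the prompt, then records the new human/AI exchange.

type Turn = { role: "human" | "ai"; text: string };

class BufferMemorySketch {
  history: Turn[] = [];

  // Render prior turns plus the new input into the next prompt.
  renderPrompt(input: string): string {
    const past = this.history.map((t) => `${t.role}: ${t.text}`).join("\n");
    return `${past}\nhuman: ${input}`.trim();
  }

  save(input: string, output: string): void {
    this.history.push({ role: "human", text: input });
    this.history.push({ role: "ai", text: output });
  }
}

// A fake "chain" call: builds the prompt, fabricates a model reply,
// and saves both sides of the exchange into memory.
function callChain(memory: BufferMemorySketch, input: string): string {
  const prompt = memory.renderPrompt(input);
  const output = `echo(${input})`; // stand-in for a real model response
  memory.save(input, output);
  return prompt;
}
```

Calling `callChain` twice shows that the second prompt already contains the first exchange, which is exactly why the chain "remembers" earlier messages.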

```typescript
const responseH = await chain.call({
const responseH = await chain.invoke({

input: "hi from London, how are you doing today",

10 changes: 5 additions & 5 deletions docs/getting-started/guide-llm.mdx
@@ -143,7 +143,7 @@ import { OpenAI } from "langchain/llms/openai";
Once we have initialized the wrapper, we can now call it on some input!

```typescript
const res = await model.call(
const res = await model.invoke(

"What would be a good company name a company that makes colorful socks?"

@@ -262,7 +262,7 @@ const chain = new LLMChain({ llm: model, prompt: prompt });

```typescript

const res = await chain.call({ product: "colorful socks" });
const res = await chain.invoke({ product: "colorful socks" });

console.log(res);

@@ -369,7 +369,7 @@ console.log(`Executing with input "${input}"...`);



const result = await executor.call({ input });
const result = await executor.invoke({ input });



@@ -409,7 +409,7 @@ const memory = new BufferMemory();

const chain = new ConversationChain({ llm: model, memory: memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
const res1 = await chain.invoke({ input: "Hi! I'm Jim." });

console.log(res1);

@@ -423,7 +423,7 @@ console.log(res1);


```typescript
const res2 = await chain.call({ input: "What's my name?" });
const res2 = await chain.invoke({ input: "What's my name?" });

console.log(res2);

2 changes: 1 addition & 1 deletion docs/modules/agents/toolkits/json.md
@@ -60,7 +60,7 @@ export const run = async () => {



const result = await executor.call({ input });
const result = await executor.invoke({ input });



2 changes: 1 addition & 1 deletion docs/modules/agents/toolkits/openapi.md
@@ -66,7 +66,7 @@ export const run = async () => {



const result = await executor.call({ input });
const result = await executor.invoke({ input });

console.log(`Got output ${result.output}`);

2 changes: 1 addition & 1 deletion docs/modules/agents/tools/agents_with_vectorstores.md
@@ -123,7 +123,7 @@ console.log(`Executing with input "${input}"...`);



const result = await executor.call({ input });
const result = await executor.invoke({ input });



2 changes: 1 addition & 1 deletion docs/modules/agents/tools/dynamic.mdx
@@ -80,7 +80,7 @@ export const run = async () => {



const result = await executor.call({ input });
const result = await executor.invoke({ input });



2 changes: 1 addition & 1 deletion docs/modules/agents/tools/lambda_agent.md
@@ -68,7 +68,7 @@ const executor = await initializeAgentExecutorWithOptions(tools, model, {

const input = `Find out the capital of Croatia. Once you have it, email the answer to testing123@gmail.com.`;

const result = await executor.call({ input });
const result = await executor.invoke({ input });

console.log(result);

@@ -162,7 +162,7 @@ const chain = VectorDBQAChain.fromLLM(model, vectorStore, {

});

const response = await chain.call({ query: "What is opensearch?" });
const response = await chain.invoke({ query: "What is opensearch?" });



@@ -182,7 +182,7 @@ const chain = VectorDBQAChain.fromLLM(model, vectorStore, {

});

const response = await chain.call({ query: "What is pinecone?" });
const response = await chain.invoke({ query: "What is pinecone?" });

console.log(response);

10 changes: 5 additions & 5 deletions docs/modules/memory/examples/buffer_memory.md
@@ -17,7 +17,7 @@ const memory = new BufferMemory();

const chain = new ConversationChain({ llm: model, memory: memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
const res1 = await chain.invoke({ input: "Hi! I'm Jim." });

console.log({ res1 });

@@ -31,7 +31,7 @@ console.log({ res1 });


```typescript
const res2 = await chain.call({ input: "What's my name?" });
const res2 = await chain.invoke({ input: "What's my name?" });

console.log({ res2 });

@@ -50,15 +50,15 @@ console.log({ res2 });

import { ChatMessageHistory } from "langchain/memory";

import { HumanChatMessage, AIChatMessage } from "langchain/schema";
import { HumanMessage, AIMessage } from "langchain/schema";



const pastMessages = [

new HumanChatMessage("My name's Jonas"),
new HumanMessage("My name's Jonas"),

new AIChatMessage("Nice to meet you, Jonas!"),
new AIMessage("Nice to meet you, Jonas!"),

];

4 changes: 2 additions & 2 deletions docs/modules/memory/examples/buffer_window_memory.md
@@ -19,7 +19,7 @@ const memory = new BufferWindowMemory({ k: 1 });

const chain = new ConversationChain({ llm: model, memory: memory });

const res1 = await chain.call({ input: "Hi! I'm Jim." });
const res1 = await chain.invoke({ input: "Hi! I'm Jim." });

console.log({ res1 });

@@ -33,7 +33,7 @@ console.log({ res1 });


```typescript
const res2 = await chain.call({ input: "What's my name?" });
const res2 = await chain.invoke({ input: "What's my name?" });

console.log({ res2 });

4 changes: 2 additions & 2 deletions docs/modules/memory/examples/motorhead_memory.md
@@ -74,7 +74,7 @@ const chain = new ConversationChain({



const res1 = await chain.call({ input: "Hi! I'm Jim." });
const res1 = await chain.invoke({ input: "Hi! I'm Jim." });

console.log({ res1 });

@@ -90,7 +90,7 @@ console.log({ res1 });


```typescript
const res2 = await chain.call({ input: "What's my name?" });
const res2 = await chain.invoke({ input: "What's my name?" });

console.log({ res2 });

8 changes: 4 additions & 4 deletions docs/modules/models/chat/index.mdx
@@ -27,12 +27,12 @@ LangChain provides a standard interface for working with chat models.

LangChain currently supports four different types of chat message:

- `HumanChatMessage`: A chat message sent as if from the perspective of a human.
- `AIChatMessage`: A chat message sent from the perspective of the AI system, used to communicate with the human.
- `SystemChatMessage`: A chat message that gives the AI system some information about the conversation, usually sent at the start of a conversation.
- `HumanMessage`: A chat message sent as if from the perspective of a human.
- `AIMessage`: A chat message sent from the perspective of the AI system, used to communicate with the human.
- `SystemMessage`: A chat message that gives the AI system some information about the conversation, usually sent at the start of a conversation.
- `ChatMessage`: A generic chat message, with not only a "text" field but also an arbitrary "role" field.
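The four message types and the chat-API roles they correspond to can be sketched with minimal stand-in classes. These are illustrative stand-ins, not the real langchain classes, and `toRole` is a hypothetical helper showing the mapping to OpenAI-style role strings:

```typescript
// Minimal stand-ins for the four message types described above.
class BaseMessage {
  constructor(public content: string) {}
}
class HumanMessage extends BaseMessage {}
class AIMessage extends BaseMessage {}
class SystemMessage extends BaseMessage {}
class ChatMessage extends BaseMessage {
  // Unlike the others, ChatMessage carries an arbitrary role field.
  constructor(content: string, public role: string) {
    super(content);
  }
}

// Map a message to the role string an OpenAI-style chat API would expect.
// ChatMessage must be checked first, since it supplies its own role.
function toRole(msg: BaseMessage): string {
  if (msg instanceof ChatMessage) return msg.role;
  if (msg instanceof HumanMessage) return "user";
  if (msg instanceof AIMessage) return "assistant";
  if (msg instanceof SystemMessage) return "system";
  throw new Error("Unknown message type");
}
```

For example, `toRole(new HumanMessage("hi"))` yields `"user"`, while a `ChatMessage` passes its arbitrary role straight through.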

To get started, simply use the `call` method of an `LLM` implementation, passing in a string input. In this example, we are using the `ChatOpenAI` implementation:
To get started, simply use the `invoke` method of an `LLM` implementation, passing in a string input. In this example, we are using the `ChatOpenAI` implementation:

<CodeBlock language="typescript">{Example}</CodeBlock>

8 changes: 4 additions & 4 deletions docs/modules/models/chat/integrations.mdx
@@ -55,7 +55,7 @@ const respA = await chat.generate([

[

new SystemChatMessage(
new SystemMessage(

"You are a helpful assistant that translates English to French."

@@ -148,9 +148,9 @@ npm install google-auth-library

The ChatGoogleVertexAI class works the same way as other chat-based LLMs, with a few exceptions:

1. The first `SystemChatMessage` passed in is mapped to the "context" parameter that the PaLM model expects.
2. No other `SystemChatMessage` is allowed.
3. After the first `SystemChatMessage`, there must be an odd number of messages, representing a conversation between a human and the model.
1. The first `SystemMessage` passed in is mapped to the "context" parameter that the PaLM model expects.
2. No other `SystemMessage` is allowed.
3. After the first `SystemMessage`, there must be an odd number of messages, representing a conversation between a human and the model.
4. The messages must alternate: a human message, then an AI reply, then a human message, and so on.
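The four ordering rules above can be expressed as a small standalone validator. The message shape and function name are simplified stand-ins for illustration, not the real langchain types:

```typescript
// A sketch of the Vertex AI message-ordering rules as a validator.
type Msg = { type: "system" | "human" | "ai"; text: string };

function validateVertexMessages(messages: Msg[]): void {
  let rest = messages;
  // Rule 1: an optional leading system message supplies the "context".
  if (rest[0]?.type === "system") rest = rest.slice(1);
  // Rule 2: no other system messages are allowed.
  if (rest.some((m) => m.type === "system")) {
    throw new Error("Only the first message may be a system message");
  }
  // Rule 3: the remaining conversation must have an odd number of messages.
  if (rest.length % 2 === 0) {
    throw new Error("Expected an odd number of conversation messages");
  }
  // Rule 4: messages must alternate, starting with a human message.
  rest.forEach((m, i) => {
    const expected = i % 2 === 0 ? "human" : "ai";
    if (m.type !== expected) {
      throw new Error(`Message ${i} should be a ${expected} message`);
    }
  });
}
```

So `[system, human]` passes (one conversation message, odd, starting with the human), while `[human, ai]` fails because an even-length conversation would end on the model's turn.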

