Merge pull request langchain-ai#15 from langchain-ai/wfh/run_collector
Rec run collector
hinthornw authored Aug 30, 2023
2 parents 5a2ece9 + 4c42bb2 commit 432f8c0
Showing 3 changed files with 155 additions and 74 deletions.
63 changes: 7 additions & 56 deletions docs/tracing/tracing-faq.mdx
@@ -16,6 +16,10 @@ import {
TraceableThreadingCodeBlock,
} from "@site/src/components/QuickStart";

import {
AccessRunIdBlock,
} from "@site/src/components/TracingFaq";

# Tracing FAQs

The following are some frequently asked questions about logging runs to LangSmith:
@@ -268,64 +272,11 @@ try {

### How do I get the run ID from a call?

The run ID is returned in the call response under the `__run` key. In Python chains, it is not returned by default; you must pass the `include_run_info=True` parameter to the call. Example:

<CodeTabs
tabs={[
PythonBlock(`from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain\n
chain = LLMChain.from_string(ChatOpenAI(), "Say hi to {name}")
def main():
    response = chain("Clara", include_run_info=True)
    run_id = response["__run"].run_id
    print(run_id)
main()
`),
TypeScriptBlock(`import { ChatOpenAI } from "langchain/chat_models/openai";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";\n
const prompt = PromptTemplate.fromTemplate("Say hi to {name}");
const chain = new LLMChain({
  llm: new ChatOpenAI(),
  prompt: prompt,
});\n
async function main() {
  const response = await chain.invoke({ name: "Clara" });
  console.log(response.__run);
}\n
main();
`),
]}
groupId="client-language"
/>

For Python LLMs/chat models, the run information is returned automatically when calling the `generate()` method. Example:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

chat_model = ChatOpenAI()

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a cat"),
        ("human", "Hi"),
    ]
)
res = chat_model.generate(messages=[prompt.format_messages()])
res.run[0].run_id
```

or for LLMs
In TypeScript, the run ID is returned in the call response under the `__run` key. In Python, we recommend using the run collector callback.
Below is an example:

```python
from langchain.llms import OpenAI
<AccessRunIdBlock />

openai = OpenAI()
res = openai.generate(["You are a good bot"])
print(res.run[0].run_id)
```

### How do I get the URL of the run?

79 changes: 61 additions & 18 deletions docs/tracing/use-cases/export-runs/local.mdx
@@ -10,9 +10,9 @@ import {
TypeScriptBlock,
} from "@site/src/components/InstructionsWithCode";

# Export Runs Locally
# Querying Saved Runs

Running local analysis of your run data can be incredibly useful at early stages of deployment. This guide reviews some ways you can use the client to fetch runs from the server.
Running local analysis of your run data can be incredibly useful at each stage of deployment. This guide reviews some ways you can use the client to fetch runs from the server.

Using the `list_runs` method, you can filter runs to analyze and export. Most simple requests can be satisfied with top-level arguments:

@@ -26,6 +26,11 @@ Using the `list_runs` method, you can filter runs to analyze and export. Most si
- `filter`: Fetch runs that match a given structured filter statement. See the [run filtering guide](#run-filtering) below for more information.
- `query` (*experimental*): Query the experimental natural language API, which translates your query into a filter statement.

:::note Required arguments
LangSmith expects at least one of the following to be provided: `project_id`/`project_name`, `run_ids`, `parent_run_id`, or `reference_example_id`.
If none of these are provided, the server will raise an error.
:::

## Using keyword arguments

For simple queries, such as filtering by project, run time, name, or run IDs, you can directly use keyword arguments in the `list_runs` method. These correspond directly to query params in the REST API. All the examples below assume you have created a LangSmith client and configured it with your API key to connect to the LangSmith server.
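For instance, a minimal sketch of fetching the previous day's runs by project (assuming a configured `langsmith.Client`; the `list_runs` call itself needs a reachable LangSmith server, so it is shown commented out):

```python
from datetime import datetime, timedelta, timezone

# Keyword-argument query: runs that started within the last 24 hours.
start_time = datetime.now(timezone.utc) - timedelta(days=1)

# Requires a LangSmith API key and server connection:
# from langsmith import Client
# client = Client()
# runs = list(client.list_runs(project_name="default", start_time=start_time))

print(start_time.isoformat())
```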
@@ -158,6 +163,7 @@ The filtering grammar is based on common comparators on fields in the run object
- `lt` (less than)
- `eq` (equal to)
- `neq` (not equal to)
- `has` (check if the run contains a tag or metadata JSON blob)
- `search` (search for a substring in a string field)

Additionally, you can combine multiple comparisons through `and` and `or` operators.
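To make the grammar concrete, here is a small hypothetical helper (not part of the LangSmith SDK) that composes filter strings of this shape, quoting string values the way the examples below do:

```python
import json

def comp(op: str, field: str, value) -> str:
    """Render one comparison, e.g. gt(latency, "5s"); strings are double-quoted."""
    rendered = json.dumps(value) if isinstance(value, str) else value
    return f"{op}({field}, {rendered})"

def and_(*clauses: str) -> str:
    """Combine comparisons with the and() operator."""
    return f"and({', '.join(clauses)})"

filter_str = and_(comp("eq", "run_type", "chain"), comp("gt", "total_tokens", 5000))
print(filter_str)  # and(eq(run_type, "chain"), gt(total_tokens, 5000))
```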
@@ -173,10 +179,10 @@ The following examples assume you have configured your environment appropriately
<CodeTabs
tabs={[
PythonBlock(
`client.list_runs(filter='and(eq(feedback_key, "star_rating"), gt(feedback_score, 4))')`
`client.list_runs(project_name="<your_project>", filter='and(eq(feedback_key, "star_rating"), gt(feedback_score, 4))')`
),
TypeScriptBlock(
`client.listRuns({filter: 'and(eq(feedback_key, "star_rating"), gt(feedback_score, 4))'})`
`client.listRuns({projectName: "<your_project>", filter: 'and(eq(feedback_key, "star_rating"), gt(feedback_score, 4))'})`
),
]}
groupId="client-language"
@@ -186,8 +192,8 @@ The following examples assume you have configured your environment appropriately

<CodeTabs
tabs={[
PythonBlock(`client.list_runs(filter='gt(latency, "5s")')`),
TypeScriptBlock(`client.listRuns({filter: 'gt(latency, "5s")'})`),
PythonBlock(`client.list_runs(project_name="<your_project>", filter='gt(latency, "5s")')`),
TypeScriptBlock(`client.listRuns({projectName: "<your_project>", filter: 'gt(latency, "5s")'})`),
]}
groupId="client-language"
/>
@@ -196,8 +202,8 @@ The following examples assume you have configured your environment appropriately

<CodeTabs
tabs={[
PythonBlock(`client.list_runs(filter='gt(total_tokens, 5000)')`),
TypeScriptBlock(`client.listRuns({filter: 'gt(total_tokens, 5000)'})`),
PythonBlock(`client.list_runs(project_name="<your_project>", filter='gt(total_tokens, 5000)')`),
TypeScriptBlock(`client.listRuns({projectName: "<your_project>", filter: 'gt(total_tokens, 5000)'})`),
]}
groupId="client-language"
/>
@@ -206,8 +212,8 @@ The following examples assume you have configured your environment appropriately

<CodeTabs
tabs={[
PythonBlock(`client.list_runs(filter='neq(error, null)')`),
TypeScriptBlock(`client.listRuns({filter: 'neq(error, null)'})`),
PythonBlock(`client.list_runs(project_name="<your_project>", filter='neq(error, null)')`),
TypeScriptBlock(`client.listRuns({projectName: "<your_project>", filter: 'neq(error, null)'})`),
]}
groupId="client-language"
/>
@@ -216,8 +222,8 @@ The following examples assume you have configured your environment appropriately

<CodeTabs
tabs={[
PythonBlock(`client.list_runs(filter='lt(execution_order, 10)')`),
TypeScriptBlock(`client.listRuns({filter: 'lt(execution_order, 10)'})`),
PythonBlock(`client.list_runs(project_name="<your_project>", filter='lt(execution_order, 10)')`),
TypeScriptBlock(`client.listRuns({projectName: "<your_project>", filter: 'lt(execution_order, 10)'})`),
]}
groupId="client-language"
/>
@@ -227,10 +233,10 @@ The following examples assume you have configured your environment appropriately
<CodeTabs
tabs={[
PythonBlock(
`client.list_runs(filter='gt(start_time, "2023-07-15T12:34:56Z")')`
`client.list_runs(project_name="<your_project>", filter='gt(start_time, "2023-07-15T12:34:56Z")')`
),
TypeScriptBlock(
`client.listRuns({filter: 'gt(start_time, "2023-07-15T12:34:56Z")'})`
`client.listRuns({projectName: "<your_project>", filter: 'gt(start_time, "2023-07-15T12:34:56Z")'})`
),
]}
groupId="client-language"
@@ -240,8 +246,8 @@ The following examples assume you have configured your environment appropriately

<CodeTabs
tabs={[
PythonBlock(`client.list_runs(filter='search("substring")')`),
TypeScriptBlock(`client.listRuns({filter: 'search("substring")'})`),
PythonBlock(`client.list_runs(project_name="<your_project>", filter='search("substring")')`),
TypeScriptBlock(`client.listRuns({projectName: "<your_project>", filter: 'search("substring")'})`),
]}
groupId="client-language"
/>
@@ -250,8 +256,8 @@ The following examples assume you have configured your environment appropriately

<CodeTabs
tabs={[
PythonBlock(`client.list_runs(filter='has(tags, "2aa1cf4")')`),
TypeScriptBlock(`client.listRuns({filter: 'has(tags, "2aa1cf4")'})`),
PythonBlock(`client.list_runs(project_name="<your_project>", filter='has(tags, "2aa1cf4")')`),
TypeScriptBlock(`client.listRuns({projectName: "<your_project>", filter: 'has(tags, "2aa1cf4")'})`),
]}
groupId="client-language"
/>
@@ -261,9 +267,11 @@ The following examples assume you have configured your environment appropriately
<CodeTabs
tabs={[
PythonBlock(`client.list_runs(
project_name="<your_project>",
filter='and(eq(run_type, "chain"), gt(latency, 10), gt(total_tokens, 5000))'
)`),
TypeScriptBlock(`client.listRuns({
projectName: "<your_project>",
filter: 'and(eq(run_type, "chain"), gt(latency, 10), gt(total_tokens, 5000))'
})`),
]}
@@ -276,11 +284,13 @@ The following examples assume you have configured your environment appropriately
tabs={[
PythonBlock(
`client.list_runs(
project_name="<your_project>",
filter='and(gt(start_time, "2023-07-15T12:34:56Z"), or(neq(error, null), and(eq(feedback_key, "Correctness"), eq(feedback_score, 0.0))))'
)`
),
TypeScriptBlock(
`client.listRuns({
projectName: "<your_project>",
filter: 'and(gt(start_time, "2023-07-15T12:34:56Z"), or(neq(error, null), and(eq(feedback_key, "Correctness"), eq(feedback_score, 0.0))))'
})`
),
@@ -294,14 +304,47 @@ The following examples assume you have configured your environment appropriately
tabs={[
PythonBlock(
`client.list_runs(
project_name="<your_project>",
filter='and(or(has(tags, "experimental"), has(tags, "beta")), gt(latency, 2))'
)`
),
TypeScriptBlock(
`client.listRuns({
projectName: "<your_project>",
filter: 'and(or(has(tags, "experimental"), has(tags, "beta")), gt(latency, 2))'
})`
),
]}
groupId="client-language"
/>


#### Check for presence of metadata

If you want to check for the presence of metadata, you can use the `has` operator. This is useful if you want to log more structured information
about your runs.

<CodeTabs
tabs={[
PythonBlock(`import json\n
to_search = {
    "user": {
        "id": "4070f233-f61e-44eb-bff1-da3c163895a3"
    }
}\n
client.list_runs(
project_name="default",
filter=f"has(metadata, '{json.dumps(to_search)}')"
)`),
TypeScriptBlock(`const toSearch = {
user: {
id: '4070f233-f61e-44eb-bff1-da3c163895a3'
}
}\n
client.listRuns({
projectName: 'default',
filter: \`has(metadata, '\${JSON.stringify(toSearch)}')\`
});`),
]}
groupId="client-language"
/>
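As a sanity check, the filter string produced by this `json.dumps` pattern can be inspected locally before querying, with no server connection needed:

```python
import json

# Build the metadata filter string locally to verify the embedded JSON.
to_search = {"user": {"id": "4070f233-f61e-44eb-bff1-da3c163895a3"}}
filter_str = f"has(metadata, '{json.dumps(to_search)}')"
print(filter_str)
# has(metadata, '{"user": {"id": "4070f233-f61e-44eb-bff1-da3c163895a3"}}')
```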
87 changes: 87 additions & 0 deletions src/components/TracingFaq.js
@@ -0,0 +1,87 @@
import React from "react";
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import CodeBlock from "@theme/CodeBlock";

export const AccessRunIdBlock = ({}) => {
const callbackPythonBlock = `from langchain import chat_models, prompts, callbacks
chain = (
    prompts.ChatPromptTemplate.from_template("Say hi to {name}")
    | chat_models.ChatOpenAI()
)
with callbacks.collect_runs() as cb:
    result = chain.invoke({"name": "Clara"})
    run_id = cb.traced_runs[0].id
    print(run_id)
`;

const alternativePythonBlock = `from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain\n
chain = LLMChain.from_string(ChatOpenAI(), "Say hi to {name}")
response = chain("Clara", include_run_info=True)
run_id = response["__run"].run_id
print(run_id)`;

const chatModelPythonBlock = `from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
chat_model = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a cat"),
        ("human", "Hi"),
    ]
)
res = chat_model.generate(messages=[prompt.format_messages()])
res.run[0].run_id`;

const llmModelPythonBlock = `from langchain.llms import OpenAI
openai = OpenAI()
res = openai.generate(["You are a good bot"])
print(res.run[0].run_id)`;
return (
<Tabs groupId="client-language">
<TabItem key="python" value="python" label="Python">
<CodeBlock className="python" language="python">
{callbackPythonBlock}
</CodeBlock>
<p>
For older versions of LangChain ({`<`}0.0.276), you can instruct the
chain to return the run ID by specifying the `include_run_info=True`
parameter to the call function:
</p>
<CodeBlock className="python" language="python">
{alternativePythonBlock}
</CodeBlock>
<p>
For python LLMs/chat models, the run information is returned
automatically when calling the `generate()` method. Example:
</p>
<CodeBlock className="python" language="python">
{chatModelPythonBlock}
</CodeBlock>
<p>or for LLMs:</p>
<CodeBlock className="python" language="python">
{llmModelPythonBlock}
</CodeBlock>
</TabItem>
<TabItem key="typescript" value="typescript" label="TypeScript">
<CodeBlock className="typescript" language="typescript">
{`import { ChatOpenAI } from "langchain/chat_models/openai";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";\n
const prompt = PromptTemplate.fromTemplate("Say hi to {name}");
const chain = new LLMChain({
  llm: new ChatOpenAI(),
  prompt: prompt,
});\n
const response = await chain.invoke({ name: "Clara" });
console.log(response.__run);`}
</CodeBlock>
</TabItem>
</Tabs>
);
};
