docs[minor]: Fix broken link used in quickstart #4422

Merged: 5 commits, Feb 16, 2024
Changes from all commits
docs/core_docs/docs/get_started/quickstart.mdx (2 changes: 1 addition & 1 deletion)
@@ -236,7 +236,7 @@ Then, use it like this:
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader(
"https://docs.smith.langchain.com/overview"
"https://docs.smith.langchain.com/user_guide"
);

const docs = await loader.load();
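For context, here is roughly how the corrected snippet fits into the quickstart's retrieval setup. This is a minimal sketch rather than the exact quickstart code: it assumes the `@langchain/openai` package, an `OPENAI_API_KEY` in the environment, and illustrative chunking parameters.

```typescript
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

// Load the LangSmith user guide page (the corrected URL from this PR).
const loader = new CheerioWebBaseLoader(
  "https://docs.smith.langchain.com/user_guide"
);
const rawDocs = await loader.load();

// Split the page into overlapping chunks sized for embedding.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const docs = await splitter.splitDocuments(rawDocs);

// Embed the chunks into an in-memory vector store and query it.
const vectorstore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);
const retriever = vectorstore.asRetriever();
console.log(await retriever.invoke("how can langsmith help with testing?"));
```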
docs/core_docs/docs/modules/agents/quick_start.mdx (26 changes: 15 additions & 11 deletions)
@@ -55,7 +55,7 @@ import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const loader = new CheerioWebBaseLoader(
"https://docs.smith.langchain.com/overview"
"https://docs.smith.langchain.com/user_guide"
);
const rawDocs = await loader.load();

@@ -78,9 +78,9 @@ console.log(retrieverResult[0]);

/*
Document {
pageContent: "dataset uploading.Once we have a dataset, how can we use it to test changes to a prompt or chain? The most basic approach is to run the chain over the data points and visualize the outputs. Despite technological advancements, there still is no substitute for looking at outputs by eye. Currently, running the chain over the data points needs to be done client-side. The LangSmith client makes it easy to pull down a dataset and then run a chain over them, logging the results to a new project associated with the dataset. From there, you can review them. We've made it easy to assign feedback to runs and mark them as correct or incorrect directly in the web app, displaying aggregate statistics for each test project.We also make it easier to evaluate these runs. To that end, we've added a set of evaluators to the open-source LangChain library. These evaluators can be specified when initiating a test run and will evaluate the results once the test run completes. If we’re being honest, most of",
pageContent: "your application progresses through the beta testing phase, it's essential to continue collecting data to refine and improve its performance. LangSmith enables you to add runs as examples to datasets (from both the project page and within an annotation queue), expanding your test coverage on real-world scenarios. This is a key benefit in having your logging system and your evaluation/testing system in the same platform.Production​Closely inspecting key data points, growing benchmarking datasets, annotating traces, and drilling down into important data in trace view are workflows you’ll also want to do once your app hits production. However, especially at the production stage, it’s crucial to get a high-level overview of application performance with respect to latency, cost, and feedback scores. This ensures that it's delivering desirable results at scale.Monitoring and A/B Testing​LangSmith provides monitoring charts that allow you to track key metrics over time. You can expand to",
metadata: {
-   source: 'https://docs.smith.langchain.com/overview',
+   source: 'https://docs.smith.langchain.com/user_guide',
loc: { lines: [Object] }
}
}
@@ -232,7 +232,7 @@ console.log(result2);
{
"pageContent": "You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be assigned string tags or key-value metadata, allowing you to attach correlation ids or AB test variants, and filter runs accordingly.We’ve also made it possible to associate feedback programmatically with runs. This means that if your application has a thumbs up/down button on it, you can use that to log feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the",
"metadata": {
"source": "https://docs.smith.langchain.com/overview",
"source": "https://docs.smith.langchain.com/user_guide",
"loc": {
"lines": {
"from": 11,
@@ -244,7 +244,7 @@ console.log(result2);
{
"pageContent": "the time that we do… it’s so helpful. We can use LangSmith to debug:An unexpected end resultWhy an agent is loopingWhy a chain was slower than expectedHow many tokens an agent usedDebugging​Debugging LLMs, chains, and agents can be tough. LangSmith helps solve the following pain points:What was the exact input to the LLM?​LLM calls are often tricky and non-deterministic. The inputs/outputs may seem straightforward, given they are technically string → string (or chat messages → chat message), but this can be misleading as the input string is usually constructed from a combination of user input and auxiliary functions.Most inputs to an LLM call are a combination of some type of fixed template along with input variables. These input variables could come directly from user input or from an auxiliary function (like retrieval). By the time these input variables go into the LLM they will have been converted to a string format, but often times they are not naturally represented as a string",
"metadata": {
"source": "https://docs.smith.langchain.com/overview",
"source": "https://docs.smith.langchain.com/user_guide",
"loc": {
"lines": {
"from": 3,
@@ -256,7 +256,7 @@ console.log(result2);
{
"pageContent": "inputs, and see what happens. At some point though, our application is performing\nwell and we want to be more rigorous about testing changes. We can use a dataset\nthat we’ve constructed along the way (see above). Alternatively, we could spend some\ntime constructing a small dataset by hand. For these situations, LangSmith simplifies",
"metadata": {
"source": "https://docs.smith.langchain.com/overview",
"source": "https://docs.smith.langchain.com/user_guide",
"loc": {
"lines": {
"from": 4,
@@ -268,7 +268,7 @@ console.log(result2);
{
"pageContent": "feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the debug mode approach.We’ve provided several examples in the LangSmith documentation for extracting insights from logged runs. In addition to guiding you on performing this task yourself, we also provide examples of integrating with third parties for this purpose. We're eager to expand this area in the coming months! If you have ideas for either -- an open-source way to evaluate, or are building a company that wants to do analytics over these runs, please reach out.Exporting datasets​LangSmith makes it easy to curate datasets. However, these aren’t just useful inside LangSmith; they can be exported for use in other contexts. Notable applications include exporting for use in OpenAI Evals or fine-tuning, such as with FireworksAI.To set up tracing in Deno, web browsers, or other runtime",
"metadata": {
"source": "https://docs.smith.langchain.com/overview",
"source": "https://docs.smith.langchain.com/user_guide",
"loc": {
"lines": {
"from": 11,
@@ -322,13 +322,17 @@ console.log(result2);
input: 'how can langsmith help with testing?',
output: 'LangSmith can help with testing in several ways:\n' +
'\n' +
-   '1. Debugging: LangSmith can be used to debug unexpected end results, agent loops, slow chains, and token usage. It helps in pinpointing underperforming data points and tracking performance over time.\n' +
+   '1. Initial Test Set: LangSmith allows developers to create datasets of inputs and reference outputs to run tests on their LLM applications. These test cases can be uploaded in bulk, created on the fly, or exported from application traces.\n' +
'\n' +
-   '2. Monitoring: LangSmith can monitor applications by logging all traces, visualizing latency and token usage statistics, and troubleshooting specific issues as they arise. It also allows for associating feedback programmatically with runs, which can be used to track performance over time.\n' +
+   "2. Comparison View: When making changes to your applications, LangSmith provides a comparison view to see whether you've regressed with respect to your initial test cases. This is helpful for evaluating changes in prompts, retrieval strategies, or model choices.\n" +
'\n' +
-   '3. Exporting Datasets: LangSmith makes it easy to curate datasets, which can be exported for use in other contexts such as OpenAI Evals or fine-tuning with FireworksAI.\n' +
+   '3. Monitoring and A/B Testing: LangSmith provides monitoring charts to track key metrics over time and allows for A/B testing changes in prompt, model, or retrieval strategy.\n' +
'\n' +
-   'Overall, LangSmith simplifies the process of testing changes, constructing datasets, and extracting insights from logged runs, making it a valuable tool for testing and evaluation.'
+   '4. Debugging: LangSmith offers tracing and debugging information at each step of an LLM sequence, making it easier to identify and root-cause issues when things go wrong.\n' +
+   '\n' +
+   '5. Beta Testing and Production: LangSmith enables the addition of runs as examples to datasets, expanding test coverage on real-world scenarios. It also provides monitoring for application performance with respect to latency, cost, and feedback scores at the production stage.\n' +
+   '\n' +
+   'Overall, LangSmith provides comprehensive testing and monitoring capabilities for LLM applications.'
}
*/
```
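For orientation, the transcript above comes from an agent that consumes the retriever through a tool wrapper. A minimal sketch, assuming the `createRetrieverTool` helper from `langchain/tools/retriever` and the `retriever` built in the hunk at the top of this file:

```typescript
import { createRetrieverTool } from "langchain/tools/retriever";

// Wrap the LangSmith-docs retriever as a tool the agent may call.
// The description is what the model reads when deciding whether to invoke it.
const retrieverTool = createRetrieverTool(retriever, {
  name: "langsmith_search",
  description:
    "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
});
```

The resulting tool is then passed, along with any others, into the agent constructor and its `AgentExecutor`.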
docs/core_docs/docs/use_cases/chatbots/quickstart.mdx (28 changes: 14 additions & 14 deletions)
@@ -210,7 +210,7 @@ While this chain can serve as a useful chatbot on its own with just the model’s

We can set up and use a [`Retriever`](/docs/modules/data_connection/retrievers/) to pull domain-specific knowledge for our chatbot. To show this, let’s expand the simple chatbot we created above to be able to answer questions about LangSmith.

- We’ll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store it in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/docs/use_cases/question_answering/).
+ We’ll use [the LangSmith documentation](https://docs.smith.langchain.com/user_guide) as source material and store it in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/docs/use_cases/question_answering/).

Let’s set up our retriever. First, we’ll install some required deps:

Expand All @@ -224,7 +224,7 @@ Next, we’ll use a document loader to pull data from a webpage:
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader(
"https://docs.smith.langchain.com/overview"
"https://docs.smith.langchain.com/user_guide"
);

const rawDocs = await loader.load();
@@ -270,7 +270,7 @@ console.log(docs);
Document {
pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be',
metadata: {
-   source: 'https://docs.smith.langchain.com/overview',
+   source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
@@ -280,21 +280,21 @@ console.log(docs);
'that we’ve constructed along the way (see above). Alternatively, we could spend some\n' +
'time constructing a small dataset by hand. For these situations, LangSmith simplifies',
metadata: {
-   source: 'https://docs.smith.langchain.com/overview',
+   source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
Document {
pageContent: 'chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain have been building and using LangSmith with the goal of bridging this gap. This is our tactical user guide to outline effective ways to use LangSmith and maximize its benefits.On by default​At LangChain, all of us have LangSmith’s tracing running in the background by default. On the Python side, this is achieved by setting environment variables, which we establish',
metadata: {
-   source: 'https://docs.smith.langchain.com/overview',
+   source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
Document {
pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a',
metadata: {
-   source: 'https://docs.smith.langchain.com/overview',
+   source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
}
@@ -486,7 +486,7 @@ await retriever.invoke("how can langsmith help with testing?");
Document {
pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be',
metadata: {
-   source: 'https://docs.smith.langchain.com/overview',
+   source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
Expand All @@ -496,21 +496,21 @@ await retriever.invoke("how can langsmith help with testing?");
'that we’ve constructed along the way (see above). Alternatively, we could spend some\n' +
'time constructing a small dataset by hand. For these situations, LangSmith simplifies',
metadata: {
-   source: 'https://docs.smith.langchain.com/overview',
+   source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
Document {
pageContent: 'chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain have been building and using LangSmith with the goal of bridging this gap. This is our tactical user guide to outline effective ways to use LangSmith and maximize its benefits.On by default​At LangChain, all of us have LangSmith’s tracing running in the background by default. On the Python side, this is achieved by setting environment variables, which we establish',
metadata: {
-   source: 'https://docs.smith.langchain.com/overview',
+   source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
Document {
pageContent: 'has helped us debug bugs in formatting logic, unexpected transformations to user input, and straight up missing user input.To a much lesser extent, this is also true of the output of an LLM. Oftentimes the output of an LLM is technically a string but that string may contain some structure (json, yaml) that is intended to be parsed into a structured representation. Understanding what the exact output is can help determine if there may be a need for different parsing.LangSmith provides a',
metadata: {
-   source: 'https://docs.smith.langchain.com/overview',
+   source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
}
@@ -526,28 +526,28 @@ await retriever.invoke("tell me more about that!");
Document {
pageContent: 'shadowRing,',
metadata: {
-   source: 'https://docs.smith.langchain.com/overview',
+   source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
Document {
pageContent: 'however, there is still no complete substitute for human review to get the utmost quality and reliability from your application.',
metadata: {
-   source: 'https://docs.smith.langchain.com/overview',
+   source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
Document {
pageContent: 'You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be',
metadata: {
-   source: 'https://docs.smith.langchain.com/overview',
+   source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
},
Document {
pageContent: 'whenever we launch a virtual environment or open our bash shell and leave them set. The same principle applies to most JavaScript environments through process.env1.The benefit here is that all calls to LLMs, chains, agents, tools, and retrievers are logged to LangSmith. Around 90% of the time we don’t even look at the traces, but the 10% of the time that we do… it’s so helpful. We can use LangSmith to debug:An unexpected end resultWhy an agent is loopingWhy a chain was slower than expectedHow',
metadata: {
-   source: 'https://docs.smith.langchain.com/overview',
+   source: 'https://docs.smith.langchain.com/user_guide',
loc: [Object]
}
}
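The followup above ("tell me more about that!") retrieves off-topic chunks because the bare retriever sees no conversation history. A sketch of the query-transformation step this quickstart builds toward: rephrase the followup into a standalone query before retrieving. This assumes the `retriever` from earlier in the file and an illustrative chat history; the prompt wording here is an assumption, not the quickstart's exact text.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { HumanMessage, AIMessage } from "@langchain/core/messages";

const llm = new ChatOpenAI({ modelName: "gpt-3.5-turbo" });

// Ask the model to rewrite the followup as a self-contained search query,
// using the prior turns for context.
const rephrasePrompt = ChatPromptTemplate.fromMessages([
  new MessagesPlaceholder("chat_history"),
  ["user", "{input}"],
  [
    "user",
    "Given the above conversation, generate a standalone search query to look up relevant information.",
  ],
]);

const rephraseChain = rephrasePrompt.pipe(llm).pipe(new StringOutputParser());

const standaloneQuery = await rephraseChain.invoke({
  chat_history: [
    new HumanMessage("how can langsmith help with testing?"),
    new AIMessage("LangSmith lets you create test datasets and compare runs."),
  ],
  input: "tell me more about that!",
});

// Retrieval over the rewritten query returns on-topic documents.
const followupDocs = await retriever.invoke(standaloneQuery);
console.log(followupDocs);
```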