
Commit 26865fc

change h5 to h4 (#1141)
## Overview

- Remove usage of H5 in guides. Only <H4 generates anchor links

## Type of change

**Type:** Fix

## Checklist

<!-- Put an 'x' in all boxes that apply -->

- [ ] I have read the [contributing guidelines](README.md)
- [ ] I have tested my changes locally using `docs dev`
- [ ] All code examples have been tested and work correctly
- [ ] I have used **root relative** paths for internal links
- [ ] I have updated navigation in `src/docs.json` if needed
- I have gotten approval from the relevant reviewers
- (Internal team members only / optional) I have created a preview deployment using the [Create Preview Branch workflow](https://github.com/langchain-ai/docs/actions/workflows/create-preview-branch.yml)

## Additional notes

<!-- Any other information that would be helpful for reviewers -->
1 parent 1da5eae commit 26865fc

6 files changed: +30 -34 lines changed

src/langsmith/administration-overview.mdx

Lines changed: 5 additions & 7 deletions
@@ -255,9 +255,7 @@ LangSmith has rate limits which are designed to ensure the stability of the serv

To ensure access and stability, LangSmith will respond with HTTP Status Code 429 indicating that rate or usage limits have been exceeded under the following circumstances:

-#### Scenarios
-
-###### Temporary throughput limit over a 1 minute period at our application load balancer
+#### Temporary throughput limit over a 1 minute period at our application load balancer

This 429 is the result of exceeding a fixed number of API calls over a 1 minute window on a per API key/access token basis. The start of the window will vary slightly — it is not guaranteed to start at the start of a clock minute — and may change depending on application deployment events.

@@ -276,7 +274,7 @@ This 429 is thrown by our application load balancer and is a mechanism in place
The LangSmith SDK takes steps to minimize the likelihood of reaching these limits on run-related endpoints by batching up to 100 runs from a single session ID into a single API call.
</Note>

-###### Plan-level hourly trace event limit
+#### Plan-level hourly trace event limit

This 429 is the result of reaching your maximum hourly events ingested and is evaluated in a fixed window starting at the beginning of each clock hour in UTC and resets at the top of each new hour.

@@ -291,7 +289,7 @@ This is thrown by our application and varies by plan tier, with organizations on
| Startup/Plus | 500,000 events | 1 hour |
| Enterprise | Custom | Custom |

-###### Plan-level hourly trace data ingest limit
+#### Plan-level hourly trace data ingest limit

This 429 is the result of reaching the maximum amount of data ingested across your trace inputs, outputs, and metadata and is evaluated in a fixed window starting at the beginning of each clock hour in UTC and resets at the top of each new hour.

@@ -306,7 +304,7 @@ This is thrown by our application and varies by plan tier, with organizations on
| Startup/Plus | 5.0GB | 1 hour |
| Enterprise | Custom | Custom |

-###### Plan-level monthly unique traces limit
+#### Plan-level monthly unique traces limit

This 429 is the result of reaching your maximum monthly traces ingested and is evaluated in a fixed window starting at the beginning of each calendar month in UTC and resets at the beginning of each new month.

@@ -316,7 +314,7 @@ This is thrown by our application and applies only to the Developer Plan Tier wh
| ------------------------------ | ------------ | ------- |
| Developer (no payment on file) | 5,000 traces | 1 month |

-###### Self-configured monthly usage limits
+#### Self-configured monthly usage limits

This 429 is the result of reaching your usage limit as configured by your organization admin and is evaluated in a fixed window starting at the beginning of each calendar month in UTC and resets at the beginning of each new month.
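
For context on the 429 scenarios covered by this guide (this sketch is not part of the commit): a minimal client-side retry with exponential backoff. The endpoint URL, header name, and the presence of a `Retry-After` hint are assumptions for illustration; the LangSmith SDK already batches runs to reduce the chance of hitting these limits.

```python
import time
import requests

def post_with_backoff(url: str, payload: dict, api_key: str, max_retries: int = 5) -> requests.Response:
    """POST to an API, backing off whenever a 429 rate-limit response is returned."""
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.post(url, json=payload, headers={"x-api-key": api_key})  # header name is illustrative
        if resp.status_code != 429:
            return resp
        # Honor a Retry-After hint if the server sends one; otherwise back off exponentially.
        retry_after = resp.headers.get("Retry-After")
        time.sleep(float(retry_after) if retry_after else delay)
        delay *= 2
    return resp
```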

src/langsmith/evaluation-concepts.mdx

Lines changed: 6 additions & 6 deletions
@@ -129,31 +129,31 @@ Typically, we will run multiple experiments on a given dataset, testing differen
![Comparison view](/langsmith/images/comparison-view.png)


-### Experiment configuration
+## Experiment configuration

LangSmith supports a number of experiment configurations which make it easier to run your evals in the manner you want.

-#### Repetitions
+### Repetitions

Running an experiment multiple times can be helpful since LLM outputs are not deterministic and can differ from one repetition to the next. By running multiple repetitions, you can get a more accurate estimate of the performance of your system.

Repetitions can be configured by passing the `num_repetitions` argument to `evaluate` / `aevaluate` ([Python](https://docs.smith.langchain.com/reference/python/evaluation/langsmith.evaluation._runner.evaluate), [TypeScript](https://docs.smith.langchain.com/reference/js/interfaces/evaluation.EvaluateOptions#numrepetitions)). Repeating the experiment involves both re-running the target function to generate outputs and re-running the evaluators.

To learn more about running repetitions on experiments, read the [how-to-guide](/langsmith/repetition).

-#### Concurrency
+### Concurrency

By passing the `max_concurrency` argument to `evaluate` / `aevaluate`, you can specify the concurrency of your experiment. The `max_concurrency` argument has slightly different semantics depending on whether you are using `evaluate` or `aevaluate`.

-##### `evaluate`
+#### `evaluate`

The `max_concurrency` argument to `evaluate` specifies the maximum number of concurrent threads to use when running the experiment. This is both for when running your target function as well as your evaluators.

-##### `aevaluate`
+#### `aevaluate`

The `max_concurrency` argument to `aevaluate` is fairly similar to `evaluate`, but instead uses a semaphore to limit the number of concurrent tasks that can run at once. `aevaluate` works by creating a task for each example in the dataset. Each task consists of running the target function as well as all of the evaluators on that specific example. The `max_concurrency` argument specifies the maximum number of concurrent tasks, or put another way - examples, to run at once.

-#### Caching
+### Caching

Lastly, you can also cache the API calls made in your experiment by setting the `LANGSMITH_TEST_CACHE` to a valid folder on your device with write access. This will cause the API calls made in your experiment to be cached to disk, meaning future experiments that make the same API calls will be greatly sped up.
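
The configuration options touched by this diff can be combined in a single call. The following sketch is not part of the commit: the dataset name, target function, and evaluator are hypothetical, while `num_repetitions`, `max_concurrency`, and `LANGSMITH_TEST_CACHE` are the documented options referenced above.

```python
import os

from langsmith.evaluation import evaluate  # import path per the linked Python reference

# Optional: cache API calls made during the experiment (folder path is illustrative).
os.environ["LANGSMITH_TEST_CACHE"] = "tests/cassettes"

def target(inputs: dict) -> dict:
    # Hypothetical system under test; replace with your chain or agent.
    return {"answer": inputs["question"].strip().lower()}

def exact_match(run, example) -> dict:
    # Compares the target's output against the dataset's reference output.
    return {"key": "exact_match", "score": run.outputs["answer"] == example.outputs["answer"]}

results = evaluate(
    target,
    data="my-dataset",        # hypothetical dataset name
    evaluators=[exact_match],
    num_repetitions=3,        # re-run the target and evaluators three times per example
    max_concurrency=4,        # thread count for evaluate; task semaphore for aevaluate
)
```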

src/langsmith/faq.mdx

Lines changed: 1 addition & 1 deletion
@@ -87,7 +87,7 @@ The user will be deprovisioned from your LangSmith organization according to you

Yes. If your identity provider supports syncing alternate fields to the `displayName` group attribute, you may use an alternate attribute (like `description`) as the `displayName` in LangSmith and retain full customizability of the identity provider group name. Otherwise, groups must follow the specific naming convention described in the [Group Naming Convention](#group-naming-convention) section to properly map to LangSmith roles and workspaces.

-##### _Why is my Okta integration not working?_
+#### _Why is my Okta integration not working?_

See Okta's troubleshooting guide here: https://help.okta.com/en-us/content/topics/users-groups-profiles/usgp-group-push-troubleshoot.htm.

src/langsmith/observability-studio.mdx

Lines changed: 3 additions & 3 deletions
@@ -21,11 +21,11 @@ Studio supports the following methods for modifying prompts in your graph:

Studio allows you to edit prompts used inside individual nodes, directly from the graph interface.

-#### Graph Configuration
+### Graph Configuration

Define your [configuration](/oss/langgraph/use-graph-api#add-runtime-configuration) to specify prompt fields and their associated nodes using `langgraph_nodes` and `langgraph_type` keys.

-##### `langgraph_nodes`
+#### `langgraph_nodes`

- **Description**: Specifies which nodes of the graph a configuration field is associated with.
- **Value Type**: Array of strings, where each string is the name of a node in your graph.
@@ -38,7 +38,7 @@ Define your [configuration](/oss/langgraph/use-graph-api#add-runtime-configurati
)
```

-##### `langgraph_type`
+#### `langgraph_type`

- **Description**: Specifies the type of configuration field, which determines how it's handled in the UI.
- **Value Type**: String
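
To ground the two keys described in this file (this example is not part of the commit), here is a sketch of attaching `langgraph_nodes` and `langgraph_type` metadata to a configuration field. The Pydantic model, node name, and default prompt are assumptions for illustration; see the linked runtime-configuration guide for the exact pattern.

```python
from pydantic import BaseModel, Field

class Configuration(BaseModel):
    """Runtime configuration that Studio can surface in the graph interface."""

    system_prompt: str = Field(
        default="You are a helpful assistant.",  # hypothetical default prompt
        json_schema_extra={
            "langgraph_nodes": ["call_model"],   # hypothetical node name in the graph
            "langgraph_type": "prompt",          # tells the UI to treat this field as an editable prompt
        },
    )
```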

src/oss/concepts/memory.mdx

Lines changed: 9 additions & 9 deletions
@@ -36,11 +36,9 @@ For more information on common techniques for managing messages, see the [Add an

Long-term memory is a complex challenge without a one-size-fits-all solution. However, the following questions provide a framework to help you navigate the different techniques:

-* [What is the type of memory?](#memory-types) Humans use memories to remember facts ([semantic memory](#semantic-memory)), experiences ([episodic memory](#episodic-memory)), and rules ([procedural memory](#procedural-memory)). AI agents can use memory in the same ways. For example, AI agents can use memory to remember specific facts about a user to accomplish a task.
+* What is the type of memory? Humans use memories to remember facts ([semantic memory](#semantic-memory)), experiences ([episodic memory](#episodic-memory)), and rules ([procedural memory](#procedural-memory)). AI agents can use memory in the same ways. For example, AI agents can use memory to remember specific facts about a user to accomplish a task.
* [When do you want to update memories?](#writing-memories) Memory can be updated as part of an agent's application logic (e.g., "on the hot path"). In this case, the agent typically decides to remember facts before responding to a user. Alternatively, memory can be updated as a background task (logic that runs in the background / asynchronously and generates memories). We explain the tradeoffs between these approaches in the [section below](#writing-memories).

-### Memory types
-
Different applications require various types of memory. Although the analogy isn't perfect, examining [human memory types](https://www.psychologytoday.com/us/basics/memory/types-of-memory?ref=blog.langchain.dev) can be insightful. Some research (e.g., the [CoALA paper](https://arxiv.org/pdf/2309.02427)) has even mapped these human memory types to those used in AI agents.

| Memory Type | What is Stored | Human Example | Agent Example |
@@ -49,23 +49,25 @@ Different applications require various types of memory. Although the analogy isn
| [Episodic](#episodic-memory) | Experiences | Things I did | Past agent actions |
| [Procedural](#procedural-memory) | Instructions | Instincts or motor skills | Agent system prompt |

-#### Semantic memory
+### Semantic memory

[Semantic memory](https://en.wikipedia.org/wiki/Semantic_memory), both in humans and AI agents, involves the retention of specific facts and concepts. In humans, it can include information learned in school and the understanding of concepts and their relationships. For AI agents, semantic memory is often used to personalize applications by remembering facts or concepts from past interactions.

<Note>
Semantic memory is different from "semantic search," which is a technique for finding similar content using "meaning" (usually as embeddings). Semantic memory is a term from psychology, referring to storing facts and knowledge, while semantic search is a method for retrieving information based on meaning rather than exact matches.
</Note>

-##### Profile
+Semantic memories can be managed in different ways:
+
+#### Profile

-Semantic memories can be managed in different ways. For example, memories can be a single, continuously updated "profile" of well-scoped and specific information about a user, organization, or other entity (including the agent itself). A profile is generally just a JSON document with various key-value pairs you've selected to represent your domain.
+Memories can be a single, continuously updated "profile" of well-scoped and specific information about a user, organization, or other entity (including the agent itself). A profile is generally just a JSON document with various key-value pairs you've selected to represent your domain.

When remembering a profile, you will want to make sure that you are **updating** the profile each time. As a result, you will want to pass in the previous profile and [ask the model to generate a new profile](https://github.com/langchain-ai/memory-template) (or some [JSON patch](https://github.com/hinthornw/trustcall) to apply to the old profile). This can become error-prone as the profile gets larger, and may benefit from splitting a profile into multiple documents or **strict** decoding when generating documents to ensure the memory schemas remain valid.

![](/oss/images/update-profile.png)

-##### Collection
+#### Collection

Alternatively, memories can be a collection of documents that are continuously updated and extended over time. Each individual memory can be more narrowly scoped and easier to generate, which means that you're less likely to **lose** information over time. It's easier for an LLM to generate _new_ objects for new information than reconcile new information with an existing profile. As a result, a document collection tends to lead to [higher recall downstream](https://en.wikipedia.org/wiki/Precision_and_recall).

@@ -79,7 +79,7 @@ Finally, using a collection of memories can make it challenging to provide compr

Regardless of memory management approach, the central point is that the agent will use the semantic memories to [ground its responses](/oss/langchain/retrieval), which often leads to more personalized and relevant interactions.

-#### Episodic memory
+### Episodic memory

[Episodic memory](https://en.wikipedia.org/wiki/Episodic_memory), in both humans and AI agents, involves recalling past events or actions. The [CoALA paper](https://arxiv.org/pdf/2309.02427) frames this well: facts can be written to semantic memory, whereas *experiences* can be written to episodic memory. For AI agents, episodic memory is often used to help an agent remember how to accomplish a task.

@@ -103,7 +103,7 @@ Note that the memory [store](/oss/langgraph/persistence#memory-store) is just on
See this how-to [video](https://www.youtube.com/watch?v=37VaU7e7t5o) for example usage of dynamic few-shot example selection in LangSmith. Also, see this [blog post](https://blog.langchain.dev/few-shot-prompting-to-improve-tool-calling-performance/) showcasing few-shot prompting to improve tool calling performance and this [blog post](https://blog.langchain.dev/aligning-llm-as-a-judge-with-human-preferences/) using few-shot examples to align an LLM to human preferences.
:::

-#### Procedural memory
+### Procedural memory

[Procedural memory](https://en.wikipedia.org/wiki/Procedural_memory), in both humans and AI agents, involves remembering the rules used to perform tasks. In humans, procedural memory is like the internalized knowledge of how to perform tasks, such as riding a bike via basic motor skills and balance. Episodic memory, on the other hand, involves recalling specific experiences, such as the first time you successfully rode a bike without training wheels or a memorable bike ride through a scenic route. For AI agents, procedural memory is a combination of model weights, agent code, and agent's prompt that collectively determine the agent's functionality.
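
As a concrete companion to the profile-versus-collection discussion in this file (not part of the diff), here is a minimal sketch of keeping a semantic-memory profile in LangGraph's in-memory store. The namespace, keys, and profile fields are hypothetical; in practice the merge step is usually delegated to a model or a JSON-patch tool such as trustcall.

```python
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()
namespace = ("memories", "user-123")  # hypothetical user id

# A "profile" is a single, continuously updated JSON document.
store.put(namespace, "profile", {"name": "Ada", "language": "en", "interests": ["cycling"]})

# On a later turn: read the existing profile, fold in new facts, and write the whole document back.
current = store.get(namespace, "profile").value
current["interests"].append("rock climbing")
store.put(namespace, "profile", current)
```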
