
Commit 4127d3b

Merge branch 'master' into tim/StreamlitCallbackHandler
* master: (28 commits)
  [Feature][VectorStore] Support StarRocks as vector db (langchain-ai#6119)
  Relax string input mapper check (langchain-ai#6544)
  bump to ver 208 (langchain-ai#6540)
  Harrison/multi tool (langchain-ai#6518)
  Infino integration for simplified logs, metrics & search across LLM data & token usage (langchain-ai#6218)
  Update model token mappings/cost to include 0613 models (langchain-ai#6122)
  Fix issue with non-list `To` header in GmailSendMessage Tool (langchain-ai#6242)
  Integrate Rockset as Vectorstore (langchain-ai#6216)
  Feat: Add a prompt template parameter to qa with structure chains (langchain-ai#6495)
  Add async support for HuggingFaceTextGenInference (langchain-ai#6507)
  Be able to use Codey models on Vertex AI (langchain-ai#6354)
  Add KuzuQAChain (langchain-ai#6454)
  Update index.mdx (langchain-ai#6326)
  Export trajectory eval fn (langchain-ai#6509)
  typo(llamacpp.ipynb): 'condiser' -> 'consider' (langchain-ai#6474)
  Fix typo in docstring of format_tool_to_openai_function (langchain-ai#6479)
  Make streamlit import optional (langchain-ai#6510)
  Fixed: 'readible' -> readable (langchain-ai#6492)
  Documentation Fix: Correct the example code output in the prompt templates doc (langchain-ai#6496)
  Fix link (langchain-ai#6501)
  ...
2 parents d8bc750 + 57cc3d1 commit 4127d3b


112 files changed, +6290 −1301 lines changed


docs/docs_skeleton/docs/get_started/introduction.mdx

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ Learn best practices for developing with LangChain.
 LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/ecosystem/integrations/) and [dependent repos](/docs/ecosystem/dependents.html).
 
 ### [Additional resources](/docs/additional_resources/)
-Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/ecosystem/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).
+Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/additional_resources/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).
 
 <h3><span style={{color:"#2e8555"}}> Support </span></h3>
 

docs/docs_skeleton/docs/modules/chains/additional/question_answering.mdx

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Document QA
 
-Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our [Document chains](../document.html).
+Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our [Document chains](/docs/modules/chains/document/).
 
 import Example from "@snippets/modules/chains/additional/question_answering.mdx"
 

docs/docs_skeleton/docs/modules/data_connection/index.mdx

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ sidebar_position: 1
 # Data connection
 
 Many LLM applications require user-specific data that is not part of the model's training set. LangChain gives you the
-building blocks to load, transform, and query your data via:
+building blocks to load, transform, store and query your data via:
 
 - [Document loaders](/docs/modules/data_connection/document_loaders/): Load documents from many different sources
 - [Document transformers](/docs/modules/data_connection/document_transformers/): Split documents, drop redundant documents, and more

docs/docs_skeleton/docusaurus.config.js

Lines changed: 2 additions & 2 deletions
@@ -23,8 +23,8 @@ const config = {
   // For GitHub pages deployment, it is often '/<projectName>/'
   baseUrl: "/",
 
-  onBrokenLinks: "ignore",
-  onBrokenMarkdownLinks: "ignore",
+  onBrokenLinks: "warn",
+  onBrokenMarkdownLinks: "throw",
 
   plugins: [
     () => ({
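The change above tightens Docusaurus's link checking so that broken links are no longer silently ignored. A minimal sketch of the relevant part of `docusaurus.config.js` after this change (Docusaurus accepts `"ignore"`, `"warn"`, and `"throw"` for these options; `"warn"` logs broken links at build time, while `"throw"` fails the build):

```javascript
// Sketch of the link-checking portion of docusaurus.config.js.
const config = {
  baseUrl: "/",
  onBrokenLinks: "warn",          // report broken page links, keep building
  onBrokenMarkdownLinks: "throw", // fail the build on broken markdown links
};

module.exports = config;
```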
Lines changed: 28 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,28 @@
1+
# Alibaba Cloud Opensearch
2+
3+
[Alibaba Cloud Opensearch](https://www.alibabacloud.com/product/opensearch) OpenSearch is a one-stop platform to develop intelligent search services. OpenSearch was built based on the large-scale distributed search engine developed by Alibaba. OpenSearch serves more than 500 business cases in Alibaba Group and thousands of Alibaba Cloud customers. OpenSearch helps develop search services in different search scenarios, including e-commerce, O2O, multimedia, the content industry, communities and forums, and big data query in enterprises.
4+
5+
OpenSearch helps you develop high quality, maintenance-free, and high performance intelligent search services to provide your users with high search efficiency and accuracy.
6+
7+
OpenSearch provides the vector search feature. In specific scenarios, especially test question search and image search scenarios, you can use the vector search feature together with the multimodal search feature to improve the accuracy of search results. This topic describes the syntax and usage notes of vector indexes.
8+
9+
## Purchase an instance and configure it
10+
11+
- Purchase OpenSearch Vector Search Edition from [Alibaba Cloud](https://opensearch.console.aliyun.com) and configure the instance according to the help [documentation](https://help.aliyun.com/document_detail/463198.html?spm=a2c4g.465092.0.0.2cd15002hdwavO).
12+
13+
## Alibaba Cloud Opensearch Vector Store Wrappers
14+
supported functions:
15+
- `add_texts`
16+
- `add_documents`
17+
- `from_texts`
18+
- `from_documents`
19+
- `similarity_search`
20+
- `asimilarity_search`
21+
- `similarity_search_by_vector`
22+
- `asimilarity_search_by_vector`
23+
- `similarity_search_with_relevance_scores`
24+
25+
For a more detailed walk through of the Alibaba Cloud OpenSearch wrapper, see [this notebook](../modules/indexes/vectorstores/examples/alibabacloud_opensearch.ipynb)
26+
27+
If you encounter any problems during use, please feel free to contact [xingshaomin.xsm@alibaba-inc.com](xingshaomin.xsm@alibaba-inc.com) , and we will do our best to provide you with assistance and support.
28+
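The wrapper functions listed in the new page follow LangChain's generic vector-store interface. As a library-free illustration of what `add_texts` plus `similarity_search_by_vector` do conceptually (the `TinyVectorStore` class and the vowel-count "embedding" below are invented for this sketch and are not part of the OpenSearch integration):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class TinyVectorStore:
    """Toy in-memory stand-in for a vector store (illustration only)."""

    def __init__(self, embed):
        self.embed = embed
        self._docs = []  # list of (vector, text) pairs

    def add_texts(self, texts):
        for text in texts:
            self._docs.append((self.embed(text), text))

    def similarity_search_by_vector(self, query_vector, k=1):
        # Rank stored documents by similarity to the query vector.
        ranked = sorted(
            self._docs,
            key=lambda pair: cosine_similarity(pair[0], query_vector),
            reverse=True,
        )
        return [text for _, text in ranked[:k]]

# Toy "embedding": vowel counts (a real integration calls an embedding model).
def embed(text):
    return [text.count(c) for c in "aeiou"]

store = TinyVectorStore(embed)
store.add_texts(["banana", "kiwi", "melon"])
print(store.similarity_search_by_vector(embed("bananas"), k=1))  # ['banana']
```

A real vector store persists the vectors server-side and performs this ranking with an approximate-nearest-neighbor index rather than a full sort.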

docs/extras/ecosystem/integrations/awadb.md

Lines changed: 1 addition & 1 deletion
@@ -18,4 +18,4 @@ whether for semantic search or example selection.
 from langchain.vectorstores import AwaDB
 ```
 
-For a more detailed walkthrough of the AwaDB wrapper, see [this notebook](../modules/indexes/vectorstores/examples/awadb.ipynb)
+For a more detailed walkthrough of the AwaDB wrapper, see [here](/docs/modules/data_connection/vectorstores/integrations/awadb.html).

docs/extras/ecosystem/integrations/databricks.md

Lines changed: 4 additions & 4 deletions
@@ -12,12 +12,12 @@ Databricks embraces the LangChain ecosystem in various ways:
 
 Databricks connector for the SQLDatabase Chain
 ----------------------------------------------
-You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain. See the notebook [Connect to Databricks](./databricks/databricks.html) for details.
+You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain. See the notebook [Connect to Databricks](/docs/ecosystem/integrations/databricks/databricks.html) for details.
 
 Databricks-managed MLflow integrates with LangChain
 ---------------------------------------------------
 
-MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook [MLflow Callback Handler](./mlflow_tracking.ipynb) for details about MLflow's integration with LangChain.
+MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook [MLflow Callback Handler](/docs/ecosystem/integrations/mlflow_tracking.ipynb) for details about MLflow's integration with LangChain.
 
 Databricks provides a fully managed and hosted version of MLflow integrated with enterprise security features, high availability, and other Databricks workspace features such as experiment and run management and notebook revision capture. MLflow on Databricks offers an integrated experience for tracking and securing machine learning model training runs and running machine learning projects. See [MLflow guide](https://docs.databricks.com/mlflow/index.html) for more details.
 
@@ -26,11 +26,11 @@ Databricks-managed MLflow makes it more convenient to develop LangChain applicat
 Databricks as an LLM provider
 -----------------------------
 
-The notebook [Wrap Databricks endpoints as LLMs](../modules/models/llms/integrations/databricks.html) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
+The notebook [Wrap Databricks endpoints as LLMs](/docs/modules/model_io/models/llms/integrations/databricks.html) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
 
 Databricks endpoints support Dolly, but are also great for hosting models like MPT-7B or any other models from the Hugging Face ecosystem. Databricks endpoints can also be used with proprietary models like OpenAI to provide a governance layer for enterprises.
 
 Databricks Dolly
 ----------------
 
-Databricks’ Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [Hugging Face Hub](../modules/models/llms/integrations/huggingface_hub.html) for instructions to access it through the Hugging Face Hub integration with LangChain.
+Databricks’ Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [Hugging Face Hub](/docs/modules/model_io/models/llms/integrations/huggingface_hub.html) for instructions to access it through the Hugging Face Hub integration with LangChain.
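The SQLDatabase connector described in this page ultimately issues SQL against a Databricks endpoint and returns rows for the chain to reason over. As a self-contained illustration of that query pattern, here is a sketch using Python's built-in SQLite in place of a real Databricks connection (the table and data are invented for this example):

```python
import sqlite3

# In-memory SQLite stand-in for the database behind a SQLDatabase-style wrapper.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE models (name TEXT, params_b REAL)")
conn.executemany(
    "INSERT INTO models VALUES (?, ?)",
    [("dolly-v2-12b", 12.0), ("mpt-7b", 7.0)],
)

# The chain generates SQL like this from a natural-language question,
# executes it, and feeds the rows back to the LLM to compose an answer.
rows = conn.execute("SELECT name FROM models WHERE params_b > 10").fetchall()
print(rows)  # [('dolly-v2-12b',)]
```

With the real wrapper, only the connection differs: a Databricks SQL endpoint or runtime is reached over its SQLAlchemy URI instead of `:memory:`.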

docs/extras/ecosystem/integrations/google_search.mdx

Lines changed: 1 addition & 1 deletion
@@ -29,4 +29,4 @@ from langchain.agents import load_tools
 tools = load_tools(["google-search"])
 ```
 
-For more information on this, see [this page](/docs/modules/agents/tools/getting_started.md)
+For more information on tools, see [this page](/docs/modules/agents/tools/).

docs/extras/ecosystem/integrations/google_serper.mdx

Lines changed: 1 addition & 1 deletion
@@ -70,4 +70,4 @@ from langchain.agents import load_tools
 tools = load_tools(["google-serper"])
 ```
 
-For more information on this, see [this page](/docs/modules/agents/tools/getting_started.md)
+For more information on tools, see [this page](/docs/modules/agents/tools/).

docs/extras/ecosystem/integrations/huggingface.mdx

Lines changed: 1 addition & 1 deletion
@@ -66,4 +66,4 @@ For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_
 
 The Hugging Face Hub has lots of great [datasets](https://huggingface.co/datasets) that can be used to evaluate your LLM chains.
 
-For a detailed walkthrough of how to use them to do so, see [this notebook](../use_cases/evaluation/huggingface_datasets.html)
+For a detailed walkthrough of how to use them to do so, see [this notebook](/docs/use_cases/evaluation/huggingface_datasets.html)
