From 30139c4badd08c91173374fa033a11e41b50b759 Mon Sep 17 00:00:00 2001
From: leo-gan
Date: Mon, 29 Jul 2024 11:51:30 -0700
Subject: [PATCH] update

---
 docs/docs/integrations/llms/rwkv.mdx | 72 +++++++++++++++++++
 .../docs/integrations/platforms/microsoft.mdx | 41 +++++------
 docs/docs/integrations/platforms/openai.mdx | 8 +++
 .../docs/integrations/providers/aerospike.mdx | 24 +++++++
 docs/docs/integrations/providers/ai21.mdx | 7 ++
 .../docs/integrations/providers/ainetwork.mdx | 16 +++++
 docs/docs/integrations/providers/amadeus.mdx | 34 +++++++++
 .../integrations/providers/mongodb_motor.mdx | 27 +++++++
 docs/docs/integrations/providers/rwkv.mdx | 66 ++++-------------
 docs/docs/integrations/providers/slack.mdx | 14 ++++
 docs/docs/integrations/providers/yuque.mdx | 17 +++++
 11 files changed, 250 insertions(+), 76 deletions(-)
 create mode 100644 docs/docs/integrations/llms/rwkv.mdx
 create mode 100644 docs/docs/integrations/providers/aerospike.mdx
 create mode 100644 docs/docs/integrations/providers/amadeus.mdx
 create mode 100644 docs/docs/integrations/providers/mongodb_motor.mdx
 create mode 100644 docs/docs/integrations/providers/yuque.mdx

diff --git a/docs/docs/integrations/llms/rwkv.mdx b/docs/docs/integrations/llms/rwkv.mdx
new file mode 100644
index 0000000000000..8c9ddbee4780e
--- /dev/null
+++ b/docs/docs/integrations/llms/rwkv.mdx
@@ -0,0 +1,72 @@
+# RWKV-4
+
>[RWKV](https://www.rwkv.com/) (pronounced RwaKuv) language model is an RNN
> with GPT-level LLM performance,
> and it can also be directly trained like a GPT transformer (parallelizable).
>
>It combines the best of RNN and transformer: great performance, fast inference,
> fast training, saves VRAM, "infinite" ctxlen, and free text embedding.
> Moreover, it is 100% attention-free and an LFAI project.
+
+
## Installation and Setup

- Install the Python `rwkv` and `tokenizer` packages

```bash
pip install rwkv tokenizer
```

- Download a [RWKV model](https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main) and place it in your desired directory
- Download a [tokens file](https://raw.githubusercontent.com/BlinkDL/ChatRWKV/main/20B_tokenizer.json)

### RWKV-4 models recommended VRAM

| Model | 8bit | bf16/fp16 | fp32 |
|-------|------|-----------|------|
| 14B | 16GB | 28GB | >50GB |
| 7B | 8GB | 14GB | 28GB |
| 3B | 2.8GB| 6GB | 12GB |
| 1b5 | 1.3GB| 3GB | 6GB |

See the [rwkv pip](https://pypi.org/project/rwkv/) page for more information about strategies,
including streaming and CUDA support.

## Usage

### RWKV

To use the RWKV wrapper, you need to provide the path to the pre-trained model file and the tokenizer's configuration.

```python
from langchain_community.llms import RWKV

# Test the model

def generate_prompt(instruction, input=None):
    if input:
        return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

# Instruction:
{instruction}

# Input:
{input}

# Response:
"""
    else:
        return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
+ +# Instruction: +{instruction} + +# Response: +""" + + +model = RWKV(model="./models/RWKV-4-Raven-3B-v7-Eng-20230404-ctx4096.pth", strategy="cpu fp32", tokens_path="./rwkv/20B_tokenizer.json") +response = model.invoke(generate_prompt("Once upon a time, ")) +``` + diff --git a/docs/docs/integrations/platforms/microsoft.mdx b/docs/docs/integrations/platforms/microsoft.mdx index 180b021963e03..23e91ffa2e88f 100644 --- a/docs/docs/integrations/platforms/microsoft.mdx +++ b/docs/docs/integrations/platforms/microsoft.mdx @@ -237,6 +237,8 @@ See a [usage example](/docs/integrations/document_loaders/microsoft_onenote). from langchain_community.document_loaders.onenote import OneNoteLoader ``` +## Vectorstores + ### Playwright URL Loader >[Playwright](https://github.com/microsoft/playwright) is an open-source automation tool @@ -271,8 +273,6 @@ Below are two available Azure Cosmos DB APIs that can provide vector store funct > You can apply your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB vCore account's connection string. > Use vector search in Azure Cosmos DB for MongoDB vCore to seamlessly integrate your AI-based applications with your data that's stored in Azure Cosmos DB. -#### Installation and Setup - See [detail configuration instructions](/docs/integrations/vectorstores/azure_cosmos_db). We need to install `pymongo` python package. @@ -281,14 +281,6 @@ We need to install `pymongo` python package. pip install pymongo ``` -#### Deploy Azure Cosmos DB on Microsoft Azure - -Azure Cosmos DB for MongoDB vCore provides developers with a fully managed MongoDB-compatible database service for building modern applications with a familiar architecture. - -With Cosmos DB for MongoDB vCore, developers can enjoy the benefits of native Azure integrations, low total cost of ownership (TCO), and the familiar vCore architecture when migrating existing applications or building new ones. 
- -[Sign Up](https://azure.microsoft.com/en-us/free/) for free to get started today. - See a [usage example](/docs/integrations/vectorstores/azure_cosmos_db). ```python @@ -299,12 +291,7 @@ from langchain_community.vectorstores import AzureCosmosDBVectorSearch >[Azure Cosmos DB for NoSQL](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/vector-search) now offers vector indexing and search in preview. This feature is designed to handle high-dimensional vectors, enabling efficient and accurate vector search at any scale. You can now store vectors -directly in the documents alongside your data. This means that each document in your database can contain not only traditional schema-free data, -but also high-dimensional vectors as other properties of the documents. This colocation of data and vectors allows for efficient indexing and searching, -as the vectors are stored in the same logical unit as the data they represent. This simplifies data management, AI application architectures, and the -efficiency of vector-based operations. - -#### Installation and Setup +directly in the documents alongside your data. See [detail configuration instructions](/docs/integrations/vectorstores/azure_cosmos_db_no_sql). @@ -314,13 +301,6 @@ We need to install `azure-cosmos` python package. pip install azure-cosmos ``` -#### Deploy Azure Cosmos DB on Microsoft Azure - -Azure Cosmos DB offers a solution for modern apps and intelligent workloads by being very responsive with dynamic and elastic autoscale. It is available -in every Azure region and can automatically replicate data closer to users. It has SLA guaranteed low-latency and high availability. - -[Sign Up](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/quickstart-python?pivots=devcontainer-codespace) for free to get started today. - See a [usage example](/docs/integrations/vectorstores/azure_cosmos_db_no_sql). 
```python @@ -328,6 +308,7 @@ from langchain_community.vectorstores import AzureCosmosDBNoSQLVectorSearch ``` ## Retrievers + ### Azure AI Search >[Azure AI Search](https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search) (formerly known as `Azure Search` or `Azure Cognitive Search` ) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications. @@ -445,6 +426,20 @@ See a [usage example](/docs/integrations/tools/playwright). from langchain_community.agent_toolkits import PlayWrightBrowserToolkit ``` +## Memory + +### Azure CosmosDB Chat Message History + +We need to install a python package. + +```bash +pip install azure-cosmos +``` + +```python +from langchain_community.chat_message_histories import CosmosDBChatMessageHistory +``` + ## Graphs ### Azure Cosmos DB for Apache Gremlin diff --git a/docs/docs/integrations/platforms/openai.mdx b/docs/docs/integrations/platforms/openai.mdx index 57830479e93d2..ccd3d143ba187 100644 --- a/docs/docs/integrations/platforms/openai.mdx +++ b/docs/docs/integrations/platforms/openai.mdx @@ -92,6 +92,14 @@ See a [usage example](/docs/integrations/tools/dalle_image_generator). from langchain_community.utilities.dalle_image_generator import DallEAPIWrapper ``` +### ChatGPT Plugins + +See a [usage example](/docs/integrations/tools/chatgpt_plugins). + +```python +from langchain_community.tools import AIPluginTool +``` + ## Adapter See a [usage example](/docs/integrations/adapters/openai). 
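The `CosmosDBChatMessageHistory` class added above follows LangChain's chat-message-history pattern: messages are appended in order and can be replayed into later prompts. A minimal, dependency-free sketch of that pattern (an illustrative in-memory stand-in, not the real Cosmos DB-backed class):

```python
# Illustrative sketch of the chat-message-history pattern that
# CosmosDBChatMessageHistory implements. This in-memory stand-in
# is NOT the real class and persists nothing.
from dataclasses import dataclass, field


@dataclass
class Message:
    role: str      # "human" or "ai"
    content: str


@dataclass
class InMemoryChatMessageHistory:
    messages: list = field(default_factory=list)

    def add_user_message(self, text: str) -> None:
        # Record a human turn in order.
        self.messages.append(Message("human", text))

    def add_ai_message(self, text: str) -> None:
        # Record a model turn in order.
        self.messages.append(Message("ai", text))

    def clear(self) -> None:
        # Drop the whole conversation.
        self.messages.clear()


history = InMemoryChatMessageHistory()
history.add_user_message("What is RWKV?")
history.add_ai_message("An RNN with GPT-level performance.")
print([m.role for m in history.messages])  # → ['human', 'ai']
```

The real class exposes the same append-and-replay surface, but writes each message to a Cosmos DB container so the conversation survives process restarts.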
diff --git a/docs/docs/integrations/providers/aerospike.mdx b/docs/docs/integrations/providers/aerospike.mdx new file mode 100644 index 0000000000000..11f235de29883 --- /dev/null +++ b/docs/docs/integrations/providers/aerospike.mdx @@ -0,0 +1,24 @@ +# Aerospike + +>[Aerospike Vector Search](https://aerospike.com/docs/vector) (AVS) is an extension to +> the `Aerospike Database` that enables searches across very large datasets stored in `Aerospike`. +> This new service lives outside of `Aerospike` and builds an index to perform those searches. + + +## Installation and Setup + +You need to have a running `AVS` instance. Use one of the [installation methods](https://aerospike.com/docs/vector/install). + +You need to install `aerospike-vector-search` python package. + +```bash +pip install aerospike-vector-search +``` + +## Vectorstore + +See a [usage example](/docs/integrations/vectorstores/aerospike). + +```python +from langchain_community.vectorstores import Aerospike +``` diff --git a/docs/docs/integrations/providers/ai21.mdx b/docs/docs/integrations/providers/ai21.mdx index 60a925363b1fa..45dbfa2fc25a6 100644 --- a/docs/docs/integrations/providers/ai21.mdx +++ b/docs/docs/integrations/providers/ai21.mdx @@ -34,6 +34,13 @@ serving as a context, and a question and return an answer based entirely on this from langchain_ai21 import AI21ContextualAnswers ``` +### AI21 Community + +```python +from langchain_community.llms import AI21 +``` + + ## Chat models diff --git a/docs/docs/integrations/providers/ainetwork.mdx b/docs/docs/integrations/providers/ainetwork.mdx index fdd8393e23cb5..beb7b9ad78c6e 100644 --- a/docs/docs/integrations/providers/ainetwork.mdx +++ b/docs/docs/integrations/providers/ainetwork.mdx @@ -13,6 +13,22 @@ You need to install `ain-py` python package. pip install ain-py ``` You need to set the `AIN_BLOCKCHAIN_ACCOUNT_PRIVATE_KEY` environmental variable to your AIN Blockchain Account Private Key. 
+
+## Tools
+
+Tools that help you interact with the `AINetwork` blockchain. They are all included
+in the `AINetworkToolkit` toolkit.
+
+See a [usage example](/docs/integrations/toolkits/ainetwork).
+
+```python
+from langchain_community.tools import AINAppOps
+from langchain_community.tools import AINOwnerOps
+from langchain_community.tools import AINRuleOps
+from langchain_community.tools import AINTransfer
+from langchain_community.tools import AINValueOps
+```
+
 ## Toolkit
 
 See a [usage example](/docs/integrations/tools/ainetwork).
diff --git a/docs/docs/integrations/providers/amadeus.mdx b/docs/docs/integrations/providers/amadeus.mdx
new file mode 100644
index 0000000000000..0a4b3485e5f59
--- /dev/null
+++ b/docs/docs/integrations/providers/amadeus.mdx
@@ -0,0 +1,34 @@
+# Amadeus
+
+>[Amadeus Travel APIs](https://developers.amadeus.com/) give instant access to over 400 airlines, 150,000 hotels, and 300,000 tours & activities.
+
+## Installation and Setup
+
+To use the `Amadeus` integration, you need to have an `API key` from `Amadeus`.
+See [instructions here](https://developers.amadeus.com/get-started/get-started-with-self-service-apis-335).
+
+We have to install the `amadeus` python package:
+
+```bash
+pip install amadeus
+```
+
+## Tools
+
+Tools that help you interact with the `Amadeus travel APIs`. They are all included
+in the `Amadeus` toolkit.
+
+See a [usage example](/docs/integrations/toolkits/amadeus).
+
+```python
+from langchain_community.agent_toolkits.amadeus.toolkit import AmadeusToolkit
+```
\ No newline at end of file
diff --git a/docs/docs/integrations/providers/mongodb_motor.mdx b/docs/docs/integrations/providers/mongodb_motor.mdx
new file mode 100644
index 0000000000000..eae82d46e1382
--- /dev/null
+++ b/docs/docs/integrations/providers/mongodb_motor.mdx
@@ -0,0 +1,27 @@
+# MongoDB Motor
+
+>[MongoDB](https://www.mongodb.com/) is a source-available, cross-platform, document-oriented
+> database program. Classified as a `NoSQL` database product, `MongoDB` utilizes JSON-like
+> documents with optional schemas.
+>
+> [Motor](https://pypi.org/project/motor/) is a full-featured, non-blocking `MongoDB` driver
+> for Python `asyncio` and `Tornado` applications. `Motor` presents a coroutine-based
+> API for non-blocking access to MongoDB.

## Installation and Setup

We need to set up the configuration parameters for the MongoDB database. See instructions [here](/docs/integrations/document_loaders/mongodb/).

We also need to install the `motor` python package.

```bash
pip install motor
```

## Document Loader

See a [usage example](/docs/integrations/document_loaders/mongodb/).

```python
from langchain_community.document_loaders.mongodb import MongodbLoader
```
diff --git a/docs/docs/integrations/providers/rwkv.mdx b/docs/docs/integrations/providers/rwkv.mdx
index 90a795a420865..4cb8a0cae5520 100644
--- a/docs/docs/integrations/providers/rwkv.mdx
+++ b/docs/docs/integrations/providers/rwkv.mdx
@@ -1,65 +1,25 @@
 # RWKV-4
 
-This page covers how to use the `RWKV-4` wrapper within LangChain.
-It is broken into two parts: installation and setup, and then usage with an example.
+>[RWKV](https://www.rwkv.com/) (pronounced RwaKuv) language model is an RNN
+> with GPT-level LLM performance,
+> and it can also be directly trained like a GPT transformer (parallelizable).
## Installation and Setup -- Install the Python package with `pip install rwkv` -- Install the tokenizer Python package with `pip install tokenizer` -- Download a [RWKV model](https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main) and place it in your desired directory -- Download the [tokens file](https://raw.githubusercontent.com/BlinkDL/ChatRWKV/main/20B_tokenizer.json) - -## Usage - -### RWKV - -To use the RWKV wrapper, you need to provide the path to the pre-trained model file and the tokenizer's configuration. -```python -from langchain_community.llms import RWKV - -# Test the model - -```python - -def generate_prompt(instruction, input=None): - if input: - return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. -# Instruction: -{instruction} +- Install the Python `rwkv` and `tokenizer` packages -# Input: -{input} - -# Response: -""" - else: - return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request. - -# Instruction: -{instruction} - -# Response: -""" - - -model = RWKV(model="./models/RWKV-4-Raven-3B-v7-Eng-20230404-ctx4096.pth", strategy="cpu fp32", tokens_path="./rwkv/20B_tokenizer.json") -response = model.invoke(generate_prompt("Once upon a time, ")) +```bash +pip install rwkv tokenizer ``` -## Model File +- Download a [RWKV model](https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main) and place it in your desired directory +- Download a [tokens file](https://raw.githubusercontent.com/BlinkDL/ChatRWKV/main/20B_tokenizer.json) -You can find links to model file downloads at the [RWKV-4-Raven](https://huggingface.co/BlinkDL/rwkv-4-raven/tree/main) repository. +## LLMs -### Rwkv-4 models -> recommended VRAM +### RWKV +See a [usage example](/docs/integrations/llms/rwkv). 
```python
from langchain_community.llms import RWKV
```
-RWKV VRAM
-Model | 8bit | bf16/fp16 | fp32
-14B | 16GB | 28GB | >50GB
-7B | 8GB | 14GB | 28GB
-3B | 2.8GB| 6GB | 12GB
-1b5 | 1.3GB| 3GB | 6GB
-```
-
-See the [rwkv pip](https://pypi.org/project/rwkv/) page for more information about strategies, including streaming and cuda support.
diff --git a/docs/docs/integrations/providers/slack.mdx b/docs/docs/integrations/providers/slack.mdx
index 9013e5b0cc289..d84896fae4771 100644
--- a/docs/docs/integrations/providers/slack.mdx
+++ b/docs/docs/integrations/providers/slack.mdx
@@ -15,6 +15,20 @@ See a [usage example](/docs/integrations/document_loaders/slack).
 from langchain_community.document_loaders import SlackDirectoryLoader
 ```
 
+## Tools
+
+Tools that help you interact with `Slack`. They are all included
+in the `SlackToolkit` toolkit.
+
+See a [usage example](/docs/integrations/toolkits/slack).
+
+```python
+from langchain_community.tools.slack import SlackGetChannel
+from langchain_community.tools.slack import SlackGetMessage
+from langchain_community.tools.slack import SlackScheduleMessage
+from langchain_community.tools.slack import SlackSendMessage
+```
+
 ## Toolkit
 
 See a [usage example](/docs/integrations/tools/slack).
diff --git a/docs/docs/integrations/providers/yuque.mdx b/docs/docs/integrations/providers/yuque.mdx
new file mode 100644
index 0000000000000..c7558061911ba
--- /dev/null
+++ b/docs/docs/integrations/providers/yuque.mdx
@@ -0,0 +1,17 @@
+# Yuque
+
+>[Yuque](https://www.yuque.com/) is a professional cloud-based knowledge base for team
+> collaboration in documentation.
+
+
+## Installation and Setup
+
+You have to get the `Yuque` `access_token` and `api_url` on this [page](https://www.yuque.com/settings/tokens).
+
+## Document Loader
+
+See a [usage example](/docs/integrations/document_loaders/yuque).
+
+```python
+from langchain_community.document_loaders import YuqueLoader
+```
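`YuqueLoader` authenticates with the `access_token`. As a rough, standard-library-only illustration of how such a token is typically attached to a Yuque API request (the `/user` path and the `X-Auth-Token` header name are assumptions from Yuque's public API conventions, not taken from this page):

```python
# Hypothetical sketch: building an authorized Yuque API request.
# The endpoint path and header name are illustrative assumptions.
from urllib.request import Request

api_url = "https://www.yuque.com/api/v2"
access_token = "your-access-token"  # created at https://www.yuque.com/settings/tokens

request = Request(
    f"{api_url}/user",                       # e.g. fetch the current user
    headers={"X-Auth-Token": access_token},  # token sent as a request header
)
print(request.full_url)  # → https://www.yuque.com/api/v2/user
```

The loader performs equivalent authenticated calls internally; you only pass it the token and URL.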