docs: integrations updates 20 #27210

Merged
docs/docs/integrations/providers/konlpy.mdx (new file, 21 additions)
# KoNLPy

>[KoNLPy](https://konlpy.org/) is a Python package for natural language processing (NLP)
> of the Korean language.


## Installation and Setup

You need to install the `konlpy` Python package.

```bash
pip install konlpy
```

## Text splitter

See a [usage example](/docs/how_to/split_by_token/#konlpy).

```python
from langchain_text_splitters import KonlpyTextSplitter
```
docs/docs/integrations/providers/kuzu.mdx (new file, 32 additions)
# Kùzu

>[Kùzu](https://kuzudb.com/) is a company based in Waterloo, Ontario, Canada.
> It provides a highly scalable, extremely fast, easy-to-use [embeddable graph database](https://github.com/kuzudb/kuzu).



## Installation and Setup

You need to install the `kuzu` Python package.

```bash
pip install kuzu
```

## Graph database

See a [usage example](/docs/integrations/graphs/kuzu_db).

```python
from langchain_community.graphs import KuzuGraph
```

## Chain

See a [usage example](/docs/integrations/graphs/kuzu_db/#creating-kuzuqachain).

```python
from langchain.chains import KuzuQAChain
```


docs/docs/integrations/providers/llama_index.mdx (new file, 32 additions)
# LlamaIndex

>[LlamaIndex](https://www.llamaindex.ai/) is a data framework for building LLM applications.


## Installation and Setup

You need to install the `llama-index` Python package.

```bash
pip install llama-index
```

See the [installation instructions](https://docs.llamaindex.ai/en/stable/getting_started/installation/).

## Retrievers

### LlamaIndexRetriever

>It is used for question-answering with sources over a LlamaIndex data structure.

```python
from langchain_community.retrievers.llama_index import LlamaIndexRetriever
```

### LlamaIndexGraphRetriever

>It is used for question-answering with sources over a LlamaIndex graph data structure.

```python
from langchain_community.retrievers.llama_index import LlamaIndexGraphRetriever
```
docs/docs/integrations/providers/llamaedge.mdx (new file, 24 additions)
# LlamaEdge

>[LlamaEdge](https://llamaedge.com/docs/intro/) is the easiest and fastest way to run customized
> and fine-tuned LLMs locally or on the edge.
>
>* Lightweight inference apps. `LlamaEdge` is in MBs instead of GBs
>* Native and GPU accelerated performance
>* Supports many GPU and hardware accelerators
>* Supports many optimized inference libraries
>* Wide selection of AI / LLM models



## Installation and Setup

See the [installation instructions](https://llamaedge.com/docs/user-guide/quick-start-command).

## Chat models

See a [usage example](/docs/integrations/chat/llama_edge).

```python
from langchain_community.chat_models.llama_edge import LlamaEdgeChatService
```
docs/docs/integrations/providers/llamafile.mdx (new file, 31 additions)
# llamafile

>[llamafile](https://github.com/Mozilla-Ocho/llamafile) lets you distribute and run LLMs
> with a single file.

>`llamafile` makes open LLMs much more accessible to both developers and end users.
> It does this by combining [llama.cpp](https://github.com/ggerganov/llama.cpp) with
> [Cosmopolitan Libc](https://github.com/jart/cosmopolitan) into one framework that collapses
> all the complexity of LLMs down to a single-file executable (called a "llamafile")
> that runs locally on most computers, with no installation.


## Installation and Setup

See the [installation instructions](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#quickstart).

## LLMs

See a [usage example](/docs/integrations/llms/llamafile).

```python
from langchain_community.llms.llamafile import Llamafile
```

## Embedding models

See a [usage example](/docs/integrations/text_embedding/llamafile).

```python
from langchain_community.embeddings import LlamafileEmbeddings
```