🔨 chore: Add docs workflow (lobehub#658)
* 🔧 chore: Add docs workflow and update docs files

* 📝 docs: Update wiki docs link
canisminor1990 authored Dec 14, 2023
1 parent fedf799 commit d1df19a
Showing 46 changed files with 882 additions and 62 deletions.
12 changes: 7 additions & 5 deletions .i18nrc.js
```diff
@@ -12,11 +12,13 @@ module.exports = defineConfig({
     jsonMode: true,
   },
   markdown: {
-    entry: ['./README.md'],
-    outputLocales: ['zh_CN'],
-    outputExtensions: (locale) => {
-      if (locale === 'en_US') return '.md';
-      return `.${locale.replace('_', '-')}.md`;
+    entry: ['./README.zh-CN.md', './docs/**/*.zh-CN.md'],
+    entryLocale: 'zh-CN',
+    entryExtension: '.zh-CN.md',
+    outputLocales: ['en-US'],
+    outputExtensions: (locale, { getDefaultExtension }) => {
+      if (locale === 'en-US') return '.md';
+      return getDefaultExtension(locale);
     },
   },
 });
```
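The net effect of this change is that the Chinese documents become the translation source and English output is generated next to them. As a rough sketch of how the `outputExtensions` hook resolves file extensions — the `getDefaultExtension` stub below is an assumption standing in for the helper the i18n CLI injects:

```ts
// Sketch only: assumed stand-in for the helper injected by the i18n CLI.
const getDefaultExtension = (locale: string): string => `.${locale}.md`;

const outputExtensions = (locale: string): string => {
  // English becomes the default document, so it drops the locale suffix...
  if (locale === 'en-US') return '.md';
  // ...while any other target locale keeps its suffix.
  return getDefaultExtension(locale);
};

// e.g. './docs/Usage/Guide.zh-CN.md' (entry) -> './docs/Usage/Guide.md' (en-US output)
console.log(outputExtensions('en-US')); // '.md'
console.log(outputExtensions('ja-JP')); // '.ja-JP.md'
```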
10 changes: 5 additions & 5 deletions README.md
```diff
@@ -308,7 +308,7 @@ Beside these features, LobeChat also have much better basic technique underground

 > \[!NOTE]
 >
-> The complete list of reports can be found in the [📘 Lighthouse Reports](https://github.com/lobehub/lobe-chat/wiki/Lighthouse)
+> The complete list of reports can be found in the [📘 Lighthouse Reports](https://github.com/lobehub/lobe-chat/wiki/Others/Lighthouse)

 | Desktop | Mobile |
 | :-----------------------------------------: | :----------------------------------------: |
@@ -348,7 +348,7 @@ If you have deployed your own project following the one-click deployment steps i

 > \[!TIP]
 >
-> We suggest you redeploy using the following steps, [📘 Maintaining Updates with LobeChat Self-Deployment](https://github.com/lobehub/lobe-chat/wiki/Upstream-Sync).
+> We suggest you redeploy using the following steps, [📘 Maintaining Updates with LobeChat Self-Deployment](https://github.com/lobehub/lobe-chat/wiki/Deployment/Upstream-Sync).

 <br/>
@@ -381,7 +381,7 @@ $ docker run -d -p 3210:3210 \

 > \[!NOTE]
 >
-> For detailed instructions on deploying with Docker, please refer to the [📘 Docker Deployment Guide](https://github.com/lobehub/lobe-chat/wiki/Docker-Deployment)
+> For detailed instructions on deploying with Docker, please refer to the [📘 Docker Deployment Guide](https://github.com/lobehub/lobe-chat/wiki/Deployment/Docker-Deployment)

 <br/>
@@ -398,7 +398,7 @@ This project provides some additional configuration items set with environment v

 > \[!NOTE]
 >
-> The complete list of environment variables can be found in the [📘 Environment Variables](https://github.com/lobehub/lobe-chat/wiki/Environment-Variable)
+> The complete list of environment variables can be found in the [📘 Environment Variables](https://github.com/lobehub/lobe-chat/wiki/Deployment/Environment-Variable)

 <div align="right">
@@ -423,7 +423,7 @@ This project provides some additional configuration items set with environment v

 ## 🧩 Plugins

-Plugins provide a means to extend the [Function Calling][fc-link] capabilities of LobeChat. They can be used to introduce new function calls and even new ways to render message results. If you are interested in plugin development, please refer to our [📘 Plugin Development Guide](https://github.com/lobehub/lobe-chat/wiki/Plugin-Development) in the Wiki.
+Plugins provide a means to extend the [Function Calling][fc-link] capabilities of LobeChat. They can be used to introduce new function calls and even new ways to render message results. If you are interested in plugin development, please refer to our [📘 Plugin Development Guide](https://github.com/lobehub/lobe-chat/wiki/Plugins/Plugin-Development) in the Wiki.

 - [lobe-chat-plugins][lobe-chat-plugins]: This is the plugin index for LobeChat. It accesses index.json from this repository to display a list of available plugins for LobeChat to the user.
 - [chat-plugin-template][chat-plugin-template]: This is the plugin template for LobeChat plugin development.
```
10 changes: 5 additions & 5 deletions README.zh-CN.md
```diff
@@ -278,7 +278,7 @@ LobeChat's plugin ecosystem is an important extension of its core functionality; it greatly

 > \[!NOTE]
 >
-> The complete test report can be found in the [📘 Lighthouse performance test](https://github.com/lobehub/lobe-chat/wiki/Lighthouse.zh-CN)
+> The complete test report can be found in the [📘 Lighthouse performance test](https://github.com/lobehub/lobe-chat/wiki/Others/Lighthouse.zh-CN)

 | Desktop | Mobile |
 | :-------------------------------------------: | :------------------------------------------: |
@@ -320,7 +320,7 @@ LobeChat provides a self-hosted version for Vercel and a [Docker image][docker-release

 > \[!TIP]
 >
-> We recommend redeploying by following the steps in [📘 Maintaining Updates with LobeChat Self-Deployment](https://github.com/lobehub/lobe-chat/wiki/Upstream-Sync.zh-CN).
+> We recommend redeploying by following the steps in [📘 Maintaining Updates with LobeChat Self-Deployment](https://github.com/lobehub/lobe-chat/wiki/Deployment/Upstream-Sync.zh-CN).

 <br/>
@@ -353,7 +353,7 @@ $ docker run -d -p 3210:3210 \

 > \[!NOTE]
 >
-> For detailed instructions on deploying with Docker, see [📘 Deploy with Docker](https://github.com/lobehub/lobe-chat/wiki/Docker-Deployment.zh-CN)
+> For detailed instructions on deploying with Docker, see [📘 Deploy with Docker](https://github.com/lobehub/lobe-chat/wiki/Deployment/Docker-Deployment.zh-CN)

 <br/>
@@ -370,7 +370,7 @@ $ docker run -d -p 3210:3210 \

 > \[!NOTE]
 >
-> The complete list of environment variables can be found in [📘 Environment Variables](https://github.com/lobehub/lobe-chat/wiki/Environment-Variable.zh-CN)
+> The complete list of environment variables can be found in [📘 Environment Variables](https://github.com/lobehub/lobe-chat/wiki/Deployment/Environment-Variable.zh-CN)

 <div align="right">
@@ -395,7 +395,7 @@ $ docker run -d -p 3210:3210 \

 ## 🧩 Plugin System

-Plugins provide a way to extend LobeChat's [Function Calling][fc-link] capabilities. They can be used to introduce new function calls and even new ways of rendering message results. If you are interested in plugin development, please refer to our [📘 Plugin Development Guide](https://github.com/lobehub/lobe-chat/wiki/Plugin-Development.zh-CN) in the Wiki
+Plugins provide a way to extend LobeChat's [Function Calling][fc-link] capabilities. They can be used to introduce new function calls and even new ways of rendering message results. If you are interested in plugin development, please refer to our [📘 Plugin Development Guide](https://github.com/lobehub/lobe-chat/wiki/Plugins/Plugin-Development.zh-CN) in the Wiki

 - [lobe-chat-plugins][lobe-chat-plugins]: This is the plugin index for LobeChat. It fetches the plugin list from this repository's index.json and displays it to the user.
 - [chat-plugin-template][chat-plugin-template]: The plugin development template for Chat Plugin; you can quickly create a new plugin project from this template.
```
15 changes: 15 additions & 0 deletions docs/Deployment/Analytics.md
@@ -0,0 +1,15 @@
# Data Statistics

To better understand how users use LobeChat, we have integrated several free or open-source analytics services into LobeChat for collecting usage data, which you can enable as needed.

## Vercel Analytics

[Vercel Analytics](https://vercel.com/analytics) is a data analysis service launched by Vercel, which can help you collect website visit information, including traffic, sources, and devices used for access.

We have integrated Vercel Analytics into the code. You can enable it by setting the environment variable `NEXT_PUBLIC_ANALYTICS_VERCEL=1`, then open the Analytics tab of your Vercel project to view your application's visit data.

Vercel Analytics provides 2,500 free Web Analytics events per month (roughly equivalent to page views), which is generally sufficient for a personally deployed, self-use product.

If you need detailed instructions on using Vercel Analytics, please refer to [Vercel Web Analytics Quick Start](https://vercel.com/docs/analytics/quickstart).
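For reference, here is a minimal sketch of how such a flag can gate the component in a Next.js app. The layout wiring below is an illustrative assumption, not the actual LobeChat source; only the `NEXT_PUBLIC_ANALYTICS_VERCEL` variable and the `@vercel/analytics` package come from the text above.

```tsx
// Hypothetical layout wiring: render <Analytics /> only when the flag is set.
import { Analytics } from '@vercel/analytics/react';
import type { ReactNode } from 'react';

const RootLayout = ({ children }: { children: ReactNode }) => (
  <html lang="en">
    <body>
      {children}
      {/* NEXT_PUBLIC_* variables are inlined at build time by Next.js */}
      {process.env.NEXT_PUBLIC_ANALYTICS_VERCEL === '1' && <Analytics />}
    </body>
  </html>
);

export default RootLayout;
```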

## 🚧 Posthog
5 files renamed without changes.
```diff
@@ -60,7 +60,7 @@ The above example adds `qwen-7b-chat` and `glm-6b` to the model list, removes `g

 ## Azure OpenAI

-If you need to use Azure OpenAI to provide model services, you can refer to the [Deploy with Azure OpenAI](./Deploy-with-Azure-OpenAI.zh-CN.md) section for detailed steps. Here are the environment variables related to Azure OpenAI.
+If you need to use Azure OpenAI to provide model services, you can refer to the [Deploy with Azure OpenAI](Deploy-with-Azure-OpenAI.zh-CN.md) section for detailed steps. Here are the environment variables related to Azure OpenAI.

 ### `USE_AZURE_OPENAI`
```
```diff
@@ -60,7 +60,7 @@ LobeChat provides some additional configuration options at deployment time, set via environment

 ## Azure OpenAI

-If you need to use Azure OpenAI to provide model services, refer to the [Deploy with Azure OpenAI](./Deploy-with-Azure-OpenAI.zh-CN.md) section for detailed steps; the environment variables related to Azure OpenAI are listed below.
+If you need to use Azure OpenAI to provide model services, refer to the [Deploy with Azure OpenAI](Deploy-with-Azure-OpenAI.zh-CN.md) section for detailed steps; the environment variables related to Azure OpenAI are listed below.

 ### `USE_AZURE_OPENAI`
```
2 files renamed without changes.
37 changes: 37 additions & 0 deletions docs/Development/Architecture.md
@@ -0,0 +1,37 @@
# Architecture Design

LobeChat is an AI conversation application built on the Next.js framework, aiming to provide an AI productivity platform that enables users to interact with AI through natural language. The following is an overview of the architecture design of LobeChat:

## Application Architecture Overview

The overall architecture of LobeChat consists of the frontend, the Edge Runtime API, the Agents Market, the Plugin Market, and independent plugins. These components collaborate to provide a complete AI experience.

## Frontend Architecture

The frontend of LobeChat is built on the Next.js framework, leveraging its powerful server-side rendering (SSR) capability and routing functionality. The frontend uses a technology stack that includes the antd component library, the lobe-ui AIGC component library, zustand for state management, swr for data fetching, i18next for internationalization, and more. Together, these technologies support LobeChat's functionality and features.

The frontend codebase is organized into modules such as app, components, config, const, features, helpers, hooks, layout, locales, migrations, prompts, services, store, styles, types, and utils. Each module has specific responsibilities and collaborates with the others to deliver different pieces of functionality.

## Edge Runtime API

The Edge Runtime API is one of the core components of LobeChat, responsible for handling the core logic of AI conversations. It provides the interaction interfaces with the AI engine, including natural language processing, intent recognition, and response generation. The Edge Runtime API communicates with the frontend, receiving user input and returning the corresponding responses.

## Agents Market

The Agents Market is a crucial part of LobeChat, providing various AI agents for different scenarios to handle specific tasks and domains. The Agents Market also offers functionality for discovering and uploading agents, allowing users to find agents created by others and easily share their own agents in the market.

## Plugin Market

The Plugin Market is another key component of LobeChat, offering various plugins that extend LobeChat's functionality and features. Plugins can be independent functional modules or can integrate with agents from the Agents Market. During a conversation, the assistant automatically analyzes the user's input, identifies a suitable plugin, forwards the request to that plugin for processing, and returns the result.

## Security and Performance Optimization

LobeChat's security strategy includes authentication and permission management. Users need to authenticate before using LobeChat, and operations are restricted based on the user's permissions.

To optimize performance, LobeChat utilizes Next.js SSR functionality to achieve fast page loading and response times. Additionally, a series of performance optimization measures are implemented, including code splitting, caching, and resource compression.

## Development and Deployment Process

LobeChat's development process includes version control, testing, continuous integration, and continuous deployment. The development team uses version control systems for code management and conducts unit and integration testing to ensure code quality. Continuous integration and deployment processes ensure rapid delivery and deployment of code.

The above is a brief introduction to the architecture design of LobeChat, covering the responsibilities and collaboration of each component, as well as the impact of design decisions on application functionality and performance.
```diff
@@ -1,4 +1,4 @@
-## Architecture Design
+# Architecture Design

 LobeChat is an AI conversation application built on the Next.js framework, aiming to provide an AI productivity platform that enables users to interact with AI through natural language. The following is an overview of LobeChat's architecture design:
```
127 changes: 127 additions & 0 deletions docs/Development/Chat-API.md
@@ -0,0 +1,127 @@
# Conversation API Implementation Logic

The implementation of LobeChat's large-model AI relies mainly on OpenAI's API, comprising the core conversation API on the backend and the integration API on the frontend. Next, we will introduce the implementation approach and code for the backend and frontend separately.

## Backend Implementation

The following code omits authentication, error handling, and other auxiliary logic, retaining only the core functionality.

### Core Conversation API

In the file `src/app/api/openai/chat/handler.ts`, we define a `POST` method that first parses the payload data (i.e., the conversation content sent by the client) and the authorization information from the request. It then creates an `openai` client object and calls the `createChatCompletion` method, which is responsible for sending the conversation request to OpenAI and returning the result.

```ts
export const POST = async (req: Request) => {
const payload = await req.json();

const { apiKey, endpoint } = getOpenAIAuthFromRequest(req);

const openai = createOpenai(apiKey, endpoint);

return createChatCompletion({ openai, payload });
};
```
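The helpers used above are elided for brevity. As a hypothetical reconstruction (not the actual source), `createOpenai` essentially builds a client from the per-request auth info, assuming the official `openai` Node SDK:

```ts
import OpenAI from 'openai';

// Hypothetical sketch: create a client with the caller's key, optionally
// pointing at a custom endpoint (e.g. a proxy) instead of api.openai.com.
export const createOpenai = (apiKey: string, endpoint?: string) =>
  new OpenAI({
    apiKey,
    baseURL: endpoint || undefined,
  });
```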

### Conversation Result Processing

In the file `src/app/api/openai/chat/createChatCompletion.ts`, we define the `createChatCompletion` method, which first preprocesses the payload data, then calls OpenAI's `chat.completions.create` method to send the request, and uses `OpenAIStream` from the [Vercel AI SDK](https://sdk.vercel.ai/docs) to convert the returned result into a streaming response.

```ts
import { OpenAIStream, StreamingTextResponse } from 'ai';

export const createChatCompletion = async ({ payload, openai }: CreateChatCompletionOptions) => {
const { messages, ...params } = payload;

const formatMessages = messages.map((m) => ({
content: m.content,
name: m.name,
role: m.role,
}));

const response = await openai.chat.completions.create(
{
messages: formatMessages,
...params,
stream: true,
},
{ headers: { Accept: '*/*' } },
);
const stream = OpenAIStream(response);
return new StreamingTextResponse(stream);
};
```

## Frontend Implementation

### Frontend Integration

In the `src/services/chatModel.ts` file, we define the `fetchChatModel` method, which first preprocesses the payload data, then sends a POST request to the `/chat` endpoint on the backend, and returns the request result.

```ts
export const fetchChatModel = (
{ plugins: enabledPlugins, ...params }: Partial<OpenAIStreamPayload>,
options?: FetchChatModelOptions,
) => {
const payload = merge(
{
model: initialLobeAgentConfig.model,
stream: true,
...initialLobeAgentConfig.params,
},
params,
);

const filterFunctions: ChatCompletionFunctions[] = pluginSelectors.enabledSchema(enabledPlugins)(
usePluginStore.getState(),
);

const functions = filterFunctions.length === 0 ? undefined : filterFunctions;

return fetch(OPENAI_URLS.chat, {
body: JSON.stringify({ ...payload, functions }),
headers: createHeaderWithOpenAI({ 'Content-Type': 'application/json' }),
method: 'POST',
signal: options?.signal,
});
};
```

### Using Streaming to Get Results

In the `src/utils/fetch.ts` file, we define the `fetchSSE` method, which uses a streaming approach to retrieve data. When a new data chunk is read, it calls the `onMessageHandle` callback function to process the data chunk, achieving a typewriter-like output effect.

```ts
export const fetchSSE = async (fetchFn: () => Promise<Response>, options: FetchSSEOptions = {}) => {
const response = await fetchFn();

if (!response.ok) {
const chatMessageError = await getMessageError(response);

options.onErrorHandle?.(chatMessageError);
return;
}

const returnRes = response.clone();

const data = response.body;

if (!data) return;

const reader = data.getReader();
const decoder = new TextDecoder();

let done = false;

while (!done) {
const { value, done: doneReading } = await reader.read();
done = doneReading;
const chunkValue = decoder.decode(value);

options.onMessageHandle?.(chunkValue);
}

return returnRes;
};
```
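To see how the two frontend helpers compose, here is a hypothetical call site; the `askAssistant` wrapper, the import paths, and the message shape are illustrative assumptions, not LobeChat source:

```ts
// Path aliases assumed; the doc places these in src/services and src/utils.
import { fetchChatModel } from '@/services/chatModel';
import { fetchSSE } from '@/utils/fetch';

// Hypothetical usage: stream an assistant reply, accumulating chunks as they
// arrive to drive the typewriter-style rendering described above.
const askAssistant = async (content: string) => {
  let output = '';

  await fetchSSE(() => fetchChatModel({ messages: [{ content, role: 'user' }] }), {
    onErrorHandle: (error) => console.error('Chat request failed:', error),
    onMessageHandle: (chunk) => {
      output += chunk;
      // e.g. push the partial `output` into UI state here
    },
  });

  return output;
};
```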

The above covers the core implementation of the LobeChat conversation API. With an understanding of this code, you can further extend and optimize LobeChat's AI functionality.
```diff
@@ -1,4 +1,4 @@
-# LobeChat Conversation API Implementation Logic
+# Conversation API Implementation Logic

 The implementation of LobeChat's large-model AI relies mainly on OpenAI's API, including the core conversation API on the backend and the integration API on the frontend. Next, we will introduce the implementation approach and code for the backend and frontend separately.
```
